Index: share/man/man4/qat.4 =================================================================== --- share/man/man4/qat.4 +++ share/man/man4/qat.4 @@ -1,110 +1,127 @@ -.\"- -.\" Copyright (c) 2020 Rubicon Communications, LLC (Netgate) -.\" -.\" Redistribution and use in source and binary forms, with or without -.\" modification, are permitted provided that the following conditions -.\" are met: -.\" 1. Redistributions of source code must retain the above copyright -.\" notice, this list of conditions and the following disclaimer. -.\" 2. Redistributions in binary form must reproduce the above copyright -.\" notice, this list of conditions and the following disclaimer in the -.\" documentation and/or other materials provided with the distribution. -.\" -.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND -.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE -.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS -.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) -.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY -.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF -.\" SUCH DAMAGE. 
-.\" +.\" SPDX-License-Identifier: BSD-3-Clause +.\" Copyright(c) 2007-2022 Intel Corporation .\" $FreeBSD$ -.\" -.Dd May 7, 2021 +.Dd June 30, 2022 .Dt QAT 4 .Os .Sh NAME .Nm qat -.Nd Intel QuickAssist Technology (QAT) driver +.Nd Intel (R) QuickAssist Technology (QAT) driver .Sh SYNOPSIS -To compile this driver into the kernel, -place the following lines in your -kernel configuration file: -.Bd -ragged -offset indent -.Cd "device crypto" -.Cd "device cryptodev" -.Cd "device qat" -.Ed +To load the driver, run: .Pp -Alternatively, to load the driver as a -module at boot time, place the following lines in -.Xr loader.conf 5 : -.Bd -literal -offset indent +.Bl -item -compact +.It +kldload qat +.El +.Pp +To load the driver at boot, add these lines to +.Xr loader.conf 5 , +selecting the firmware suitable for the installed device(s): +.Pp +.Bl -item -compact +.It +qat_200xx_fw_load="YES" +.It +qat_c3xxx_fw_load="YES" +.It +qat_c4xxx_fw_load="YES" +.It +qat_c62x_fw_load="YES" +.It +qat_dh895xcc_fw_load="YES" +.It qat_load="YES" -qat_c2xxxfw_load="YES" -qat_c3xxxfw_load="YES" -qat_c62xfw_load="YES" -qat_d15xxfw_load="YES" -qat_dh895xccfw_load="YES" -.Ed +.El .Sh DESCRIPTION The .Nm -driver implements -.Xr crypto 4 -support for some of the cryptographic acceleration functions of the Intel -QuickAssist (QAT) device. +driver supports cryptography and compression acceleration for +Intel (R) QuickAssist Technology (QAT) devices. +.Pp The .Nm -driver supports the QAT devices integrated with Atom C2000 and C3000 and Xeon -C620 and D-1500 platforms, and the Intel QAT Adapter 8950. -Other platforms and adapters not listed here may also be supported. -QAT devices are enumerated through PCIe and are thus visible in -.Xr pciconf 8 -output.
+driver is intended for platforms that contain: +.Bl -bullet -compact +.It +Intel (R) C62x Chipset +.It +Intel (R) Atom C3000 processor product family +.It +Intel (R) QuickAssist Adapter 8960/Intel (R) QuickAssist Adapter 8970 +(formerly known as "Lewis Hill") +.It +Intel (R) Communications Chipset 8925 to 8955 Series +.It +Intel (R) Atom P5300 processor product family +.El .Pp The .Nm -driver can accelerate AES in CBC, CTR, XTS (except for the C2000) and GCM modes, -and can perform authenticated encryption combining the CBC, CTR and XTS modes -with SHA1-HMAC and SHA2-HMAC. +driver supports cryptography and compression acceleration. +A complete API for offloading these operations is exposed in the kernel and may +be used directly by other kernel consumers. +For details on usage and the supported operations and algorithms, refer to the +following documentation available from +.Lk 01.org : +.Bl -bullet -compact +.It +.Rs +.%A Intel (R) +.%T QuickAssist Technology API Programmer's Guide +.Re +.It +.Rs +.%A Intel (R) +.%T QuickAssist Technology Cryptographic API Reference Manual +.Re +.It +.Rs +.%A Intel (R) +.%T QuickAssist Technology Data Compression API Reference Manual +.Re +.It +.Rs +.%A Intel (R) +.%T QuickAssist Technology Performance Optimization Guide +.Re +.El +.Pp +In addition to exposing a complete kernel API for offloading cryptography and +compression operations, the +.Nm +driver also integrates with +.Xr crypto 4 , +allowing supported cryptography operations to be offloaded to Intel (R) QuickAssist +Technology (QAT) devices. +For details on usage and the supported operations and algorithms, refer to the +documentation mentioned above and the +.Sx SEE ALSO +section. +.Sh COMPATIBILITY The .Nm -driver can also compute SHA1 and SHA2 digests. -The implementation of AES-GCM has a firmware-imposed constraint that the length -of any additional authenticated data (AAD) must not exceed 240 bytes. -The driver thus rejects -.Xr crypto 9 -requests that do not satisfy this constraint.
+driver replaces the previous implementation introduced in +.Fx 13.0 . +The current version, in addition to +.Xr crypto 4 +integration, also supports data compression and exposes a complete API for +offloading data compression and cryptography operations. .Sh SEE ALSO .Xr crypto 4 , .Xr ipsec 4 , .Xr pci 4 , -.Xr random 4 , .Xr crypto 7 , .Xr crypto 9 .Sh HISTORY -The +This .Nm -driver first appeared in -.Fx 13.0 . -.Sh AUTHORS -The +driver was introduced in +.Fx 14.0 . +.Fx 13.0 included a different version of the .Nm -driver was written for -.Nx -by -.An Hikaru Abe Aq Mt hikaru@iij.ad.jp . -.An Mark Johnston Aq Mt markj@FreeBSD.org -ported the driver to -.Fx . -.Sh BUGS -Some Atom C2000 QAT devices have two acceleration engines instead of one. +driver. +.Sh AUTHORS The .Nm -driver currently misbehaves when both are enabled and thus does not enable -the second acceleration engine if one is present. +driver was written by +.An Intel (R) Corporation . Index: sys/contrib/dev/qat/LICENSE =================================================================== --- sys/contrib/dev/qat/LICENSE +++ sys/contrib/dev/qat/LICENSE @@ -1,11 +1,39 @@ -Copyright (c) 2007-2016 Intel Corporation. -All rights reserved. -Redistribution. Redistribution and use in binary form, without modification, are permitted provided that the following conditions are met: +Copyright (c) 2021 Intel Corporation - Redistributions must reproduce the above copyright notice and the following disclaimer in the documentation and/or other materials provided with the distribution. - Neither the name of Intel Corporation nor the names of its suppliers may be used to endorse or promote products derived from this software without specific prior written permission. - No reverse engineering, decompilation, or disassembly of this software is permitted. - -Limited patent license.
Intel Corporation grants a world-wide, royalty-free, non-exclusive license under patents it now or hereafter owns or controls to make, have made, use, import, offer to sell and sell ("Utilize") this software, but solely to the extent that any such patent is necessary to Utilize the software alone. The patent license shall not apply to any combinations which include this software. No hardware per se is licensed hereunder. +Redistribution. Redistribution and use in binary form, without +modification, are permitted provided that the following conditions are +met: + +* Redistributions must reproduce the above copyright notice and the + following disclaimer in the documentation and/or other materials + provided with the distribution. +* Neither the name of Intel Corporation nor the names of its suppliers + may be used to endorse or promote products derived from this software + without specific prior written permission. +* No reverse engineering, decompilation, or disassembly of this software + is permitted. + +Limited patent license. Intel Corporation grants a world-wide, +royalty-free, non-exclusive license under patents it now or hereafter +owns or controls to make, have made, use, import, offer to sell and +sell ("Utilize") this software, but solely to the extent that any +such patent is necessary to Utilize the software alone, or in +combination with an operating system licensed under an approved Open +Source license as listed by the Open Source Initiative at +http://opensource.org/licenses. The patent license shall not apply to +any other combinations which include this software. No hardware per +se is licensed hereunder. + +DISCLAIMER. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND +CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, +BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND +FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS +OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR +TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE +USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH +DAMAGE. -DISCLAIMER. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
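The loader.conf(5) variables documented in the qat.4 SYNOPSIS above can be combined into a single boot-time fragment. A minimal sketch; only the firmware module matching the installed QAT device is strictly required, and the device-family comments are informational, taken from the COMPATIBILITY list in the manual page:

```shell
# /boot/loader.conf -- load the qat(4) driver and device firmware at boot
qat_c62x_fw_load="YES"       # Intel (R) C62x Chipset
qat_c3xxx_fw_load="YES"      # Intel (R) Atom C3000 processor family
qat_c4xxx_fw_load="YES"
qat_200xx_fw_load="YES"
qat_dh895xcc_fw_load="YES"   # Intel (R) Communications Chipset 8925 to 8955
qat_load="YES"

# On a running system, the driver can instead be loaded directly:
# kldload qat
```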
Index: sys/dev/qat/include/adf_cfg_dev_dbg.h =================================================================== --- /dev/null +++ sys/dev/qat/include/adf_cfg_dev_dbg.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_DEV_DBG_H_ +#define ADF_CFG_DEV_DBG_H_ + +struct adf_accel_dev; + +int adf_cfg_dev_dbg_add(struct adf_accel_dev *accel_dev); +void adf_cfg_dev_dbg_remove(struct adf_accel_dev *accel_dev); + +#endif /* ADF_CFG_DEV_DBG_H_ */ Index: sys/dev/qat/include/adf_cfg_device.h =================================================================== --- /dev/null +++ sys/dev/qat/include/adf_cfg_device.h @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_DEVICE_H_ +#define ADF_CFG_DEVICE_H_ + +#include "adf_cfg.h" +#include "sal_statistics_strings.h" + +#define ADF_CFG_STATIC_CONF_VER 2 +#define ADF_CFG_STATIC_CONF_CY_ASYM_RING_SIZE 64 +#define ADF_CFG_STATIC_CONF_CY_SYM_RING_SIZE 512 +#define ADF_CFG_STATIC_CONF_DC_INTER_BUF_SIZE 64 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_ENABLED 1 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_DC 1 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_DH 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_DRBG 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_DSA 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_ECC 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_KEYGEN 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_LN 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_PRIME 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_RSA 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_SYM 1 +#define ADF_CFG_STATIC_CONF_POLL 1 +#define ADF_CFG_STATIC_CONF_IRQ 0 +#define ADF_CFG_STATIC_CONF_AUTO_RESET 0 +#define ADF_CFG_STATIC_CONF_NUM_DC_ACCEL_UNITS 2 +#define ADF_CFG_STATIC_CONF_NUM_INLINE_ACCEL_UNITS 0 +#define ADF_CFG_STATIC_CONF_INST_NUM_DC 2 +#define ADF_CFG_STATIC_CONF_INST_NUM_CY_POLL 2 +#define 
ADF_CFG_STATIC_CONF_INST_NUM_CY_IRQ 2 + +#define ADF_CFG_FW_STRING_TO_ID(str, acc, id) \ + do { \ + typeof(id) id_ = (id); \ + typeof(str) str_; \ + memcpy(str_, (str), sizeof(str_)); \ + if (!strncmp(str_, \ + ADF_SERVICES_DEFAULT, \ + sizeof(ADF_SERVICES_DEFAULT))) \ + *id_ = ADF_FW_IMAGE_DEFAULT; \ + else if (!strncmp(str_, \ + ADF_SERVICES_CRYPTO, \ + sizeof(ADF_SERVICES_CRYPTO))) \ + *id_ = ADF_FW_IMAGE_CRYPTO; \ + else if (!strncmp(str_, \ + ADF_SERVICES_COMPRESSION, \ + sizeof(ADF_SERVICES_COMPRESSION))) \ + *id_ = ADF_FW_IMAGE_COMPRESSION; \ + else if (!strncmp(str_, \ + ADF_SERVICES_CUSTOM1, \ + sizeof(ADF_SERVICES_CUSTOM1))) \ + *id_ = ADF_FW_IMAGE_CUSTOM1; \ + else { \ + *id_ = ADF_FW_IMAGE_DEFAULT; \ + device_printf(GET_DEV(acc), \ + "Invalid ServicesProfile: %s, " \ + "Using DEFAULT image\n", \ + str_); \ + } \ + } while (0) + +int adf_cfg_get_ring_pairs(struct adf_cfg_device *device, + struct adf_cfg_instance *inst, + const char *process_name, + struct adf_accel_dev *accel_dev); + +int adf_cfg_device_init(struct adf_cfg_device *device, + struct adf_accel_dev *accel_dev); + +void adf_cfg_device_clear(struct adf_cfg_device *device, + struct adf_accel_dev *accel_dev); + +#endif Index: sys/dev/qat/include/adf_cnvnr_freq_counters.h =================================================================== --- /dev/null +++ sys/dev/qat/include/adf_cnvnr_freq_counters.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CNVNR_CTRS_DBG_H_ +#define ADF_CNVNR_CTRS_DBG_H_ + +struct adf_accel_dev; +int adf_cnvnr_freq_counters_add(struct adf_accel_dev *accel_dev); +void adf_cnvnr_freq_counters_remove(struct adf_accel_dev *accel_dev); + +#endif /* ADF_CNVNR_CTRS_DBG_H_ */ Index: sys/dev/qat/include/adf_dev_err.h =================================================================== --- /dev/null +++ sys/dev/qat/include/adf_dev_err.h @@ -0,0 +1,80 @@ +/* SPDX-License-Identifier:
BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_DEV_ERR_H_ +#define ADF_DEV_ERR_H_ + +#include +#include +#include "adf_accel_devices.h" + +#define ADF_ERRSOU0 (0x3A000 + 0x00) +#define ADF_ERRSOU1 (0x3A000 + 0x04) +#define ADF_ERRSOU2 (0x3A000 + 0x08) +#define ADF_ERRSOU3 (0x3A000 + 0x0C) +#define ADF_ERRSOU4 (0x3A000 + 0xD0) +#define ADF_ERRSOU5 (0x3A000 + 0xD8) +#define ADF_ERRMSK0 (0x3A000 + 0x10) +#define ADF_ERRMSK1 (0x3A000 + 0x14) +#define ADF_ERRMSK2 (0x3A000 + 0x18) +#define ADF_ERRMSK3 (0x3A000 + 0x1C) +#define ADF_ERRMSK4 (0x3A000 + 0xD4) +#define ADF_ERRMSK5 (0x3A000 + 0xDC) +#define ADF_EMSK3_CPM0_MASK BIT(2) +#define ADF_EMSK3_CPM1_MASK BIT(3) +#define ADF_EMSK5_CPM2_MASK BIT(16) +#define ADF_EMSK5_CPM3_MASK BIT(17) +#define ADF_EMSK5_CPM4_MASK BIT(18) +#define ADF_RICPPINTSTS (0x3A000 + 0x114) +#define ADF_RIERRPUSHID (0x3A000 + 0x118) +#define ADF_RIERRPULLID (0x3A000 + 0x11C) +#define ADF_CPP_CFC_ERR_STATUS (0x30000 + 0xC04) +#define ADF_CPP_CFC_ERR_PPID (0x30000 + 0xC08) +#define ADF_TICPPINTSTS (0x3A400 + 0x13C) +#define ADF_TIERRPUSHID (0x3A400 + 0x140) +#define ADF_TIERRPULLID (0x3A400 + 0x144) +#define ADF_SECRAMUERR (0x3AC00 + 0x04) +#define ADF_SECRAMUERRAD (0x3AC00 + 0x0C) +#define ADF_CPPMEMTGTERR (0x3AC00 + 0x10) +#define ADF_ERRPPID (0x3AC00 + 0x14) +#define ADF_INTSTATSSM(i) ((i)*0x4000 + 0x04) +#define ADF_INTSTATSSM_SHANGERR BIT(13) +#define ADF_PPERR(i) ((i)*0x4000 + 0x08) +#define ADF_PPERRID(i) ((i)*0x4000 + 0x0C) +#define ADF_CERRSSMSH(i) ((i)*0x4000 + 0x10) +#define ADF_UERRSSMSH(i) ((i)*0x4000 + 0x18) +#define ADF_UERRSSMSHAD(i) ((i)*0x4000 + 0x1C) +#define ADF_SLICEHANGSTATUS(i) ((i)*0x4000 + 0x4C) +#define ADF_SLICE_HANG_AUTH0_MASK BIT(0) +#define ADF_SLICE_HANG_AUTH1_MASK BIT(1) +#define ADF_SLICE_HANG_AUTH2_MASK BIT(2) +#define ADF_SLICE_HANG_CPHR0_MASK BIT(4) +#define ADF_SLICE_HANG_CPHR1_MASK BIT(5) +#define ADF_SLICE_HANG_CPHR2_MASK BIT(6) +#define ADF_SLICE_HANG_CMP0_MASK 
BIT(8) +#define ADF_SLICE_HANG_CMP1_MASK BIT(9) +#define ADF_SLICE_HANG_XLT0_MASK BIT(12) +#define ADF_SLICE_HANG_XLT1_MASK BIT(13) +#define ADF_SLICE_HANG_MMP0_MASK BIT(16) +#define ADF_SLICE_HANG_MMP1_MASK BIT(17) +#define ADF_SLICE_HANG_MMP2_MASK BIT(18) +#define ADF_SLICE_HANG_MMP3_MASK BIT(19) +#define ADF_SLICE_HANG_MMP4_MASK BIT(20) +#define ADF_SSMWDT(i) ((i)*0x4000 + 0x54) +#define ADF_SSMWDTPKE(i) ((i)*0x4000 + 0x58) +#define ADF_SHINTMASKSSM(i) ((i)*0x4000 + 0x1018) +#define ADF_ENABLE_SLICE_HANG 0x000000 +#define ADF_MAX_MMP (5) +#define ADF_MMP_BASE(i) ((i)*0x1000 % 0x3800) +#define ADF_CERRSSMMMP(i, n) ((i)*0x4000 + ADF_MMP_BASE(n) + 0x380) +#define ADF_UERRSSMMMP(i, n) ((i)*0x4000 + ADF_MMP_BASE(n) + 0x388) +#define ADF_UERRSSMMMPAD(i, n) ((i)*0x4000 + ADF_MMP_BASE(n) + 0x38C) + +bool adf_handle_slice_hang(struct adf_accel_dev *accel_dev, + u8 accel_num, + struct resource *csr, + u32 slice_hang_offset); +bool adf_check_slice_hang(struct adf_accel_dev *accel_dev); +void adf_print_err_registers(struct adf_accel_dev *accel_dev); + +#endif Index: sys/dev/qat/include/adf_freebsd_pfvf_ctrs_dbg.h =================================================================== --- /dev/null +++ sys/dev/qat/include/adf_freebsd_pfvf_ctrs_dbg.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_PFVF_CTRS_DBG_H_ +#define ADF_PFVF_CTRS_DBG_H_ + +struct adf_accel_dev; +int adf_pfvf_ctrs_dbg_add(struct adf_accel_dev *accel_dev); + +#endif /* ADF_PFVF_CTRS_DBG_H_ */ Index: sys/dev/qat/include/adf_fw_counters.h =================================================================== --- /dev/null +++ sys/dev/qat/include/adf_fw_counters.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_FW_COUNTERS_H_ +#define ADF_FW_COUNTERS_H_ + +#include +#include "adf_accel_devices.h" + +#define FW_COUNTERS_MAX_STR_LEN 64 
+#define FW_COUNTERS_MAX_KEY_LEN_IN_BYTES FW_COUNTERS_MAX_STR_LEN +#define FW_COUNTERS_MAX_VAL_LEN_IN_BYTES FW_COUNTERS_MAX_STR_LEN +#define FW_COUNTERS_MAX_SECTION_LEN_IN_BYTES FW_COUNTERS_MAX_STR_LEN +#define ADF_FW_COUNTERS_NO_RESPONSE -1 + +struct adf_fw_counters_val { + char key[FW_COUNTERS_MAX_KEY_LEN_IN_BYTES]; + char val[FW_COUNTERS_MAX_VAL_LEN_IN_BYTES]; + struct list_head list; +}; + +struct adf_fw_counters_section { + char name[FW_COUNTERS_MAX_SECTION_LEN_IN_BYTES]; + struct list_head list; + struct list_head param_head; +}; + +struct adf_fw_counters_data { + struct list_head ae_sec_list; + struct sysctl_oid *debug; + struct rw_semaphore lock; +}; + +int adf_fw_counters_add(struct adf_accel_dev *accel_dev); +void adf_fw_counters_remove(struct adf_accel_dev *accel_dev); +int adf_fw_count_ras_event(struct adf_accel_dev *accel_dev, + u32 *ras_event, + char *aeidstr); + +#endif /* ADF_FW_COUNTERS_H_ */ Index: sys/dev/qat/include/adf_heartbeat.h =================================================================== --- /dev/null +++ sys/dev/qat/include/adf_heartbeat.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_HEARTBEAT_H_ +#define ADF_HEARTBEAT_H_ + +#include "adf_cfg_common.h" + +struct adf_accel_dev; + +struct qat_sysctl { + unsigned int hb_sysctlvar; + struct sysctl_oid *oid; +}; + +struct adf_heartbeat { + unsigned int hb_sent_counter; + unsigned int hb_failed_counter; + u64 last_hb_check_time; + enum adf_device_heartbeat_status last_hb_status; + struct qat_sysctl heartbeat; + struct qat_sysctl *heartbeat_sent; + struct qat_sysctl *heartbeat_failed; +}; + +int adf_heartbeat_init(struct adf_accel_dev *accel_dev); +void adf_heartbeat_clean(struct adf_accel_dev *accel_dev); + +int adf_get_hb_timer(struct adf_accel_dev *accel_dev, unsigned int *value); +int adf_get_heartbeat_status(struct adf_accel_dev *accel_dev); +int adf_heartbeat_status(struct adf_accel_dev 
*accel_dev, + enum adf_device_heartbeat_status *hb_status); +#endif /* ADF_HEARTBEAT_H_ */ Index: sys/dev/qat/include/adf_heartbeat_dbg.h =================================================================== --- /dev/null +++ sys/dev/qat/include/adf_heartbeat_dbg.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_HEARTBEAT_DBG_H_ +#define ADF_HEARTBEAT_DBG_H_ + +struct adf_accel_dev; +int adf_heartbeat_dbg_add(struct adf_accel_dev *accel_dev); +int adf_heartbeat_dbg_del(struct adf_accel_dev *accel_dev); + +#endif /* ADF_HEARTBEAT_DBG_H_ */ Index: sys/dev/qat/include/adf_pf2vf_msg.h =================================================================== --- /dev/null +++ sys/dev/qat/include/adf_pf2vf_msg.h @@ -0,0 +1,182 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_PF2VF_MSG_H +#define ADF_PF2VF_MSG_H + +/* + * PF<->VF Messaging + * The PF has an array of 32-bit PF2VF registers, one for each VF. The + * PF can access all these registers; each VF can access only the one + * register associated with that particular VF. + * + * The register is functionally split into two parts: + * The bottom half is for PF->VF messages. In particular, when the first + * bit of this register (bit 0) gets set, an interrupt will be triggered + * in the respective VF. + * The top half is for VF->PF messages. In particular, when the first bit + * of this half of the register (bit 16) gets set, an interrupt will be triggered + * in the PF. + * + * The remaining bits within this register are available to encode messages + * and to implement a collision control mechanism to prevent concurrent use of + * the PF2VF register by both the PF and VF.
+ * + * 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 + * _______________________________________________ + * | | | | | | | | | | | | | | | | | + * +-----------------------------------------------+ + * \___________________________/ \_________/ ^ ^ + * ^ ^ | | + * | | | VF2PF Int + * | | Message Origin + * | Message Type + * Message-specific Data/Reserved + * + * 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 + * _______________________________________________ + * | | | | | | | | | | | | | | | | | + * +-----------------------------------------------+ + * \___________________________/ \_________/ ^ ^ + * ^ ^ | | + * | | | PF2VF Int + * | | Message Origin + * | Message Type + * Message-specific Data/Reserved + * + * Message Origin (Should always be 1) + * A legacy out-of-tree QAT driver allowed for a set of messages not supported + * by this driver; these had a Msg Origin of 0 and are ignored by this driver. + * + * When a PF or VF attempts to send a message in the lower or upper 16 bits, + * respectively, the other 16 bits are written to first with a defined + * IN_USE_BY pattern as part of a collision control scheme (see adf_iov_putmsg). + */ + +/* VF/PF compatibility version. 
*/ +/* ADF_PFVF_COMPATIBILITY_EXT_CAP: Support for extended capabilities */ +#define ADF_PFVF_COMPATIBILITY_CAPABILITIES 2 +/* ADF_PFVF_COMPATIBILITY_FAST_ACK: In-use pattern cleared by receiver */ +#define ADF_PFVF_COMPATIBILITY_FAST_ACK 3 +#define ADF_PFVF_COMPATIBILITY_RING_TO_SVC_MAP 4 +#define ADF_PFVF_COMPATIBILITY_VERSION 4 /* PF<->VF compat */ + +/* PF->VF messages */ +#define ADF_PF2VF_INT BIT(0) +#define ADF_PF2VF_MSGORIGIN_SYSTEM BIT(1) +#define ADF_PF2VF_MSGTYPE_MASK 0x0000003C +#define ADF_PF2VF_MSGTYPE_SHIFT 2 +#define ADF_PF2VF_MSGTYPE_RESTARTING 0x01 +#define ADF_PF2VF_MSGTYPE_VERSION_RESP 0x02 +#define ADF_PF2VF_MSGTYPE_BLOCK_RESP 0x03 +#define ADF_PF2VF_MSGTYPE_FATAL_ERROR 0x04 +#define ADF_PF2VF_IN_USE_BY_PF 0x6AC20000 +#define ADF_PF2VF_IN_USE_BY_PF_MASK 0xFFFE0000 + +/* PF->VF Version Response */ +#define ADF_PF2VF_VERSION_RESP_VERS_MASK 0x00003FC0 +#define ADF_PF2VF_VERSION_RESP_VERS_SHIFT 6 +#define ADF_PF2VF_VERSION_RESP_RESULT_MASK 0x0000C000 +#define ADF_PF2VF_VERSION_RESP_RESULT_SHIFT 14 +#define ADF_PF2VF_MINORVERSION_SHIFT 6 +#define ADF_PF2VF_MAJORVERSION_SHIFT 10 +#define ADF_PF2VF_VF_COMPATIBLE 1 +#define ADF_PF2VF_VF_INCOMPATIBLE 2 +#define ADF_PF2VF_VF_COMPAT_UNKNOWN 3 + +/* PF->VF Block Request Type */ +#define ADF_VF2PF_MIN_SMALL_MESSAGE_TYPE 0 +#define ADF_VF2PF_MAX_SMALL_MESSAGE_TYPE (ADF_VF2PF_MIN_SMALL_MESSAGE_TYPE + 15) +#define ADF_VF2PF_MIN_MEDIUM_MESSAGE_TYPE (ADF_VF2PF_MAX_SMALL_MESSAGE_TYPE + 1) +#define ADF_VF2PF_MAX_MEDIUM_MESSAGE_TYPE \ + (ADF_VF2PF_MIN_MEDIUM_MESSAGE_TYPE + 7) +#define ADF_VF2PF_MIN_LARGE_MESSAGE_TYPE (ADF_VF2PF_MAX_MEDIUM_MESSAGE_TYPE + 1) +#define ADF_VF2PF_MAX_LARGE_MESSAGE_TYPE (ADF_VF2PF_MIN_LARGE_MESSAGE_TYPE + 3) +#define ADF_VF2PF_SMALL_PAYLOAD_SIZE 30 +#define ADF_VF2PF_MEDIUM_PAYLOAD_SIZE 62 +#define ADF_VF2PF_LARGE_PAYLOAD_SIZE 126 + +#define ADF_VF2PF_MAX_BLOCK_TYPE 3 +#define ADF_VF2PF_BLOCK_REQ_TYPE_SHIFT 22 +#define ADF_VF2PF_LARGE_BLOCK_BYTE_NUM_SHIFT 24 +#define 
ADF_VF2PF_MEDIUM_BLOCK_BYTE_NUM_SHIFT 25 +#define ADF_VF2PF_SMALL_BLOCK_BYTE_NUM_SHIFT 26 +#define ADF_VF2PF_BLOCK_REQ_CRC_SHIFT 31 +#define ADF_VF2PF_LARGE_BLOCK_BYTE_NUM_MASK 0x7F000000 +#define ADF_VF2PF_MEDIUM_BLOCK_BYTE_NUM_MASK 0x7E000000 +#define ADF_VF2PF_SMALL_BLOCK_BYTE_NUM_MASK 0x7C000000 +#define ADF_VF2PF_LARGE_BLOCK_REQ_TYPE_MASK 0xC00000 +#define ADF_VF2PF_MEDIUM_BLOCK_REQ_TYPE_MASK 0x1C00000 +#define ADF_VF2PF_SMALL_BLOCK_REQ_TYPE_MASK 0x3C00000 + +/* PF->VF Block Response Type */ +#define ADF_PF2VF_BLOCK_RESP_TYPE_DATA 0x0 +#define ADF_PF2VF_BLOCK_RESP_TYPE_CRC 0x1 +#define ADF_PF2VF_BLOCK_RESP_TYPE_ERROR 0x2 +#define ADF_PF2VF_BLOCK_RESP_TYPE_SHIFT 6 +#define ADF_PF2VF_BLOCK_RESP_DATA_SHIFT 8 +#define ADF_PF2VF_BLOCK_RESP_TYPE_MASK 0x000000C0 +#define ADF_PF2VF_BLOCK_RESP_DATA_MASK 0x0000FF00 + +/* PF-VF block message header bytes */ +#define ADF_VF2PF_BLOCK_VERSION_BYTE 0 +#define ADF_VF2PF_BLOCK_LEN_BYTE 1 +#define ADF_VF2PF_BLOCK_DATA 2 + +/* PF->VF Block Error Code */ +#define ADF_PF2VF_INVALID_BLOCK_TYPE 0x0 +#define ADF_PF2VF_INVALID_BYTE_NUM_REQ 0x1 +#define ADF_PF2VF_PAYLOAD_TRUNCATED 0x2 +#define ADF_PF2VF_UNSPECIFIED_ERROR 0x3 + +/* VF->PF messages */ +#define ADF_VF2PF_IN_USE_BY_VF 0x00006AC2 +#define ADF_VF2PF_IN_USE_BY_VF_MASK 0x0000FFFE +#define ADF_VF2PF_INT BIT(16) +#define ADF_VF2PF_MSGORIGIN_SYSTEM BIT(17) +#define ADF_VF2PF_MSGTYPE_MASK 0x003C0000 +#define ADF_VF2PF_MSGTYPE_SHIFT 18 +#define ADF_VF2PF_MSGTYPE_INIT 0x3 +#define ADF_VF2PF_MSGTYPE_SHUTDOWN 0x4 +#define ADF_VF2PF_MSGTYPE_VERSION_REQ 0x5 +#define ADF_VF2PF_MSGTYPE_COMPAT_VER_REQ 0x6 +#define ADF_VF2PF_MSGTYPE_GET_LARGE_BLOCK_REQ 0x7 +#define ADF_VF2PF_MSGTYPE_GET_MEDIUM_BLOCK_REQ 0x8 +#define ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ 0x9 +#define ADF_VF2PF_MSGTYPE_NOTIFY 0xa +#define ADF_VF2PF_MSGGENC_RESTARTING_COMPLETE 0x0 + +/* Block message types + * 0..15 - 32 byte message + * 16..23 - 64 byte message + * 24..27 - 128 byte message + * 2 - Get Capability Request 
message + */ +#define ADF_VF2PF_BLOCK_MSG_CAP_SUMMARY 2 +#define ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ 0x3 + +/* VF->PF Compatible Version Request */ +#define ADF_VF2PF_COMPAT_VER_REQ_SHIFT 22 + +/* How long to wait for far side to acknowledge receipt */ +#define ADF_IOV_MSG_ACK_DELAY_US 5 +#define ADF_IOV_MSG_ACK_EXP_MAX_DELAY_US (5 * 1000) +#define ADF_IOV_MSG_ACK_DELAY_MS 5 +#define ADF_IOV_MSG_ACK_LIN_MAX_DELAY_US (2 * 1000 * 1000) +/* If CSR is busy, how long to delay before retrying */ +#define ADF_IOV_MSG_RETRY_DELAY 5 +#define ADF_IOV_MSG_MAX_RETRIES 10 +/* How long to wait for a response from the other side */ +#define ADF_IOV_MSG_RESP_TIMEOUT 100 +/* How often to retry when there is no response */ +#define ADF_IOV_MSG_RESP_RETRIES 5 + +#define ADF_IOV_RATELIMIT_INTERVAL 8 +#define ADF_IOV_RATELIMIT_BURST 130 + +/* CRC Calculation */ +#define ADF_CRC8_INIT_VALUE 0xFF +/* PF VF message byte shift */ +#define ADF_PFVF_DATA_SHIFT 8 +#define ADF_PFVF_DATA_MASK 0xFF +#endif /* ADF_IOV_MSG_H */ Index: sys/dev/qat/include/adf_ver_dbg.h =================================================================== --- /dev/null +++ sys/dev/qat/include/adf_ver_dbg.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_VER_DBG_H_ +#define ADF_VER_DBG_H_ + +struct adf_accel_dev; +int adf_ver_dbg_add(struct adf_accel_dev *accel_dev); +void adf_ver_dbg_del(struct adf_accel_dev *accel_dev); + +#endif /* ADF_VER_DBG_H_ */ Index: sys/dev/qat/include/common/adf_accel_devices.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/adf_accel_devices.h @@ -0,0 +1,585 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_ACCEL_DEVICES_H_ +#define ADF_ACCEL_DEVICES_H_ + +#include "qat_freebsd.h" +#include "adf_cfg_common.h" + +#define ADF_CFG_NUM_SERVICES 4 + +#define 
ADF_DH895XCC_DEVICE_NAME "dh895xcc" +#define ADF_DH895XCCVF_DEVICE_NAME "dh895xccvf" +#define ADF_C62X_DEVICE_NAME "c6xx" +#define ADF_C62XVF_DEVICE_NAME "c6xxvf" +#define ADF_C3XXX_DEVICE_NAME "c3xxx" +#define ADF_C3XXXVF_DEVICE_NAME "c3xxxvf" +#define ADF_200XX_DEVICE_NAME "200xx" +#define ADF_200XXVF_DEVICE_NAME "200xxvf" +#define ADF_C4XXX_DEVICE_NAME "c4xxx" +#define ADF_C4XXXVF_DEVICE_NAME "c4xxxvf" +#define ADF_DH895XCC_PCI_DEVICE_ID 0x435 +#define ADF_DH895XCCIOV_PCI_DEVICE_ID 0x443 +#define ADF_C62X_PCI_DEVICE_ID 0x37c8 +#define ADF_C62XIOV_PCI_DEVICE_ID 0x37c9 +#define ADF_C3XXX_PCI_DEVICE_ID 0x19e2 +#define ADF_C3XXXIOV_PCI_DEVICE_ID 0x19e3 +#define ADF_200XX_PCI_DEVICE_ID 0x18ee +#define ADF_200XXIOV_PCI_DEVICE_ID 0x18ef +#define ADF_D15XX_PCI_DEVICE_ID 0x6f54 +#define ADF_D15XXIOV_PCI_DEVICE_ID 0x6f55 +#define ADF_C4XXX_PCI_DEVICE_ID 0x18a0 +#define ADF_C4XXXIOV_PCI_DEVICE_ID 0x18a1 + +#define IS_QAT_GEN3(ID) ({ (ID == ADF_C4XXX_PCI_DEVICE_ID); }) +#define ADF_VF2PF_SET_SIZE 32 +#define ADF_MAX_VF2PF_SET 4 +#define ADF_VF2PF_SET_OFFSET(set_nr) ((set_nr)*ADF_VF2PF_SET_SIZE) +#define ADF_VF2PF_VFNR_TO_SET(vf_nr) ((vf_nr) / ADF_VF2PF_SET_SIZE) +#define ADF_VF2PF_VFNR_TO_MASK(vf_nr) \ + ({ \ + u32 vf_nr_ = (vf_nr); \ + BIT((vf_nr_)-ADF_VF2PF_SET_SIZE *ADF_VF2PF_VFNR_TO_SET( \ + vf_nr_)); \ + }) + +#define ADF_DEVICE_FUSECTL_OFFSET 0x40 +#define ADF_DEVICE_LEGFUSE_OFFSET 0x4C +#define ADF_DEVICE_FUSECTL_MASK 0x80000000 +#define ADF_PCI_MAX_BARS 3 +#define ADF_DEVICE_NAME_LENGTH 32 +#define ADF_ETR_MAX_RINGS_PER_BANK 16 +#define ADF_MAX_MSIX_VECTOR_NAME 16 +#define ADF_DEVICE_NAME_PREFIX "qat_" +#define ADF_STOP_RETRY 50 +#define ADF_NUM_THREADS_PER_AE (8) +#define ADF_AE_ADMIN_THREAD (7) +#define ADF_NUM_PKE_STRAND (2) +#define ADF_AE_STRAND0_THREAD (8) +#define ADF_AE_STRAND1_THREAD (9) +#define ADF_NUM_HB_CNT_PER_AE (ADF_NUM_THREADS_PER_AE + ADF_NUM_PKE_STRAND) +#define ADF_CFG_NUM_SERVICES 4 +#define ADF_SRV_TYPE_BIT_LEN 3 +#define ADF_SRV_TYPE_MASK 0x7 
+#define ADF_RINGS_PER_SRV_TYPE 2 +#define ADF_THRD_ABILITY_BIT_LEN 4 +#define ADF_THRD_ABILITY_MASK 0xf +#define ADF_VF_OFFSET 0x8 +#define ADF_MAX_FUNC_PER_DEV 0x7 +#define ADF_PCI_DEV_OFFSET 0x3 + +#define ADF_SRV_TYPE_BIT_LEN 3 +#define ADF_SRV_TYPE_MASK 0x7 + +#define GET_SRV_TYPE(ena_srv_mask, srv) \ + (((ena_srv_mask) >> (ADF_SRV_TYPE_BIT_LEN * (srv))) & ADF_SRV_TYPE_MASK) + +#define ADF_DEFAULT_RING_TO_SRV_MAP \ + (CRYPTO | CRYPTO << ADF_CFG_SERV_RING_PAIR_1_SHIFT | \ + NA << ADF_CFG_SERV_RING_PAIR_2_SHIFT | \ + COMP << ADF_CFG_SERV_RING_PAIR_3_SHIFT) + +enum adf_accel_capabilities { + ADF_ACCEL_CAPABILITIES_NULL = 0, + ADF_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC = 1, + ADF_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC = 2, + ADF_ACCEL_CAPABILITIES_CIPHER = 4, + ADF_ACCEL_CAPABILITIES_AUTHENTICATION = 8, + ADF_ACCEL_CAPABILITIES_COMPRESSION = 32, + ADF_ACCEL_CAPABILITIES_DEPRECATED = 64, + ADF_ACCEL_CAPABILITIES_RANDOM_NUMBER = 128 +}; + +struct adf_bar { + rman_res_t base_addr; + struct resource *virt_addr; + rman_res_t size; +} __packed; + +struct adf_accel_msix { + struct msix_entry *entries; + u32 num_entries; +} __packed; + +struct adf_accel_pci { + device_t pci_dev; + struct adf_accel_msix msix_entries; + struct adf_bar pci_bars[ADF_PCI_MAX_BARS]; + uint8_t revid; + uint8_t sku; + int node; +} __packed; + +enum dev_state { DEV_DOWN = 0, DEV_UP }; + +enum dev_sku_info { + DEV_SKU_1 = 0, + DEV_SKU_2, + DEV_SKU_3, + DEV_SKU_4, + DEV_SKU_VF, + DEV_SKU_1_CY, + DEV_SKU_2_CY, + DEV_SKU_3_CY, + DEV_SKU_UNKNOWN +}; + +static inline const char * +get_sku_info(enum dev_sku_info info) +{ + switch (info) { + case DEV_SKU_1: + return "SKU1"; + case DEV_SKU_1_CY: + return "SKU1CY"; + case DEV_SKU_2: + return "SKU2"; + case DEV_SKU_2_CY: + return "SKU2CY"; + case DEV_SKU_3: + return "SKU3"; + case DEV_SKU_3_CY: + return "SKU3CY"; + case DEV_SKU_4: + return "SKU4"; + case DEV_SKU_VF: + return "SKUVF"; + case DEV_SKU_UNKNOWN: + default: + break; + } + return "Unknown SKU"; +} + 
+enum adf_accel_unit_services {
+	ADF_ACCEL_SERVICE_NULL = 0,
+	ADF_ACCEL_INLINE_CRYPTO = 1,
+	ADF_ACCEL_CRYPTO = 2,
+	ADF_ACCEL_COMPRESSION = 4
+};
+
+struct adf_ae_info {
+	u32 num_asym_thd;
+	u32 num_sym_thd;
+	u32 num_dc_thd;
+} __packed;
+
+struct adf_accel_unit {
+	u8 au_mask;
+	u32 accel_mask;
+	u64 ae_mask;
+	u64 comp_ae_mask;
+	u32 num_ae;
+	enum adf_accel_unit_services services;
+} __packed;
+
+struct adf_accel_unit_info {
+	u32 inline_ingress_msk;
+	u32 inline_egress_msk;
+	u32 sym_ae_msk;
+	u32 asym_ae_msk;
+	u32 dc_ae_msk;
+	u8 num_cy_au;
+	u8 num_dc_au;
+	u8 num_inline_au;
+	struct adf_accel_unit *au;
+	const struct adf_ae_info *ae_info;
+} __packed;
+
+struct adf_hw_aram_info {
+	/* Inline Egress mask. "1" = AE is working with egress traffic */
+	u32 inline_direction_egress_mask;
+	/* Inline congestion management profiles set in config file */
+	u32 inline_congest_mngt_profile;
+	/* Initialise CY AE mask, "1" = AE is used for CY operations */
+	u32 cy_ae_mask;
+	/* Initialise DC AE mask, "1" = AE is used for DC operations */
+	u32 dc_ae_mask;
+	/* Number of long words used to define the ARAM regions */
+	u32 num_aram_lw_entries;
+	/* ARAM region definitions */
+	u32 mmp_region_size;
+	u32 mmp_region_offset;
+	u32 skm_region_size;
+	u32 skm_region_offset;
+	/*
+	 * Defines size and offset of compression intermediate buffers stored
+	 * in ARAM (device's on-chip memory).
+ */ + u32 inter_buff_aram_region_size; + u32 inter_buff_aram_region_offset; + u32 sadb_region_size; + u32 sadb_region_offset; +} __packed; + +struct adf_hw_device_class { + const char *name; + const enum adf_device_type type; + uint32_t instances; +} __packed; + +struct arb_info { + u32 arbiter_offset; + u32 wrk_thd_2_srv_arb_map; + u32 wrk_cfg_offset; +} __packed; + +struct admin_info { + u32 admin_msg_ur; + u32 admin_msg_lr; + u32 mailbox_offset; +} __packed; + +struct adf_cfg_device_data; +struct adf_accel_dev; +struct adf_etr_data; +struct adf_etr_ring_data; + +struct adf_hw_device_data { + struct adf_hw_device_class *dev_class; + uint32_t (*get_accel_mask)(struct adf_accel_dev *accel_dev); + uint32_t (*get_ae_mask)(struct adf_accel_dev *accel_dev); + uint32_t (*get_sram_bar_id)(struct adf_hw_device_data *self); + uint32_t (*get_misc_bar_id)(struct adf_hw_device_data *self); + uint32_t (*get_etr_bar_id)(struct adf_hw_device_data *self); + uint32_t (*get_num_aes)(struct adf_hw_device_data *self); + uint32_t (*get_num_accels)(struct adf_hw_device_data *self); + void (*notify_and_wait_ethernet)(struct adf_accel_dev *accel_dev); + bool (*get_eth_doorbell_msg)(struct adf_accel_dev *accel_dev); + uint32_t (*get_pf2vf_offset)(uint32_t i); + uint32_t (*get_vintmsk_offset)(uint32_t i); + u32 (*get_vintsou_offset)(void); + void (*get_arb_info)(struct arb_info *arb_csrs_info); + void (*get_admin_info)(struct admin_info *admin_csrs_info); + void (*get_errsou_offset)(u32 *errsou3, u32 *errsou5); + uint32_t (*get_num_accel_units)(struct adf_hw_device_data *self); + int (*init_accel_units)(struct adf_accel_dev *accel_dev); + void (*exit_accel_units)(struct adf_accel_dev *accel_dev); + uint32_t (*get_clock_speed)(struct adf_hw_device_data *self); + enum dev_sku_info (*get_sku)(struct adf_hw_device_data *self); + bool (*check_prod_sku)(struct adf_accel_dev *accel_dev); + int (*alloc_irq)(struct adf_accel_dev *accel_dev); + void (*free_irq)(struct adf_accel_dev *accel_dev); + 
void (*enable_error_correction)(struct adf_accel_dev *accel_dev); + int (*check_uncorrectable_error)(struct adf_accel_dev *accel_dev); + void (*print_err_registers)(struct adf_accel_dev *accel_dev); + void (*disable_error_interrupts)(struct adf_accel_dev *accel_dev); + int (*init_ras)(struct adf_accel_dev *accel_dev); + void (*exit_ras)(struct adf_accel_dev *accel_dev); + void (*disable_arb)(struct adf_accel_dev *accel_dev); + void (*update_ras_errors)(struct adf_accel_dev *accel_dev, int error); + bool (*ras_interrupts)(struct adf_accel_dev *accel_dev, + bool *reset_required); + int (*init_admin_comms)(struct adf_accel_dev *accel_dev); + void (*exit_admin_comms)(struct adf_accel_dev *accel_dev); + int (*send_admin_init)(struct adf_accel_dev *accel_dev); + void (*set_asym_rings_mask)(struct adf_accel_dev *accel_dev); + int (*get_ring_to_svc_map)(struct adf_accel_dev *accel_dev, + u16 *ring_to_svc_map); + uint32_t (*get_accel_cap)(struct adf_accel_dev *accel_dev); + int (*init_arb)(struct adf_accel_dev *accel_dev); + void (*exit_arb)(struct adf_accel_dev *accel_dev); + void (*get_arb_mapping)(struct adf_accel_dev *accel_dev, + const uint32_t **cfg); + int (*get_heartbeat_status)(struct adf_accel_dev *accel_dev); + uint32_t (*get_ae_clock)(struct adf_hw_device_data *self); + void (*disable_iov)(struct adf_accel_dev *accel_dev); + void (*configure_iov_threads)(struct adf_accel_dev *accel_dev, + bool enable); + void (*enable_ints)(struct adf_accel_dev *accel_dev); + bool (*check_slice_hang)(struct adf_accel_dev *accel_dev); + int (*set_ssm_wdtimer)(struct adf_accel_dev *accel_dev); + int (*enable_vf2pf_comms)(struct adf_accel_dev *accel_dev); + int (*disable_vf2pf_comms)(struct adf_accel_dev *accel_dev); + void (*reset_device)(struct adf_accel_dev *accel_dev); + void (*reset_hw_units)(struct adf_accel_dev *accel_dev); + int (*measure_clock)(struct adf_accel_dev *accel_dev); + void (*restore_device)(struct adf_accel_dev *accel_dev); + uint32_t 
(*get_obj_cfg_ae_mask)(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services services); + int (*add_pke_stats)(struct adf_accel_dev *accel_dev); + void (*remove_pke_stats)(struct adf_accel_dev *accel_dev); + int (*add_misc_error)(struct adf_accel_dev *accel_dev); + int (*count_ras_event)(struct adf_accel_dev *accel_dev, + u32 *ras_event, + char *aeidstr); + void (*remove_misc_error)(struct adf_accel_dev *accel_dev); + int (*configure_accel_units)(struct adf_accel_dev *accel_dev); + uint32_t (*get_objs_num)(struct adf_accel_dev *accel_dev); + const char *(*get_obj_name)(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services services); + void (*pre_reset)(struct adf_accel_dev *accel_dev); + void (*post_reset)(struct adf_accel_dev *accel_dev); + const char *fw_name; + const char *fw_mmp_name; + bool reset_ack; + uint32_t fuses; + uint32_t accel_capabilities_mask; + uint32_t instance_id; + uint16_t accel_mask; + u32 aerucm_mask; + u32 ae_mask; + u32 service_mask; + uint16_t tx_rings_mask; + uint8_t tx_rx_gap; + uint8_t num_banks; + u8 num_rings_per_bank; + uint8_t num_accel; + uint8_t num_logical_accel; + uint8_t num_engines; + uint8_t min_iov_compat_ver; + int (*get_storage_enabled)(struct adf_accel_dev *accel_dev, + uint32_t *storage_enabled); + u8 query_storage_cap; + u32 clock_frequency; + u8 storage_enable; + u32 extended_dc_capabilities; + int (*config_device)(struct adf_accel_dev *accel_dev); + u16 asym_rings_mask; + int (*get_fw_image_type)(struct adf_accel_dev *accel_dev, + enum adf_cfg_fw_image_type *fw_image_type); + u16 ring_to_svc_map; +} __packed; + +/* helper enum for performing CSR operations */ +enum operation { + AND, + OR, +}; + +/* 32-bit CSR write macro */ +#define ADF_CSR_WR(csr_base, csr_offset, val) \ + bus_write_4(csr_base, csr_offset, val) + +/* 64-bit CSR write macro */ +#ifdef __x86_64__ +#define ADF_CSR_WR64(csr_base, csr_offset, val) \ + bus_write_8(csr_base, csr_offset, val) +#else +static __inline void 
+adf_csr_wr64(struct resource *csr_base, bus_size_t offset, uint64_t value) +{ + bus_write_4(csr_base, offset, (uint32_t)value); + bus_write_4(csr_base, offset + 4, (uint32_t)(value >> 32)); +} +#define ADF_CSR_WR64(csr_base, csr_offset, val) \ + adf_csr_wr64(csr_base, csr_offset, val) +#endif + +/* 32-bit CSR read macro */ +#define ADF_CSR_RD(csr_base, csr_offset) bus_read_4(csr_base, csr_offset) + +/* 64-bit CSR read macro */ +#ifdef __x86_64__ +#define ADF_CSR_RD64(csr_base, csr_offset) bus_read_8(csr_base, csr_offset) +#else +static __inline uint64_t +adf_csr_rd64(struct resource *csr_base, bus_size_t offset) +{ + return (((uint64_t)bus_read_4(csr_base, offset)) | + (((uint64_t)bus_read_4(csr_base, offset + 4)) << 32)); +} +#define ADF_CSR_RD64(csr_base, csr_offset) adf_csr_rd64(csr_base, csr_offset) +#endif + +#define GET_DEV(accel_dev) ((accel_dev)->accel_pci_dev.pci_dev) +#define GET_BARS(accel_dev) ((accel_dev)->accel_pci_dev.pci_bars) +#define GET_HW_DATA(accel_dev) (accel_dev->hw_device) +#define GET_MAX_BANKS(accel_dev) (GET_HW_DATA(accel_dev)->num_banks) +#define GET_DEV_SKU(accel_dev) (accel_dev->accel_pci_dev.sku) +#define GET_NUM_RINGS_PER_BANK(accel_dev) \ + (GET_HW_DATA(accel_dev)->num_rings_per_bank) +#define GET_MAX_ACCELENGINES(accel_dev) (GET_HW_DATA(accel_dev)->num_engines) +#define accel_to_pci_dev(accel_ptr) accel_ptr->accel_pci_dev.pci_dev +#define GET_SRV_TYPE(ena_srv_mask, srv) \ + (((ena_srv_mask) >> (ADF_SRV_TYPE_BIT_LEN * (srv))) & ADF_SRV_TYPE_MASK) +#define SET_ASYM_MASK(asym_mask, srv) \ + ({ \ + typeof(srv) srv_ = (srv); \ + (asym_mask) |= ((1 << (srv_)*ADF_RINGS_PER_SRV_TYPE) | \ + (1 << ((srv_)*ADF_RINGS_PER_SRV_TYPE + 1))); \ + }) + +#define GET_NUM_RINGS_PER_BANK(accel_dev) \ + (GET_HW_DATA(accel_dev)->num_rings_per_bank) +#define GET_MAX_PROCESSES(accel_dev) \ + ({ \ + typeof(accel_dev) dev = (accel_dev); \ + (GET_MAX_BANKS(dev) * (GET_NUM_RINGS_PER_BANK(dev) / 2)); \ + }) +#define GET_DU_TABLE(accel_dev) (accel_dev->du_table) 
+ +static inline void +adf_csr_fetch_and_and(struct resource *csr, size_t offs, unsigned long mask) +{ + unsigned int val = ADF_CSR_RD(csr, offs); + + val &= mask; + ADF_CSR_WR(csr, offs, val); +} + +static inline void +adf_csr_fetch_and_or(struct resource *csr, size_t offs, unsigned long mask) +{ + unsigned int val = ADF_CSR_RD(csr, offs); + + val |= mask; + ADF_CSR_WR(csr, offs, val); +} + +static inline void +adf_csr_fetch_and_update(enum operation op, + struct resource *csr, + size_t offs, + unsigned long mask) +{ + switch (op) { + case AND: + adf_csr_fetch_and_and(csr, offs, mask); + break; + case OR: + adf_csr_fetch_and_or(csr, offs, mask); + break; + } +} + +struct pfvf_stats { + struct dentry *stats_file; + /* Messages put in CSR */ + unsigned int tx; + /* Messages read from CSR */ + unsigned int rx; + /* Interrupt fired but int bit was clear */ + unsigned int spurious; + /* Block messages sent */ + unsigned int blk_tx; + /* Block messages received */ + unsigned int blk_rx; + /* Blocks received with CRC errors */ + unsigned int crc_err; + /* CSR in use by other side */ + unsigned int busy; + /* Receiver did not acknowledge */ + unsigned int no_ack; + /* Collision detected */ + unsigned int collision; + /* Couldn't send a response */ + unsigned int tx_timeout; + /* Didn't receive a response */ + unsigned int rx_timeout; + /* Responses received */ + unsigned int rx_rsp; + /* Messages re-transmitted */ + unsigned int retry; + /* Event put timeout */ + unsigned int event_timeout; +}; + +#define NUM_PFVF_COUNTERS 14 + +void adf_get_admin_info(struct admin_info *admin_csrs_info); +struct adf_admin_comms { + bus_addr_t phy_addr; + bus_addr_t const_tbl_addr; + bus_addr_t aram_map_phys_addr; + bus_addr_t phy_hb_addr; + bus_dmamap_t aram_map; + bus_dmamap_t const_tbl_map; + bus_dmamap_t hb_map; + char *virt_addr; + char *virt_hb_addr; + struct resource *mailbox_addr; + struct sx lock; + struct bus_dmamem dma_mem; + struct bus_dmamem dma_hb; +}; + +struct 
icp_qat_fw_loader_handle; +struct adf_fw_loader_data { + struct icp_qat_fw_loader_handle *fw_loader; + const struct firmware *uof_fw; + const struct firmware *mmp_fw; +}; + +struct adf_accel_vf_info { + struct adf_accel_dev *accel_dev; + struct mutex pf2vf_lock; /* protect CSR access for PF2VF messages */ + u32 vf_nr; + bool init; + u8 compat_ver; + struct pfvf_stats pfvf_counters; +}; + +struct adf_fw_versions { + u8 fw_version_major; + u8 fw_version_minor; + u8 fw_version_patch; + u8 mmp_version_major; + u8 mmp_version_minor; + u8 mmp_version_patch; +}; + +#define ADF_COMPAT_CHECKER_MAX 8 +typedef int (*adf_iov_compat_checker_t)(struct adf_accel_dev *accel_dev, + u8 vf_compat_ver); +struct adf_accel_compat_manager { + u8 num_chker; + adf_iov_compat_checker_t iov_compat_checkers[ADF_COMPAT_CHECKER_MAX]; +}; + +struct adf_heartbeat; +struct adf_accel_dev { + struct adf_hw_aram_info *aram_info; + struct adf_accel_unit_info *au_info; + struct adf_etr_data *transport; + struct adf_hw_device_data *hw_device; + struct adf_cfg_device_data *cfg; + struct adf_fw_loader_data *fw_loader; + struct adf_admin_comms *admin; + struct adf_heartbeat *heartbeat; + struct adf_fw_versions fw_versions; + unsigned int autoreset_on_error; + struct adf_fw_counters_data *fw_counters_data; + struct sysctl_oid *debugfs_ae_config; + struct list_head crypto_list; + atomic_t *ras_counters; + unsigned long status; + atomic_t ref_count; + bus_dma_tag_t dma_tag; + struct sysctl_ctx_list sysctl_ctx; + struct sysctl_oid *ras_correctable; + struct sysctl_oid *ras_uncorrectable; + struct sysctl_oid *ras_fatal; + struct sysctl_oid *ras_reset; + struct sysctl_oid *pke_replay_dbgfile; + struct sysctl_oid *misc_error_dbgfile; + struct list_head list; + struct adf_accel_pci accel_pci_dev; + struct adf_accel_compat_manager *cm; + u8 compat_ver; + union { + struct { + /* vf_info is non-zero when SR-IOV is init'ed */ + struct adf_accel_vf_info *vf_info; + int num_vfs; + } pf; + struct { + struct resource 
*irq; + void *cookie; + char *irq_name; + struct task pf2vf_bh_tasklet; + struct mutex vf2pf_lock; /* protect CSR access */ + int iov_msg_completion; + uint8_t compatible; + uint8_t pf_version; + u8 pf2vf_block_byte; + u8 pf2vf_block_resp_type; + struct pfvf_stats pfvf_counters; + } vf; + } u1; + bool is_vf; + u32 accel_id; + void *lac_dev; +}; +#endif Index: sys/dev/qat/include/common/adf_cfg.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/adf_cfg.h @@ -0,0 +1,79 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_H_ +#define ADF_CFG_H_ + +#include +#include "adf_accel_devices.h" +#include "adf_cfg_common.h" +#include "adf_cfg_strings.h" + +struct adf_cfg_key_val { + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + enum adf_cfg_val_type type; + struct list_head list; +}; + +struct adf_cfg_section { + char name[ADF_CFG_MAX_SECTION_LEN_IN_BYTES]; + bool processed; + bool is_derived; + struct list_head list; + struct list_head param_head; +}; + +struct adf_cfg_device_data { + struct adf_cfg_device *dev; + struct list_head sec_list; + struct sysctl_oid *debug; + struct sx lock; +}; + +struct adf_cfg_depot_list { + struct list_head sec_list; +}; + +int adf_cfg_dev_add(struct adf_accel_dev *accel_dev); +void adf_cfg_dev_remove(struct adf_accel_dev *accel_dev); +int adf_cfg_depot_restore_all(struct adf_accel_dev *accel_dev, + struct adf_cfg_depot_list *dev_hp_cfg); +int adf_cfg_section_add(struct adf_accel_dev *accel_dev, const char *name); +void adf_cfg_del_all(struct adf_accel_dev *accel_dev); +void adf_cfg_depot_del_all(struct list_head *head); +int adf_cfg_add_key_value_param(struct adf_accel_dev *accel_dev, + const char *section_name, + const char *key, + const void *val, + enum adf_cfg_val_type type); +int adf_cfg_get_param_value(struct adf_accel_dev *accel_dev, + const char *section, + 
const char *name, + char *value); +int adf_cfg_save_section(struct adf_accel_dev *accel_dev, + const char *name, + struct adf_cfg_section *section); +int adf_cfg_depot_save_all(struct adf_accel_dev *accel_dev, + struct adf_cfg_depot_list *dev_hp_cfg); +struct adf_cfg_section *adf_cfg_sec_find(struct adf_accel_dev *accel_dev, + const char *sec_name); +int adf_cfg_derived_section_add(struct adf_accel_dev *accel_dev, + const char *name); +int adf_cfg_remove_key_param(struct adf_accel_dev *accel_dev, + const char *section_name, + const char *key); +int adf_cfg_setup_irq(struct adf_accel_dev *accel_dev); +void adf_cfg_set_asym_rings_mask(struct adf_accel_dev *accel_dev); +void adf_cfg_gen_dispatch_arbiter(struct adf_accel_dev *accel_dev, + const u32 *thrd_to_arb_map, + u32 *thrd_to_arb_map_gen, + u32 total_engines); +int adf_cfg_get_fw_image_type(struct adf_accel_dev *accel_dev, + enum adf_cfg_fw_image_type *fw_image_type); +int adf_cfg_get_services_enabled(struct adf_accel_dev *accel_dev, + u16 *ring_to_svc_map); +int adf_cfg_restore_section(struct adf_accel_dev *accel_dev, + struct adf_cfg_section *section); +void adf_cfg_keyval_del_all(struct list_head *head); +#endif Index: sys/dev/qat/include/common/adf_cfg_common.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/adf_cfg_common.h @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_COMMON_H_ +#define ADF_CFG_COMMON_H_ + +#include +#include +#include + +#define ADF_CFG_MAX_STR_LEN 128 +#define ADF_CFG_MAX_KEY_LEN_IN_BYTES ADF_CFG_MAX_STR_LEN +/* + * Max value length increased to 128 to support more length of values. + * like Dc0CoreAffinity = 0, 1, 2,... 
config values to max cores + */ +#define ADF_CFG_MAX_VAL_LEN_IN_BYTES 128 +#define ADF_CFG_MAX_SECTION_LEN_IN_BYTES ADF_CFG_MAX_STR_LEN +#define ADF_CFG_NULL_TERM_SIZE 1 +#define ADF_CFG_BASE_DEC 10 +#define ADF_CFG_BASE_HEX 16 +#define ADF_CFG_ALL_DEVICES 0xFFFE +#define ADF_CFG_NO_DEVICE 0xFFFF +#define ADF_CFG_AFFINITY_WHATEVER 0xFF +#define MAX_DEVICE_NAME_SIZE 32 +#define ADF_MAX_DEVICES (32 * 32) +#define ADF_MAX_ACCELENGINES 12 +#define ADF_CFG_STORAGE_ENABLED 1 +#define ADF_DEVS_ARRAY_SIZE BITS_TO_LONGS(ADF_MAX_DEVICES) +#define ADF_SSM_WDT_PKE_DEFAULT_VALUE 0x3000000 +#define ADF_WDT_TIMER_SYM_COMP_MS 3 +#define ADF_MIN_HB_TIMER_MS 100 +#define ADF_CFG_MAX_NUM_OF_SECTIONS 16 +#define ADF_CFG_MAX_NUM_OF_TOKENS 16 +#define ADF_CFG_MAX_TOKENS_IN_CONFIG 8 +#define ADF_CFG_RESP_POLL 1 +#define ADF_CFG_RESP_EPOLL 2 +#define ADF_CFG_DEF_CY_RING_ASYM_SIZE 64 +#define ADF_CFG_DEF_CY_RING_SYM_SIZE 512 +#define ADF_CFG_DEF_DC_RING_SIZE 512 +#define ADF_CFG_MAX_CORE_NUM 256 +#define ADF_CFG_MAX_TOKENS ADF_CFG_MAX_CORE_NUM +#define ADF_CFG_MAX_TOKEN_LEN 10 +#define ADF_CFG_ACCEL_DEF_COALES 1 +#define ADF_CFG_ACCEL_DEF_COALES_TIMER 10000 +#define ADF_CFG_ACCEL_DEF_COALES_NUM_MSG 0 +#define ADF_CFG_ASYM_SRV_MASK 1 +#define ADF_CFG_SYM_SRV_MASK 2 +#define ADF_CFG_DC_SRV_MASK 8 +#define ADF_CFG_UNKNOWN_SRV_MASK 0 +#define ADF_CFG_DEF_ASYM_MASK 0x03 +#define ADF_CFG_MAX_SERVICES 4 +#define ADF_MAX_SERVICES 3 + +enum adf_svc_type { + ADF_SVC_ASYM = 0, + ADF_SVC_SYM = 1, + ADF_SVC_DC = 2, + ADF_SVC_NONE = 3 +}; + +struct adf_pci_address { + unsigned char bus; + unsigned char dev; + unsigned char func; +} __packed; + +#define ADF_CFG_SERV_RING_PAIR_0_SHIFT 0 +#define ADF_CFG_SERV_RING_PAIR_1_SHIFT 3 +#define ADF_CFG_SERV_RING_PAIR_2_SHIFT 6 +#define ADF_CFG_SERV_RING_PAIR_3_SHIFT 9 + +enum adf_cfg_service_type { NA = 0, CRYPTO, COMP, SYM, ASYM, USED }; + +enum adf_cfg_bundle_type { FREE, KERNEL, USER }; + +enum adf_cfg_val_type { ADF_DEC, ADF_HEX, ADF_STR }; + +enum 
adf_device_type { + DEV_UNKNOWN = 0, + DEV_DH895XCC, + DEV_DH895XCCVF, + DEV_C62X, + DEV_C62XVF, + DEV_C3XXX, + DEV_C3XXXVF, + DEV_200XX, + DEV_200XXVF, + DEV_C4XXX, + DEV_C4XXXVF +}; + +enum adf_cfg_fw_image_type { + ADF_FW_IMAGE_DEFAULT = 0, + ADF_FW_IMAGE_CRYPTO, + ADF_FW_IMAGE_COMPRESSION, + ADF_FW_IMAGE_CUSTOM1 +}; + +struct adf_dev_status_info { + enum adf_device_type type; + uint16_t accel_id; + uint16_t instance_id; + uint8_t num_ae; + uint8_t num_accel; + uint8_t num_logical_accel; + uint8_t banks_per_accel; + uint8_t state; + uint8_t bus; + uint8_t dev; + uint8_t fun; + int domain; + char name[MAX_DEVICE_NAME_SIZE]; + u8 sku; + u32 node_id; + u32 device_mem_available; + u32 pci_device_id; +}; + +struct adf_cfg_device { + /* contains all the bundles info */ + struct adf_cfg_bundle **bundles; + /* contains all the instances info */ + struct adf_cfg_instance **instances; + int bundle_num; + int instance_index; + char name[ADF_CFG_MAX_STR_LEN]; + int dev_id; + int max_kernel_bundle_nr; + u16 total_num_inst; +}; + +enum adf_accel_serv_type { + ADF_ACCEL_SERV_NA = 0x0, + ADF_ACCEL_SERV_ASYM, + ADF_ACCEL_SERV_SYM, + ADF_ACCEL_SERV_RND, + ADF_ACCEL_SERV_DC +}; + +struct adf_cfg_ring { + u8 mode : 1; + enum adf_accel_serv_type serv_type; + u8 number : 4; +}; + +struct adf_cfg_bundle { + /* Section(s) name this bundle is shared by */ + char **sections; + int max_section; + int section_index; + int number; + enum adf_cfg_bundle_type type; + cpuset_t affinity_mask; + int polling_mode; + int instance_num; + int num_of_rings; + /* contains all the info about rings */ + struct adf_cfg_ring **rings; + u16 in_use; +}; + +struct adf_cfg_instance { + enum adf_cfg_service_type stype; + char name[ADF_CFG_MAX_STR_LEN]; + int polling_mode; + cpuset_t affinity_mask; + /* rings within an instance for services */ + int asym_tx; + int asym_rx; + int sym_tx; + int sym_rx; + int dc_tx; + int dc_rx; + int bundle; +}; + +#define ADF_CFG_MAX_CORE_NUM 256 +#define 
ADF_CFG_MAX_TOKENS_IN_CONFIG 8 +#define ADF_CFG_MAX_TOKEN_LEN 10 +#define ADF_CFG_MAX_TOKENS ADF_CFG_MAX_CORE_NUM +#define ADF_CFG_ACCEL_DEF_COALES 1 +#define ADF_CFG_ACCEL_DEF_COALES_TIMER 10000 +#define ADF_CFG_ACCEL_DEF_COALES_NUM_MSG 0 +#define ADF_CFG_RESP_EPOLL 2 +#define ADF_CFG_SERV_RING_PAIR_1_SHIFT 3 +#define ADF_CFG_SERV_RING_PAIR_2_SHIFT 6 +#define ADF_CFG_SERV_RING_PAIR_3_SHIFT 9 +#define ADF_CFG_RESP_POLL 1 +#define ADF_CFG_ASYM_SRV_MASK 1 +#define ADF_CFG_SYM_SRV_MASK 2 +#define ADF_CFG_DC_SRV_MASK 8 +#define ADF_CFG_UNKNOWN_SRV_MASK 0 +#define ADF_CFG_DEF_ASYM_MASK 0x03 +#define ADF_CFG_MAX_SERVICES 4 + +#define ADF_CFG_HB_DEFAULT_VALUE 500 +#define ADF_CFG_HB_COUNT_THRESHOLD 3 +#define ADF_MIN_HB_TIMER_MS 100 + +enum adf_device_heartbeat_status { + DEV_HB_UNRESPONSIVE = 0, + DEV_HB_ALIVE, + DEV_HB_UNSUPPORTED +}; + +struct adf_dev_heartbeat_status_ctl { + uint16_t device_id; + enum adf_device_heartbeat_status status; +}; +#endif Index: sys/dev/qat/include/common/adf_cfg_strings.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/adf_cfg_strings.h @@ -0,0 +1,132 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_STRINGS_H_ +#define ADF_CFG_STRINGS_H_ + +#define ADF_GENERAL_SEC "GENERAL" +#define ADF_KERNEL_SEC "KERNEL" +#define ADF_ACCEL_SEC "Accelerator" +#define ADF_NUM_CY "NumberCyInstances" +#define ADF_NUM_DC "NumberDcInstances" +#define ADF_RING_SYM_SIZE "NumConcurrentSymRequests" +#define ADF_RING_ASYM_SIZE "NumConcurrentAsymRequests" +#define ADF_RING_DC_SIZE "NumConcurrentRequests" +#define ADF_RING_ASYM_TX "RingAsymTx" +#define ADF_RING_SYM_TX "RingSymTx" +#define ADF_RING_RND_TX "RingNrbgTx" +#define ADF_RING_ASYM_RX "RingAsymRx" +#define ADF_RING_SYM_RX "RingSymRx" +#define ADF_RING_RND_RX "RingNrbgRx" +#define ADF_RING_DC_TX "RingTx" +#define ADF_RING_DC_RX "RingRx" +#define ADF_ETRMGR_BANK "Bank" 
+#define ADF_RING_BANK_NUM "BankNumber" +#define ADF_CY "Cy" +#define ADF_DC "Dc" +#define ADF_DC_EXTENDED_FEATURES "Device_DcExtendedFeatures" +#define ADF_ETRMGR_COALESCING_ENABLED "InterruptCoalescingEnabled" +#define ADF_ETRMGR_COALESCING_ENABLED_FORMAT \ + ADF_ETRMGR_BANK "%d" ADF_ETRMGR_COALESCING_ENABLED +#define ADF_ETRMGR_COALESCE_TIMER "InterruptCoalescingTimerNs" +#define ADF_ETRMGR_COALESCE_TIMER_FORMAT \ + ADF_ETRMGR_BANK "%d" ADF_ETRMGR_COALESCE_TIMER +#define ADF_ETRMGR_COALESCING_MSG_ENABLED "InterruptCoalescingNumResponses" +#define ADF_ETRMGR_COALESCING_MSG_ENABLED_FORMAT \ + ADF_ETRMGR_BANK "%d" ADF_ETRMGR_COALESCING_MSG_ENABLED +#define ADF_ETRMGR_CORE_AFFINITY "CoreAffinity" +#define ADF_ETRMGR_CORE_AFFINITY_FORMAT \ + ADF_ETRMGR_BANK "%d" ADF_ETRMGR_CORE_AFFINITY +#define ADF_ACCEL_STR "Accelerator%d" +#define ADF_INLINE_SEC "INLINE" +#define ADF_NUM_CY_ACCEL_UNITS "NumCyAccelUnits" +#define ADF_NUM_DC_ACCEL_UNITS "NumDcAccelUnits" +#define ADF_NUM_INLINE_ACCEL_UNITS "NumInlineAccelUnits" +#define ADF_INLINE_INGRESS "InlineIngress" +#define ADF_INLINE_EGRESS "InlineEgress" +#define ADF_INLINE_CONGEST_MNGT_PROFILE "InlineCongestionManagmentProfile" +#define ADF_INLINE_IPSEC_ALGO_GROUP "InlineIPsecAlgoGroup" +#define ADF_SERVICE_CY "cy" +#define ADF_SERVICE_SYM "sym" +#define ADF_SERVICE_DC "dc" +#define ADF_CFG_CY "cy" +#define ADF_CFG_DC "dc" +#define ADF_CFG_ASYM "asym" +#define ADF_CFG_SYM "sym" +#define ADF_SERVICE_INLINE "inline" +#define ADF_SERVICES_ENABLED "ServicesEnabled" +#define ADF_SERVICES_SEPARATOR ";" + +#define ADF_DEV_SSM_WDT_BULK "CySymAndDcWatchDogTimer" +#define ADF_DEV_SSM_WDT_PKE "CyAsymWatchDogTimer" +#define ADF_DH895XCC_AE_FW_NAME "icp_qat_ae.uof" +#define ADF_CXXX_AE_FW_NAME "icp_qat_ae.suof" +#define ADF_HEARTBEAT_TIMER "HeartbeatTimer" +#define ADF_MMP_VER_KEY "Firmware_MmpVer" +#define ADF_UOF_VER_KEY "Firmware_UofVer" +#define ADF_HW_REV_ID_KEY "HW_RevId" +#define ADF_STORAGE_FIRMWARE_ENABLED "StorageEnabled" 
+#define ADF_DEV_MAX_BANKS "Device_Max_Banks" +#define ADF_DEV_CAPABILITIES_MASK "Device_Capabilities_Mask" +#define ADF_DEV_NODE_ID "Device_NodeId" +#define ADF_DEV_PKG_ID "Device_PkgId" +#define ADF_FIRST_USER_BUNDLE "FirstUserBundle" +#define ADF_INTERNAL_USERSPACE_SEC_SUFF "_INT_" +#define ADF_LIMIT_DEV_ACCESS "LimitDevAccess" +#define DEV_LIMIT_CFG_ACCESS_TMPL "_D_L_ACC" +#define ADF_DEV_MAX_RINGS_PER_BANK "Device_Max_Rings_Per_Bank" +#define ADF_NUM_PROCESSES "NumProcesses" +#define ADF_DH895XCC_AE_FW_NAME_COMPRESSION "compression.uof" +#define ADF_DH895XCC_AE_FW_NAME_CRYPTO "crypto.uof" +#define ADF_DH895XCC_AE_FW_NAME_CUSTOM1 "custom1.uof" +#define ADF_CXXX_AE_FW_NAME_COMPRESSION "compression.suof" +#define ADF_CXXX_AE_FW_NAME_CRYPTO "crypto.suof" +#define ADF_CXXX_AE_FW_NAME_CUSTOM1 "custom1.suof" +#define ADF_DC_EXTENDED_FEATURES "Device_DcExtendedFeatures" +#define ADF_PKE_DISABLED "PkeServiceDisabled" +#define ADF_INTER_BUF_SIZE "DcIntermediateBufferSizeInKB" +#define ADF_AUTO_RESET_ON_ERROR "AutoResetOnError" +#define ADF_KERNEL_SAL_SEC "KERNEL_QAT" +#define ADF_CFG_DEF_CY_RING_ASYM_SIZE 64 +#define ADF_CFG_DEF_CY_RING_SYM_SIZE 512 +#define ADF_CFG_DEF_DC_RING_SIZE 512 +#define ADF_NUM_PROCESSES "NumProcesses" +#define ADF_SERVICES_ENABLED "ServicesEnabled" +#define ADF_CFG_CY "cy" +#define ADF_CFG_SYM "sym" +#define ADF_CFG_ASYM "asym" +#define ADF_CFG_DC "dc" +#define ADF_POLL_MODE "IsPolled" +#define ADF_DEV_KPT_ENABLE "KptEnabled" +#define ADF_STORAGE_FIRMWARE_ENABLED "StorageEnabled" +#define ADF_RL_FIRMWARE_ENABLED "RateLimitingEnabled" +#define ADF_SERVICES_PROFILE "ServicesProfile" +#define ADF_SERVICES_DEFAULT "DEFAULT" +#define ADF_SERVICES_CRYPTO "CRYPTO" +#define ADF_SERVICES_COMPRESSION "COMPRESSION" +#define ADF_SERVICES_CUSTOM1 "CUSTOM1" + +#define ADF_DC_RING_SIZE (ADF_DC ADF_RING_DC_SIZE) +#define ADF_CY_RING_SYM_SIZE (ADF_CY ADF_RING_SYM_SIZE) +#define ADF_CY_RING_ASYM_SIZE (ADF_CY ADF_RING_ASYM_SIZE) +#define 
ADF_CY_CORE_AFFINITY_FORMAT ADF_CY "%d" ADF_ETRMGR_CORE_AFFINITY +#define ADF_DC_CORE_AFFINITY_FORMAT ADF_DC "%d" ADF_ETRMGR_CORE_AFFINITY +#define ADF_CY_BANK_NUM_FORMAT ADF_CY "%d" ADF_RING_BANK_NUM +#define ADF_DC_BANK_NUM_FORMAT ADF_DC "%d" ADF_RING_BANK_NUM +#define ADF_CY_ASYM_TX_FORMAT ADF_CY "%d" ADF_RING_ASYM_TX +#define ADF_CY_SYM_TX_FORMAT ADF_CY "%d" ADF_RING_SYM_TX +#define ADF_CY_ASYM_RX_FORMAT ADF_CY "%d" ADF_RING_ASYM_RX +#define ADF_CY_SYM_RX_FORMAT ADF_CY "%d" ADF_RING_SYM_RX +#define ADF_DC_TX_FORMAT ADF_DC "%d" ADF_RING_DC_TX +#define ADF_DC_RX_FORMAT ADF_DC "%d" ADF_RING_DC_RX +#define ADF_CY_RING_SYM_SIZE_FORMAT ADF_CY "%d" ADF_RING_SYM_SIZE +#define ADF_CY_RING_ASYM_SIZE_FORMAT ADF_CY "%d" ADF_RING_ASYM_SIZE +#define ADF_DC_RING_SIZE_FORMAT ADF_DC "%d" ADF_RING_DC_SIZE +#define ADF_CY_NAME_FORMAT ADF_CY "%dName" +#define ADF_DC_NAME_FORMAT ADF_DC "%dName" +#define ADF_CY_POLL_MODE_FORMAT ADF_CY "%d" ADF_POLL_MODE +#define ADF_DC_POLL_MODE_FORMAT ADF_DC "%d" ADF_POLL_MODE +#define ADF_USER_SECTION_NAME_FORMAT "%s_INT_%d" +#define ADF_LIMITED_USER_SECTION_NAME_FORMAT "%s_DEV%d_INT_%d" +#define ADF_CONFIG_VERSION "ConfigVersion" +#endif Index: sys/dev/qat/include/common/adf_cfg_user.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/adf_cfg_user.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_USER_H_ +#define ADF_CFG_USER_H_ + +#include "adf_cfg_common.h" +#include "adf_cfg_strings.h" + +struct adf_user_cfg_key_val { + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + union { + struct adf_user_cfg_key_val *next; + uint64_t padding3; + }; + enum adf_cfg_val_type type; +}; + +struct adf_user_cfg_section { + char name[ADF_CFG_MAX_SECTION_LEN_IN_BYTES]; + union { + struct adf_user_cfg_key_val *params; + uint64_t padding1; + }; + union { + struct 
adf_user_cfg_section *next; + uint64_t padding3; + }; +}; + +struct adf_user_cfg_ctl_data { + union { + struct adf_user_cfg_section *config_section; + uint64_t padding; + }; + u32 device_id; +}; + +struct adf_user_reserve_ring { + u32 accel_id; + u32 bank_nr; + u32 ring_mask; +}; + +#endif Index: sys/dev/qat/include/common/adf_common_drv.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/adf_common_drv.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_DRV_H +#define ADF_DRV_H + +#include +#include "adf_accel_devices.h" +#include "icp_qat_fw_loader_handle.h" +#include "icp_qat_hal.h" +#include "adf_cfg_user.h" + +#define ADF_MAJOR_VERSION 0 +#define ADF_MINOR_VERSION 6 +#define ADF_BUILD_VERSION 0 +#define ADF_DRV_VERSION \ + __stringify(ADF_MAJOR_VERSION) "." __stringify( \ + ADF_MINOR_VERSION) "." __stringify(ADF_BUILD_VERSION) + +#define ADF_STATUS_RESTARTING 0 +#define ADF_STATUS_STARTING 1 +#define ADF_STATUS_CONFIGURED 2 +#define ADF_STATUS_STARTED 3 +#define ADF_STATUS_AE_INITIALISED 4 +#define ADF_STATUS_AE_UCODE_LOADED 5 +#define ADF_STATUS_AE_STARTED 6 +#define ADF_STATUS_PF_RUNNING 7 +#define ADF_STATUS_IRQ_ALLOCATED 8 +#define ADF_PCIE_FLR_ATTEMPT 10 +#define ADF_STATUS_SYSCTL_CTX_INITIALISED 9 + +#define PCI_EXP_AERUCS 0x104 + +/* PMISC BAR upper and lower offsets in PCIe config space */ +#define ADF_PMISC_L_OFFSET 0x18 +#define ADF_PMISC_U_OFFSET 0x1c + +enum adf_dev_reset_mode { ADF_DEV_RESET_ASYNC = 0, ADF_DEV_RESET_SYNC }; + +enum adf_event { + ADF_EVENT_INIT = 0, + ADF_EVENT_START, + ADF_EVENT_STOP, + ADF_EVENT_SHUTDOWN, + ADF_EVENT_RESTARTING, + ADF_EVENT_RESTARTED, + ADF_EVENT_ERROR, +}; + +struct adf_state { + enum adf_event dev_state; + int dev_id; +}; + +struct service_hndl { + int (*event_hld)(struct adf_accel_dev *accel_dev, enum adf_event event); + unsigned long 
init_status[ADF_DEVS_ARRAY_SIZE]; + unsigned long start_status[ADF_DEVS_ARRAY_SIZE]; + char *name; + struct list_head list; +}; + +static inline int +get_current_node(void) +{ + return PCPU_GET(domain); +} + +int adf_service_register(struct service_hndl *service); +int adf_service_unregister(struct service_hndl *service); + +int adf_dev_init(struct adf_accel_dev *accel_dev); +int adf_dev_start(struct adf_accel_dev *accel_dev); +int adf_dev_stop(struct adf_accel_dev *accel_dev); +void adf_dev_shutdown(struct adf_accel_dev *accel_dev); +int adf_dev_autoreset(struct adf_accel_dev *accel_dev); +int adf_dev_reset(struct adf_accel_dev *accel_dev, + enum adf_dev_reset_mode mode); +int adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev, + enum adf_dev_reset_mode mode); +void adf_error_notifier(uintptr_t arg); +int adf_init_fatal_error_wq(void); +void adf_exit_fatal_error_wq(void); +int adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr); +int adf_iov_notify(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr); +void adf_pf2vf_notify_restarting(struct adf_accel_dev *accel_dev); +int adf_notify_fatal_error(struct adf_accel_dev *accel_dev); +void adf_pf2vf_notify_fatal_error(struct adf_accel_dev *accel_dev); +void adf_pf2vf_notify_uncorrectable_error(struct adf_accel_dev *accel_dev); +void adf_pf2vf_notify_heartbeat_error(struct adf_accel_dev *accel_dev); +typedef int (*adf_iov_block_provider)(struct adf_accel_dev *accel_dev, + u8 **buffer, + u8 *length, + u8 *block_version, + u8 compatibility, + u8 byte_num); +int adf_iov_block_provider_register(u8 block_type, + const adf_iov_block_provider provider); +u8 adf_iov_is_block_provider_registered(u8 block_type); +int adf_iov_block_provider_unregister(u8 block_type, + const adf_iov_block_provider provider); +int adf_iov_block_get(struct adf_accel_dev *accel_dev, + u8 block_type, + u8 *block_version, + u8 *buffer, + u8 *length); +u8 adf_pfvf_crc(u8 start_crc, u8 *buf, u8 len); +int 
adf_iov_init_compat_manager(struct adf_accel_dev *accel_dev, + struct adf_accel_compat_manager **cm); +int adf_iov_shutdown_compat_manager(struct adf_accel_dev *accel_dev, + struct adf_accel_compat_manager **cm); +int adf_iov_register_compat_checker(struct adf_accel_dev *accel_dev, + const adf_iov_compat_checker_t cc); +int adf_iov_unregister_compat_checker(struct adf_accel_dev *accel_dev, + const adf_iov_compat_checker_t cc); +int adf_pf_enable_vf2pf_comms(struct adf_accel_dev *accel_dev); +int adf_pf_disable_vf2pf_comms(struct adf_accel_dev *accel_dev); +int adf_enable_vf2pf_comms(struct adf_accel_dev *accel_dev); +int adf_disable_vf2pf_comms(struct adf_accel_dev *accel_dev); +void adf_vf2pf_req_hndl(struct adf_accel_vf_info *vf_info); +void adf_devmgr_update_class_index(struct adf_hw_device_data *hw_data); +void adf_clean_vf_map(bool); +int adf_sysctl_add_fw_versions(struct adf_accel_dev *accel_dev); +int adf_sysctl_remove_fw_versions(struct adf_accel_dev *accel_dev); + +int adf_ctl_dev_register(void); +void adf_ctl_dev_unregister(void); +int adf_pf_vf_capabilities_init(struct adf_accel_dev *accel_dev); +int adf_pf_ext_dc_cap_msg_provider(struct adf_accel_dev *accel_dev, + u8 **buffer, + u8 *length, + u8 *block_version, + u8 compatibility); +int adf_pf_vf_ring_to_svc_init(struct adf_accel_dev *accel_dev); +int adf_pf_ring_to_svc_msg_provider(struct adf_accel_dev *accel_dev, + u8 **buffer, + u8 *length, + u8 *block_version, + u8 compatibility, + u8 byte_num); +int adf_devmgr_add_dev(struct adf_accel_dev *accel_dev, + struct adf_accel_dev *pf); +void adf_devmgr_rm_dev(struct adf_accel_dev *accel_dev, + struct adf_accel_dev *pf); +struct list_head *adf_devmgr_get_head(void); +struct adf_accel_dev *adf_devmgr_get_dev_by_id(uint32_t id); +struct adf_accel_dev *adf_devmgr_get_first(void); +struct adf_accel_dev *adf_devmgr_pci_to_accel_dev(device_t pci_dev); +int adf_devmgr_verify_id(uint32_t *id); +void adf_devmgr_get_num_dev(uint32_t *num); +int 
adf_devmgr_in_reset(struct adf_accel_dev *accel_dev); +int adf_dev_started(struct adf_accel_dev *accel_dev); +int adf_dev_restarting_notify(struct adf_accel_dev *accel_dev); +int adf_dev_restarting_notify_sync(struct adf_accel_dev *accel_dev); +int adf_dev_restarted_notify(struct adf_accel_dev *accel_dev); +int adf_dev_stop_notify_sync(struct adf_accel_dev *accel_dev); +int adf_ae_init(struct adf_accel_dev *accel_dev); +int adf_ae_shutdown(struct adf_accel_dev *accel_dev); +int adf_ae_fw_load(struct adf_accel_dev *accel_dev); +void adf_ae_fw_release(struct adf_accel_dev *accel_dev); +int adf_ae_start(struct adf_accel_dev *accel_dev); +int adf_ae_stop(struct adf_accel_dev *accel_dev); + +int adf_aer_store_ppaerucm_reg(device_t pdev, + struct adf_hw_device_data *hw_data); + +int adf_enable_aer(struct adf_accel_dev *accel_dev, device_t *adf); +void adf_disable_aer(struct adf_accel_dev *accel_dev); +void adf_reset_sbr(struct adf_accel_dev *accel_dev); +void adf_reset_flr(struct adf_accel_dev *accel_dev); +void adf_dev_pre_reset(struct adf_accel_dev *accel_dev); +void adf_dev_post_reset(struct adf_accel_dev *accel_dev); +void adf_dev_restore(struct adf_accel_dev *accel_dev); +int adf_init_aer(void); +void adf_exit_aer(void); +int adf_put_admin_msg_sync(struct adf_accel_dev *accel_dev, + u32 ae, + void *in, + void *out); +struct icp_qat_fw_init_admin_req; +struct icp_qat_fw_init_admin_resp; +int adf_send_admin(struct adf_accel_dev *accel_dev, + struct icp_qat_fw_init_admin_req *req, + struct icp_qat_fw_init_admin_resp *resp, + u32 ae_mask); +int adf_config_device(struct adf_accel_dev *accel_dev); + +int adf_init_admin_comms(struct adf_accel_dev *accel_dev); +void adf_exit_admin_comms(struct adf_accel_dev *accel_dev); +int adf_send_admin_init(struct adf_accel_dev *accel_dev); +int adf_get_fw_timestamp(struct adf_accel_dev *accel_dev, u64 *timestamp); +int adf_get_fw_pke_stats(struct adf_accel_dev *accel_dev, + u64 *suc_count, + u64 *unsuc_count); +int 
adf_dev_measure_clock(struct adf_accel_dev *accel_dev, + u32 *frequency, + u32 min, + u32 max); +int adf_clock_debugfs_add(struct adf_accel_dev *accel_dev); +u64 adf_clock_get_current_time(void); +int adf_init_arb(struct adf_accel_dev *accel_dev); +int adf_init_gen2_arb(struct adf_accel_dev *accel_dev); +void adf_exit_arb(struct adf_accel_dev *accel_dev); +void adf_disable_arb(struct adf_accel_dev *accel_dev); +void adf_update_ring_arb(struct adf_etr_ring_data *ring); +void +adf_enable_ring_arb(void *csr_addr, unsigned int bank_nr, unsigned int mask); +void +adf_disable_ring_arb(void *csr_addr, unsigned int bank_nr, unsigned int mask); +int adf_set_ssm_wdtimer(struct adf_accel_dev *accel_dev); +struct adf_accel_dev *adf_devmgr_get_dev_by_bdf(struct adf_pci_address *addr); +struct adf_accel_dev *adf_devmgr_get_dev_by_pci_bus(u8 bus); +int adf_get_vf_nr(struct adf_pci_address *vf_pci_addr, int *vf_nr); +u32 adf_get_slices_for_svc(struct adf_accel_dev *accel_dev, + enum adf_svc_type svc); +bool adf_is_bdf_equal(struct adf_pci_address *bdf1, + struct adf_pci_address *bdf2); +int adf_is_vf_nr_valid(struct adf_accel_dev *accel_dev, int vf_nr); +void adf_dev_get(struct adf_accel_dev *accel_dev); +void adf_dev_put(struct adf_accel_dev *accel_dev); +int adf_dev_in_use(struct adf_accel_dev *accel_dev); +int adf_init_etr_data(struct adf_accel_dev *accel_dev); +void adf_cleanup_etr_data(struct adf_accel_dev *accel_dev); + +struct qat_crypto_instance *qat_crypto_get_instance_node(int node); +void qat_crypto_put_instance(struct qat_crypto_instance *inst); +void qat_alg_callback(void *resp); +void qat_alg_asym_callback(void *resp); +int qat_algs_register(void); +void qat_algs_unregister(void); +int qat_asym_algs_register(void); +void qat_asym_algs_unregister(void); + +int adf_isr_resource_alloc(struct adf_accel_dev *accel_dev); +void adf_isr_resource_free(struct adf_accel_dev *accel_dev); +int adf_vf_isr_resource_alloc(struct adf_accel_dev *accel_dev); +void 
adf_vf_isr_resource_free(struct adf_accel_dev *accel_dev); + +int qat_hal_init(struct adf_accel_dev *accel_dev); +void qat_hal_deinit(struct icp_qat_fw_loader_handle *handle); +void qat_hal_start(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask); +void qat_hal_stop(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask); +void qat_hal_reset(struct icp_qat_fw_loader_handle *handle); +int qat_hal_clr_reset(struct icp_qat_fw_loader_handle *handle); +void qat_hal_set_live_ctx(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask); +int qat_hal_check_ae_active(struct icp_qat_fw_loader_handle *handle, + unsigned int ae); +int qat_hal_set_ae_lm_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + enum icp_qat_uof_regtype lm_type, + unsigned char mode); +void qat_hal_set_ae_tindex_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char mode); +void qat_hal_set_ae_scs_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char mode); +int qat_hal_set_ae_ctx_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char mode); +int qat_hal_set_ae_nn_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char mode); +void qat_hal_set_pc(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask, + unsigned int upc); +void qat_hal_wr_uwords(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int uaddr, + unsigned int words_num, + const uint64_t *uword); +void qat_hal_wr_coalesce_uwords(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int uaddr, + unsigned int words_num, + uint64_t *uword); + +void qat_hal_wr_umem(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int uword_addr, + unsigned int words_num, + unsigned int *data); +int qat_hal_get_ins_num(void); +int qat_hal_batch_wr_lm(struct 
icp_qat_fw_loader_handle *handle, + unsigned char ae, + struct icp_qat_uof_batch_init *lm_init_header); +int qat_hal_init_gpr(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned long ctx_mask, + enum icp_qat_uof_regtype reg_type, + unsigned short reg_num, + unsigned int regdata); +int qat_hal_init_wr_xfer(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned long ctx_mask, + enum icp_qat_uof_regtype reg_type, + unsigned short reg_num, + unsigned int regdata); +int qat_hal_init_rd_xfer(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned long ctx_mask, + enum icp_qat_uof_regtype reg_type, + unsigned short reg_num, + unsigned int regdata); +int qat_hal_init_nn(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned long ctx_mask, + unsigned short reg_num, + unsigned int regdata); +int qat_hal_wr_lm(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned short lm_addr, + unsigned int value); +int qat_uclo_wr_all_uimage(struct icp_qat_fw_loader_handle *handle); +void qat_uclo_del_obj(struct icp_qat_fw_loader_handle *handle); +void qat_uclo_del_mof(struct icp_qat_fw_loader_handle *handle); +int qat_uclo_wr_mimage(struct icp_qat_fw_loader_handle *handle, + const void *addr_ptr, + int mem_size); +int qat_uclo_map_obj(struct icp_qat_fw_loader_handle *handle, + const void *addr_ptr, + u32 mem_size, + const char *obj_name); + +void qat_hal_get_scs_neigh_ae(unsigned char ae, unsigned char *ae_neigh); +int qat_uclo_set_cfg_ae_mask(struct icp_qat_fw_loader_handle *handle, + unsigned int cfg_ae_mask); +void adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev); +void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev); +int adf_init_vf_wq(void); +void adf_exit_vf_wq(void); +void adf_flush_vf_wq(void); +int adf_vf2pf_init(struct adf_accel_dev *accel_dev); +void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev); +static inline int +adf_sriov_configure(device_t *pdev, int 
numvfs) +{ + return 0; +} + +static inline void +adf_disable_sriov(struct adf_accel_dev *accel_dev) +{ +} + +static inline void +adf_vf2pf_handler(struct adf_accel_vf_info *vf_info) +{ +} + +static inline int +adf_init_pf_wq(void) +{ + return 0; +} + +static inline void +adf_exit_pf_wq(void) +{ +} +#endif Index: sys/dev/qat/include/common/adf_transport.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/adf_transport.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_TRANSPORT_H +#define ADF_TRANSPORT_H + +#include "adf_accel_devices.h" + +struct adf_etr_ring_data; + +typedef void (*adf_callback_fn)(void *resp_msg); + +int adf_create_ring(struct adf_accel_dev *accel_dev, + const char *section, + u32 bank_num, + u32 num_mgs, + u32 msg_size, + const char *ring_name, + adf_callback_fn callback, + int poll_mode, + struct adf_etr_ring_data **ring_ptr); + +int adf_send_message(struct adf_etr_ring_data *ring, u32 *msg); +void adf_remove_ring(struct adf_etr_ring_data *ring); +int adf_poll_bank(u32 accel_id, u32 bank_num, u32 quota); +int adf_poll_all_banks(u32 accel_id, u32 quota); +#endif /* ADF_TRANSPORT_H */ Index: sys/dev/qat/include/common/adf_transport_access_macros.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/adf_transport_access_macros.h @@ -0,0 +1,169 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_TRANSPORT_ACCESS_MACROS_H +#define ADF_TRANSPORT_ACCESS_MACROS_H + +#include "adf_accel_devices.h" +#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL +#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL +#define ADF_BANK_INT_FLAG_CLEAR_MASK 0xFFFF +#define ADF_RING_CSR_RING_CONFIG 0x000 +#define ADF_RING_CSR_RING_LBASE 0x040 +#define ADF_RING_CSR_RING_UBASE 0x080 +#define 
ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_FLAG 0x170
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_SRCSEL_2 0x178
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE 0x80000000
+#define ADF_RING_BUNDLE_SIZE 0x1000
+#define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A
+#define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05
+#define ADF_COALESCING_MIN_TIME 0x1FF
+#define ADF_COALESCING_MAX_TIME 0xFFFFF
+#define ADF_COALESCING_DEF_TIME 0x27FF
+#define ADF_RING_NEAR_WATERMARK_512 0x08
+#define ADF_RING_NEAR_WATERMARK_0 0x00
+#define ADF_RING_EMPTY_SIG 0x7F7F7F7F
+
+/* Valid internal ring size values */
+#define ADF_RING_SIZE_128 0x01
+#define ADF_RING_SIZE_256 0x02
+#define ADF_RING_SIZE_512 0x03
+#define ADF_RING_SIZE_4K 0x06
+#define ADF_RING_SIZE_16K 0x08
+#define ADF_RING_SIZE_4M 0x10
+#define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
+#define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
+#define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+
+/* Valid internal msg size values */
+#define ADF_MSG_SIZE_32 0x01
+#define ADF_MSG_SIZE_64 0x02
+#define ADF_MSG_SIZE_128 0x04
+#define ADF_MIN_MSG_SIZE ADF_MSG_SIZE_32
+#define ADF_MAX_MSG_SIZE ADF_MSG_SIZE_128
+
+/* Size to bytes conversion macros for ring and msg size values */
+#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5)
+#define ADF_BYTES_TO_MSG_SIZE(SIZE) (SIZE >> 5)
+#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7)
+#define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7)
+
+/* Set the response quota to a high number */
+#define ADF_NO_RESPONSE_QUOTA 0xFFFFFFFF
+
+/* Minimum ring buffer size for memory allocation */
+#define ADF_RING_SIZE_BYTES_MIN(SIZE) \
+	((SIZE < ADF_SIZE_TO_RING_SIZE_IN_BYTES(ADF_RING_SIZE_4K)) ?
\ + ADF_SIZE_TO_RING_SIZE_IN_BYTES(ADF_RING_SIZE_4K) : \ + SIZE) +#define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6) +#define ADF_SIZE_TO_POW(SIZE) \ + ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | SIZE) & ~0x4) +/* Max outstanding requests */ +#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \ + ((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1) +#define BUILD_RING_CONFIG(size) \ + ((ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_FULL_WM) | \ + (ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_EMPTY_WM) | size) +#define BUILD_RESP_RING_CONFIG(size, watermark_nf, watermark_ne) \ + ((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM) | \ + (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) | size) +#define BUILD_RING_BASE_ADDR(addr, size) \ + ((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size)) +#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \ + ADF_CSR_RD(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_RING_HEAD + \ + (ring << 2)) +#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \ + ADF_CSR_RD(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_RING_TAIL + \ + (ring << 2)) +#define READ_CSR_E_STAT(csr_base_addr, bank) \ + ADF_CSR_RD(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_E_STAT) +#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_RING_CONFIG + \ + (ring << 2), \ + value) +#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \ + do { \ + uint32_t l_base = 0, u_base = 0; \ + l_base = (uint32_t)(value & 0xFFFFFFFF); \ + u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + \ + ADF_RING_CSR_RING_LBASE + (ring << 2), \ + l_base); \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + \ + ADF_RING_CSR_RING_UBASE + (ring << 2), \ + u_base); \ + } while (0) +static inline uint64_t +read_base(struct resource *csr_base_addr, 
uint32_t bank, uint32_t ring) +{ + uint32_t l_base, u_base; + uint64_t addr; + + l_base = ADF_CSR_RD(csr_base_addr, + (ADF_RING_BUNDLE_SIZE * bank) + + ADF_RING_CSR_RING_LBASE + (ring << 2)); + u_base = ADF_CSR_RD(csr_base_addr, + (ADF_RING_BUNDLE_SIZE * bank) + + ADF_RING_CSR_RING_UBASE + (ring << 2)); + + addr = (uint64_t)l_base & 0x00000000FFFFFFFFULL; + addr |= (uint64_t)u_base << 32 & 0xFFFFFFFF00000000ULL; + + return addr; +} + +#define READ_CSR_RING_BASE(csr_base_addr, bank, ring) \ + read_base(csr_base_addr, bank, ring) +#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_RING_HEAD + \ + (ring << 2), \ + value) +#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_RING_TAIL + \ + (ring << 2), \ + value) +#define WRITE_CSR_INT_FLAG(csr_base_addr, bank, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * (bank)) + ADF_RING_CSR_INT_FLAG, \ + value) +#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \ + do { \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + \ + ADF_RING_CSR_INT_SRCSEL, \ + ADF_BANK_INT_SRC_SEL_MASK_0); \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + \ + ADF_RING_CSR_INT_SRCSEL_2, \ + ADF_BANK_INT_SRC_SEL_MASK_X); \ + } while (0) +#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_INT_COL_EN, \ + value) +#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_INT_COL_CTL, \ + ADF_RING_CSR_INT_COL_CTL_ENABLE | value) +#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + \ + ADF_RING_CSR_INT_FLAG_AND_COL, \ + value) +#endif Index: sys/dev/qat/include/common/adf_transport_internal.h 
=================================================================== --- /dev/null +++ sys/dev/qat/include/common/adf_transport_internal.h @@ -0,0 +1,58 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_TRANSPORT_INTRN_H +#define ADF_TRANSPORT_INTRN_H + +#include "adf_transport.h" + +struct adf_etr_ring_debug_entry { + char ring_name[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + struct sysctl_oid *debug; +}; + +struct adf_etr_ring_data { + void *base_addr; + atomic_t *inflights; + struct mtx lock; /* protects ring data struct */ + adf_callback_fn callback; + struct adf_etr_bank_data *bank; + bus_addr_t dma_addr; + uint16_t head; + uint16_t tail; + uint8_t ring_number; + uint8_t ring_size; + uint8_t msg_size; + uint8_t reserved; + struct adf_etr_ring_debug_entry *ring_debug; + struct bus_dmamem dma_mem; + u32 csr_tail_offset; + u32 max_inflights; +}; + +struct adf_etr_bank_data { + struct adf_etr_ring_data *rings; + struct task resp_handler; + struct resource *csr_addr; + struct adf_accel_dev *accel_dev; + uint32_t irq_coalesc_timer; + uint16_t ring_mask; + uint16_t irq_mask; + struct mtx lock; /* protects bank data struct */ + struct sysctl_oid *bank_debug_dir; + struct sysctl_oid *bank_debug_cfg; + uint32_t bank_number; +}; + +struct adf_etr_data { + struct adf_etr_bank_data *banks; + struct sysctl_oid *debug; +}; + +void adf_response_handler(uintptr_t bank_addr); +int adf_handle_response(struct adf_etr_ring_data *ring, u32 quota); +int adf_bank_debugfs_add(struct adf_etr_bank_data *bank); +void adf_bank_debugfs_rm(struct adf_etr_bank_data *bank); +int adf_ring_debugfs_add(struct adf_etr_ring_data *ring, const char *name); +void adf_ring_debugfs_rm(struct adf_etr_ring_data *ring); +#endif Index: sys/dev/qat/include/common/icp_qat_fw_loader_handle.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/icp_qat_fw_loader_handle.h @@ -0,0 +1,53 @@ 
+/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef __ICP_QAT_FW_LOADER_HANDLE_H__ +#define __ICP_QAT_FW_LOADER_HANDLE_H__ +#include "icp_qat_uclo.h" + +struct icp_qat_fw_loader_ae_data { + unsigned int state; + unsigned int ustore_size; + unsigned int free_addr; + unsigned int free_size; + unsigned int live_ctx_mask; +}; + +struct icp_qat_fw_loader_hal_handle { + struct icp_qat_fw_loader_ae_data aes[ICP_QAT_UCLO_MAX_AE]; + unsigned int ae_mask; + unsigned int slice_mask; + unsigned int revision_id; + unsigned int ae_max_num; + unsigned int upc_mask; + unsigned int max_ustore; +}; + +struct icp_qat_fw_loader_handle { + struct icp_qat_fw_loader_hal_handle *hal_handle; + struct adf_accel_dev *accel_dev; + device_t pci_dev; + void *obj_handle; + void *sobj_handle; + void *mobj_handle; + bool fw_auth; + unsigned int cfg_ae_mask; + rman_res_t hal_sram_size; + struct resource *hal_sram_addr_v; + unsigned int hal_sram_offset; + struct resource *hal_misc_addr_v; + uintptr_t hal_cap_g_ctl_csr_addr_v; + uintptr_t hal_cap_ae_xfer_csr_addr_v; + uintptr_t hal_cap_ae_local_csr_addr_v; + uintptr_t hal_ep_csr_addr_v; +}; + +struct icp_firml_dram_desc { + struct bus_dmamem dram_mem; + + struct resource *dram_base_addr; + void *dram_base_addr_v; + bus_addr_t dram_bus_addr; + u64 dram_size; +}; +#endif Index: sys/dev/qat/include/common/icp_qat_hal.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/icp_qat_hal.h @@ -0,0 +1,196 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef __ICP_QAT_HAL_H +#define __ICP_QAT_HAL_H +#include "adf_accel_devices.h" +#include "icp_qat_fw_loader_handle.h" + +enum hal_global_csr { + MISC_CONTROL = 0x04, + ICP_RESET = 0x0c, + ICP_GLOBAL_CLK_ENABLE = 0x50 +}; + +enum { MISC_CONTROL_C4XXX = 0xAA0, + ICP_RESET_CPP0 = 0x938, + ICP_RESET_CPP1 = 0x93c, + 
ICP_GLOBAL_CLK_ENABLE_CPP0 = 0x964, + ICP_GLOBAL_CLK_ENABLE_CPP1 = 0x968 }; + +enum hal_ae_csr { + USTORE_ADDRESS = 0x000, + USTORE_DATA_LOWER = 0x004, + USTORE_DATA_UPPER = 0x008, + ALU_OUT = 0x010, + CTX_ARB_CNTL = 0x014, + CTX_ENABLES = 0x018, + CC_ENABLE = 0x01c, + CSR_CTX_POINTER = 0x020, + CTX_STS_INDIRECT = 0x040, + ACTIVE_CTX_STATUS = 0x044, + CTX_SIG_EVENTS_INDIRECT = 0x048, + CTX_SIG_EVENTS_ACTIVE = 0x04c, + CTX_WAKEUP_EVENTS_INDIRECT = 0x050, + LM_ADDR_0_INDIRECT = 0x060, + LM_ADDR_1_INDIRECT = 0x068, + LM_ADDR_2_INDIRECT = 0x0cc, + LM_ADDR_3_INDIRECT = 0x0d4, + INDIRECT_LM_ADDR_0_BYTE_INDEX = 0x0e0, + INDIRECT_LM_ADDR_1_BYTE_INDEX = 0x0e8, + INDIRECT_LM_ADDR_2_BYTE_INDEX = 0x10c, + INDIRECT_LM_ADDR_3_BYTE_INDEX = 0x114, + INDIRECT_T_INDEX = 0x0f8, + INDIRECT_T_INDEX_BYTE_INDEX = 0x0fc, + FUTURE_COUNT_SIGNAL_INDIRECT = 0x078, + TIMESTAMP_LOW = 0x0c0, + TIMESTAMP_HIGH = 0x0c4, + PROFILE_COUNT = 0x144, + SIGNATURE_ENABLE = 0x150, + AE_MISC_CONTROL = 0x160, + LOCAL_CSR_STATUS = 0x180, +}; + +enum fcu_csr { + FCU_CONTROL = 0x0, + FCU_STATUS = 0x4, + FCU_DRAM_ADDR_LO = 0xc, + FCU_DRAM_ADDR_HI = 0x10, + FCU_RAMBASE_ADDR_HI = 0x14, + FCU_RAMBASE_ADDR_LO = 0x18 +}; + +enum fcu_csr_c4xxx { + FCU_CONTROL_C4XXX = 0x0, + FCU_STATUS_C4XXX = 0x4, + FCU_STATUS1_C4XXX = 0xc, + FCU_AE_LOADED_C4XXX = 0x10, + FCU_DRAM_ADDR_LO_C4XXX = 0x14, + FCU_DRAM_ADDR_HI_C4XXX = 0x18, +}; + +enum fcu_cmd { + FCU_CTRL_CMD_NOOP = 0, + FCU_CTRL_CMD_AUTH = 1, + FCU_CTRL_CMD_LOAD = 2, + FCU_CTRL_CMD_START = 3 +}; + +enum fcu_sts { + FCU_STS_NO_STS = 0, + FCU_STS_VERI_DONE = 1, + FCU_STS_LOAD_DONE = 2, + FCU_STS_VERI_FAIL = 3, + FCU_STS_LOAD_FAIL = 4, + FCU_STS_BUSY = 5 +}; +#define UA_ECS (0x1 << 31) +#define ACS_ABO_BITPOS 31 +#define ACS_ACNO 0x7 +#define CE_ENABLE_BITPOS 0x8 +#define CE_LMADDR_0_GLOBAL_BITPOS 16 +#define CE_LMADDR_1_GLOBAL_BITPOS 17 +#define CE_LMADDR_2_GLOBAL_BITPOS 22 +#define CE_LMADDR_3_GLOBAL_BITPOS 23 +#define CE_T_INDEX_GLOBAL_BITPOS 21 +#define CE_NN_MODE_BITPOS 
20 +#define CE_REG_PAR_ERR_BITPOS 25 +#define CE_BREAKPOINT_BITPOS 27 +#define CE_CNTL_STORE_PARITY_ERROR_BITPOS 29 +#define CE_INUSE_CONTEXTS_BITPOS 31 +#define CE_NN_MODE (0x1 << CE_NN_MODE_BITPOS) +#define CE_INUSE_CONTEXTS (0x1 << CE_INUSE_CONTEXTS_BITPOS) +#define XCWE_VOLUNTARY (0x1) +#define LCS_STATUS (0x1) +#define MMC_SHARE_CS_BITPOS 2 +#define GLOBAL_CSR 0xA00 +#define FCU_CTRL_AE_POS 0x8 +#define FCU_AUTH_STS_MASK 0x7 +#define FCU_STS_DONE_POS 0x9 +#define FCU_STS_AUTHFWLD_POS 0X8 +#define FCU_LOADED_AE_POS 0x16 +#define FW_AUTH_WAIT_PERIOD 10 +#define FW_AUTH_MAX_RETRY 300 +#define FCU_OFFSET 0x8c0 +#define FCU_OFFSET_C4XXX 0x1000 +#define MAX_CPP_NUM 2 +#define AE_CPP_NUM 2 +#define AES_PER_CPP 16 +#define SLICES_PER_CPP 6 +#define ICP_QAT_AE_OFFSET 0x20000 +#define ICP_QAT_AE_OFFSET_C4XXX 0x40000 +#define ICP_QAT_CAP_OFFSET (ICP_QAT_AE_OFFSET + 0x10000) +#define ICP_QAT_CAP_OFFSET_C4XXX 0x70000 +#define LOCAL_TO_XFER_REG_OFFSET 0x800 +#define ICP_QAT_EP_OFFSET 0x3a000 +#define ICP_QAT_EP_OFFSET_C4XXX 0x60000 +#define MEM_CFG_ERR_BIT 0x20 + +#define CAP_CSR_ADDR(csr) (csr + handle->hal_cap_g_ctl_csr_addr_v) +#define SET_CAP_CSR(handle, csr, val) \ + ADF_CSR_WR(handle->hal_misc_addr_v, CAP_CSR_ADDR(csr), val) +#define GET_CAP_CSR(handle, csr) \ + ADF_CSR_RD(handle->hal_misc_addr_v, CAP_CSR_ADDR(csr)) +#define SET_GLB_CSR(handle, csr, val) \ + ({ \ + typeof(handle) handle_ = (handle); \ + typeof(csr) csr_ = (csr); \ + typeof(val) val_ = (val); \ + (IS_QAT_GEN3(pci_get_device(GET_DEV(handle_->accel_dev)))) ? \ + SET_CAP_CSR(handle_, (csr_), (val_)) : \ + SET_CAP_CSR(handle_, csr_ + GLOBAL_CSR, val_); \ + }) +#define GET_GLB_CSR(handle, csr) \ + ({ \ + typeof(handle) handle_ = (handle); \ + typeof(csr) csr_ = (csr); \ + (IS_QAT_GEN3(pci_get_device(GET_DEV(handle_->accel_dev)))) ? 
\ + (GET_CAP_CSR(handle_, (csr_))) : \ + (GET_CAP_CSR(handle_, (GLOBAL_CSR + (csr_)))); \ + }) +#define SET_FCU_CSR(handle, csr, val) \ + ({ \ + typeof(handle) handle_ = (handle); \ + typeof(csr) csr_ = (csr); \ + typeof(val) val_ = (val); \ + (IS_QAT_GEN3(pci_get_device(GET_DEV(handle_->accel_dev)))) ? \ + SET_CAP_CSR(handle_, \ + ((csr_) + FCU_OFFSET_C4XXX), \ + (val_)) : \ + SET_CAP_CSR(handle_, ((csr_) + FCU_OFFSET), (val_)); \ + }) +#define GET_FCU_CSR(handle, csr) \ + ({ \ + typeof(handle) handle_ = (handle); \ + typeof(csr) csr_ = (csr); \ + (IS_QAT_GEN3(pci_get_device(GET_DEV(handle_->accel_dev)))) ? \ + GET_CAP_CSR(handle_, (FCU_OFFSET_C4XXX + (csr_))) : \ + GET_CAP_CSR(handle_, (FCU_OFFSET + (csr_))); \ + }) +#define AE_CSR(handle, ae) \ + ((handle)->hal_cap_ae_local_csr_addr_v + ((ae) << 12)) +#define AE_CSR_ADDR(handle, ae, csr) (AE_CSR(handle, ae) + (0x3ff & (csr))) +#define SET_AE_CSR(handle, ae, csr, val) \ + ADF_CSR_WR(handle->hal_misc_addr_v, AE_CSR_ADDR(handle, ae, csr), val) +#define GET_AE_CSR(handle, ae, csr) \ + ADF_CSR_RD(handle->hal_misc_addr_v, AE_CSR_ADDR(handle, ae, csr)) +#define AE_XFER(handle, ae) \ + ((handle)->hal_cap_ae_xfer_csr_addr_v + ((ae) << 12)) +#define AE_XFER_ADDR(handle, ae, reg) \ + (AE_XFER(handle, ae) + (((reg)&0xff) << 2)) +#define SET_AE_XFER(handle, ae, reg, val) \ + ADF_CSR_WR(handle->hal_misc_addr_v, AE_XFER_ADDR(handle, ae, reg), val) +#define SRAM_WRITE(handle, addr, val) \ + ADF_CSR_WR((handle)->hal_sram_addr_v, addr, val) +#define GET_CSR_OFFSET(device_id, cap_offset_, ae_offset_, ep_offset_) \ + ({ \ + int gen3 = IS_QAT_GEN3(device_id); \ + cap_offset_ = \ + (gen3 ? ICP_QAT_CAP_OFFSET_C4XXX : ICP_QAT_CAP_OFFSET); \ + ae_offset_ = \ + (gen3 ? ICP_QAT_AE_OFFSET_C4XXX : ICP_QAT_AE_OFFSET); \ + ep_offset_ = \ + (gen3 ? 
ICP_QAT_EP_OFFSET_C4XXX : ICP_QAT_EP_OFFSET); \ + }) + +#endif Index: sys/dev/qat/include/common/icp_qat_uclo.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/icp_qat_uclo.h @@ -0,0 +1,558 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef __ICP_QAT_UCLO_H__ +#define __ICP_QAT_UCLO_H__ + +#define ICP_QAT_AC_895XCC_DEV_TYPE 0x00400000 +#define ICP_QAT_AC_C62X_DEV_TYPE 0x01000000 +#define ICP_QAT_AC_C3XXX_DEV_TYPE 0x02000000 +#define ICP_QAT_AC_200XX_DEV_TYPE 0x02000000 +#define ICP_QAT_AC_C4XXX_DEV_TYPE 0x04000000 +#define ICP_QAT_UCLO_MAX_AE 32 +#define ICP_QAT_UCLO_MAX_CTX 8 +#define ICP_QAT_UCLO_MAX_CPPNUM 2 +#define ICP_QAT_UCLO_MAX_UIMAGE (ICP_QAT_UCLO_MAX_AE * ICP_QAT_UCLO_MAX_CTX) +#define ICP_QAT_UCLO_MAX_USTORE 0x4000 +#define ICP_QAT_UCLO_MAX_XFER_REG 128 +#define ICP_QAT_UCLO_MAX_GPR_REG 128 +#define ICP_QAT_UCLO_MAX_LMEM_REG 1024 +#define ICP_QAT_UCLO_AE_ALL_CTX 0xff +#define ICP_QAT_UOF_OBJID_LEN 8 +#define ICP_QAT_UOF_FID 0xc6c2 +#define ICP_QAT_UOF_MAJVER 0x4 +#define ICP_QAT_UOF_MINVER 0x11 +#define ICP_QAT_UOF_OBJS "UOF_OBJS" +#define ICP_QAT_UOF_STRT "UOF_STRT" +#define ICP_QAT_UOF_IMAG "UOF_IMAG" +#define ICP_QAT_UOF_IMEM "UOF_IMEM" +#define ICP_QAT_UOF_LOCAL_SCOPE 1 +#define ICP_QAT_UOF_INIT_EXPR 0 +#define ICP_QAT_UOF_INIT_REG 1 +#define ICP_QAT_UOF_INIT_REG_CTX 2 +#define ICP_QAT_UOF_INIT_EXPR_ENDIAN_SWAP 3 +#define ICP_QAT_SUOF_OBJ_ID_LEN 8 +#define ICP_QAT_SUOF_FID 0x53554f46 +#define ICP_QAT_SUOF_MAJVER 0x0 +#define ICP_QAT_SUOF_MINVER 0x1 +#define ICP_QAT_SUOF_OBJ_NAME_LEN 128 +#define ICP_QAT_MOF_OBJ_ID_LEN 8 +#define ICP_QAT_MOF_OBJ_CHUNKID_LEN 8 +#define ICP_QAT_MOF_FID 0x00666f6d +#define ICP_QAT_MOF_MAJVER 0x0 +#define ICP_QAT_MOF_MINVER 0x1 +#define ICP_QAT_MOF_SYM_OBJS "SYM_OBJS" +#define ICP_QAT_SUOF_OBJS "SUF_OBJS" +#define ICP_QAT_SUOF_IMAG "SUF_IMAG" +#define ICP_QAT_SIMG_AE_INIT_SEQ_LEN (50 * 
sizeof(unsigned long long)) +#define ICP_QAT_SIMG_AE_INSTS_LEN (0x4000 * sizeof(unsigned long long)) +#define ICP_QAT_CSS_FWSK_MODULUS_LEN 256 +#define ICP_QAT_CSS_FWSK_EXPONENT_LEN 4 +#define ICP_QAT_CSS_FWSK_PAD_LEN 252 +#define ICP_QAT_CSS_FWSK_PUB_LEN \ + (ICP_QAT_CSS_FWSK_MODULUS_LEN + ICP_QAT_CSS_FWSK_EXPONENT_LEN + \ + ICP_QAT_CSS_FWSK_PAD_LEN) +#define ICP_QAT_CSS_SIGNATURE_LEN 256 +#define ICP_QAT_CSS_AE_IMG_LEN \ + (sizeof(struct icp_qat_simg_ae_mode) + ICP_QAT_SIMG_AE_INIT_SEQ_LEN + \ + ICP_QAT_SIMG_AE_INSTS_LEN) +#define ICP_QAT_CSS_AE_SIMG_LEN \ + (sizeof(struct icp_qat_css_hdr) + ICP_QAT_CSS_FWSK_PUB_LEN + \ + ICP_QAT_CSS_SIGNATURE_LEN + ICP_QAT_CSS_AE_IMG_LEN) +#define ICP_QAT_AE_IMG_OFFSET \ + (sizeof(struct icp_qat_css_hdr) + ICP_QAT_CSS_FWSK_MODULUS_LEN + \ + ICP_QAT_CSS_FWSK_EXPONENT_LEN + ICP_QAT_CSS_SIGNATURE_LEN) +#define ICP_QAT_CSS_MAX_IMAGE_LEN 0x40000 + +#define ICP_QAT_CTX_MODE(ae_mode) ((ae_mode)&0xf) +#define ICP_QAT_NN_MODE(ae_mode) (((ae_mode) >> 0x4) & 0xf) +#define ICP_QAT_SHARED_USTORE_MODE(ae_mode) (((ae_mode) >> 0xb) & 0x1) +#define RELOADABLE_CTX_SHARED_MODE(ae_mode) (((ae_mode) >> 0xc) & 0x1) + +#define ICP_QAT_LOC_MEM0_MODE(ae_mode) (((ae_mode) >> 0x8) & 0x1) +#define ICP_QAT_LOC_MEM1_MODE(ae_mode) (((ae_mode) >> 0x9) & 0x1) +#define ICP_QAT_LOC_MEM2_MODE(ae_mode) (((ae_mode) >> 0x6) & 0x1) +#define ICP_QAT_LOC_MEM3_MODE(ae_mode) (((ae_mode) >> 0x7) & 0x1) +#define ICP_QAT_LOC_TINDEX_MODE(ae_mode) (((ae_mode) >> 0xe) & 0x1) + +enum icp_qat_uof_mem_region { + ICP_QAT_UOF_SRAM_REGION = 0x0, + ICP_QAT_UOF_LMEM_REGION = 0x3, + ICP_QAT_UOF_UMEM_REGION = 0x5 +}; + +enum icp_qat_uof_regtype { + ICP_NO_DEST = 0, + ICP_GPA_REL = 1, + ICP_GPA_ABS = 2, + ICP_GPB_REL = 3, + ICP_GPB_ABS = 4, + ICP_SR_REL = 5, + ICP_SR_RD_REL = 6, + ICP_SR_WR_REL = 7, + ICP_SR_ABS = 8, + ICP_SR_RD_ABS = 9, + ICP_SR_WR_ABS = 10, + ICP_DR_REL = 19, + ICP_DR_RD_REL = 20, + ICP_DR_WR_REL = 21, + ICP_DR_ABS = 22, + ICP_DR_RD_ABS = 23, + ICP_DR_WR_ABS = 24, + 
ICP_LMEM = 26, + ICP_LMEM0 = 27, + ICP_LMEM1 = 28, + ICP_NEIGH_REL = 31, + ICP_LMEM2 = 61, + ICP_LMEM3 = 62, +}; + +enum icp_qat_css_fwtype { CSS_AE_FIRMWARE = 0, CSS_MMP_FIRMWARE = 1 }; + +struct icp_qat_uclo_page { + struct icp_qat_uclo_encap_page *encap_page; + struct icp_qat_uclo_region *region; + unsigned int flags; +}; + +struct icp_qat_uclo_region { + struct icp_qat_uclo_page *loaded; + struct icp_qat_uclo_page *page; +}; + +struct icp_qat_uclo_aeslice { + struct icp_qat_uclo_region *region; + struct icp_qat_uclo_page *page; + struct icp_qat_uclo_page *cur_page[ICP_QAT_UCLO_MAX_CTX]; + struct icp_qat_uclo_encapme *encap_image; + unsigned int ctx_mask_assigned; + unsigned int new_uaddr[ICP_QAT_UCLO_MAX_CTX]; +}; + +struct icp_qat_uclo_aedata { + unsigned int slice_num; + unsigned int eff_ustore_size; + struct icp_qat_uclo_aeslice ae_slices[ICP_QAT_UCLO_MAX_CTX]; + unsigned int shareable_ustore; +}; + +struct icp_qat_uof_encap_obj { + char *beg_uof; + struct icp_qat_uof_objhdr *obj_hdr; + struct icp_qat_uof_chunkhdr *chunk_hdr; + struct icp_qat_uof_varmem_seg *var_mem_seg; +}; + +struct icp_qat_uclo_encap_uwblock { + unsigned int start_addr; + unsigned int words_num; + uint64_t micro_words; +}; + +struct icp_qat_uclo_encap_page { + unsigned int def_page; + unsigned int page_region; + unsigned int beg_addr_v; + unsigned int beg_addr_p; + unsigned int micro_words_num; + unsigned int uwblock_num; + struct icp_qat_uclo_encap_uwblock *uwblock; +}; + +struct icp_qat_uclo_encapme { + struct icp_qat_uof_image *img_ptr; + struct icp_qat_uclo_encap_page *page; + unsigned int ae_reg_num; + struct icp_qat_uof_ae_reg *ae_reg; + unsigned int init_regsym_num; + struct icp_qat_uof_init_regsym *init_regsym; + unsigned int sbreak_num; + struct icp_qat_uof_sbreak *sbreak; + unsigned int uwords_num; +}; + +struct icp_qat_uclo_init_mem_table { + unsigned int entry_num; + struct icp_qat_uof_initmem *init_mem; +}; + +struct icp_qat_uclo_objhdr { + char *file_buff; + unsigned int 
checksum; + unsigned int size; +}; + +struct icp_qat_uof_strtable { + unsigned int table_len; + unsigned int reserved; + uint64_t strings; +}; + +struct icp_qat_uclo_objhandle { + unsigned int prod_type; + unsigned int prod_rev; + struct icp_qat_uclo_objhdr *obj_hdr; + struct icp_qat_uof_encap_obj encap_uof_obj; + struct icp_qat_uof_strtable str_table; + struct icp_qat_uclo_encapme ae_uimage[ICP_QAT_UCLO_MAX_UIMAGE]; + struct icp_qat_uclo_aedata ae_data[ICP_QAT_UCLO_MAX_AE]; + struct icp_qat_uclo_init_mem_table init_mem_tab; + struct icp_qat_uof_batch_init *lm_init_tab[ICP_QAT_UCLO_MAX_AE]; + struct icp_qat_uof_batch_init *umem_init_tab[ICP_QAT_UCLO_MAX_AE]; + int uimage_num; + int uword_in_bytes; + int global_inited; + unsigned int ae_num; + unsigned int ustore_phy_size; + void *obj_buf; + uint64_t *uword_buf; +}; + +struct icp_qat_uof_uword_block { + unsigned int start_addr; + unsigned int words_num; + unsigned int uword_offset; + unsigned int reserved; +}; + +struct icp_qat_uof_filehdr { + unsigned short file_id; + unsigned short reserved1; + char min_ver; + char maj_ver; + unsigned short reserved2; + unsigned short max_chunks; + unsigned short num_chunks; +}; + +struct icp_qat_uof_filechunkhdr { + char chunk_id[ICP_QAT_UOF_OBJID_LEN]; + unsigned int checksum; + unsigned int offset; + unsigned int size; +}; + +struct icp_qat_uof_objhdr { + unsigned int ac_dev_type; + unsigned short min_cpu_ver; + unsigned short max_cpu_ver; + short max_chunks; + short num_chunks; + unsigned int reserved1; + unsigned int reserved2; +}; + +struct icp_qat_uof_chunkhdr { + char chunk_id[ICP_QAT_UOF_OBJID_LEN]; + unsigned int offset; + unsigned int size; +}; + +struct icp_qat_uof_memvar_attr { + unsigned int offset_in_byte; + unsigned int value; +}; + +struct icp_qat_uof_initmem { + unsigned int sym_name; + char region; + char scope; + unsigned short reserved1; + unsigned int addr; + unsigned int num_in_bytes; + unsigned int val_attr_num; +}; + +struct icp_qat_uof_init_regsym { + 
unsigned int sym_name; + char init_type; + char value_type; + char reg_type; + unsigned char ctx; + unsigned int reg_addr; + unsigned int value; +}; + +struct icp_qat_uof_varmem_seg { + unsigned int sram_base; + unsigned int sram_size; + unsigned int sram_alignment; + unsigned int sdram_base; + unsigned int sdram_size; + unsigned int sdram_alignment; + unsigned int sdram1_base; + unsigned int sdram1_size; + unsigned int sdram1_alignment; + unsigned int scratch_base; + unsigned int scratch_size; + unsigned int scratch_alignment; +}; + +struct icp_qat_uof_gtid { + char tool_id[ICP_QAT_UOF_OBJID_LEN]; + int tool_ver; + unsigned int reserved1; + unsigned int reserved2; +}; + +struct icp_qat_uof_sbreak { + unsigned int page_num; + unsigned int virt_uaddr; + unsigned char sbreak_type; + unsigned char reg_type; + unsigned short reserved1; + unsigned int addr_offset; + unsigned int reg_addr; +}; + +struct icp_qat_uof_code_page { + unsigned int page_region; + unsigned int page_num; + unsigned char def_page; + unsigned char reserved2; + unsigned short reserved1; + unsigned int beg_addr_v; + unsigned int beg_addr_p; + unsigned int neigh_reg_tab_offset; + unsigned int uc_var_tab_offset; + unsigned int imp_var_tab_offset; + unsigned int imp_expr_tab_offset; + unsigned int code_area_offset; +}; + +struct icp_qat_uof_image { + unsigned int img_name; + unsigned int ae_assigned; + unsigned int ctx_assigned; + unsigned int ac_dev_type; + unsigned int entry_address; + unsigned int fill_pattern[2]; + unsigned int reloadable_size; + unsigned char sensitivity; + unsigned char reserved; + unsigned short ae_mode; + unsigned short max_ver; + unsigned short min_ver; + unsigned short image_attrib; + unsigned short reserved2; + unsigned short page_region_num; + unsigned short numpages; + unsigned int reg_tab_offset; + unsigned int init_reg_sym_tab; + unsigned int sbreak_tab; + unsigned int app_metadata; +}; + +struct icp_qat_uof_objtable { + unsigned int entry_num; +}; + +struct 
icp_qat_uof_ae_reg { + unsigned int name; + unsigned int vis_name; + unsigned short type; + unsigned short addr; + unsigned short access_mode; + unsigned char visible; + unsigned char reserved1; + unsigned short ref_count; + unsigned short reserved2; + unsigned int xo_id; +}; + +struct icp_qat_uof_code_area { + unsigned int micro_words_num; + unsigned int uword_block_tab; +}; + +struct icp_qat_uof_batch_init { + unsigned int ae; + unsigned int addr; + unsigned int *value; + unsigned int size; + struct icp_qat_uof_batch_init *next; +}; + +struct icp_qat_suof_img_hdr { + const char *simg_buf; + unsigned long simg_len; + const char *css_header; + const char *css_key; + const char *css_signature; + const char *css_simg; + unsigned long simg_size; + unsigned int ae_num; + unsigned int ae_mask; + unsigned int fw_type; + unsigned long simg_name; + unsigned long appmeta_data; +}; + +struct icp_qat_suof_img_tbl { + unsigned int num_simgs; + struct icp_qat_suof_img_hdr *simg_hdr; +}; + +struct icp_qat_suof_handle { + unsigned int file_id; + unsigned int check_sum; + char min_ver; + char maj_ver; + char fw_type; + const char *suof_buf; + unsigned int suof_size; + char *sym_str; + unsigned int sym_size; + struct icp_qat_suof_img_tbl img_table; +}; + +struct icp_qat_fw_auth_desc { + unsigned int img_len; + unsigned int ae_mask; + unsigned int css_hdr_high; + unsigned int css_hdr_low; + unsigned int img_high; + unsigned int img_low; + unsigned int signature_high; + unsigned int signature_low; + unsigned int fwsk_pub_high; + unsigned int fwsk_pub_low; + unsigned int img_ae_mode_data_high; + unsigned int img_ae_mode_data_low; + unsigned int img_ae_init_data_high; + unsigned int img_ae_init_data_low; + unsigned int img_ae_insts_high; + unsigned int img_ae_insts_low; +}; + +struct icp_qat_auth_chunk { + struct icp_qat_fw_auth_desc fw_auth_desc; + u64 chunk_size; + u64 chunk_bus_addr; +}; + +struct icp_qat_css_hdr { + unsigned int module_type; + unsigned int header_len; + unsigned 
int header_ver; + unsigned int module_id; + unsigned int module_vendor; + unsigned int date; + unsigned int size; + unsigned int key_size; + unsigned int module_size; + unsigned int exponent_size; + unsigned int fw_type; + unsigned int reserved[21]; +}; + +struct icp_qat_simg_ae_mode { + unsigned int file_id; + unsigned short maj_ver; + unsigned short min_ver; + unsigned int dev_type; + unsigned short devmax_ver; + unsigned short devmin_ver; + unsigned int ae_mask; + unsigned int ctx_enables; + char fw_type; + char ctx_mode; + char nn_mode; + char lm0_mode; + char lm1_mode; + char scs_mode; + char lm2_mode; + char lm3_mode; + char tindex_mode; + unsigned char reserved[7]; + char simg_name[256]; + char appmeta_data[256]; +}; + +struct icp_qat_suof_filehdr { + unsigned int file_id; + unsigned int check_sum; + char min_ver; + char maj_ver; + char fw_type; + char reserved; + unsigned short max_chunks; + unsigned short num_chunks; +}; + +struct icp_qat_suof_chunk_hdr { + char chunk_id[ICP_QAT_SUOF_OBJ_ID_LEN]; + u64 offset; + u64 size; +}; + +struct icp_qat_suof_strtable { + unsigned int tab_length; + unsigned int strings; +}; + +struct icp_qat_suof_objhdr { + unsigned int img_length; + unsigned int reserved; +}; + +struct icp_qat_mof_file_hdr { + unsigned int file_id; + unsigned int checksum; + char min_ver; + char maj_ver; + unsigned short reserved; + unsigned short max_chunks; + unsigned short num_chunks; +}; + +struct icp_qat_mof_chunkhdr { + char chunk_id[ICP_QAT_MOF_OBJ_ID_LEN]; + u64 offset; + u64 size; +}; + +struct icp_qat_mof_str_table { + unsigned int tab_len; + unsigned int strings; +}; + +struct icp_qat_mof_obj_hdr { + unsigned short max_chunks; + unsigned short num_chunks; + unsigned int reserved; +}; + +struct icp_qat_mof_obj_chunkhdr { + char chunk_id[ICP_QAT_MOF_OBJ_CHUNKID_LEN]; + u64 offset; + u64 size; + unsigned int name; + unsigned int reserved; +}; + +struct icp_qat_mof_objhdr { + char *obj_name; + const char *obj_buf; + unsigned int obj_size; +}; 
+ +struct icp_qat_mof_table { + unsigned int num_objs; + struct icp_qat_mof_objhdr *obj_hdr; +}; + +struct icp_qat_mof_handle { + unsigned int file_id; + unsigned int checksum; + char min_ver; + char maj_ver; + const char *mof_buf; + u32 mof_size; + char *sym_str; + unsigned int sym_size; + const char *uobjs_hdr; + const char *sobjs_hdr; + struct icp_qat_mof_table obj_table; +}; +#endif Index: sys/dev/qat/include/common/qat_freebsd.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/qat_freebsd.h @@ -0,0 +1,156 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef QAT_FREEBSD_H_ +#define QAT_FREEBSD_H_ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define PCI_VENDOR_ID_INTEL 0x8086 + +#if !defined(__bool_true_false_are_defined) +#define __bool_true_false_are_defined 1 +#define false 0 +#define true 1 +#if __STDC_VERSION__ < 199901L && __GNUC__ < 3 && !defined(__INTEL_COMPILER) +typedef int _Bool; +#endif +typedef _Bool bool; +#endif /* !__bool_true_false_are_defined && !__cplusplus */ + +#if __STDC_VERSION__ < 199901L && __GNUC__ < 3 && !defined(__INTEL_COMPILER) +typedef int _Bool; +#endif + +#define pause_ms(wmesg, ms) pause_sbt(wmesg, (ms)*SBT_1MS, 0, C_HARDCLOCK) + +/* Function sets the MaxPayload size of a PCI device. */ +int pci_set_max_payload(device_t dev, int payload_size); + +device_t pci_find_pf(device_t vf); + +MALLOC_DECLARE(M_QAT); + +struct msix_entry { + struct resource *irq; + void *cookie; +}; + +struct pci_device_id { + uint16_t vendor; + uint16_t device; +}; + +struct bus_dmamem { + bus_dma_tag_t dma_tag; + bus_dmamap_t dma_map; + void *dma_vaddr; + bus_addr_t dma_baddr; +}; + +/* + * Allocate a mapping. 
On success, zero is returned and the 'dma_vaddr' + * and 'dma_baddr' fields are populated with the virtual and bus addresses, + * respectively, of the mapping. + */ +int bus_dma_mem_create(struct bus_dmamem *mem, + bus_dma_tag_t parent, + bus_size_t alignment, + bus_addr_t lowaddr, + bus_size_t len, + int flags); + +/* + * Release a mapping created by bus_dma_mem_create(). + */ +void bus_dma_mem_free(struct bus_dmamem *mem); + +#define list_for_each_prev_safe(p, n, h) \ + for (p = (h)->prev, n = (p)->prev; p != (h); p = n, n = (p)->prev) + +static inline int +compat_strtoul(const char *cp, unsigned int base, unsigned long *res) +{ + char *end; + + *res = strtoul(cp, &end, base); + + /* skip newline character, if any */ + if (*end == '\n') + end++; + if (*cp == 0 || *end != 0) + return (-EINVAL); + return (0); +} + +static inline int +compat_strtouint(const char *cp, unsigned int base, unsigned int *res) +{ + char *end; + unsigned long temp; + + *res = temp = strtoul(cp, &end, base); + + /* skip newline character, if any */ + if (*end == '\n') + end++; + if (*cp == 0 || *end != 0) + return (-EINVAL); + if (temp != (unsigned int)temp) + return (-ERANGE); + return (0); +} + +static inline int +compat_strtou8(const char *cp, unsigned int base, unsigned char *res) +{ + char *end; + unsigned long temp; + + *res = temp = strtoul(cp, &end, base); + + /* skip newline character, if any */ + if (*end == '\n') + end++; + if (*cp == 0 || *end != 0) + return -EINVAL; + if (temp != (unsigned char)temp) + return -ERANGE; + return 0; +} + +#if __FreeBSD_version >= 1300500 +#undef dev_to_node +static inline int +dev_to_node(device_t dev) +{ + int numa_domain; + + if (!dev || bus_get_domain(dev, &numa_domain) != 0) + return (-1); + else + return (numa_domain); +} +#endif +#endif Index: sys/dev/qat/include/common/sal_statistics_strings.h =================================================================== --- /dev/null +++ sys/dev/qat/include/common/sal_statistics_strings.h @@ -0,0 
+1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef SAL_STATISTICS_STRINGS_H +#define SAL_STATISTICS_STRINGS_H + +/* + * Config values names for statistics + */ +#define SAL_STATS_CFG_ENABLED "statsGeneral" +/**< Config value name for enabling/disabling statistics */ +#define SAL_STATS_CFG_DC "statsDc" +/**< Config value name for enabling/disabling Compression statistics */ +#define SAL_STATS_CFG_DH "statsDh" +/**< Config value name for enabling/disabling Diffie-Helman statistics */ +#define SAL_STATS_CFG_DRBG "statsDrbg" +/**< Config value name for enabling/disabling DRBG statistics */ +#define SAL_STATS_CFG_DSA "statsDsa" +/**< Config value name for enabling/disabling DSA statistics */ +#define SAL_STATS_CFG_ECC "statsEcc" +/**< Config value name for enabling/disabling ECC statistics */ +#define SAL_STATS_CFG_KEYGEN "statsKeyGen" +/**< Config value name for enabling/disabling Key Gen statistics */ +#define SAL_STATS_CFG_LN "statsLn" +/**< Config value name for enabling/disabling Large Number statistics */ +#define SAL_STATS_CFG_PRIME "statsPrime" +/**< Config value name for enabling/disabling Prime statistics */ +#define SAL_STATS_CFG_RSA "statsRsa" +/**< Config value name for enabling/disabling RSA statistics */ +#define SAL_STATS_CFG_SYM "statsSym" +/**< Config value name for enabling/disabling Symmetric Crypto statistics */ + +#endif Index: sys/dev/qat/include/icp_qat_fw.h =================================================================== --- /dev/null +++ sys/dev/qat/include/icp_qat_fw.h @@ -0,0 +1,292 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef _ICP_QAT_FW_H_ +#define _ICP_QAT_FW_H_ +#include +#include "icp_qat_hw.h" + +#define QAT_FIELD_SET(flags, val, bitpos, mask) \ + { \ + (flags) = (((flags) & (~((mask) << (bitpos)))) | \ + (((val) & (mask)) << (bitpos))); \ + } + +#define QAT_FIELD_GET(flags, bitpos, 
mask) (((flags) >> (bitpos)) & (mask)) + +#define ICP_QAT_FW_REQ_DEFAULT_SZ 128 +#define ICP_QAT_FW_RESP_DEFAULT_SZ 32 +#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8 +#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF +#define ICP_QAT_FW_NUM_LONGWORDS_1 1 +#define ICP_QAT_FW_NUM_LONGWORDS_2 2 +#define ICP_QAT_FW_NUM_LONGWORDS_3 3 +#define ICP_QAT_FW_NUM_LONGWORDS_4 4 +#define ICP_QAT_FW_NUM_LONGWORDS_5 5 +#define ICP_QAT_FW_NUM_LONGWORDS_6 6 +#define ICP_QAT_FW_NUM_LONGWORDS_7 7 +#define ICP_QAT_FW_NUM_LONGWORDS_10 10 +#define ICP_QAT_FW_NUM_LONGWORDS_13 13 +#define ICP_QAT_FW_NULL_REQ_SERV_ID 1 + +enum icp_qat_fw_comn_resp_serv_id { + ICP_QAT_FW_COMN_RESP_SERV_NULL, + ICP_QAT_FW_COMN_RESP_SERV_CPM_FW, + ICP_QAT_FW_COMN_RESP_SERV_DELIMITER +}; + +enum icp_qat_fw_comn_request_id { + ICP_QAT_FW_COMN_REQ_NULL = 0, + ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3, + ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4, + ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7, + ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9, + ICP_QAT_FW_COMN_REQ_DELIMITER +}; + +struct icp_qat_fw_comn_req_hdr_cd_pars { + union { + struct { + uint64_t content_desc_addr; + uint16_t content_desc_resrvd1; + uint8_t content_desc_params_sz; + uint8_t content_desc_hdr_resrvd2; + uint32_t content_desc_resrvd3; + } s; + struct { + uint32_t serv_specif_fields[4]; + } s1; + } u; +}; + +struct icp_qat_fw_comn_req_mid { + uint64_t opaque_data; + uint64_t src_data_addr; + uint64_t dest_data_addr; + uint32_t src_length; + uint32_t dst_length; +}; + +struct icp_qat_fw_comn_req_cd_ctrl { + uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5]; +}; + +struct icp_qat_fw_comn_req_hdr { + uint8_t resrvd1; + uint8_t service_cmd_id; + uint8_t service_type; + uint8_t hdr_flags; + uint16_t serv_specif_flags; + uint16_t comn_req_flags; +}; + +struct icp_qat_fw_comn_req_rqpars { + uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13]; +}; + +struct icp_qat_fw_comn_req { + struct icp_qat_fw_comn_req_hdr comn_hdr; + struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars; + struct 
icp_qat_fw_comn_req_mid comn_mid; + struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars; + struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl; +}; + +struct icp_qat_fw_comn_error { + uint8_t xlat_err_code; + uint8_t cmp_err_code; +}; + +struct icp_qat_fw_comn_resp_hdr { + uint8_t resrvd1; + uint8_t service_id; + uint8_t response_type; + uint8_t hdr_flags; + struct icp_qat_fw_comn_error comn_error; + uint8_t comn_status; + uint8_t cmd_id; +}; + +struct icp_qat_fw_comn_resp { + struct icp_qat_fw_comn_resp_hdr comn_hdr; + uint64_t opaque_data; + uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4]; +}; + +#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1 +#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0 +#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7 +#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1 +#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F + +#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \ + icp_qat_fw_comn_req_hdr_t.service_type + +#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \ + icp_qat_fw_comn_req_hdr_t.service_type = val + +#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \ + icp_qat_fw_comn_req_hdr_t.service_cmd_id + +#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \ + icp_qat_fw_comn_req_hdr_t.service_cmd_id = val + +#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \ + ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags) + +#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \ + ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) + +#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \ + QAT_FIELD_GET(hdr_flags, \ + ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \ + ICP_QAT_FW_COMN_VALID_FLAG_MASK) + +#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \ + (hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK) + +#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \ + QAT_FIELD_SET((hdr_t.hdr_flags), \ + (val), \ + ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \ + ICP_QAT_FW_COMN_VALID_FLAG_MASK) + +#define 
ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \ + (((valid)&ICP_QAT_FW_COMN_VALID_FLAG_MASK) \ + << ICP_QAT_FW_COMN_VALID_FLAG_BITPOS) + +#define QAT_COMN_PTR_TYPE_BITPOS 0 +#define QAT_COMN_PTR_TYPE_MASK 0x1 +#define QAT_COMN_CD_FLD_TYPE_BITPOS 1 +#define QAT_COMN_CD_FLD_TYPE_MASK 0x1 +#define QAT_COMN_PTR_TYPE_FLAT 0x0 +#define QAT_COMN_PTR_TYPE_SGL 0x1 +#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0 +#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1 + +#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \ + ((((cdt)&QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) | \ + (((ptr)&QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS)) + +#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \ + QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK) + +#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \ + QAT_FIELD_GET(flags, \ + QAT_COMN_CD_FLD_TYPE_BITPOS, \ + QAT_COMN_CD_FLD_TYPE_MASK) + +#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \ + QAT_FIELD_SET(flags, \ + val, \ + QAT_COMN_PTR_TYPE_BITPOS, \ + QAT_COMN_PTR_TYPE_MASK) + +#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \ + QAT_FIELD_SET(flags, \ + val, \ + QAT_COMN_CD_FLD_TYPE_BITPOS, \ + QAT_COMN_CD_FLD_TYPE_MASK) + +#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4 +#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0 +#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0 +#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F + +#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \ + ((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) >> \ + (ICP_QAT_FW_COMN_NEXT_ID_BITPOS)) + +#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \ + { \ + ((cd_ctrl_hdr_t)->next_curr_id) = \ + ((((cd_ctrl_hdr_t)->next_curr_id) & \ + ICP_QAT_FW_COMN_CURR_ID_MASK) | \ + ((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) & \ + ICP_QAT_FW_COMN_NEXT_ID_MASK)); \ + } + +#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \ + (((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK) + +#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \ + { \ + 
((cd_ctrl_hdr_t)->next_curr_id) = \ + ((((cd_ctrl_hdr_t)->next_curr_id) & \ + ICP_QAT_FW_COMN_NEXT_ID_MASK) | \ + ((val)&ICP_QAT_FW_COMN_CURR_ID_MASK)); \ + } + +#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7 +#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1 +#define QAT_COMN_RESP_PKE_STATUS_BITPOS 6 +#define QAT_COMN_RESP_PKE_STATUS_MASK 0x1 +#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5 +#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1 +#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4 +#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1 +#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3 +#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1 + +#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \ + ((((crypto)&QAT_COMN_RESP_CRYPTO_STATUS_MASK) \ + << QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \ + (((comp)&QAT_COMN_RESP_CMP_STATUS_MASK) \ + << QAT_COMN_RESP_CMP_STATUS_BITPOS) | \ + (((xlat)&QAT_COMN_RESP_XLAT_STATUS_MASK) \ + << QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \ + (((eolb)&QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) \ + << QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS)) + +#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \ + QAT_COMN_RESP_CRYPTO_STATUS_MASK) + +#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_CMP_STATUS_BITPOS, \ + QAT_COMN_RESP_CMP_STATUS_MASK) + +#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_XLAT_STATUS_BITPOS, \ + QAT_COMN_RESP_XLAT_STATUS_MASK) + +#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \ + QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) + +#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0 +#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1 +#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0 +#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1 +#define ERR_CODE_NO_ERROR 0 +#define ERR_CODE_INVALID_BLOCK_TYPE -1 
+#define ERR_CODE_NO_MATCH_ONES_COMP -2 +#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3 +#define ERR_CODE_INCOMPLETE_LEN -4 +#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5 +#define ERR_CODE_RPT_GT_SPEC_LEN -6 +#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7 +#define ERR_CODE_INV_DIS_CODE_LEN -8 +#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9 +#define ERR_CODE_DIS_TOO_FAR_BACK -10 +#define ERR_CODE_OVERFLOW_ERROR -11 +#define ERR_CODE_SOFT_ERROR -12 +#define ERR_CODE_FATAL_ERROR -13 +#define ERR_CODE_SSM_ERROR -14 +#define ERR_CODE_ENDPOINT_ERROR -15 + +enum icp_qat_fw_slice { + ICP_QAT_FW_SLICE_NULL = 0, + ICP_QAT_FW_SLICE_CIPHER = 1, + ICP_QAT_FW_SLICE_AUTH = 2, + ICP_QAT_FW_SLICE_DRAM_RD = 3, + ICP_QAT_FW_SLICE_DRAM_WR = 4, + ICP_QAT_FW_SLICE_COMP = 5, + ICP_QAT_FW_SLICE_XLAT = 6, + ICP_QAT_FW_SLICE_DELIMITER +}; +#endif Index: sys/dev/qat/include/icp_qat_fw_init_admin.h =================================================================== --- /dev/null +++ sys/dev/qat/include/icp_qat_fw_init_admin.h @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef _ICP_QAT_FW_INIT_ADMIN_H_ +#define _ICP_QAT_FW_INIT_ADMIN_H_ + +#include "icp_qat_fw.h" + +enum icp_qat_fw_init_admin_cmd_id { + ICP_QAT_FW_INIT_ME = 0, + ICP_QAT_FW_TRNG_ENABLE = 1, + ICP_QAT_FW_TRNG_DISABLE = 2, + ICP_QAT_FW_CONSTANTS_CFG = 3, + ICP_QAT_FW_STATUS_GET = 4, + ICP_QAT_FW_COUNTERS_GET = 5, + ICP_QAT_FW_LOOPBACK = 6, + ICP_QAT_FW_HEARTBEAT_SYNC = 7, + ICP_QAT_FW_HEARTBEAT_GET = 8, + ICP_QAT_FW_COMP_CAPABILITY_GET = 9, + ICP_QAT_FW_CRYPTO_CAPABILITY_GET = 10, + ICP_QAT_FW_HEARTBEAT_TIMER_SET = 13, + ICP_QAT_FW_RL_SLA_CONFIG = 14, + ICP_QAT_FW_RL_INIT = 15, + ICP_QAT_FW_RL_DU_START = 16, + ICP_QAT_FW_RL_DU_STOP = 17, + ICP_QAT_FW_TIMER_GET = 19, + ICP_QAT_FW_CNV_STATS_GET = 20, + ICP_QAT_FW_PKE_REPLAY_STATS_GET = 21 +}; + +enum icp_qat_fw_init_admin_resp_status { + ICP_QAT_FW_INIT_RESP_STATUS_SUCCESS = 0, + ICP_QAT_FW_INIT_RESP_STATUS_FAIL = 1, + 
ICP_QAT_FW_INIT_RESP_STATUS_UNSUPPORTED = 4 +}; + +enum icp_qat_fw_cnv_error_type { + CNV_ERR_TYPE_NO_ERROR = 0, + CNV_ERR_TYPE_CHECKSUM_ERROR, + CNV_ERR_TYPE_DECOMP_PRODUCED_LENGTH_ERROR, + CNV_ERR_TYPE_DECOMPRESSION_ERROR, + CNV_ERR_TYPE_TRANSLATION_ERROR, + CNV_ERR_TYPE_DECOMP_CONSUMED_LENGTH_ERROR, + CNV_ERR_TYPE_UNKNOWN_ERROR +}; + +#define CNV_ERROR_TYPE_GET(latest_error) \ + ({ \ + __typeof__(latest_error) _lerror = latest_error; \ + (_lerror >> 12) > CNV_ERR_TYPE_UNKNOWN_ERROR ? \ + CNV_ERR_TYPE_UNKNOWN_ERROR : \ + (enum icp_qat_fw_cnv_error_type)(_lerror >> 12); \ + }) +#define CNV_ERROR_LENGTH_DELTA_GET(latest_error) \ + ({ \ + __typeof__(latest_error) _lerror = latest_error; \ + ((s16)((_lerror & 0x0FFF) | (_lerror & 0x0800 ? 0xF000 : 0))); \ + }) +#define CNV_ERROR_DECOMP_STATUS_GET(latest_error) ((s8)(latest_error & 0xFF)) + +struct icp_qat_fw_init_admin_req { + u16 init_cfg_sz; + u8 resrvd1; + u8 cmd_id; + u32 max_req_duration; + u64 opaque_data; + + union { + /* ICP_QAT_FW_INIT_ME */ + struct { + u64 resrvd2; + u16 ibuf_size_in_kb; + u16 resrvd3; + u32 resrvd4; + }; + /* ICP_QAT_FW_CONSTANTS_CFG */ + struct { + u64 init_cfg_ptr; + u64 resrvd5; + }; + /* ICP_QAT_FW_HEARTBEAT_TIMER_SET */ + struct { + u64 hb_cfg_ptr; + u32 heartbeat_ticks; + u32 resrvd6; + }; + /* ICP_QAT_FW_RL_SLA_CONFIG */ + struct { + u32 credit_per_sla; + u8 service_id; + u8 vf_id; + u8 resrvd7; + u8 resrvd8; + u32 resrvd9; + u32 resrvd10; + }; + /* ICP_QAT_FW_RL_INIT */ + struct { + u32 rl_period; + u8 config; + u8 resrvd11; + u8 num_me; + u8 resrvd12; + u8 pke_svc_arb_map; + u8 bulk_crypto_svc_arb_map; + u8 compression_svc_arb_map; + u8 resrvd13; + u32 resrvd14; + }; + /* ICP_QAT_FW_RL_DU_STOP */ + struct { + u64 cfg_ptr; + u32 resrvd15; + u32 resrvd16; + }; + }; +} __packed; + +struct icp_qat_fw_init_admin_resp { + u8 flags; + u8 resrvd1; + u8 status; + u8 cmd_id; + union { + u32 resrvd2; + u32 ras_event_count; + /* ICP_QAT_FW_STATUS_GET */ + struct { + u16 version_minor_num; + 
u16 version_major_num; + }; + /* ICP_QAT_FW_COMP_CAPABILITY_GET */ + u32 extended_features; + /* ICP_QAT_FW_CNV_STATS_GET */ + struct { + u16 error_count; + u16 latest_error; + }; + }; + u64 opaque_data; + union { + u32 resrvd3[4]; + /* ICP_QAT_FW_STATUS_GET */ + struct { + u32 version_patch_num; + u8 context_id; + u8 ae_id; + u16 resrvd4; + u64 resrvd5; + }; + /* ICP_QAT_FW_COMP_CAPABILITY_GET */ + struct { + u16 compression_algos; + u16 checksum_algos; + u32 deflate_capabilities; + u32 resrvd6; + u32 deprecated; + }; + /* ICP_QAT_FW_CRYPTO_CAPABILITY_GET */ + struct { + u32 cipher_algos; + u32 hash_algos; + u16 keygen_algos; + u16 other; + u16 public_key_algos; + u16 prime_algos; + }; + /* ICP_QAT_FW_RL_DU_STOP */ + struct { + u32 resrvd7; + u8 granularity; + u8 resrvd8; + u16 resrvd9; + u32 total_du_time; + u32 resrvd10; + }; + /* ICP_QAT_FW_TIMER_GET */ + struct { + u64 timestamp; + u64 resrvd11; + }; + /* ICP_QAT_FW_COUNTERS_GET */ + struct { + u64 req_rec_count; + u64 resp_sent_count; + }; + /* ICP_QAT_FW_PKE_REPLAY_STATS_GET */ + struct { + u32 successful_count; + u32 unsuccessful_count; + u64 resrvd12; + }; + }; +} __packed; + +enum icp_qat_fw_init_admin_init_flag { ICP_QAT_FW_INIT_FLAG_PKE_DISABLED = 0 }; + +struct icp_qat_fw_init_admin_hb_cnt { + u16 resp_heartbeat_cnt; + u16 req_heartbeat_cnt; +}; + +struct icp_qat_fw_init_admin_hb_stats { + struct icp_qat_fw_init_admin_hb_cnt stats[ADF_NUM_HB_CNT_PER_AE]; +}; + +#define ICP_QAT_FW_COMN_HEARTBEAT_OK 0 +#define ICP_QAT_FW_COMN_HEARTBEAT_BLOCKED 1 +#define ICP_QAT_FW_COMN_HEARTBEAT_FLAG_BITPOS 0 +#define ICP_QAT_FW_COMN_HEARTBEAT_FLAG_MASK 0x1 +#define ICP_QAT_FW_COMN_STATUS_RESRVD_FLD_MASK 0xFE +#define ICP_QAT_FW_COMN_HEARTBEAT_HDR_FLAG_GET(hdr_t) \ + ICP_QAT_FW_COMN_HEARTBEAT_FLAG_GET(hdr_t.flags) + +#define ICP_QAT_FW_COMN_HEARTBEAT_HDR_FLAG_SET(hdr_t, val) \ + ICP_QAT_FW_COMN_HEARTBEAT_FLAG_SET(hdr_t, val) + +#define ICP_QAT_FW_COMN_HEARTBEAT_FLAG_GET(flags) \ + QAT_FIELD_GET(flags, \ + 
ICP_QAT_FW_COMN_HEARTBEAT_FLAG_BITPOS, \ + ICP_QAT_FW_COMN_HEARTBEAT_FLAG_MASK) +#endif Index: sys/dev/qat/include/icp_qat_hw.h =================================================================== --- /dev/null +++ sys/dev/qat/include/icp_qat_hw.h @@ -0,0 +1,326 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef _ICP_QAT_HW_H_ +#define _ICP_QAT_HW_H_ + +enum icp_qat_hw_ae_id { + ICP_QAT_HW_AE_0 = 0, + ICP_QAT_HW_AE_1 = 1, + ICP_QAT_HW_AE_2 = 2, + ICP_QAT_HW_AE_3 = 3, + ICP_QAT_HW_AE_4 = 4, + ICP_QAT_HW_AE_5 = 5, + ICP_QAT_HW_AE_6 = 6, + ICP_QAT_HW_AE_7 = 7, + ICP_QAT_HW_AE_8 = 8, + ICP_QAT_HW_AE_9 = 9, + ICP_QAT_HW_AE_10 = 10, + ICP_QAT_HW_AE_11 = 11, + ICP_QAT_HW_AE_DELIMITER = 12 +}; + +enum icp_qat_hw_qat_id { + ICP_QAT_HW_QAT_0 = 0, + ICP_QAT_HW_QAT_1 = 1, + ICP_QAT_HW_QAT_2 = 2, + ICP_QAT_HW_QAT_3 = 3, + ICP_QAT_HW_QAT_4 = 4, + ICP_QAT_HW_QAT_5 = 5, + ICP_QAT_HW_QAT_DELIMITER = 6 +}; + +enum icp_qat_hw_auth_algo { + ICP_QAT_HW_AUTH_ALGO_NULL = 0, + ICP_QAT_HW_AUTH_ALGO_SHA1 = 1, + ICP_QAT_HW_AUTH_ALGO_MD5 = 2, + ICP_QAT_HW_AUTH_ALGO_SHA224 = 3, + ICP_QAT_HW_AUTH_ALGO_SHA256 = 4, + ICP_QAT_HW_AUTH_ALGO_SHA384 = 5, + ICP_QAT_HW_AUTH_ALGO_SHA512 = 6, + ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7, + ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8, + ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9, + ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10, + ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11, + ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12, + ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13, + ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14, + ICP_QAT_HW_AUTH_RESERVED_1 = 15, + ICP_QAT_HW_AUTH_RESERVED_2 = 16, + ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17, + ICP_QAT_HW_AUTH_RESERVED_3 = 18, + ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19, + ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20 +}; + +enum icp_qat_hw_auth_mode { + ICP_QAT_HW_AUTH_MODE0 = 0, + ICP_QAT_HW_AUTH_MODE1 = 1, + ICP_QAT_HW_AUTH_MODE2 = 2, + ICP_QAT_HW_AUTH_MODE_DELIMITER = 3 +}; + +struct icp_qat_hw_auth_config { + uint32_t config; + 
uint32_t reserved; +}; +enum icp_qat_slice_mask { + ICP_ACCEL_MASK_CIPHER_SLICE = 0x01, + ICP_ACCEL_MASK_AUTH_SLICE = 0x02, + ICP_ACCEL_MASK_PKE_SLICE = 0x04, + ICP_ACCEL_MASK_COMPRESS_SLICE = 0x08, + ICP_ACCEL_MASK_DEPRECATED = 0x10, + ICP_ACCEL_MASK_EIA3_SLICE = 0x20, + ICP_ACCEL_MASK_SHA3_SLICE = 0x40, + ICP_ACCEL_MASK_CRYPTO0_SLICE = 0x80, + ICP_ACCEL_MASK_CRYPTO1_SLICE = 0x100, + ICP_ACCEL_MASK_CRYPTO2_SLICE = 0x200, + ICP_ACCEL_MASK_SM3_SLICE = 0x400, + ICP_ACCEL_MASK_SM4_SLICE = 0x800 +}; + +enum icp_qat_capabilities_mask { + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC = BIT(0), + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC = BIT(1), + ICP_ACCEL_CAPABILITIES_CIPHER = BIT(2), + ICP_ACCEL_CAPABILITIES_AUTHENTICATION = BIT(3), + ICP_ACCEL_CAPABILITIES_RESERVED_1 = BIT(4), + ICP_ACCEL_CAPABILITIES_COMPRESSION = BIT(5), + ICP_ACCEL_CAPABILITIES_DEPRECATED = BIT(6), + ICP_ACCEL_CAPABILITIES_RAND = BIT(7), + ICP_ACCEL_CAPABILITIES_ZUC = BIT(8), + ICP_ACCEL_CAPABILITIES_SHA3 = BIT(9), + ICP_ACCEL_CAPABILITIES_KPT = BIT(10), + ICP_ACCEL_CAPABILITIES_RL = BIT(11), + ICP_ACCEL_CAPABILITIES_HKDF = BIT(12), + ICP_ACCEL_CAPABILITIES_ECEDMONT = BIT(13), + ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN = BIT(14), + ICP_ACCEL_CAPABILITIES_SHA3_EXT = BIT(15), + ICP_ACCEL_CAPABILITIES_AESGCM_SPC = BIT(16), + ICP_ACCEL_CAPABILITIES_CHACHA_POLY = BIT(17), + ICP_ACCEL_CAPABILITIES_SM2 = BIT(18), + ICP_ACCEL_CAPABILITIES_SM3 = BIT(19), + ICP_ACCEL_CAPABILITIES_SM4 = BIT(20), + ICP_ACCEL_CAPABILITIES_INLINE = BIT(21), + ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY = BIT(22), + ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY64 = BIT(23), + ICP_ACCEL_CAPABILITIES_LZ4_COMPRESSION = BIT(24), + ICP_ACCEL_CAPABILITIES_LZ4S_COMPRESSION = BIT(25), + ICP_ACCEL_CAPABILITIES_AES_V2 = BIT(26), + ICP_ACCEL_CAPABILITIES_KPT2 = BIT(27), +}; + +enum icp_qat_extended_dc_capabilities_mask { + ICP_ACCEL_CAPABILITIES_ADVANCED_COMPRESSION = 0x101 +}; + +#define QAT_AUTH_MODE_BITPOS 4 +#define QAT_AUTH_MODE_MASK 0xF +#define 
QAT_AUTH_ALGO_BITPOS 0 +#define QAT_AUTH_ALGO_MASK 0xF +#define QAT_AUTH_CMP_BITPOS 8 +#define QAT_AUTH_HIGH_BIT 4 +#define QAT_AUTH_CMP_MASK 0x7F +#define QAT_AUTH_SHA3_PADDING_BITPOS 16 +#define QAT_AUTH_SHA3_PADDING_MASK 0x1 +#define QAT_AUTH_ALGO_SHA3_BITPOS 22 +#define QAT_AUTH_ALGO_SHA3_MASK 0x3 +#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \ + (((mode & QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \ + ((algo & QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \ + (((algo >> 4) & QAT_AUTH_ALGO_SHA3_MASK) \ + << QAT_AUTH_ALGO_SHA3_BITPOS) | \ + (((((algo == ICP_QAT_HW_AUTH_ALGO_SHA3_256) || \ + (algo == ICP_QAT_HW_AUTH_ALGO_SHA3_512)) ? \ + 1 : \ + 0) & \ + QAT_AUTH_SHA3_PADDING_MASK) \ + << QAT_AUTH_SHA3_PADDING_BITPOS) | \ + ((cmp_len & QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS)) + +struct icp_qat_hw_auth_counter { + __be32 counter; + uint32_t reserved; +}; + +#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF +#define QAT_AUTH_COUNT_BITPOS 0 +#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \ + (((val)&QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS) + +struct icp_qat_hw_auth_setup { + struct icp_qat_hw_auth_config auth_config; + struct icp_qat_hw_auth_counter auth_counter; +}; + +#define QAT_HW_DEFAULT_ALIGNMENT 8 +#define QAT_HW_ROUND_UP(val, n) (((val) + ((n)-1)) & (~((n)-1))) +#define ICP_QAT_HW_NULL_STATE1_SZ 32 +#define ICP_QAT_HW_MD5_STATE1_SZ 16 +#define ICP_QAT_HW_SHA1_STATE1_SZ 20 +#define ICP_QAT_HW_SHA224_STATE1_SZ 32 +#define ICP_QAT_HW_SHA256_STATE1_SZ 32 +#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32 +#define ICP_QAT_HW_SHA384_STATE1_SZ 64 +#define ICP_QAT_HW_SHA512_STATE1_SZ 64 +#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64 +#define ICP_QAT_HW_SHA3_224_STATE1_SZ 28 +#define ICP_QAT_HW_SHA3_384_STATE1_SZ 48 +#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16 +#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16 +#define ICP_QAT_HW_AES_F9_STATE1_SZ 32 +#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16 +#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16 +#define 
ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8 +#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8 +#define ICP_QAT_HW_NULL_STATE2_SZ 32 +#define ICP_QAT_HW_MD5_STATE2_SZ 16 +#define ICP_QAT_HW_SHA1_STATE2_SZ 20 +#define ICP_QAT_HW_SHA224_STATE2_SZ 32 +#define ICP_QAT_HW_SHA256_STATE2_SZ 32 +#define ICP_QAT_HW_SHA3_256_STATE2_SZ 0 +#define ICP_QAT_HW_SHA384_STATE2_SZ 64 +#define ICP_QAT_HW_SHA512_STATE2_SZ 64 +#define ICP_QAT_HW_SHA3_512_STATE2_SZ 0 +#define ICP_QAT_HW_SHA3_224_STATE2_SZ 0 +#define ICP_QAT_HW_SHA3_384_STATE2_SZ 0 +#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16 +#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16 +#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16 +#define ICP_QAT_HW_F9_IK_SZ 16 +#define ICP_QAT_HW_F9_FK_SZ 16 +#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ \ + (ICP_QAT_HW_F9_IK_SZ + ICP_QAT_HW_F9_FK_SZ) +#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ +#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24 +#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32 +#define ICP_QAT_HW_GALOIS_H_SZ 16 +#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8 +#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16 + +struct icp_qat_hw_auth_sha512 { + struct icp_qat_hw_auth_setup inner_setup; + uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ]; + struct icp_qat_hw_auth_setup outer_setup; + uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ]; +}; + +struct icp_qat_hw_auth_algo_blk { + struct icp_qat_hw_auth_sha512 sha; +}; + +#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0 +#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF + +enum icp_qat_hw_cipher_algo { + ICP_QAT_HW_CIPHER_ALGO_NULL = 0, + ICP_QAT_HW_CIPHER_ALGO_DES = 1, + ICP_QAT_HW_CIPHER_ALGO_3DES = 2, + ICP_QAT_HW_CIPHER_ALGO_AES128 = 3, + ICP_QAT_HW_CIPHER_ALGO_AES192 = 4, + ICP_QAT_HW_CIPHER_ALGO_AES256 = 5, + ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6, + ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7, + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8, + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9, + ICP_QAT_HW_CIPHER_ALGO_SM4 = 10, + ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305 = 11, + ICP_QAT_HW_CIPHER_DELIMITER = 12 +}; + 
+enum icp_qat_hw_cipher_mode { + ICP_QAT_HW_CIPHER_ECB_MODE = 0, + ICP_QAT_HW_CIPHER_CBC_MODE = 1, + ICP_QAT_HW_CIPHER_CTR_MODE = 2, + ICP_QAT_HW_CIPHER_F8_MODE = 3, + ICP_QAT_HW_CIPHER_AEAD_MODE = 4, + ICP_QAT_HW_CIPHER_RESERVED_MODE = 5, + ICP_QAT_HW_CIPHER_XTS_MODE = 6, + ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7 +}; + +struct icp_qat_hw_cipher_config { + uint32_t val; + uint32_t reserved; +}; + +enum icp_qat_hw_cipher_dir { + ICP_QAT_HW_CIPHER_ENCRYPT = 0, + ICP_QAT_HW_CIPHER_DECRYPT = 1, +}; + +enum icp_qat_hw_cipher_convert { + ICP_QAT_HW_CIPHER_NO_CONVERT = 0, + ICP_QAT_HW_CIPHER_KEY_CONVERT = 1, +}; + +#define QAT_CIPHER_MODE_BITPOS 4 +#define QAT_CIPHER_MODE_MASK 0xF +#define QAT_CIPHER_ALGO_BITPOS 0 +#define QAT_CIPHER_ALGO_MASK 0xF +#define QAT_CIPHER_CONVERT_BITPOS 9 +#define QAT_CIPHER_CONVERT_MASK 0x1 +#define QAT_CIPHER_DIR_BITPOS 8 +#define QAT_CIPHER_DIR_MASK 0x1 +#define QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK 0x1F +#define QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS 10 +#define QAT_CIPHER_AEAD_AAD_SIZE_LOWER_MASK 0xFF +#define QAT_CIPHER_AEAD_AAD_SIZE_UPPER_MASK 0x3F +#define QAT_CIPHER_AEAD_AAD_UPPER_SHIFT 8 +#define QAT_CIPHER_AEAD_AAD_LOWER_SHIFT 24 +#define QAT_CIPHER_AEAD_AAD_SIZE_BITPOS 16 +#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2 +#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2 +#define ICP_QAT_HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir) \ + (((mode & QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \ + ((algo & QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \ + ((convert & QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \ + ((dir & QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS)) +#define ICP_QAT_HW_DES_BLK_SZ 8 +#define ICP_QAT_HW_3DES_BLK_SZ 8 +#define ICP_QAT_HW_NULL_BLK_SZ 8 +#define ICP_QAT_HW_AES_BLK_SZ 16 +#define ICP_QAT_HW_KASUMI_BLK_SZ 8 +#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8 +#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8 +#define ICP_QAT_HW_NULL_KEY_SZ 256 +#define ICP_QAT_HW_DES_KEY_SZ 8 +#define ICP_QAT_HW_3DES_KEY_SZ 24 +#define 
ICP_QAT_HW_AES_128_KEY_SZ 16
+#define ICP_QAT_HW_AES_192_KEY_SZ 24
+#define ICP_QAT_HW_AES_256_KEY_SZ 32
+#define ICP_QAT_HW_AES_128_F8_KEY_SZ \
+	(ICP_QAT_HW_AES_128_KEY_SZ * QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_192_F8_KEY_SZ \
+	(ICP_QAT_HW_AES_192_KEY_SZ * QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_F8_KEY_SZ \
+	(ICP_QAT_HW_AES_256_KEY_SZ * QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_128_XTS_KEY_SZ \
+	(ICP_QAT_HW_AES_128_KEY_SZ * QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_XTS_KEY_SZ \
+	(ICP_QAT_HW_AES_256_KEY_SZ * QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_KASUMI_KEY_SZ 16
+#define ICP_QAT_HW_KASUMI_F8_KEY_SZ \
+	(ICP_QAT_HW_KASUMI_KEY_SZ * QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_ARC4_KEY_SZ 256
+#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16
+#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2
+#define INIT_SHRAM_CONSTANTS_TABLE_SZ 1024
+
+struct icp_qat_hw_cipher_aes256_f8 {
+	struct icp_qat_hw_cipher_config cipher_config;
+	uint8_t key[ICP_QAT_HW_AES_256_F8_KEY_SZ];
+};
+
+struct icp_qat_hw_cipher_algo_blk {
+	struct icp_qat_hw_cipher_aes256_f8 aes;
+} __aligned(64);
+#endif
Index: sys/dev/qat/include/qat_ocf_mem_pool.h
===================================================================
--- /dev/null
+++ sys/dev/qat/include/qat_ocf_mem_pool.h
@@ -0,0 +1,142 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+#ifndef _QAT_OCF_MEM_POOL_H_
+#define _QAT_OCF_MEM_POOL_H_
+
+/* System headers */
+#include
+
+/* QAT specific headers */
+#include "cpa.h"
+#include "cpa_cy_sym_dp.h"
+#include
"icp_qat_fw_la.h" + +#define QAT_OCF_MAX_LEN (64 * 1024) +#define QAT_OCF_MAX_FLATS (32) +#define QAT_OCF_MAX_DIGEST SHA512_DIGEST_LENGTH +#define QAT_OCF_MAX_SYMREQ (256) +#define QAT_OCF_MEM_POOL_SIZE ((QAT_OCF_MAX_SYMREQ * 2 + 1) * 2) +#define QAT_OCF_MAXLEN 64 * 1024 + +/* Dedicated structure due to flexible arrays not allowed to be + * allocated on stack */ +struct qat_ocf_buffer_list { + Cpa64U reserved0; + Cpa32U numBuffers; + Cpa32U reserved1; + CpaPhysFlatBuffer flatBuffers[QAT_OCF_MAX_FLATS]; +}; + +struct qat_ocf_dma_mem { + bus_dma_tag_t dma_tag; + bus_dmamap_t dma_map; + bus_dma_segment_t dma_seg; + void *dma_vaddr; +} __aligned(64); + +struct qat_ocf_cookie { + /* Source SGLs */ + struct qat_ocf_buffer_list src_buffers; + /* Destination SGL */ + struct qat_ocf_buffer_list dst_buffers; + + /* Cache OP data */ + CpaCySymDpOpData pOpdata; + + /* IV max size taken from cryptdev */ + uint8_t qat_ocf_iv_buf[EALG_MAX_BLOCK_LEN]; + bus_addr_t qat_ocf_iv_buf_paddr; + uint8_t qat_ocf_digest[QAT_OCF_MAX_DIGEST]; + bus_addr_t qat_ocf_digest_paddr; + /* Used only in case of separated AAD and GCM, CCM and RC4 */ + uint8_t qat_ocf_gcm_aad[ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX]; + bus_addr_t qat_ocf_gcm_aad_paddr; + + /* Source SGLs */ + struct qat_ocf_dma_mem src_dma_mem; + bus_addr_t src_buffer_list_paddr; + + /* Destination SGL */ + struct qat_ocf_dma_mem dst_dma_mem; + bus_addr_t dst_buffer_list_paddr; + + /* AAD - used only if separated AAD is used by OCF and HW requires + * to have it at the beginning of source buffer */ + struct qat_ocf_dma_mem gcm_aad_dma_mem; + bus_addr_t gcm_aad_buffer_list_paddr; + CpaBoolean is_sep_aad_used; + + /* Cache OP data */ + bus_addr_t pOpData_paddr; + /* misc */ + struct cryptop *crp_op; + + /* This cookie tag and map */ + bus_dma_tag_t dma_tag; + bus_dmamap_t dma_map; +}; + +struct qat_ocf_session { + CpaCySymSessionCtx sessionCtx; + Cpa32U sessionCtxSize; + Cpa32U authLen; + Cpa32U aadLen; +}; + +struct qat_ocf_dsession { + struct 
qat_ocf_instance *qatInstance; + struct qat_ocf_session encSession; + struct qat_ocf_session decSession; +}; + +struct qat_ocf_load_cb_arg { + struct cryptop *crp_op; + struct qat_ocf_cookie *qat_cookie; + CpaCySymDpOpData *pOpData; + int error; +}; + +struct qat_ocf_instance { + CpaInstanceHandle cyInstHandle; + struct mtx cyInstMtx; + struct qat_ocf_dma_mem cookie_dmamem[QAT_OCF_MEM_POOL_SIZE]; + struct qat_ocf_cookie *cookie_pool[QAT_OCF_MEM_POOL_SIZE]; + struct qat_ocf_cookie *free_cookie[QAT_OCF_MEM_POOL_SIZE]; + int free_cookie_ptr; + struct mtx cookie_pool_mtx; + int32_t driver_id; +}; + +/* Init/deinit */ +CpaStatus qat_ocf_cookie_pool_init(struct qat_ocf_instance *instance, + device_t dev); +void qat_ocf_cookie_pool_deinit(struct qat_ocf_instance *instance); +/* Alloc/free */ +CpaStatus qat_ocf_cookie_alloc(struct qat_ocf_instance *instance, + struct qat_ocf_cookie **buffers_out); +void qat_ocf_cookie_free(struct qat_ocf_instance *instance, + struct qat_ocf_cookie *cookie); +/* Pre/post sync */ +CpaStatus qat_ocf_cookie_dma_pre_sync(struct cryptop *crp, + CpaCySymDpOpData *pOpData); +CpaStatus qat_ocf_cookie_dma_post_sync(struct cryptop *crp, + CpaCySymDpOpData *pOpData); +/* Bus DMA unload */ +CpaStatus qat_ocf_cookie_dma_unload(struct cryptop *crp, + CpaCySymDpOpData *pOpData); +/* Bus DMA load callbacks */ +void qat_ocf_crypto_load_buf_cb(void *_arg, + bus_dma_segment_t *segs, + int nseg, + int error); +void qat_ocf_crypto_load_obuf_cb(void *_arg, + bus_dma_segment_t *segs, + int nseg, + int error); +void qat_ocf_crypto_load_aadbuf_cb(void *_arg, + bus_dma_segment_t *segs, + int nseg, + int error); + +#endif /* _QAT_OCF_MEM_POOL_H_ */ Index: sys/dev/qat/include/qat_ocf_utils.h =================================================================== --- /dev/null +++ sys/dev/qat/include/qat_ocf_utils.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef _QAT_OCF_UTILS_H_ 
+#define _QAT_OCF_UTILS_H_ +/* System headers */ +#include +#include +#include + +/* Cryptodev headers */ +#include +#include + +/* QAT specific headers */ +#include "qat_ocf_mem_pool.h" +#include "cpa.h" +#include "cpa_cy_sym_dp.h" + +static inline CpaBoolean +is_gmac_exception(const struct crypto_session_params *csp) +{ + if (CSP_MODE_DIGEST == csp->csp_mode) + if (CRYPTO_AES_NIST_GMAC == csp->csp_auth_alg) + return CPA_TRUE; + + return CPA_FALSE; +} + +static inline CpaBoolean +is_sep_aad_supported(const struct crypto_session_params *csp) +{ + if (CPA_TRUE == is_gmac_exception(csp)) + return CPA_FALSE; + + if (CSP_MODE_AEAD == csp->csp_mode) + if (CRYPTO_AES_NIST_GCM_16 == csp->csp_cipher_alg || + CRYPTO_AES_NIST_GMAC == csp->csp_cipher_alg) + return CPA_TRUE; + + return CPA_FALSE; +} + +static inline CpaBoolean +is_use_sep_digest(const struct crypto_session_params *csp) +{ + /* Use separated digest for all digest/hash operations, + * including GMAC */ + if (CSP_MODE_DIGEST == csp->csp_mode || CSP_MODE_ETA == csp->csp_mode) + return CPA_TRUE; + + return CPA_FALSE; +} + +int qat_ocf_handle_session_update(struct qat_ocf_dsession *ocf_dsession, + struct cryptop *crp); + +CpaStatus qat_ocf_wait_for_session(CpaCySymSessionCtx sessionCtx, + Cpa32U timeoutMS); + +#endif /* _QAT_OCF_UTILS_H_ */ Index: sys/dev/qat/qat.c =================================================================== --- sys/dev/qat/qat.c +++ /dev/null @@ -1,2294 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat.c,v 1.6 2020/06/14 23:23:12 riastradh Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. 
Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2007-2019 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. 
- * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#include -__FBSDID("$FreeBSD$"); -#if 0 -__KERNEL_RCSID(0, "$NetBSD: qat.c,v 1.6 2020/06/14 23:23:12 riastradh Exp $"); -#endif - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include - -#include -#include - -#include "cryptodev_if.h" - -#include -#include - -#include "qatreg.h" -#include "qatvar.h" -#include "qat_aevar.h" - -extern struct qat_hw qat_hw_c2xxx; -extern struct qat_hw qat_hw_c3xxx; -extern struct qat_hw qat_hw_c62x; -extern struct qat_hw qat_hw_d15xx; -extern struct qat_hw qat_hw_dh895xcc; - -#define PCI_VENDOR_INTEL 0x8086 -#define PCI_PRODUCT_INTEL_C2000_IQIA_PHYS 0x1f18 -#define PCI_PRODUCT_INTEL_C3K_QAT 0x19e2 -#define PCI_PRODUCT_INTEL_C3K_QAT_VF 0x19e3 -#define PCI_PRODUCT_INTEL_C620_QAT 0x37c8 -#define PCI_PRODUCT_INTEL_C620_QAT_VF 0x37c9 -#define PCI_PRODUCT_INTEL_XEOND_QAT 0x6f54 -#define PCI_PRODUCT_INTEL_XEOND_QAT_VF 0x6f55 -#define PCI_PRODUCT_INTEL_DH895XCC_QAT 0x0435 -#define PCI_PRODUCT_INTEL_DH895XCC_QAT_VF 0x0443 - -static const struct qat_product { - uint16_t qatp_vendor; - uint16_t qatp_product; - const char *qatp_name; - enum qat_chip_type qatp_chip; 
- const struct qat_hw *qatp_hw; -} qat_products[] = { - { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_C2000_IQIA_PHYS, - "Intel C2000 QuickAssist PF", - QAT_CHIP_C2XXX, &qat_hw_c2xxx }, - { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_C3K_QAT, - "Intel C3000 QuickAssist PF", - QAT_CHIP_C3XXX, &qat_hw_c3xxx }, - { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_C620_QAT, - "Intel C620/Xeon D-2100 QuickAssist PF", - QAT_CHIP_C62X, &qat_hw_c62x }, - { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_XEOND_QAT, - "Intel Xeon D-1500 QuickAssist PF", - QAT_CHIP_D15XX, &qat_hw_d15xx }, - { PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_DH895XCC_QAT, - "Intel 8950 QuickAssist PCIe Adapter PF", - QAT_CHIP_DH895XCC, &qat_hw_dh895xcc }, - { 0, 0, NULL, 0, NULL }, -}; - -/* Hash Algorithm specific structure */ - -/* SHA1 - 20 bytes - Initialiser state can be found in FIPS stds 180-2 */ -static const uint8_t sha1_initial_state[QAT_HASH_SHA1_STATE_SIZE] = { - 0x67, 0x45, 0x23, 0x01, - 0xef, 0xcd, 0xab, 0x89, - 0x98, 0xba, 0xdc, 0xfe, - 0x10, 0x32, 0x54, 0x76, - 0xc3, 0xd2, 0xe1, 0xf0 -}; - -/* SHA 256 - 32 bytes - Initialiser state can be found in FIPS stds 180-2 */ -static const uint8_t sha256_initial_state[QAT_HASH_SHA256_STATE_SIZE] = { - 0x6a, 0x09, 0xe6, 0x67, - 0xbb, 0x67, 0xae, 0x85, - 0x3c, 0x6e, 0xf3, 0x72, - 0xa5, 0x4f, 0xf5, 0x3a, - 0x51, 0x0e, 0x52, 0x7f, - 0x9b, 0x05, 0x68, 0x8c, - 0x1f, 0x83, 0xd9, 0xab, - 0x5b, 0xe0, 0xcd, 0x19 -}; - -/* SHA 384 - 64 bytes - Initialiser state can be found in FIPS stds 180-2 */ -static const uint8_t sha384_initial_state[QAT_HASH_SHA384_STATE_SIZE] = { - 0xcb, 0xbb, 0x9d, 0x5d, 0xc1, 0x05, 0x9e, 0xd8, - 0x62, 0x9a, 0x29, 0x2a, 0x36, 0x7c, 0xd5, 0x07, - 0x91, 0x59, 0x01, 0x5a, 0x30, 0x70, 0xdd, 0x17, - 0x15, 0x2f, 0xec, 0xd8, 0xf7, 0x0e, 0x59, 0x39, - 0x67, 0x33, 0x26, 0x67, 0xff, 0xc0, 0x0b, 0x31, - 0x8e, 0xb4, 0x4a, 0x87, 0x68, 0x58, 0x15, 0x11, - 0xdb, 0x0c, 0x2e, 0x0d, 0x64, 0xf9, 0x8f, 0xa7, - 0x47, 0xb5, 0x48, 0x1d, 0xbe, 0xfa, 0x4f, 0xa4 -}; - -/* SHA 512 - 64 bytes - 
Initialiser state can be found in FIPS stds 180-2 */ -static const uint8_t sha512_initial_state[QAT_HASH_SHA512_STATE_SIZE] = { - 0x6a, 0x09, 0xe6, 0x67, 0xf3, 0xbc, 0xc9, 0x08, - 0xbb, 0x67, 0xae, 0x85, 0x84, 0xca, 0xa7, 0x3b, - 0x3c, 0x6e, 0xf3, 0x72, 0xfe, 0x94, 0xf8, 0x2b, - 0xa5, 0x4f, 0xf5, 0x3a, 0x5f, 0x1d, 0x36, 0xf1, - 0x51, 0x0e, 0x52, 0x7f, 0xad, 0xe6, 0x82, 0xd1, - 0x9b, 0x05, 0x68, 0x8c, 0x2b, 0x3e, 0x6c, 0x1f, - 0x1f, 0x83, 0xd9, 0xab, 0xfb, 0x41, 0xbd, 0x6b, - 0x5b, 0xe0, 0xcd, 0x19, 0x13, 0x7e, 0x21, 0x79 -}; - -static const struct qat_sym_hash_alg_info sha1_info = { - .qshai_digest_len = QAT_HASH_SHA1_DIGEST_SIZE, - .qshai_block_len = QAT_HASH_SHA1_BLOCK_SIZE, - .qshai_state_size = QAT_HASH_SHA1_STATE_SIZE, - .qshai_init_state = sha1_initial_state, - .qshai_sah = &auth_hash_hmac_sha1, - .qshai_state_offset = 0, - .qshai_state_word = 4, -}; - -static const struct qat_sym_hash_alg_info sha256_info = { - .qshai_digest_len = QAT_HASH_SHA256_DIGEST_SIZE, - .qshai_block_len = QAT_HASH_SHA256_BLOCK_SIZE, - .qshai_state_size = QAT_HASH_SHA256_STATE_SIZE, - .qshai_init_state = sha256_initial_state, - .qshai_sah = &auth_hash_hmac_sha2_256, - .qshai_state_offset = offsetof(SHA256_CTX, state), - .qshai_state_word = 4, -}; - -static const struct qat_sym_hash_alg_info sha384_info = { - .qshai_digest_len = QAT_HASH_SHA384_DIGEST_SIZE, - .qshai_block_len = QAT_HASH_SHA384_BLOCK_SIZE, - .qshai_state_size = QAT_HASH_SHA384_STATE_SIZE, - .qshai_init_state = sha384_initial_state, - .qshai_sah = &auth_hash_hmac_sha2_384, - .qshai_state_offset = offsetof(SHA384_CTX, state), - .qshai_state_word = 8, -}; - -static const struct qat_sym_hash_alg_info sha512_info = { - .qshai_digest_len = QAT_HASH_SHA512_DIGEST_SIZE, - .qshai_block_len = QAT_HASH_SHA512_BLOCK_SIZE, - .qshai_state_size = QAT_HASH_SHA512_STATE_SIZE, - .qshai_init_state = sha512_initial_state, - .qshai_sah = &auth_hash_hmac_sha2_512, - .qshai_state_offset = offsetof(SHA512_CTX, state), - .qshai_state_word = 8, 
-}; - -static const struct qat_sym_hash_alg_info aes_gcm_info = { - .qshai_digest_len = QAT_HASH_AES_GCM_DIGEST_SIZE, - .qshai_block_len = QAT_HASH_AES_GCM_BLOCK_SIZE, - .qshai_state_size = QAT_HASH_AES_GCM_STATE_SIZE, - .qshai_sah = &auth_hash_nist_gmac_aes_128, -}; - -/* Hash QAT specific structures */ - -static const struct qat_sym_hash_qat_info sha1_config = { - .qshqi_algo_enc = HW_AUTH_ALGO_SHA1, - .qshqi_auth_counter = QAT_HASH_SHA1_BLOCK_SIZE, - .qshqi_state1_len = HW_SHA1_STATE1_SZ, - .qshqi_state2_len = HW_SHA1_STATE2_SZ, -}; - -static const struct qat_sym_hash_qat_info sha256_config = { - .qshqi_algo_enc = HW_AUTH_ALGO_SHA256, - .qshqi_auth_counter = QAT_HASH_SHA256_BLOCK_SIZE, - .qshqi_state1_len = HW_SHA256_STATE1_SZ, - .qshqi_state2_len = HW_SHA256_STATE2_SZ -}; - -static const struct qat_sym_hash_qat_info sha384_config = { - .qshqi_algo_enc = HW_AUTH_ALGO_SHA384, - .qshqi_auth_counter = QAT_HASH_SHA384_BLOCK_SIZE, - .qshqi_state1_len = HW_SHA384_STATE1_SZ, - .qshqi_state2_len = HW_SHA384_STATE2_SZ -}; - -static const struct qat_sym_hash_qat_info sha512_config = { - .qshqi_algo_enc = HW_AUTH_ALGO_SHA512, - .qshqi_auth_counter = QAT_HASH_SHA512_BLOCK_SIZE, - .qshqi_state1_len = HW_SHA512_STATE1_SZ, - .qshqi_state2_len = HW_SHA512_STATE2_SZ -}; - -static const struct qat_sym_hash_qat_info aes_gcm_config = { - .qshqi_algo_enc = HW_AUTH_ALGO_GALOIS_128, - .qshqi_auth_counter = QAT_HASH_AES_GCM_BLOCK_SIZE, - .qshqi_state1_len = HW_GALOIS_128_STATE1_SZ, - .qshqi_state2_len = - HW_GALOIS_H_SZ + HW_GALOIS_LEN_A_SZ + HW_GALOIS_E_CTR0_SZ, -}; - -static const struct qat_sym_hash_def qat_sym_hash_defs[] = { - [QAT_SYM_HASH_SHA1] = { &sha1_info, &sha1_config }, - [QAT_SYM_HASH_SHA256] = { &sha256_info, &sha256_config }, - [QAT_SYM_HASH_SHA384] = { &sha384_info, &sha384_config }, - [QAT_SYM_HASH_SHA512] = { &sha512_info, &sha512_config }, - [QAT_SYM_HASH_AES_GCM] = { &aes_gcm_info, &aes_gcm_config }, -}; - -static const struct qat_product *qat_lookup(device_t); 
-static int qat_probe(device_t); -static int qat_attach(device_t); -static int qat_init(device_t); -static int qat_start(device_t); -static int qat_detach(device_t); - -static int qat_newsession(device_t dev, crypto_session_t cses, - const struct crypto_session_params *csp); -static void qat_freesession(device_t dev, crypto_session_t cses); - -static int qat_setup_msix_intr(struct qat_softc *); - -static void qat_etr_init(struct qat_softc *); -static void qat_etr_deinit(struct qat_softc *); -static void qat_etr_bank_init(struct qat_softc *, int); -static void qat_etr_bank_deinit(struct qat_softc *sc, int); - -static void qat_etr_ap_bank_init(struct qat_softc *); -static void qat_etr_ap_bank_set_ring_mask(uint32_t *, uint32_t, int); -static void qat_etr_ap_bank_set_ring_dest(struct qat_softc *, uint32_t *, - uint32_t, int); -static void qat_etr_ap_bank_setup_ring(struct qat_softc *, - struct qat_ring *); -static int qat_etr_verify_ring_size(uint32_t, uint32_t); - -static int qat_etr_ring_intr(struct qat_softc *, struct qat_bank *, - struct qat_ring *); -static void qat_etr_bank_intr(void *); - -static void qat_arb_update(struct qat_softc *, struct qat_bank *); - -static struct qat_sym_cookie *qat_crypto_alloc_sym_cookie( - struct qat_crypto_bank *); -static void qat_crypto_free_sym_cookie(struct qat_crypto_bank *, - struct qat_sym_cookie *); -static int qat_crypto_setup_ring(struct qat_softc *, - struct qat_crypto_bank *); -static int qat_crypto_bank_init(struct qat_softc *, - struct qat_crypto_bank *); -static int qat_crypto_init(struct qat_softc *); -static void qat_crypto_deinit(struct qat_softc *); -static int qat_crypto_start(struct qat_softc *); -static void qat_crypto_stop(struct qat_softc *); -static int qat_crypto_sym_rxintr(struct qat_softc *, void *, void *); - -static MALLOC_DEFINE(M_QAT, "qat", "Intel QAT driver"); - -static const struct qat_product * -qat_lookup(device_t dev) -{ - const struct qat_product *qatp; - - for (qatp = qat_products; 
qatp->qatp_name != NULL; qatp++) { - if (pci_get_vendor(dev) == qatp->qatp_vendor && - pci_get_device(dev) == qatp->qatp_product) - return qatp; - } - return NULL; -} - -static int -qat_probe(device_t dev) -{ - const struct qat_product *prod; - - prod = qat_lookup(dev); - if (prod != NULL) { - device_set_desc(dev, prod->qatp_name); - return BUS_PROBE_DEFAULT; - } - return ENXIO; -} - -static int -qat_attach(device_t dev) -{ - struct qat_softc *sc = device_get_softc(dev); - const struct qat_product *qatp; - int bar, count, error, i; - - sc->sc_dev = dev; - sc->sc_rev = pci_get_revid(dev); - sc->sc_crypto.qcy_cid = -1; - - qatp = qat_lookup(dev); - memcpy(&sc->sc_hw, qatp->qatp_hw, sizeof(struct qat_hw)); - - /* Determine active accelerators and engines */ - sc->sc_accel_mask = sc->sc_hw.qhw_get_accel_mask(sc); - sc->sc_ae_mask = sc->sc_hw.qhw_get_ae_mask(sc); - - sc->sc_accel_num = 0; - for (i = 0; i < sc->sc_hw.qhw_num_accel; i++) { - if (sc->sc_accel_mask & (1 << i)) - sc->sc_accel_num++; - } - sc->sc_ae_num = 0; - for (i = 0; i < sc->sc_hw.qhw_num_engines; i++) { - if (sc->sc_ae_mask & (1 << i)) - sc->sc_ae_num++; - } - - if (!sc->sc_accel_mask || (sc->sc_ae_mask & 0x01) == 0) { - device_printf(sc->sc_dev, "couldn't find acceleration"); - goto fail; - } - - MPASS(sc->sc_accel_num <= MAX_NUM_ACCEL); - MPASS(sc->sc_ae_num <= MAX_NUM_AE); - - /* Determine SKU and capabilities */ - sc->sc_sku = sc->sc_hw.qhw_get_sku(sc); - sc->sc_accel_cap = sc->sc_hw.qhw_get_accel_cap(sc); - sc->sc_fw_uof_name = sc->sc_hw.qhw_get_fw_uof_name(sc); - - i = 0; - if (sc->sc_hw.qhw_sram_bar_id != NO_PCI_REG) { - MPASS(sc->sc_hw.qhw_sram_bar_id == 0); - uint32_t fusectl = pci_read_config(dev, FUSECTL_REG, 4); - /* Skip SRAM BAR */ - i = (fusectl & FUSECTL_MASK) ? 
1 : 0; - } - for (bar = 0; bar < PCIR_MAX_BAR_0; bar++) { - uint32_t val = pci_read_config(dev, PCIR_BAR(bar), 4); - if (val == 0 || !PCI_BAR_MEM(val)) - continue; - - sc->sc_rid[i] = PCIR_BAR(bar); - sc->sc_res[i] = bus_alloc_resource_any(dev, SYS_RES_MEMORY, - &sc->sc_rid[i], RF_ACTIVE); - if (sc->sc_res[i] == NULL) { - device_printf(dev, "couldn't map BAR %d\n", bar); - goto fail; - } - - sc->sc_csrt[i] = rman_get_bustag(sc->sc_res[i]); - sc->sc_csrh[i] = rman_get_bushandle(sc->sc_res[i]); - - i++; - if ((val & PCIM_BAR_MEM_TYPE) == PCIM_BAR_MEM_64) - bar++; - } - - pci_enable_busmaster(dev); - - count = sc->sc_hw.qhw_num_banks + 1; - if (pci_msix_count(dev) < count) { - device_printf(dev, "insufficient MSI-X vectors (%d vs. %d)\n", - pci_msix_count(dev), count); - goto fail; - } - error = pci_alloc_msix(dev, &count); - if (error != 0) { - device_printf(dev, "failed to allocate MSI-X vectors\n"); - goto fail; - } - - error = qat_init(dev); - if (error == 0) - return 0; - -fail: - qat_detach(dev); - return ENXIO; -} - -static int -qat_init(device_t dev) -{ - struct qat_softc *sc = device_get_softc(dev); - int error; - - qat_etr_init(sc); - - if (sc->sc_hw.qhw_init_admin_comms != NULL && - (error = sc->sc_hw.qhw_init_admin_comms(sc)) != 0) { - device_printf(sc->sc_dev, - "Could not initialize admin comms: %d\n", error); - return error; - } - - if (sc->sc_hw.qhw_init_arb != NULL && - (error = sc->sc_hw.qhw_init_arb(sc)) != 0) { - device_printf(sc->sc_dev, - "Could not initialize hw arbiter: %d\n", error); - return error; - } - - error = qat_ae_init(sc); - if (error) { - device_printf(sc->sc_dev, - "Could not initialize Acceleration Engine: %d\n", error); - return error; - } - - error = qat_aefw_load(sc); - if (error) { - device_printf(sc->sc_dev, - "Could not load firmware: %d\n", error); - return error; - } - - error = qat_setup_msix_intr(sc); - if (error) { - device_printf(sc->sc_dev, - "Could not setup interrupts: %d\n", error); - return error; - } - - 
sc->sc_hw.qhw_enable_intr(sc); - - error = qat_crypto_init(sc); - if (error) { - device_printf(sc->sc_dev, - "Could not initialize service: %d\n", error); - return error; - } - - if (sc->sc_hw.qhw_enable_error_correction != NULL) - sc->sc_hw.qhw_enable_error_correction(sc); - - if (sc->sc_hw.qhw_set_ssm_wdtimer != NULL && - (error = sc->sc_hw.qhw_set_ssm_wdtimer(sc)) != 0) { - device_printf(sc->sc_dev, - "Could not initialize watchdog timer: %d\n", error); - return error; - } - - error = qat_start(dev); - if (error) { - device_printf(sc->sc_dev, - "Could not start: %d\n", error); - return error; - } - - return 0; -} - -static int -qat_start(device_t dev) -{ - struct qat_softc *sc = device_get_softc(dev); - int error; - - error = qat_ae_start(sc); - if (error) - return error; - - if (sc->sc_hw.qhw_send_admin_init != NULL && - (error = sc->sc_hw.qhw_send_admin_init(sc)) != 0) { - return error; - } - - error = qat_crypto_start(sc); - if (error) - return error; - - return 0; -} - -static int -qat_detach(device_t dev) -{ - struct qat_softc *sc; - int bar, i; - - sc = device_get_softc(dev); - - qat_crypto_stop(sc); - qat_crypto_deinit(sc); - qat_aefw_unload(sc); - - if (sc->sc_etr_banks != NULL) { - for (i = 0; i < sc->sc_hw.qhw_num_banks; i++) { - struct qat_bank *qb = &sc->sc_etr_banks[i]; - - if (qb->qb_ih_cookie != NULL) - (void)bus_teardown_intr(dev, qb->qb_ih, - qb->qb_ih_cookie); - if (qb->qb_ih != NULL) - (void)bus_release_resource(dev, SYS_RES_IRQ, - i + 1, qb->qb_ih); - } - } - if (sc->sc_ih_cookie != NULL) { - (void)bus_teardown_intr(dev, sc->sc_ih, sc->sc_ih_cookie); - sc->sc_ih_cookie = NULL; - } - if (sc->sc_ih != NULL) { - (void)bus_release_resource(dev, SYS_RES_IRQ, - sc->sc_hw.qhw_num_banks + 1, sc->sc_ih); - sc->sc_ih = NULL; - } - pci_release_msi(dev); - - qat_etr_deinit(sc); - - for (bar = 0; bar < MAX_BARS; bar++) { - if (sc->sc_res[bar] != NULL) { - (void)bus_release_resource(dev, SYS_RES_MEMORY, - sc->sc_rid[bar], sc->sc_res[bar]); - 
-			sc->sc_res[bar] = NULL;
-		}
-	}
-
-	return 0;
-}
-
-void *
-qat_alloc_mem(size_t size)
-{
-	return (malloc(size, M_QAT, M_WAITOK | M_ZERO));
-}
-
-void
-qat_free_mem(void *ptr)
-{
-	free(ptr, M_QAT);
-}
-
-static void
-qat_alloc_dmamem_cb(void *arg, bus_dma_segment_t *segs, int nseg,
-    int error)
-{
-	struct qat_dmamem *qdm;
-
-	if (error != 0)
-		return;
-
-	KASSERT(nseg == 1, ("%s: nsegs is %d", __func__, nseg));
-	qdm = arg;
-	qdm->qdm_dma_seg = segs[0];
-}
-
-int
-qat_alloc_dmamem(struct qat_softc *sc, struct qat_dmamem *qdm,
-    int nseg, bus_size_t size, bus_size_t alignment)
-{
-	int error;
-
-	KASSERT(qdm->qdm_dma_vaddr == NULL,
-	    ("%s: DMA memory descriptor in use", __func__));
-
-	error = bus_dma_tag_create(bus_get_dma_tag(sc->sc_dev),
-	    alignment, 0,	/* alignment, boundary */
-	    BUS_SPACE_MAXADDR,	/* lowaddr */
-	    BUS_SPACE_MAXADDR,	/* highaddr */
-	    NULL, NULL,		/* filter, filterarg */
-	    size,		/* maxsize */
-	    nseg,		/* nsegments */
-	    size,		/* maxsegsize */
-	    BUS_DMA_COHERENT,	/* flags */
-	    NULL, NULL,		/* lockfunc, lockarg */
-	    &qdm->qdm_dma_tag);
-	if (error != 0)
-		return error;
-
-	error = bus_dmamem_alloc(qdm->qdm_dma_tag, &qdm->qdm_dma_vaddr,
-	    BUS_DMA_NOWAIT | BUS_DMA_ZERO | BUS_DMA_COHERENT,
-	    &qdm->qdm_dma_map);
-	if (error != 0) {
-		device_printf(sc->sc_dev,
-		    "couldn't allocate dmamem, error = %d\n", error);
-		goto fail_0;
-	}
-
-	error = bus_dmamap_load(qdm->qdm_dma_tag, qdm->qdm_dma_map,
-	    qdm->qdm_dma_vaddr, size, qat_alloc_dmamem_cb, qdm,
-	    BUS_DMA_NOWAIT);
-	if (error) {
-		device_printf(sc->sc_dev,
-		    "couldn't load dmamem map, error = %d\n", error);
-		goto fail_1;
-	}
-
-	return 0;
-fail_1:
-	bus_dmamem_free(qdm->qdm_dma_tag, qdm->qdm_dma_vaddr, qdm->qdm_dma_map);
-fail_0:
-	bus_dma_tag_destroy(qdm->qdm_dma_tag);
-	return error;
-}
-
-void
-qat_free_dmamem(struct qat_softc *sc, struct qat_dmamem *qdm)
-{
-	if (qdm->qdm_dma_tag != NULL) {
-		bus_dmamap_unload(qdm->qdm_dma_tag, qdm->qdm_dma_map);
-		bus_dmamem_free(qdm->qdm_dma_tag, qdm->qdm_dma_vaddr,
-		    qdm->qdm_dma_map);
-		bus_dma_tag_destroy(qdm->qdm_dma_tag);
-		explicit_bzero(qdm, sizeof(*qdm));
-	}
-}
-
-static int
-qat_setup_msix_intr(struct qat_softc *sc)
-{
-	device_t dev;
-	int error, i, rid;
-
-	dev = sc->sc_dev;
-
-	for (i = 1; i <= sc->sc_hw.qhw_num_banks; i++) {
-		struct qat_bank *qb = &sc->sc_etr_banks[i - 1];
-
-		rid = i;
-		qb->qb_ih = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid,
-		    RF_ACTIVE);
-		if (qb->qb_ih == NULL) {
-			device_printf(dev,
-			    "failed to allocate bank intr resource\n");
-			return ENXIO;
-		}
-		error = bus_setup_intr(dev, qb->qb_ih,
-		    INTR_TYPE_NET | INTR_MPSAFE, NULL, qat_etr_bank_intr, qb,
-		    &qb->qb_ih_cookie);
-		if (error != 0) {
-			device_printf(dev, "failed to set up bank intr\n");
-			return error;
-		}
-		error = bus_bind_intr(dev, qb->qb_ih, (i - 1) % mp_ncpus);
-		if (error != 0)
-			device_printf(dev, "failed to bind intr %d\n", i);
-	}
-
-	rid = i;
-	sc->sc_ih = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid,
-	    RF_ACTIVE);
-	if (sc->sc_ih == NULL)
-		return ENXIO;
-	error = bus_setup_intr(dev, sc->sc_ih, INTR_TYPE_NET | INTR_MPSAFE,
-	    NULL, qat_ae_cluster_intr, sc, &sc->sc_ih_cookie);
-
-	return error;
-}
-
-static void
-qat_etr_init(struct qat_softc *sc)
-{
-	int i;
-
-	sc->sc_etr_banks = qat_alloc_mem(
-	    sizeof(struct qat_bank) * sc->sc_hw.qhw_num_banks);
-
-	for (i = 0; i < sc->sc_hw.qhw_num_banks; i++)
-		qat_etr_bank_init(sc, i);
-
-	if (sc->sc_hw.qhw_num_ap_banks) {
-		sc->sc_etr_ap_banks = qat_alloc_mem(
-		    sizeof(struct qat_ap_bank) * sc->sc_hw.qhw_num_ap_banks);
-		qat_etr_ap_bank_init(sc);
-	}
-}
-
-static void
-qat_etr_deinit(struct qat_softc *sc)
-{
-	int i;
-
-	if (sc->sc_etr_banks != NULL) {
-		for (i = 0; i < sc->sc_hw.qhw_num_banks; i++)
-			qat_etr_bank_deinit(sc, i);
-		qat_free_mem(sc->sc_etr_banks);
-		sc->sc_etr_banks = NULL;
-	}
-	if (sc->sc_etr_ap_banks != NULL) {
-		qat_free_mem(sc->sc_etr_ap_banks);
-		sc->sc_etr_ap_banks = NULL;
-	}
-}
-
-static void
-qat_etr_bank_init(struct qat_softc *sc, int bank)
-{
-	struct qat_bank *qb = &sc->sc_etr_banks[bank];
-	int i, tx_rx_gap = sc->sc_hw.qhw_tx_rx_gap;
-
-	MPASS(bank < sc->sc_hw.qhw_num_banks);
-
-	mtx_init(&qb->qb_bank_mtx, "qb bank", NULL, MTX_DEF);
-
-	qb->qb_sc = sc;
-	qb->qb_bank = bank;
-	qb->qb_coalescing_time = COALESCING_TIME_INTERVAL_DEFAULT;
-
-	/* Clean CSRs for all rings within the bank */
-	for (i = 0; i < sc->sc_hw.qhw_num_rings_per_bank; i++) {
-		struct qat_ring *qr = &qb->qb_et_rings[i];
-
-		qat_etr_bank_ring_write_4(sc, bank, i,
-		    ETR_RING_CONFIG, 0);
-		qat_etr_bank_ring_base_write_8(sc, bank, i, 0);
-
-		if (sc->sc_hw.qhw_tx_rings_mask & (1 << i)) {
-			qr->qr_inflight = qat_alloc_mem(sizeof(uint32_t));
-		} else if (sc->sc_hw.qhw_tx_rings_mask &
-		    (1 << (i - tx_rx_gap))) {
-			/* Share inflight counter with rx and tx */
-			qr->qr_inflight =
-			    qb->qb_et_rings[i - tx_rx_gap].qr_inflight;
-		}
-	}
-
-	if (sc->sc_hw.qhw_init_etr_intr != NULL) {
-		sc->sc_hw.qhw_init_etr_intr(sc, bank);
-	} else {
-		/* common code in qat 1.7 */
-		qat_etr_bank_write_4(sc, bank, ETR_INT_REG,
-		    ETR_INT_REG_CLEAR_MASK);
-		for (i = 0; i < sc->sc_hw.qhw_num_rings_per_bank /
-		    ETR_RINGS_PER_INT_SRCSEL; i++) {
-			qat_etr_bank_write_4(sc, bank, ETR_INT_SRCSEL +
-			    (i * ETR_INT_SRCSEL_NEXT_OFFSET),
-			    ETR_INT_SRCSEL_MASK);
-		}
-	}
-}
-
-static void
-qat_etr_bank_deinit(struct qat_softc *sc, int bank)
-{
-	struct qat_bank *qb;
-	struct qat_ring *qr;
-	int i;
-
-	qb = &sc->sc_etr_banks[bank];
-	for (i = 0; i < sc->sc_hw.qhw_num_rings_per_bank; i++) {
-		if (sc->sc_hw.qhw_tx_rings_mask & (1 << i)) {
-			qr = &qb->qb_et_rings[i];
-			qat_free_mem(qr->qr_inflight);
-		}
-	}
-}
-
-static void
-qat_etr_ap_bank_init(struct qat_softc *sc)
-{
-	int ap_bank;
-
-	for (ap_bank = 0; ap_bank < sc->sc_hw.qhw_num_ap_banks; ap_bank++) {
-		struct qat_ap_bank *qab = &sc->sc_etr_ap_banks[ap_bank];
-
-		qat_etr_ap_bank_write_4(sc, ap_bank, ETR_AP_NF_MASK,
-		    ETR_AP_NF_MASK_INIT);
-		qat_etr_ap_bank_write_4(sc, ap_bank, ETR_AP_NF_DEST, 0);
-		qat_etr_ap_bank_write_4(sc, ap_bank, ETR_AP_NE_MASK,
-		    ETR_AP_NE_MASK_INIT);
-		qat_etr_ap_bank_write_4(sc, ap_bank, ETR_AP_NE_DEST, 0);
-
-		memset(qab, 0, sizeof(*qab));
-	}
-}
-
-static void
-qat_etr_ap_bank_set_ring_mask(uint32_t *ap_mask, uint32_t ring, int set_mask)
-{
-	if (set_mask)
-		*ap_mask |= (1 << ETR_RING_NUMBER_IN_AP_BANK(ring));
-	else
-		*ap_mask &= ~(1 << ETR_RING_NUMBER_IN_AP_BANK(ring));
-}
-
-static void
-qat_etr_ap_bank_set_ring_dest(struct qat_softc *sc, uint32_t *ap_dest,
-    uint32_t ring, int set_dest)
-{
-	uint32_t ae_mask;
-	uint8_t mailbox, ae, nae;
-	uint8_t *dest = (uint8_t *)ap_dest;
-
-	mailbox = ETR_RING_AP_MAILBOX_NUMBER(ring);
-
-	nae = 0;
-	ae_mask = sc->sc_ae_mask;
-	for (ae = 0; ae < sc->sc_hw.qhw_num_engines; ae++) {
-		if ((ae_mask & (1 << ae)) == 0)
-			continue;
-
-		if (set_dest) {
-			dest[nae] = __SHIFTIN(ae, ETR_AP_DEST_AE) |
-			    __SHIFTIN(mailbox, ETR_AP_DEST_MAILBOX) |
-			    ETR_AP_DEST_ENABLE;
-		} else {
-			dest[nae] = 0;
-		}
-		nae++;
-		if (nae == ETR_MAX_AE_PER_MAILBOX)
-			break;
-	}
-}
-
-static void
-qat_etr_ap_bank_setup_ring(struct qat_softc *sc, struct qat_ring *qr)
-{
-	struct qat_ap_bank *qab;
-	int ap_bank;
-
-	if (sc->sc_hw.qhw_num_ap_banks == 0)
-		return;
-
-	ap_bank = ETR_RING_AP_BANK_NUMBER(qr->qr_ring);
-	MPASS(ap_bank < sc->sc_hw.qhw_num_ap_banks);
-	qab = &sc->sc_etr_ap_banks[ap_bank];
-
-	if (qr->qr_cb == NULL) {
-		qat_etr_ap_bank_set_ring_mask(&qab->qab_ne_mask, qr->qr_ring, 1);
-		if (!qab->qab_ne_dest) {
-			qat_etr_ap_bank_set_ring_dest(sc, &qab->qab_ne_dest,
-			    qr->qr_ring, 1);
-			qat_etr_ap_bank_write_4(sc, ap_bank, ETR_AP_NE_DEST,
-			    qab->qab_ne_dest);
-		}
-	} else {
-		qat_etr_ap_bank_set_ring_mask(&qab->qab_nf_mask, qr->qr_ring, 1);
-		if (!qab->qab_nf_dest) {
-			qat_etr_ap_bank_set_ring_dest(sc, &qab->qab_nf_dest,
-			    qr->qr_ring, 1);
-			qat_etr_ap_bank_write_4(sc, ap_bank, ETR_AP_NF_DEST,
-			    qab->qab_nf_dest);
-		}
-	}
-}
-
-static int
-qat_etr_verify_ring_size(uint32_t msg_size, uint32_t num_msgs)
-{
-	int i = QAT_MIN_RING_SIZE;
-
-	for (; i <= QAT_MAX_RING_SIZE; i++)
-		if ((msg_size * num_msgs) == QAT_SIZE_TO_RING_SIZE_IN_BYTES(i))
-			return i;
-
-	return QAT_DEFAULT_RING_SIZE;
-}
-
-int
-qat_etr_setup_ring(struct qat_softc *sc, int bank, uint32_t ring,
-    uint32_t num_msgs, uint32_t msg_size, qat_cb_t cb, void *cb_arg,
-    const char *name, struct qat_ring **rqr)
-{
-	struct qat_bank *qb;
-	struct qat_ring *qr = NULL;
-	int error;
-	uint32_t ring_size_bytes, ring_config;
-	uint64_t ring_base;
-	uint32_t wm_nf = ETR_RING_CONFIG_NEAR_WM_512;
-	uint32_t wm_ne = ETR_RING_CONFIG_NEAR_WM_0;
-
-	MPASS(bank < sc->sc_hw.qhw_num_banks);
-
-	/* Allocate a ring from specified bank */
-	qb = &sc->sc_etr_banks[bank];
-
-	if (ring >= sc->sc_hw.qhw_num_rings_per_bank)
-		return EINVAL;
-	if (qb->qb_allocated_rings & (1 << ring))
-		return ENOENT;
-	qr = &qb->qb_et_rings[ring];
-	qb->qb_allocated_rings |= 1 << ring;
-
-	/* Initialize allocated ring */
-	qr->qr_ring = ring;
-	qr->qr_bank = bank;
-	qr->qr_name = name;
-	qr->qr_ring_id = qr->qr_bank * sc->sc_hw.qhw_num_rings_per_bank + ring;
-	qr->qr_ring_mask = (1 << ring);
-	qr->qr_cb = cb;
-	qr->qr_cb_arg = cb_arg;
-
-	/* Setup the shadow variables */
-	qr->qr_head = 0;
-	qr->qr_tail = 0;
-	qr->qr_msg_size = QAT_BYTES_TO_MSG_SIZE(msg_size);
-	qr->qr_ring_size = qat_etr_verify_ring_size(msg_size, num_msgs);
-
-	/*
-	 * To make sure that ring is alligned to ring size allocate
-	 * at least 4k and then tell the user it is smaller.
-	 */
-	ring_size_bytes = QAT_SIZE_TO_RING_SIZE_IN_BYTES(qr->qr_ring_size);
-	ring_size_bytes = QAT_RING_SIZE_BYTES_MIN(ring_size_bytes);
-	error = qat_alloc_dmamem(sc, &qr->qr_dma, 1, ring_size_bytes,
-	    ring_size_bytes);
-	if (error)
-		return error;
-
-	qr->qr_ring_vaddr = qr->qr_dma.qdm_dma_vaddr;
-	qr->qr_ring_paddr = qr->qr_dma.qdm_dma_seg.ds_addr;
-
-	memset(qr->qr_ring_vaddr, QAT_RING_PATTERN,
-	    qr->qr_dma.qdm_dma_seg.ds_len);
-
-	bus_dmamap_sync(qr->qr_dma.qdm_dma_tag, qr->qr_dma.qdm_dma_map,
-	    BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD);
-
-	if (cb == NULL) {
-		ring_config = ETR_RING_CONFIG_BUILD(qr->qr_ring_size);
-	} else {
-		ring_config =
-		    ETR_RING_CONFIG_BUILD_RESP(qr->qr_ring_size, wm_nf, wm_ne);
-	}
-	qat_etr_bank_ring_write_4(sc, bank, ring, ETR_RING_CONFIG, ring_config);
-
-	ring_base = ETR_RING_BASE_BUILD(qr->qr_ring_paddr, qr->qr_ring_size);
-	qat_etr_bank_ring_base_write_8(sc, bank, ring, ring_base);
-
-	if (sc->sc_hw.qhw_init_arb != NULL)
-		qat_arb_update(sc, qb);
-
-	mtx_init(&qr->qr_ring_mtx, "qr ring", NULL, MTX_DEF);
-
-	qat_etr_ap_bank_setup_ring(sc, qr);
-
-	if (cb != NULL) {
-		uint32_t intr_mask;
-
-		qb->qb_intr_mask |= qr->qr_ring_mask;
-		intr_mask = qb->qb_intr_mask;
-
-		qat_etr_bank_write_4(sc, bank, ETR_INT_COL_EN, intr_mask);
-		qat_etr_bank_write_4(sc, bank, ETR_INT_COL_CTL,
-		    ETR_INT_COL_CTL_ENABLE | qb->qb_coalescing_time);
-	}
-
-	*rqr = qr;
-
-	return 0;
-}
-
-static inline u_int
-qat_modulo(u_int data, u_int shift)
-{
-	u_int div = data >> shift;
-	u_int mult = div << shift;
-	return data - mult;
-}
-
-int
-qat_etr_put_msg(struct qat_softc *sc, struct qat_ring *qr, uint32_t *msg)
-{
-	uint32_t inflight;
-	uint32_t *addr;
-
-	mtx_lock(&qr->qr_ring_mtx);
-
-	inflight = atomic_fetchadd_32(qr->qr_inflight, 1) + 1;
-	if (inflight > QAT_MAX_INFLIGHTS(qr->qr_ring_size, qr->qr_msg_size)) {
-		atomic_subtract_32(qr->qr_inflight, 1);
-		qr->qr_need_wakeup = true;
-		mtx_unlock(&qr->qr_ring_mtx);
-		counter_u64_add(sc->sc_ring_full_restarts, 1);
-		return ERESTART;
-	}
-
-	addr = (uint32_t *)((uintptr_t)qr->qr_ring_vaddr + qr->qr_tail);
-
-	memcpy(addr, msg, QAT_MSG_SIZE_TO_BYTES(qr->qr_msg_size));
-
-	bus_dmamap_sync(qr->qr_dma.qdm_dma_tag, qr->qr_dma.qdm_dma_map,
-	    BUS_DMASYNC_PREWRITE);
-
-	qr->qr_tail = qat_modulo(qr->qr_tail +
-	    QAT_MSG_SIZE_TO_BYTES(qr->qr_msg_size),
-	    QAT_RING_SIZE_MODULO(qr->qr_ring_size));
-
-	qat_etr_bank_ring_write_4(sc, qr->qr_bank, qr->qr_ring,
-	    ETR_RING_TAIL_OFFSET, qr->qr_tail);
-
-	mtx_unlock(&qr->qr_ring_mtx);
-
-	return 0;
-}
-
-static int
-qat_etr_ring_intr(struct qat_softc *sc, struct qat_bank *qb,
-    struct qat_ring *qr)
-{
-	uint32_t *msg, nmsg = 0;
-	int handled = 0;
-	bool blocked = false;
-
-	mtx_lock(&qr->qr_ring_mtx);
-
-	msg = (uint32_t *)((uintptr_t)qr->qr_ring_vaddr + qr->qr_head);
-
-	bus_dmamap_sync(qr->qr_dma.qdm_dma_tag, qr->qr_dma.qdm_dma_map,
-	    BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
-
-	while (atomic_load_32(msg) != ETR_RING_EMPTY_ENTRY_SIG) {
-		atomic_subtract_32(qr->qr_inflight, 1);
-
-		if (qr->qr_cb != NULL) {
-			mtx_unlock(&qr->qr_ring_mtx);
-			handled |= qr->qr_cb(sc, qr->qr_cb_arg, msg);
-			mtx_lock(&qr->qr_ring_mtx);
-		}
-
-		atomic_store_32(msg, ETR_RING_EMPTY_ENTRY_SIG);
-
-		qr->qr_head = qat_modulo(qr->qr_head +
-		    QAT_MSG_SIZE_TO_BYTES(qr->qr_msg_size),
-		    QAT_RING_SIZE_MODULO(qr->qr_ring_size));
-		nmsg++;
-
-		msg = (uint32_t *)((uintptr_t)qr->qr_ring_vaddr + qr->qr_head);
-	}
-
-	bus_dmamap_sync(qr->qr_dma.qdm_dma_tag, qr->qr_dma.qdm_dma_map,
-	    BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD);
-
-	if (nmsg > 0) {
-		qat_etr_bank_ring_write_4(sc, qr->qr_bank, qr->qr_ring,
-		    ETR_RING_HEAD_OFFSET, qr->qr_head);
-		if (qr->qr_need_wakeup) {
-			blocked = true;
-			qr->qr_need_wakeup = false;
-		}
-	}
-
-	mtx_unlock(&qr->qr_ring_mtx);
-
-	if (blocked)
-		crypto_unblock(sc->sc_crypto.qcy_cid, CRYPTO_SYMQ);
-
-	return handled;
-}
-
-static void
-qat_etr_bank_intr(void *arg)
-{
-	struct qat_bank *qb = arg;
-	struct qat_softc *sc = qb->qb_sc;
-	uint32_t estat;
-	int i;
-
-	mtx_lock(&qb->qb_bank_mtx);
-
-	qat_etr_bank_write_4(sc, qb->qb_bank, ETR_INT_COL_CTL, 0);
-
-	/* Now handle all the responses */
-	estat = ~qat_etr_bank_read_4(sc, qb->qb_bank, ETR_E_STAT);
-	estat &= qb->qb_intr_mask;
-
-	qat_etr_bank_write_4(sc, qb->qb_bank, ETR_INT_COL_CTL,
-	    ETR_INT_COL_CTL_ENABLE | qb->qb_coalescing_time);
-
-	mtx_unlock(&qb->qb_bank_mtx);
-
-	while ((i = ffs(estat)) != 0) {
-		struct qat_ring *qr = &qb->qb_et_rings[--i];
-		estat &= ~(1 << i);
-		(void)qat_etr_ring_intr(sc, qb, qr);
-	}
-}
-
-void
-qat_arb_update(struct qat_softc *sc, struct qat_bank *qb)
-{
-
-	qat_arb_ringsrvarben_write_4(sc, qb->qb_bank,
-	    qb->qb_allocated_rings & 0xff);
-}
-
-static struct qat_sym_cookie *
-qat_crypto_alloc_sym_cookie(struct qat_crypto_bank *qcb)
-{
-	struct qat_sym_cookie *qsc;
-
-	mtx_lock(&qcb->qcb_bank_mtx);
-
-	if (qcb->qcb_symck_free_count == 0) {
-		mtx_unlock(&qcb->qcb_bank_mtx);
-		return NULL;
-	}
-
-	qsc = qcb->qcb_symck_free[--qcb->qcb_symck_free_count];
-
-	mtx_unlock(&qcb->qcb_bank_mtx);
-
-	return qsc;
-}
-
-static void
-qat_crypto_free_sym_cookie(struct qat_crypto_bank *qcb,
-    struct qat_sym_cookie *qsc)
-{
-	explicit_bzero(qsc->qsc_iv_buf, EALG_MAX_BLOCK_LEN);
-	explicit_bzero(qsc->qsc_auth_res, QAT_SYM_HASH_BUFFER_LEN);
-
-	mtx_lock(&qcb->qcb_bank_mtx);
-	qcb->qcb_symck_free[qcb->qcb_symck_free_count++] = qsc;
-	mtx_unlock(&qcb->qcb_bank_mtx);
-}
-
-void
-qat_memcpy_htobe64(void *dst, const void *src, size_t len)
-{
-	uint64_t *dst0 = dst;
-	const uint64_t *src0 = src;
-	size_t i;
-
-	MPASS(len % sizeof(*dst0) == 0);
-
-	for (i = 0; i < len / sizeof(*dst0); i++)
-		*(dst0 + i) = htobe64(*(src0 + i));
-}
-
-void
-qat_memcpy_htobe32(void *dst, const void *src, size_t len)
-{
-	uint32_t *dst0 = dst;
-	const uint32_t *src0 = src;
-	size_t i;
-
-	MPASS(len % sizeof(*dst0) == 0);
-
-	for (i = 0; i < len / sizeof(*dst0); i++)
-		*(dst0 + i) = htobe32(*(src0 + i));
-}
-
-void
-qat_memcpy_htobe(void *dst, const void *src, size_t len, uint32_t wordbyte)
-{
-	switch (wordbyte) {
-	case 4:
-		qat_memcpy_htobe32(dst, src, len);
-		break;
-	case 8:
-		qat_memcpy_htobe64(dst, src, len);
-		break;
-	default:
-		panic("invalid word size %u", wordbyte);
-	}
-}
-
-void
-qat_crypto_gmac_precompute(const struct qat_crypto_desc *desc,
-    const uint8_t *key, int klen, const struct qat_sym_hash_def *hash_def,
-    uint8_t *state)
-{
-	uint32_t ks[4 * (RIJNDAEL_MAXNR + 1)];
-	char zeros[AES_BLOCK_LEN];
-	int rounds;
-
-	memset(zeros, 0, sizeof(zeros));
-	rounds = rijndaelKeySetupEnc(ks, key, klen * NBBY);
-	rijndaelEncrypt(ks, rounds, zeros, state);
-	explicit_bzero(ks, sizeof(ks));
-}
-
-void
-qat_crypto_hmac_precompute(const struct qat_crypto_desc *desc,
-    const uint8_t *key, int klen, const struct qat_sym_hash_def *hash_def,
-    uint8_t *state1, uint8_t *state2)
-{
-	union authctx ctx;
-	const struct auth_hash *sah = hash_def->qshd_alg->qshai_sah;
-	uint32_t state_offset = hash_def->qshd_alg->qshai_state_offset;
-	uint32_t state_size = hash_def->qshd_alg->qshai_state_size;
-	uint32_t state_word = hash_def->qshd_alg->qshai_state_word;
-
-	hmac_init_ipad(sah, key, klen, &ctx);
-	qat_memcpy_htobe(state1, (uint8_t *)&ctx + state_offset, state_size,
-	    state_word);
-	hmac_init_opad(sah, key, klen, &ctx);
-	qat_memcpy_htobe(state2, (uint8_t *)&ctx + state_offset, state_size,
-	    state_word);
-	explicit_bzero(&ctx, sizeof(ctx));
-}
-
-static enum hw_cipher_algo
-qat_aes_cipher_algo(int klen)
-{
-	switch (klen) {
-	case HW_AES_128_KEY_SZ:
-		return HW_CIPHER_ALGO_AES128;
-	case HW_AES_192_KEY_SZ:
-		return HW_CIPHER_ALGO_AES192;
-	case HW_AES_256_KEY_SZ:
-		return HW_CIPHER_ALGO_AES256;
-	default:
-		panic("invalid key length %d", klen);
-	}
-}
-
-uint16_t
-qat_crypto_load_cipher_session(const struct qat_crypto_desc *desc,
-    const struct qat_session *qs)
-{
-	enum hw_cipher_algo algo;
-	enum hw_cipher_dir dir;
-	enum hw_cipher_convert key_convert;
-	enum hw_cipher_mode mode;
-
-	dir = desc->qcd_cipher_dir;
-	key_convert = HW_CIPHER_NO_CONVERT;
-	mode = qs->qs_cipher_mode;
-	switch (mode) {
-	case HW_CIPHER_CBC_MODE:
-	case HW_CIPHER_XTS_MODE:
-		algo = qs->qs_cipher_algo;
-
-		/*
-		 * AES decrypt key needs to be reversed.
-		 * Instead of reversing the key at session registration,
-		 * it is instead reversed on-the-fly by setting the KEY_CONVERT
-		 * bit here.
-		 */
-		if (desc->qcd_cipher_dir == HW_CIPHER_DECRYPT)
-			key_convert = HW_CIPHER_KEY_CONVERT;
-		break;
-	case HW_CIPHER_CTR_MODE:
-		algo = qs->qs_cipher_algo;
-		dir = HW_CIPHER_ENCRYPT;
-		break;
-	default:
-		panic("unhandled cipher mode %d", mode);
-		break;
-	}
-
-	return HW_CIPHER_CONFIG_BUILD(mode, algo, key_convert, dir);
-}
-
-uint16_t
-qat_crypto_load_auth_session(const struct qat_crypto_desc *desc,
-    const struct qat_session *qs, const struct qat_sym_hash_def **hash_def)
-{
-	enum qat_sym_hash_algorithm algo;
-
-	switch (qs->qs_auth_algo) {
-	case HW_AUTH_ALGO_SHA1:
-		algo = QAT_SYM_HASH_SHA1;
-		break;
-	case HW_AUTH_ALGO_SHA256:
-		algo = QAT_SYM_HASH_SHA256;
-		break;
-	case HW_AUTH_ALGO_SHA384:
-		algo = QAT_SYM_HASH_SHA384;
-		break;
-	case HW_AUTH_ALGO_SHA512:
-		algo = QAT_SYM_HASH_SHA512;
-		break;
-	case HW_AUTH_ALGO_GALOIS_128:
-		algo = QAT_SYM_HASH_AES_GCM;
-		break;
-	default:
-		panic("unhandled auth algorithm %d", qs->qs_auth_algo);
-		break;
-	}
-	*hash_def = &qat_sym_hash_defs[algo];
-
-	return HW_AUTH_CONFIG_BUILD(qs->qs_auth_mode,
-	    (*hash_def)->qshd_qat->qshqi_algo_enc,
-	    (*hash_def)->qshd_alg->qshai_digest_len);
-}
-
-struct qat_crypto_load_cb_arg {
-	struct qat_session *qs;
-	struct qat_sym_cookie *qsc;
-	struct cryptop *crp;
-	int error;
-};
-
-static int
-qat_crypto_populate_buf_list(struct buffer_list_desc *buffers,
-    bus_dma_segment_t *segs, int niseg, int noseg, int skip)
-{
-	struct flat_buffer_desc *flatbuf;
-	bus_addr_t addr;
-	bus_size_t len;
-	int iseg, oseg;
-
-	for (iseg = 0, oseg = noseg; iseg < niseg && oseg < QAT_MAXSEG;
-	    iseg++) {
-		addr = segs[iseg].ds_addr;
-		len = segs[iseg].ds_len;
-
-		if (skip > 0) {
-			if (skip < len) {
-				addr += skip;
-				len -= skip;
-				skip = 0;
-			} else {
-				skip -= len;
-				continue;
-			}
-		}
-
-		flatbuf = &buffers->flat_bufs[oseg++];
-		flatbuf->data_len_in_bytes = (uint32_t)len;
-		flatbuf->phy_buffer = (uint64_t)addr;
-	}
-	buffers->num_buffers = oseg;
-	return iseg < niseg ? E2BIG : 0;
-}
-
-static void
-qat_crypto_load_aadbuf_cb(void *_arg, bus_dma_segment_t *segs, int nseg,
-    int error)
-{
-	struct qat_crypto_load_cb_arg *arg;
-	struct qat_sym_cookie *qsc;
-
-	arg = _arg;
-	if (error != 0) {
-		arg->error = error;
-		return;
-	}
-
-	qsc = arg->qsc;
-	arg->error = qat_crypto_populate_buf_list(&qsc->qsc_buf_list, segs,
-	    nseg, 0, 0);
-}
-
-static void
-qat_crypto_load_buf_cb(void *_arg, bus_dma_segment_t *segs, int nseg,
-    int error)
-{
-	struct cryptop *crp;
-	struct qat_crypto_load_cb_arg *arg;
-	struct qat_session *qs;
-	struct qat_sym_cookie *qsc;
-	int noseg, skip;
-
-	arg = _arg;
-	if (error != 0) {
-		arg->error = error;
-		return;
-	}
-
-	crp = arg->crp;
-	qs = arg->qs;
-	qsc = arg->qsc;
-
-	if (qs->qs_auth_algo == HW_AUTH_ALGO_GALOIS_128) {
-		/* AAD was handled in qat_crypto_load(). */
-		skip = crp->crp_payload_start;
-		noseg = 0;
-	} else if (crp->crp_aad == NULL && crp->crp_aad_length > 0) {
-		skip = crp->crp_aad_start;
-		noseg = 0;
-	} else {
-		skip = crp->crp_payload_start;
-		noseg = crp->crp_aad == NULL ?
-		    0 : qsc->qsc_buf_list.num_buffers;
-	}
-	arg->error = qat_crypto_populate_buf_list(&qsc->qsc_buf_list, segs,
-	    nseg, noseg, skip);
-}
-
-static void
-qat_crypto_load_obuf_cb(void *_arg, bus_dma_segment_t *segs, int nseg,
-    int error)
-{
-	struct buffer_list_desc *ibufs, *obufs;
-	struct flat_buffer_desc *ibuf, *obuf;
-	struct cryptop *crp;
-	struct qat_crypto_load_cb_arg *arg;
-	struct qat_session *qs;
-	struct qat_sym_cookie *qsc;
-	int buflen, osegs, tocopy;
-
-	arg = _arg;
-	if (error != 0) {
-		arg->error = error;
-		return;
-	}
-
-	crp = arg->crp;
-	qs = arg->qs;
-	qsc = arg->qsc;
-
-	/*
-	 * The payload must start at the same offset in the output SG list as in
-	 * the input SG list.  Copy over SG entries from the input corresponding
-	 * to the AAD buffer.
-	 */
-	osegs = 0;
-	if (qs->qs_auth_algo != HW_AUTH_ALGO_GALOIS_128 &&
-	    crp->crp_aad_length > 0) {
-		tocopy = crp->crp_aad == NULL ?
-		    crp->crp_payload_start - crp->crp_aad_start :
-		    crp->crp_aad_length;
-
-		ibufs = &qsc->qsc_buf_list;
-		obufs = &qsc->qsc_obuf_list;
-		for (; osegs < ibufs->num_buffers && tocopy > 0; osegs++) {
-			ibuf = &ibufs->flat_bufs[osegs];
-			obuf = &obufs->flat_bufs[osegs];
-
-			obuf->phy_buffer = ibuf->phy_buffer;
-			buflen = imin(ibuf->data_len_in_bytes, tocopy);
-			obuf->data_len_in_bytes = buflen;
-			tocopy -= buflen;
-		}
-	}
-
-	arg->error = qat_crypto_populate_buf_list(&qsc->qsc_obuf_list, segs,
-	    nseg, osegs, crp->crp_payload_output_start);
-}
-
-static int
-qat_crypto_load(struct qat_session *qs, struct qat_sym_cookie *qsc,
-    struct qat_crypto_desc const *desc, struct cryptop *crp)
-{
-	struct qat_crypto_load_cb_arg arg;
-	int error;
-
-	crypto_read_iv(crp, qsc->qsc_iv_buf);
-
-	arg.crp = crp;
-	arg.qs = qs;
-	arg.qsc = qsc;
-	arg.error = 0;
-
-	error = 0;
-	if (qs->qs_auth_algo == HW_AUTH_ALGO_GALOIS_128 &&
-	    crp->crp_aad_length > 0) {
-		/*
-		 * The firmware expects AAD to be in a contiguous buffer and
-		 * padded to a multiple of 16 bytes.  To satisfy these
-		 * constraints we bounce the AAD into a per-request buffer.
-		 * There is a small limit on the AAD size so this is not too
-		 * onerous.
-		 */
-		memset(qsc->qsc_gcm_aad, 0, QAT_GCM_AAD_SIZE_MAX);
-		if (crp->crp_aad == NULL) {
-			crypto_copydata(crp, crp->crp_aad_start,
-			    crp->crp_aad_length, qsc->qsc_gcm_aad);
-		} else {
-			memcpy(qsc->qsc_gcm_aad, crp->crp_aad,
-			    crp->crp_aad_length);
-		}
-	} else if (crp->crp_aad != NULL) {
-		error = bus_dmamap_load(
-		    qsc->qsc_dma[QAT_SYM_DMA_AADBUF].qsd_dma_tag,
-		    qsc->qsc_dma[QAT_SYM_DMA_AADBUF].qsd_dmamap,
-		    crp->crp_aad, crp->crp_aad_length,
-		    qat_crypto_load_aadbuf_cb, &arg, BUS_DMA_NOWAIT);
-		if (error == 0)
-			error = arg.error;
-	}
-	if (error == 0) {
-		error = bus_dmamap_load_crp_buffer(
-		    qsc->qsc_dma[QAT_SYM_DMA_BUF].qsd_dma_tag,
-		    qsc->qsc_dma[QAT_SYM_DMA_BUF].qsd_dmamap,
-		    &crp->crp_buf, qat_crypto_load_buf_cb, &arg,
-		    BUS_DMA_NOWAIT);
-		if (error == 0)
-			error = arg.error;
-	}
-	if (error == 0 && CRYPTO_HAS_OUTPUT_BUFFER(crp)) {
-		error = bus_dmamap_load_crp_buffer(
-		    qsc->qsc_dma[QAT_SYM_DMA_OBUF].qsd_dma_tag,
-		    qsc->qsc_dma[QAT_SYM_DMA_OBUF].qsd_dmamap,
-		    &crp->crp_obuf, qat_crypto_load_obuf_cb, &arg,
-		    BUS_DMA_NOWAIT);
-		if (error == 0)
-			error = arg.error;
-	}
-	return error;
-}
-
-static inline struct qat_crypto_bank *
-qat_crypto_select_bank(struct qat_crypto *qcy)
-{
-	u_int cpuid = PCPU_GET(cpuid);
-
-	return &qcy->qcy_banks[cpuid % qcy->qcy_num_banks];
-}
-
-static int
-qat_crypto_setup_ring(struct qat_softc *sc, struct qat_crypto_bank *qcb)
-{
-	char *name;
-	int bank, curname, error, i, j;
-
-	bank = qcb->qcb_bank;
-	curname = 0;
-
-	name = qcb->qcb_ring_names[curname++];
-	snprintf(name, QAT_RING_NAME_SIZE, "bank%d sym_tx", bank);
-	error = qat_etr_setup_ring(sc, qcb->qcb_bank,
-	    sc->sc_hw.qhw_ring_sym_tx, QAT_NSYMREQ, sc->sc_hw.qhw_fw_req_size,
-	    NULL, NULL, name, &qcb->qcb_sym_tx);
-	if (error)
-		return error;
-
-	name = qcb->qcb_ring_names[curname++];
-	snprintf(name, QAT_RING_NAME_SIZE, "bank%d sym_rx", bank);
-	error = qat_etr_setup_ring(sc, qcb->qcb_bank,
-	    sc->sc_hw.qhw_ring_sym_rx, QAT_NSYMREQ, sc->sc_hw.qhw_fw_resp_size,
-	    qat_crypto_sym_rxintr, qcb, name, &qcb->qcb_sym_rx);
-	if (error)
-		return error;
-
-	for (i = 0; i < QAT_NSYMCOOKIE; i++) {
-		struct qat_dmamem *qdm = &qcb->qcb_symck_dmamems[i];
-		struct qat_sym_cookie *qsc;
-
-		error = qat_alloc_dmamem(sc, qdm, 1,
-		    sizeof(struct qat_sym_cookie), QAT_OPTIMAL_ALIGN);
-		if (error)
-			return error;
-
-		qsc = qdm->qdm_dma_vaddr;
-		qsc->qsc_self_dmamap = qdm->qdm_dma_map;
-		qsc->qsc_self_dma_tag = qdm->qdm_dma_tag;
-		qsc->qsc_bulk_req_params_buf_paddr =
-		    qdm->qdm_dma_seg.ds_addr + offsetof(struct qat_sym_cookie,
-		    qsc_bulk_cookie.qsbc_req_params_buf);
-		qsc->qsc_buffer_list_desc_paddr =
-		    qdm->qdm_dma_seg.ds_addr + offsetof(struct qat_sym_cookie,
-		    qsc_buf_list);
-		qsc->qsc_obuffer_list_desc_paddr =
-		    qdm->qdm_dma_seg.ds_addr + offsetof(struct qat_sym_cookie,
-		    qsc_obuf_list);
-		qsc->qsc_obuffer_list_desc_paddr =
-		    qdm->qdm_dma_seg.ds_addr + offsetof(struct qat_sym_cookie,
-		    qsc_obuf_list);
-		qsc->qsc_iv_buf_paddr =
-		    qdm->qdm_dma_seg.ds_addr + offsetof(struct qat_sym_cookie,
-		    qsc_iv_buf);
-		qsc->qsc_auth_res_paddr =
-		    qdm->qdm_dma_seg.ds_addr + offsetof(struct qat_sym_cookie,
-		    qsc_auth_res);
-		qsc->qsc_gcm_aad_paddr =
-		    qdm->qdm_dma_seg.ds_addr + offsetof(struct qat_sym_cookie,
-		    qsc_gcm_aad);
-		qsc->qsc_content_desc_paddr =
-		    qdm->qdm_dma_seg.ds_addr + offsetof(struct qat_sym_cookie,
-		    qsc_content_desc);
-		qcb->qcb_symck_free[i] = qsc;
-		qcb->qcb_symck_free_count++;
-
-		for (j = 0; j < QAT_SYM_DMA_COUNT; j++) {
-			error = bus_dma_tag_create(bus_get_dma_tag(sc->sc_dev),
-			    1, 0,		/* alignment, boundary */
-			    BUS_SPACE_MAXADDR,	/* lowaddr */
-			    BUS_SPACE_MAXADDR,	/* highaddr */
-			    NULL, NULL,		/* filter, filterarg */
-			    QAT_MAXLEN,		/* maxsize */
-			    QAT_MAXSEG,		/* nsegments */
-			    QAT_MAXLEN,		/* maxsegsize */
-			    BUS_DMA_COHERENT,	/* flags */
-			    NULL, NULL,		/* lockfunc, lockarg */
-			    &qsc->qsc_dma[j].qsd_dma_tag);
-			if (error != 0)
-				return error;
-			error = bus_dmamap_create(qsc->qsc_dma[j].qsd_dma_tag,
-			    BUS_DMA_COHERENT, &qsc->qsc_dma[j].qsd_dmamap);
-			if (error != 0)
-				return error;
-		}
-	}
-
-	return 0;
-}
-
-static int
-qat_crypto_bank_init(struct qat_softc *sc, struct qat_crypto_bank *qcb)
-{
-	mtx_init(&qcb->qcb_bank_mtx, "qcb bank", NULL, MTX_DEF);
-
-	return qat_crypto_setup_ring(sc, qcb);
-}
-
-static void
-qat_crypto_bank_deinit(struct qat_softc *sc, struct qat_crypto_bank *qcb)
-{
-	struct qat_dmamem *qdm;
-	struct qat_sym_cookie *qsc;
-	int i, j;
-
-	for (i = 0; i < QAT_NSYMCOOKIE; i++) {
-		qdm = &qcb->qcb_symck_dmamems[i];
-		qsc = qcb->qcb_symck_free[i];
-		for (j = 0; j < QAT_SYM_DMA_COUNT; j++) {
-			bus_dmamap_destroy(qsc->qsc_dma[j].qsd_dma_tag,
-			    qsc->qsc_dma[j].qsd_dmamap);
-			bus_dma_tag_destroy(qsc->qsc_dma[j].qsd_dma_tag);
-		}
-		qat_free_dmamem(sc, qdm);
-	}
-	qat_free_dmamem(sc, &qcb->qcb_sym_tx->qr_dma);
-	qat_free_dmamem(sc, &qcb->qcb_sym_rx->qr_dma);
-
-	mtx_destroy(&qcb->qcb_bank_mtx);
-}
-
-static int
-qat_crypto_init(struct qat_softc *sc)
-{
-	struct qat_crypto *qcy = &sc->sc_crypto;
-	struct sysctl_ctx_list *ctx;
-	struct sysctl_oid *oid;
-	struct sysctl_oid_list *children;
-	int bank, error, num_banks;
-
-	qcy->qcy_sc = sc;
-
-	if (sc->sc_hw.qhw_init_arb != NULL)
-		num_banks = imin(mp_ncpus, sc->sc_hw.qhw_num_banks);
-	else
-		num_banks = sc->sc_ae_num;
-
-	qcy->qcy_num_banks = num_banks;
-
-	qcy->qcy_banks =
-	    qat_alloc_mem(sizeof(struct qat_crypto_bank) * num_banks);
-
-	for (bank = 0; bank < num_banks; bank++) {
-		struct qat_crypto_bank *qcb = &qcy->qcy_banks[bank];
-		qcb->qcb_bank = bank;
-		error = qat_crypto_bank_init(sc, qcb);
-		if (error)
-			return error;
-	}
-
-	mtx_init(&qcy->qcy_crypto_mtx, "qcy crypto", NULL, MTX_DEF);
-
-	ctx = device_get_sysctl_ctx(sc->sc_dev);
-	oid = device_get_sysctl_tree(sc->sc_dev);
-	children = SYSCTL_CHILDREN(oid);
-	oid = SYSCTL_ADD_NODE(ctx, children, OID_AUTO, "stats",
-	    CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, "statistics");
-	children = SYSCTL_CHILDREN(oid);
-
-	sc->sc_gcm_aad_restarts = counter_u64_alloc(M_WAITOK);
-	SYSCTL_ADD_COUNTER_U64(ctx, children, OID_AUTO, "gcm_aad_restarts",
-	    CTLFLAG_RD, &sc->sc_gcm_aad_restarts,
-	    "GCM requests deferred due to AAD size change");
-	sc->sc_gcm_aad_updates = counter_u64_alloc(M_WAITOK);
-	SYSCTL_ADD_COUNTER_U64(ctx, children, OID_AUTO, "gcm_aad_updates",
-	    CTLFLAG_RD, &sc->sc_gcm_aad_updates,
-	    "GCM requests that required session state update");
-	sc->sc_ring_full_restarts = counter_u64_alloc(M_WAITOK);
-	SYSCTL_ADD_COUNTER_U64(ctx, children, OID_AUTO, "ring_full",
-	    CTLFLAG_RD, &sc->sc_ring_full_restarts,
-	    "Requests deferred due to in-flight max reached");
-	sc->sc_sym_alloc_failures = counter_u64_alloc(M_WAITOK);
-	SYSCTL_ADD_COUNTER_U64(ctx, children, OID_AUTO, "sym_alloc_failures",
-	    CTLFLAG_RD, &sc->sc_sym_alloc_failures,
-	    "Request allocation failures");
-
-	return 0;
-}
-
-static void
-qat_crypto_deinit(struct qat_softc *sc)
-{
-	struct qat_crypto *qcy = &sc->sc_crypto;
-	struct qat_crypto_bank *qcb;
-	int bank;
-
-	counter_u64_free(sc->sc_sym_alloc_failures);
-	counter_u64_free(sc->sc_ring_full_restarts);
-	counter_u64_free(sc->sc_gcm_aad_updates);
-	counter_u64_free(sc->sc_gcm_aad_restarts);
-
-	if (qcy->qcy_banks != NULL) {
-		for (bank = 0; bank < qcy->qcy_num_banks; bank++) {
-			qcb = &qcy->qcy_banks[bank];
-			qat_crypto_bank_deinit(sc, qcb);
-		}
-		qat_free_mem(qcy->qcy_banks);
-		mtx_destroy(&qcy->qcy_crypto_mtx);
-	}
-}
-
-static int
-qat_crypto_start(struct qat_softc *sc)
-{
-	struct qat_crypto *qcy;
-
-	qcy = &sc->sc_crypto;
-	qcy->qcy_cid = crypto_get_driverid(sc->sc_dev,
-	    sizeof(struct qat_session), CRYPTOCAP_F_HARDWARE);
-	if (qcy->qcy_cid < 0) {
-		device_printf(sc->sc_dev,
-		    "could not get opencrypto driver id\n");
-		return ENOENT;
-	}
-
-	return 0;
-}
-
-static void
-qat_crypto_stop(struct qat_softc *sc)
-{
-	struct qat_crypto *qcy;
-
-	qcy = &sc->sc_crypto;
-	if (qcy->qcy_cid >= 0)
-		(void)crypto_unregister_all(qcy->qcy_cid);
-}
-
-static void
-qat_crypto_sym_dma_unload(struct qat_sym_cookie *qsc, enum qat_sym_dma i)
-{
-	bus_dmamap_sync(qsc->qsc_dma[i].qsd_dma_tag, qsc->qsc_dma[i].qsd_dmamap,
-	    BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
-	bus_dmamap_unload(qsc->qsc_dma[i].qsd_dma_tag,
-	    qsc->qsc_dma[i].qsd_dmamap);
-}
-
-static int
-qat_crypto_sym_rxintr(struct qat_softc *sc, void *arg, void *msg)
-{
-	char icv[QAT_SYM_HASH_BUFFER_LEN];
-	struct qat_crypto_bank *qcb = arg;
-	struct qat_crypto *qcy;
-	struct qat_session *qs;
-	struct qat_sym_cookie *qsc;
-	struct qat_sym_bulk_cookie *qsbc;
-	struct cryptop *crp;
-	int error;
-	uint16_t auth_sz;
-	bool blocked;
-
-	qsc = *(void **)((uintptr_t)msg + sc->sc_hw.qhw_crypto_opaque_offset);
-
-	qsbc = &qsc->qsc_bulk_cookie;
-	qcy = qsbc->qsbc_crypto;
-	qs = qsbc->qsbc_session;
-	crp = qsbc->qsbc_cb_tag;
-
-	bus_dmamap_sync(qsc->qsc_self_dma_tag, qsc->qsc_self_dmamap,
-	    BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
-
-	if (crp->crp_aad != NULL)
-		qat_crypto_sym_dma_unload(qsc, QAT_SYM_DMA_AADBUF);
-	qat_crypto_sym_dma_unload(qsc, QAT_SYM_DMA_BUF);
-	if (CRYPTO_HAS_OUTPUT_BUFFER(crp))
-		qat_crypto_sym_dma_unload(qsc, QAT_SYM_DMA_OBUF);
-
-	error = 0;
-	if ((auth_sz = qs->qs_auth_mlen) != 0) {
-		if ((crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) != 0) {
-			crypto_copydata(crp, crp->crp_digest_start,
-			    auth_sz, icv);
-			if (timingsafe_bcmp(icv, qsc->qsc_auth_res,
-			    auth_sz) != 0) {
-				error = EBADMSG;
-			}
-		} else {
-			crypto_copyback(crp, crp->crp_digest_start,
-			    auth_sz, qsc->qsc_auth_res);
-		}
-	}
-
-	qat_crypto_free_sym_cookie(qcb, qsc);
-
-	blocked = false;
-	mtx_lock(&qs->qs_session_mtx);
-	MPASS(qs->qs_status & QAT_SESSION_STATUS_ACTIVE);
-	qs->qs_inflight--;
-	if (__predict_false(qs->qs_need_wakeup && qs->qs_inflight == 0)) {
-		blocked = true;
-		qs->qs_need_wakeup = false;
-	}
-	mtx_unlock(&qs->qs_session_mtx);
-
-	crp->crp_etype = error;
-	crypto_done(crp);
-
-	if (blocked)
-		crypto_unblock(qcy->qcy_cid, CRYPTO_SYMQ);
-
-	return 1;
-}
-
-static int
-qat_probesession(device_t dev, const struct crypto_session_params *csp)
-{
-	if ((csp->csp_flags & ~(CSP_F_SEPARATE_OUTPUT | CSP_F_SEPARATE_AAD)) !=
-	    0)
-		return EINVAL;
-
-	if (csp->csp_cipher_alg == CRYPTO_AES_XTS &&
-	    qat_lookup(dev)->qatp_chip == QAT_CHIP_C2XXX) {
-		/*
-		 * AES-XTS is not supported by the NanoQAT.
-		 */
-		return EINVAL;
-	}
-
-	switch (csp->csp_mode) {
-	case CSP_MODE_CIPHER:
-		switch (csp->csp_cipher_alg) {
-		case CRYPTO_AES_CBC:
-		case CRYPTO_AES_ICM:
-			if (csp->csp_ivlen != AES_BLOCK_LEN)
-				return EINVAL;
-			break;
-		case CRYPTO_AES_XTS:
-			if (csp->csp_ivlen != AES_XTS_IV_LEN)
-				return EINVAL;
-			break;
-		default:
-			return EINVAL;
-		}
-		break;
-	case CSP_MODE_DIGEST:
-		switch (csp->csp_auth_alg) {
-		case CRYPTO_SHA1:
-		case CRYPTO_SHA1_HMAC:
-		case CRYPTO_SHA2_256:
-		case CRYPTO_SHA2_256_HMAC:
-		case CRYPTO_SHA2_384:
-		case CRYPTO_SHA2_384_HMAC:
-		case CRYPTO_SHA2_512:
-		case CRYPTO_SHA2_512_HMAC:
-			break;
-		case CRYPTO_AES_NIST_GMAC:
-			if (csp->csp_ivlen != AES_GCM_IV_LEN)
-				return EINVAL;
-			break;
-		default:
-			return EINVAL;
-		}
-		break;
-	case CSP_MODE_AEAD:
-		switch (csp->csp_cipher_alg) {
-		case CRYPTO_AES_NIST_GCM_16:
-			break;
-		default:
-			return EINVAL;
-		}
-		break;
-	case CSP_MODE_ETA:
-		switch (csp->csp_auth_alg) {
-		case CRYPTO_SHA1_HMAC:
-		case CRYPTO_SHA2_256_HMAC:
-		case CRYPTO_SHA2_384_HMAC:
-		case CRYPTO_SHA2_512_HMAC:
-			switch (csp->csp_cipher_alg) {
-			case CRYPTO_AES_CBC:
-			case CRYPTO_AES_ICM:
-				if (csp->csp_ivlen != AES_BLOCK_LEN)
-					return EINVAL;
-				break;
-			case CRYPTO_AES_XTS:
-				if (csp->csp_ivlen != AES_XTS_IV_LEN)
-					return EINVAL;
-				break;
-			default:
-				return EINVAL;
-			}
-			break;
-		default:
-			return EINVAL;
-		}
-		break;
-	default:
-		return EINVAL;
-	}
-
-	return CRYPTODEV_PROBE_HARDWARE;
-}
-
-static int
-qat_newsession(device_t dev, crypto_session_t cses,
-    const struct crypto_session_params *csp)
-{
-	struct qat_crypto *qcy;
-	struct qat_dmamem *qdm;
-	struct qat_session *qs;
-	struct qat_softc *sc;
-	struct qat_crypto_desc *ddesc, *edesc;
-	int error, slices;
-
-	sc = device_get_softc(dev);
-	qs = crypto_get_driver_session(cses);
-	qcy = &sc->sc_crypto;
-
-	qdm = &qs->qs_desc_mem;
-	error = qat_alloc_dmamem(sc, qdm, QAT_MAXSEG,
-	    sizeof(struct qat_crypto_desc) * 2, QAT_OPTIMAL_ALIGN);
-	if (error != 0)
-		return error;
-
-	mtx_init(&qs->qs_session_mtx, "qs session", NULL, MTX_DEF);
-	qs->qs_aad_length = -1;
-
-	qs->qs_dec_desc = ddesc = qdm->qdm_dma_vaddr;
-	qs->qs_enc_desc = edesc = ddesc + 1;
-
-	ddesc->qcd_desc_paddr = qdm->qdm_dma_seg.ds_addr;
-	ddesc->qcd_hash_state_paddr = ddesc->qcd_desc_paddr +
-	    offsetof(struct qat_crypto_desc, qcd_hash_state_prefix_buf);
-	edesc->qcd_desc_paddr = qdm->qdm_dma_seg.ds_addr +
-	    sizeof(struct qat_crypto_desc);
-	edesc->qcd_hash_state_paddr = edesc->qcd_desc_paddr +
-	    offsetof(struct qat_crypto_desc, qcd_hash_state_prefix_buf);
-
-	qs->qs_status = QAT_SESSION_STATUS_ACTIVE;
-	qs->qs_inflight = 0;
-
-	qs->qs_cipher_key = csp->csp_cipher_key;
-	qs->qs_cipher_klen = csp->csp_cipher_klen;
-	qs->qs_auth_key = csp->csp_auth_key;
-	qs->qs_auth_klen = csp->csp_auth_klen;
-
-	switch (csp->csp_cipher_alg) {
-	case CRYPTO_AES_CBC:
-		qs->qs_cipher_algo = qat_aes_cipher_algo(csp->csp_cipher_klen);
-		qs->qs_cipher_mode = HW_CIPHER_CBC_MODE;
-		break;
-	case CRYPTO_AES_ICM:
-		qs->qs_cipher_algo = qat_aes_cipher_algo(csp->csp_cipher_klen);
-		qs->qs_cipher_mode = HW_CIPHER_CTR_MODE;
-		break;
-	case CRYPTO_AES_XTS:
-		qs->qs_cipher_algo =
-		    qat_aes_cipher_algo(csp->csp_cipher_klen / 2);
-		qs->qs_cipher_mode = HW_CIPHER_XTS_MODE;
-		break;
-	case CRYPTO_AES_NIST_GCM_16:
-		qs->qs_cipher_algo = qat_aes_cipher_algo(csp->csp_cipher_klen);
-		qs->qs_cipher_mode = HW_CIPHER_CTR_MODE;
-		qs->qs_auth_algo = HW_AUTH_ALGO_GALOIS_128;
-		qs->qs_auth_mode = HW_AUTH_MODE1;
-		break;
-	case 0:
-		break;
-	default:
-		panic("%s: unhandled cipher algorithm %d", __func__,
-		    csp->csp_cipher_alg);
-	}
-
-	switch (csp->csp_auth_alg) {
-	case CRYPTO_SHA1_HMAC:
-		qs->qs_auth_algo = HW_AUTH_ALGO_SHA1;
-		qs->qs_auth_mode = HW_AUTH_MODE1;
-		break;
-	case CRYPTO_SHA1:
-		qs->qs_auth_algo = HW_AUTH_ALGO_SHA1;
-		qs->qs_auth_mode = HW_AUTH_MODE0;
-		break;
-	case CRYPTO_SHA2_256_HMAC:
-		qs->qs_auth_algo = HW_AUTH_ALGO_SHA256;
-		qs->qs_auth_mode = HW_AUTH_MODE1;
-		break;
-	case CRYPTO_SHA2_256:
-		qs->qs_auth_algo = HW_AUTH_ALGO_SHA256;
-		qs->qs_auth_mode = HW_AUTH_MODE0;
-		break;
-	case CRYPTO_SHA2_384_HMAC:
-		qs->qs_auth_algo = HW_AUTH_ALGO_SHA384;
-		qs->qs_auth_mode = HW_AUTH_MODE1;
-		break;
-	case CRYPTO_SHA2_384:
-		qs->qs_auth_algo = HW_AUTH_ALGO_SHA384;
-		qs->qs_auth_mode = HW_AUTH_MODE0;
-		break;
-	case CRYPTO_SHA2_512_HMAC:
-		qs->qs_auth_algo = HW_AUTH_ALGO_SHA512;
-		qs->qs_auth_mode = HW_AUTH_MODE1;
-		break;
-	case CRYPTO_SHA2_512:
-		qs->qs_auth_algo = HW_AUTH_ALGO_SHA512;
-		qs->qs_auth_mode = HW_AUTH_MODE0;
-		break;
-	case CRYPTO_AES_NIST_GMAC:
-		qs->qs_cipher_algo = qat_aes_cipher_algo(csp->csp_auth_klen);
-		qs->qs_cipher_mode = HW_CIPHER_CTR_MODE;
-		qs->qs_auth_algo = HW_AUTH_ALGO_GALOIS_128;
-		qs->qs_auth_mode = HW_AUTH_MODE1;
-
-		qs->qs_cipher_key = qs->qs_auth_key;
-		qs->qs_cipher_klen = qs->qs_auth_klen;
-		break;
-	case 0:
-		break;
-	default:
-		panic("%s: unhandled auth algorithm %d", __func__,
-		    csp->csp_auth_alg);
-	}
-
-	slices = 0;
-	switch (csp->csp_mode) {
-	case CSP_MODE_AEAD:
-	case CSP_MODE_ETA:
-		/* auth then decrypt */
-		ddesc->qcd_slices[0] = FW_SLICE_AUTH;
-		ddesc->qcd_slices[1] = FW_SLICE_CIPHER;
-		ddesc->qcd_cipher_dir = HW_CIPHER_DECRYPT;
-		ddesc->qcd_cmd_id = FW_LA_CMD_HASH_CIPHER;
-		/* encrypt then auth */
-		edesc->qcd_slices[0] = FW_SLICE_CIPHER;
-		edesc->qcd_slices[1] = FW_SLICE_AUTH;
-		edesc->qcd_cipher_dir = HW_CIPHER_ENCRYPT;
-		edesc->qcd_cmd_id = FW_LA_CMD_CIPHER_HASH;
-		slices = 2;
-		break;
-	case CSP_MODE_CIPHER:
-		/* decrypt */
-		ddesc->qcd_slices[0] = FW_SLICE_CIPHER;
-		ddesc->qcd_cipher_dir = HW_CIPHER_DECRYPT;
-		ddesc->qcd_cmd_id = FW_LA_CMD_CIPHER;
-		/* encrypt */
-		edesc->qcd_slices[0] = FW_SLICE_CIPHER;
-		edesc->qcd_cipher_dir = HW_CIPHER_ENCRYPT;
-		edesc->qcd_cmd_id = FW_LA_CMD_CIPHER;
-		slices = 1;
-		break;
-	case CSP_MODE_DIGEST:
-		if (qs->qs_auth_algo == HW_AUTH_ALGO_GALOIS_128) {
-			/* auth then decrypt */
-			ddesc->qcd_slices[0] = FW_SLICE_AUTH;
-			ddesc->qcd_slices[1] = FW_SLICE_CIPHER;
-			ddesc->qcd_cipher_dir = HW_CIPHER_DECRYPT;
-			ddesc->qcd_cmd_id = FW_LA_CMD_HASH_CIPHER;
-			/* encrypt then auth */
-			edesc->qcd_slices[0] = FW_SLICE_CIPHER;
-			edesc->qcd_slices[1] = FW_SLICE_AUTH;
-			edesc->qcd_cipher_dir = HW_CIPHER_ENCRYPT;
-			edesc->qcd_cmd_id = FW_LA_CMD_CIPHER_HASH;
-			slices = 2;
-		} else {
-			ddesc->qcd_slices[0] = FW_SLICE_AUTH;
-			ddesc->qcd_cmd_id = FW_LA_CMD_AUTH;
-			edesc->qcd_slices[0] = FW_SLICE_AUTH;
-			edesc->qcd_cmd_id = FW_LA_CMD_AUTH;
-			slices = 1;
-		}
-		break;
-	default:
-		panic("%s: unhandled crypto algorithm %d, %d", __func__,
-		    csp->csp_cipher_alg, csp->csp_auth_alg);
-	}
-	ddesc->qcd_slices[slices] = FW_SLICE_DRAM_WR;
-	edesc->qcd_slices[slices] = FW_SLICE_DRAM_WR;
-
-	qcy->qcy_sc->sc_hw.qhw_crypto_setup_desc(qcy, qs, ddesc);
-	qcy->qcy_sc->sc_hw.qhw_crypto_setup_desc(qcy, qs, edesc);
-
-	if (csp->csp_auth_mlen != 0)
-		qs->qs_auth_mlen = csp->csp_auth_mlen;
-	else
-		qs->qs_auth_mlen = edesc->qcd_auth_sz;
-
-	/* Compute the GMAC by specifying a null cipher payload.
*/ - if (csp->csp_auth_alg == CRYPTO_AES_NIST_GMAC) - ddesc->qcd_cmd_id = edesc->qcd_cmd_id = FW_LA_CMD_AUTH; - - return 0; -} - -static void -qat_crypto_clear_desc(struct qat_crypto_desc *desc) -{ - explicit_bzero(desc->qcd_content_desc, sizeof(desc->qcd_content_desc)); - explicit_bzero(desc->qcd_hash_state_prefix_buf, - sizeof(desc->qcd_hash_state_prefix_buf)); - explicit_bzero(desc->qcd_req_cache, sizeof(desc->qcd_req_cache)); -} - -static void -qat_freesession(device_t dev, crypto_session_t cses) -{ - struct qat_session *qs; - - qs = crypto_get_driver_session(cses); - KASSERT(qs->qs_inflight == 0, - ("%s: session %p has requests in flight", __func__, qs)); - - qat_crypto_clear_desc(qs->qs_enc_desc); - qat_crypto_clear_desc(qs->qs_dec_desc); - qat_free_dmamem(device_get_softc(dev), &qs->qs_desc_mem); - mtx_destroy(&qs->qs_session_mtx); -} - -static int -qat_process(device_t dev, struct cryptop *crp, int hint) -{ - struct qat_crypto *qcy; - struct qat_crypto_bank *qcb; - struct qat_crypto_desc const *desc; - struct qat_session *qs; - struct qat_softc *sc; - struct qat_sym_cookie *qsc; - struct qat_sym_bulk_cookie *qsbc; - int error; - - sc = device_get_softc(dev); - qcy = &sc->sc_crypto; - qs = crypto_get_driver_session(crp->crp_session); - qsc = NULL; - - if (__predict_false(crypto_buffer_len(&crp->crp_buf) > QAT_MAXLEN)) { - error = E2BIG; - goto fail1; - } - - mtx_lock(&qs->qs_session_mtx); - if (qs->qs_auth_algo == HW_AUTH_ALGO_GALOIS_128) { - if (crp->crp_aad_length > QAT_GCM_AAD_SIZE_MAX) { - error = E2BIG; - mtx_unlock(&qs->qs_session_mtx); - goto fail1; - } - - /* - * The firmware interface for GCM annoyingly requires the AAD - * size to be stored in the session's content descriptor, which - * is not really meant to be updated after session - * initialization. 
For IPSec the AAD size is fixed so this is - * not much of a problem in practice, but we have to catch AAD - * size updates here so that the device code can safely update - * the session's recorded AAD size. - */ - if (__predict_false(crp->crp_aad_length != qs->qs_aad_length)) { - if (qs->qs_inflight == 0) { - if (qs->qs_aad_length != -1) { - counter_u64_add(sc->sc_gcm_aad_updates, - 1); - } - qs->qs_aad_length = crp->crp_aad_length; - } else { - qs->qs_need_wakeup = true; - mtx_unlock(&qs->qs_session_mtx); - counter_u64_add(sc->sc_gcm_aad_restarts, 1); - error = ERESTART; - goto fail1; - } - } - } - qs->qs_inflight++; - mtx_unlock(&qs->qs_session_mtx); - - qcb = qat_crypto_select_bank(qcy); - - qsc = qat_crypto_alloc_sym_cookie(qcb); - if (qsc == NULL) { - counter_u64_add(sc->sc_sym_alloc_failures, 1); - error = ENOBUFS; - goto fail2; - } - - if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) - desc = qs->qs_enc_desc; - else - desc = qs->qs_dec_desc; - - error = qat_crypto_load(qs, qsc, desc, crp); - if (error != 0) - goto fail2; - - qsbc = &qsc->qsc_bulk_cookie; - qsbc->qsbc_crypto = qcy; - qsbc->qsbc_session = qs; - qsbc->qsbc_cb_tag = crp; - - sc->sc_hw.qhw_crypto_setup_req_params(qcb, qs, desc, qsc, crp); - - if (crp->crp_aad != NULL) { - bus_dmamap_sync(qsc->qsc_dma[QAT_SYM_DMA_AADBUF].qsd_dma_tag, - qsc->qsc_dma[QAT_SYM_DMA_AADBUF].qsd_dmamap, - BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); - } - bus_dmamap_sync(qsc->qsc_dma[QAT_SYM_DMA_BUF].qsd_dma_tag, - qsc->qsc_dma[QAT_SYM_DMA_BUF].qsd_dmamap, - BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); - if (CRYPTO_HAS_OUTPUT_BUFFER(crp)) { - bus_dmamap_sync(qsc->qsc_dma[QAT_SYM_DMA_OBUF].qsd_dma_tag, - qsc->qsc_dma[QAT_SYM_DMA_OBUF].qsd_dmamap, - BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); - } - bus_dmamap_sync(qsc->qsc_self_dma_tag, qsc->qsc_self_dmamap, - BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); - - error = qat_etr_put_msg(sc, qcb->qcb_sym_tx, - (uint32_t *)qsbc->qsbc_msg); - if (error) - goto fail2; - - return 0; - 
-fail2: - if (qsc) - qat_crypto_free_sym_cookie(qcb, qsc); - mtx_lock(&qs->qs_session_mtx); - qs->qs_inflight--; - mtx_unlock(&qs->qs_session_mtx); -fail1: - crp->crp_etype = error; - crypto_done(crp); - return 0; -} - -static device_method_t qat_methods[] = { - /* Device interface */ - DEVMETHOD(device_probe, qat_probe), - DEVMETHOD(device_attach, qat_attach), - DEVMETHOD(device_detach, qat_detach), - - /* Cryptodev interface */ - DEVMETHOD(cryptodev_probesession, qat_probesession), - DEVMETHOD(cryptodev_newsession, qat_newsession), - DEVMETHOD(cryptodev_freesession, qat_freesession), - DEVMETHOD(cryptodev_process, qat_process), - - DEVMETHOD_END -}; - -static driver_t qat_driver = { - .name = "qat", - .methods = qat_methods, - .size = sizeof(struct qat_softc), -}; - -DRIVER_MODULE(qat, pci, qat_driver, 0, 0); -MODULE_VERSION(qat, 1); -MODULE_DEPEND(qat, crypto, 1, 1, 1); -MODULE_DEPEND(qat, pci, 1, 1, 1); Index: sys/dev/qat/qat/qat_ocf.c =================================================================== --- /dev/null +++ sys/dev/qat/qat/qat_ocf.c @@ -0,0 +1,1228 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/* System headers */ +#include +#include +#include +#include +#include +#include +#include +#include + +/* Cryptodev headers */ +#include +#include "cryptodev_if.h" + +/* QAT specific headers */ +#include "cpa.h" +#include "cpa_cy_im.h" +#include "cpa_cy_sym_dp.h" +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "lac_sym_hash_defs.h" +#include "lac_sym_qat_hash_defs_lookup.h" + +/* To get only IRQ instances */ +#include "icp_accel_devices.h" +#include "icp_adf_accel_mgr.h" +#include "lac_sal_types.h" + +/* QAT OCF specific headers */ +#include "qat_ocf_mem_pool.h" +#include "qat_ocf_utils.h" + +#define QAT_OCF_MAX_INSTANCES (256) +#define QAT_OCF_SESSION_WAIT_TIMEOUT_MS (1000) + +MALLOC_DEFINE(M_QAT_OCF, "qat_ocf", "qat_ocf(4) memory allocations"); + +/* QAT OCF internal 
structures */ +struct qat_ocf_softc { + device_t sc_dev; + int32_t cryptodev_id; + struct qat_ocf_instance cyInstHandles[QAT_OCF_MAX_INSTANCES]; + int32_t numCyInstances; +}; + +/* Function definitions */ +static void qat_ocf_freesession(device_t dev, crypto_session_t cses); +static int qat_ocf_probesession(device_t dev, + const struct crypto_session_params *csp); +static int qat_ocf_newsession(device_t dev, + crypto_session_t cses, + const struct crypto_session_params *csp); +static int qat_ocf_attach(device_t dev); +static int qat_ocf_detach(device_t dev); + +static void +symDpCallback(CpaCySymDpOpData *pOpData, + CpaStatus result, + CpaBoolean verifyResult) +{ + struct qat_ocf_cookie *qat_cookie; + struct cryptop *crp; + struct qat_ocf_dsession *qat_dsession = NULL; + struct qat_ocf_session *qat_session = NULL; + struct qat_ocf_instance *qat_instance = NULL; + CpaStatus status; + int rc = 0; + + qat_cookie = (struct qat_ocf_cookie *)pOpData->pCallbackTag; + if (!qat_cookie) + return; + + crp = qat_cookie->crp_op; + + qat_dsession = crypto_get_driver_session(crp->crp_session); + qat_instance = qat_dsession->qatInstance; + + status = qat_ocf_cookie_dma_post_sync(crp, pOpData); + if (CPA_STATUS_SUCCESS != status) { + rc = EIO; + goto exit; + } + + status = qat_ocf_cookie_dma_unload(crp, pOpData); + if (CPA_STATUS_SUCCESS != status) { + rc = EIO; + goto exit; + } + + /* Verify result */ + if (CPA_STATUS_SUCCESS != result) { + rc = EBADMSG; + goto exit; + } + + /* Verify digest by FW (GCM and CCM only) */ + if (CPA_TRUE != verifyResult) { + rc = EBADMSG; + goto exit; + } + + if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) + qat_session = &qat_dsession->encSession; + else + qat_session = &qat_dsession->decSession; + + /* Copy back digest result if it's stored in separated buffer */ + if (pOpData->digestResult && qat_session->authLen > 0) { + if ((crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) != 0) { + char icv[QAT_OCF_MAX_DIGEST] = { 0 }; + crypto_copydata(crp, + 
crp->crp_digest_start, + qat_session->authLen, + icv); + if (timingsafe_bcmp(icv, + qat_cookie->qat_ocf_digest, + qat_session->authLen) != 0) { + rc = EBADMSG; + goto exit; + } + } else { + crypto_copyback(crp, + crp->crp_digest_start, + qat_session->authLen, + qat_cookie->qat_ocf_digest); + } + } + +exit: + qat_ocf_cookie_free(qat_instance, qat_cookie); + crp->crp_etype = rc; + crypto_done(crp); + + return; +} + +static inline CpaPhysicalAddr +qatVirtToPhys(void *virtAddr) +{ + return (CpaPhysicalAddr)vtophys(virtAddr); +} + +static int +qat_ocf_probesession(device_t dev, const struct crypto_session_params *csp) +{ + if ((csp->csp_flags & ~(CSP_F_SEPARATE_OUTPUT | CSP_F_SEPARATE_AAD)) != + 0) { + return EINVAL; + } + + switch (csp->csp_mode) { + case CSP_MODE_CIPHER: + switch (csp->csp_cipher_alg) { + case CRYPTO_AES_CBC: + case CRYPTO_AES_ICM: + if (csp->csp_ivlen != AES_BLOCK_LEN) + return EINVAL; + break; + case CRYPTO_AES_XTS: + if (csp->csp_ivlen != AES_XTS_IV_LEN) + return EINVAL; + break; + default: + return EINVAL; + } + break; + case CSP_MODE_DIGEST: + switch (csp->csp_auth_alg) { + case CRYPTO_SHA1: + case CRYPTO_SHA1_HMAC: + case CRYPTO_SHA2_256: + case CRYPTO_SHA2_256_HMAC: + case CRYPTO_SHA2_384: + case CRYPTO_SHA2_384_HMAC: + case CRYPTO_SHA2_512: + case CRYPTO_SHA2_512_HMAC: + break; + case CRYPTO_AES_NIST_GMAC: + if (csp->csp_ivlen != AES_GCM_IV_LEN) + return EINVAL; + break; + default: + return EINVAL; + } + break; + case CSP_MODE_AEAD: + switch (csp->csp_cipher_alg) { + case CRYPTO_AES_NIST_GCM_16: + if (csp->csp_ivlen != AES_GCM_IV_LEN) + return EINVAL; + break; + default: + return EINVAL; + } + break; + case CSP_MODE_ETA: + switch (csp->csp_auth_alg) { + case CRYPTO_SHA1_HMAC: + case CRYPTO_SHA2_256_HMAC: + case CRYPTO_SHA2_384_HMAC: + case CRYPTO_SHA2_512_HMAC: + switch (csp->csp_cipher_alg) { + case CRYPTO_AES_CBC: + case CRYPTO_AES_ICM: + if (csp->csp_ivlen != AES_BLOCK_LEN) + return EINVAL; + break; + case CRYPTO_AES_XTS: + if 
(csp->csp_ivlen != AES_XTS_IV_LEN) + return EINVAL; + break; + default: + return EINVAL; + } + break; + default: + return EINVAL; + } + break; + default: + return EINVAL; + } + + return CRYPTODEV_PROBE_HARDWARE; +} + +static CpaStatus +qat_ocf_session_init(device_t dev, + struct cryptop *crp, + struct qat_ocf_instance *qat_instance, + struct qat_ocf_session *qat_ssession) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + /* Cryptodev structures */ + crypto_session_t cses; + const struct crypto_session_params *csp; + /* DP API Session configuration */ + CpaCySymSessionSetupData sessionSetupData = { 0 }; + CpaCySymSessionCtx sessionCtx = NULL; + Cpa32U sessionCtxSize = 0; + + cses = crp->crp_session; + if (NULL == cses) { + device_printf(dev, "no crypto session in cryptodev request\n"); + return CPA_STATUS_FAIL; + } + + csp = crypto_get_params(cses); + if (NULL == csp) { + device_printf(dev, "no session parameters in cryptodev session\n"); + return CPA_STATUS_FAIL; + } + + /* Common fields */ + sessionSetupData.sessionPriority = CPA_CY_PRIORITY_HIGH; + /* Cipher key */ + if (crp->crp_cipher_key) + sessionSetupData.cipherSetupData.pCipherKey = + crp->crp_cipher_key; + else + sessionSetupData.cipherSetupData.pCipherKey = + csp->csp_cipher_key; + sessionSetupData.cipherSetupData.cipherKeyLenInBytes = + csp->csp_cipher_klen; + + /* Auth key */ + if (crp->crp_auth_key) + sessionSetupData.hashSetupData.authModeSetupData.authKey = + crp->crp_auth_key; + else + sessionSetupData.hashSetupData.authModeSetupData.authKey = + csp->csp_auth_key; + sessionSetupData.hashSetupData.authModeSetupData.authKeyLenInBytes = + csp->csp_auth_klen; + + qat_ssession->aadLen = crp->crp_aad_length; + if (CPA_TRUE == is_sep_aad_supported(csp)) + sessionSetupData.hashSetupData.authModeSetupData.aadLenInBytes = + crp->crp_aad_length; + else + sessionSetupData.hashSetupData.authModeSetupData.aadLenInBytes = + 0; + + /* Just set up the algorithm - regardless of mode */ + if (csp->csp_cipher_alg) { + 
sessionSetupData.symOperation = CPA_CY_SYM_OP_CIPHER; + + switch (csp->csp_cipher_alg) { + case CRYPTO_AES_CBC: + sessionSetupData.cipherSetupData.cipherAlgorithm = + CPA_CY_SYM_CIPHER_AES_CBC; + break; + case CRYPTO_AES_ICM: + sessionSetupData.cipherSetupData.cipherAlgorithm = + CPA_CY_SYM_CIPHER_AES_CTR; + break; + case CRYPTO_AES_XTS: + sessionSetupData.cipherSetupData.cipherAlgorithm = + CPA_CY_SYM_CIPHER_AES_XTS; + break; + case CRYPTO_AES_NIST_GCM_16: + sessionSetupData.cipherSetupData.cipherAlgorithm = + CPA_CY_SYM_CIPHER_AES_GCM; + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_AES_GCM; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + break; + default: + device_printf(dev, + "cipher_alg: %d not supported\n", + csp->csp_cipher_alg); + status = CPA_STATUS_UNSUPPORTED; + goto fail; + } + } + + if (csp->csp_auth_alg) { + switch (csp->csp_auth_alg) { + case CRYPTO_SHA1_HMAC: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA1; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + break; + case CRYPTO_SHA1: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA1; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_PLAIN; + break; + + case CRYPTO_SHA2_256_HMAC: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA256; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + break; + case CRYPTO_SHA2_256: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA256; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_PLAIN; + break; + + case CRYPTO_SHA2_224_HMAC: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA224; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + break; + case CRYPTO_SHA2_224: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA224; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_PLAIN; + 
break; + + case CRYPTO_SHA2_384_HMAC: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA384; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + break; + case CRYPTO_SHA2_384: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA384; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_PLAIN; + break; + + case CRYPTO_SHA2_512_HMAC: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA512; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + break; + case CRYPTO_SHA2_512: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA512; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_PLAIN; + break; + case CRYPTO_AES_NIST_GMAC: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_AES_GMAC; + break; + default: + status = CPA_STATUS_UNSUPPORTED; + goto fail; + } + } /* csp->csp_auth_alg */ + + /* Setting digest-length if no cipher-only mode is set */ + if (csp->csp_mode != CSP_MODE_CIPHER) { + lac_sym_qat_hash_defs_t *pHashDefsInfo = NULL; + if (csp->csp_auth_mlen) { + sessionSetupData.hashSetupData.digestResultLenInBytes = + csp->csp_auth_mlen; + qat_ssession->authLen = csp->csp_auth_mlen; + } else { + LacSymQat_HashDefsLookupGet( + qat_instance->cyInstHandle, + sessionSetupData.hashSetupData.hashAlgorithm, + &pHashDefsInfo); + if (NULL == pHashDefsInfo) { + device_printf( + dev, + "unable to find corresponding hash data\n"); + status = CPA_STATUS_UNSUPPORTED; + goto fail; + } + sessionSetupData.hashSetupData.digestResultLenInBytes = + pHashDefsInfo->algInfo->digestLength; + qat_ssession->authLen = + pHashDefsInfo->algInfo->digestLength; + } + sessionSetupData.verifyDigest = CPA_FALSE; + } + + switch (csp->csp_mode) { + case CSP_MODE_AEAD: + sessionSetupData.symOperation = + CPA_CY_SYM_OP_ALGORITHM_CHAINING; + /* Place the digest result in a buffer unrelated to srcBuffer */ + sessionSetupData.digestIsAppended = 
CPA_TRUE; + /* For GCM and CCM driver forces to verify digest on HW */ + sessionSetupData.verifyDigest = CPA_TRUE; + if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT; + sessionSetupData.algChainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH; + } else { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_DECRYPT; + sessionSetupData.algChainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER; + } + break; + case CSP_MODE_ETA: + sessionSetupData.symOperation = + CPA_CY_SYM_OP_ALGORITHM_CHAINING; + /* Place the digest result in a buffer unrelated to srcBuffer */ + sessionSetupData.digestIsAppended = CPA_FALSE; + /* Due to FW limitation to verify only appended MACs */ + sessionSetupData.verifyDigest = CPA_FALSE; + if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT; + sessionSetupData.algChainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH; + } else { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_DECRYPT; + sessionSetupData.algChainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER; + } + break; + case CSP_MODE_CIPHER: + if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT; + } else { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_DECRYPT; + } + sessionSetupData.symOperation = CPA_CY_SYM_OP_CIPHER; + break; + case CSP_MODE_DIGEST: + sessionSetupData.symOperation = CPA_CY_SYM_OP_HASH; + if (csp->csp_auth_alg == CRYPTO_AES_NIST_GMAC) { + sessionSetupData.symOperation = + CPA_CY_SYM_OP_ALGORITHM_CHAINING; + /* GMAC is always encrypt */ + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT; + sessionSetupData.algChainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH; + 
sessionSetupData.cipherSetupData.cipherAlgorithm = + CPA_CY_SYM_CIPHER_AES_GCM; + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_AES_GMAC; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + /* Same key for cipher and auth */ + sessionSetupData.cipherSetupData.pCipherKey = + csp->csp_auth_key; + sessionSetupData.cipherSetupData.cipherKeyLenInBytes = + csp->csp_auth_klen; + /* Generated GMAC stored in separated buffer */ + sessionSetupData.digestIsAppended = CPA_FALSE; + /* Digest verification not allowed in GMAC case */ + sessionSetupData.verifyDigest = CPA_FALSE; + /* No AAD allowed */ + sessionSetupData.hashSetupData.authModeSetupData + .aadLenInBytes = 0; + } else { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT; + sessionSetupData.symOperation = CPA_CY_SYM_OP_HASH; + sessionSetupData.digestIsAppended = CPA_FALSE; + } + break; + default: + device_printf(dev, + "%s: unhandled crypto algorithm %d, %d\n", + __func__, + csp->csp_cipher_alg, + csp->csp_auth_alg); + status = CPA_STATUS_FAIL; + goto fail; + } + + /* Extracting session size */ + status = cpaCySymSessionCtxGetSize(qat_instance->cyInstHandle, + &sessionSetupData, + &sessionCtxSize); + if (CPA_STATUS_SUCCESS != status) { + device_printf(dev, "unable to get session size\n"); + goto fail; + } + + /* Allocating contiguous memory for session */ + sessionCtx = contigmalloc(sessionCtxSize, + M_QAT_OCF, + M_NOWAIT, + 0, + ~1UL, + 1 << (bsrl(sessionCtxSize - 1) + 1), + 0); + if (NULL == sessionCtx) { + device_printf(dev, "unable to allocate memory for session\n"); + status = CPA_STATUS_RESOURCE; + goto fail; + } + + status = cpaCySymDpInitSession(qat_instance->cyInstHandle, + &sessionSetupData, + sessionCtx); + if (CPA_STATUS_SUCCESS != status) { + device_printf(dev, "session initialization failed\n"); + goto fail; + } + + /* NOTE: lets keep double session (both directions) approach to overcome + * lack of direction update in FBSD 
QAT. + */ + qat_ssession->sessionCtx = sessionCtx; + qat_ssession->sessionCtxSize = sessionCtxSize; + + return CPA_STATUS_SUCCESS; + +fail: + /* Release resources if any */ + if (sessionCtx) + contigfree(sessionCtx, sessionCtxSize, M_QAT_OCF); + + return status; +} + +static int +qat_ocf_newsession(device_t dev, + crypto_session_t cses, + const struct crypto_session_params *csp) +{ + /* Cryptodev QAT structures */ + struct qat_ocf_softc *qat_softc; + struct qat_ocf_dsession *qat_dsession; + struct qat_ocf_instance *qat_instance; + u_int cpu_id = PCPU_GET(cpuid); + + /* Create cryptodev session */ + qat_softc = device_get_softc(dev); + qat_instance = + &qat_softc->cyInstHandles[cpu_id % qat_softc->numCyInstances]; + qat_dsession = crypto_get_driver_session(cses); + if (NULL == qat_dsession) { + device_printf(dev, "Unable to create new session\n"); + return (EINVAL); + } + + /* Add only instance at this point remaining operations moved to + * lazy session init */ + qat_dsession->qatInstance = qat_instance; + + return 0; +} + +static CpaStatus +qat_ocf_remove_session(device_t dev, + CpaInstanceHandle cyInstHandle, + struct qat_ocf_session *qat_session) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + if (NULL == qat_session->sessionCtx) + return CPA_STATUS_SUCCESS; + + /* User callback is executed right before decrementing pending + * callback atomic counter. To avoid removing session rejection + * we have to wait a very short while for counter update + * after call back execution. 
*/ + status = qat_ocf_wait_for_session(qat_session->sessionCtx, + QAT_OCF_SESSION_WAIT_TIMEOUT_MS); + if (CPA_STATUS_SUCCESS != status) { + device_printf(dev, "waiting for session un-busy failed\n"); + return CPA_STATUS_FAIL; + } + + status = cpaCySymDpRemoveSession(cyInstHandle, qat_session->sessionCtx); + if (CPA_STATUS_SUCCESS != status) { + device_printf(dev, "error while removing session\n"); + return CPA_STATUS_FAIL; + } + + explicit_bzero(qat_session->sessionCtx, qat_session->sessionCtxSize); + contigfree(qat_session->sessionCtx, + qat_session->sessionCtxSize, + M_QAT_OCF); + qat_session->sessionCtx = NULL; + qat_session->sessionCtxSize = 0; + + return CPA_STATUS_SUCCESS; +} + +static void +qat_ocf_freesession(device_t dev, crypto_session_t cses) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + struct qat_ocf_dsession *qat_dsession = NULL; + struct qat_ocf_instance *qat_instance = NULL; + + qat_dsession = crypto_get_driver_session(cses); + qat_instance = qat_dsession->qatInstance; + mtx_lock(&qat_instance->cyInstMtx); + status = qat_ocf_remove_session(dev, + qat_dsession->qatInstance->cyInstHandle, + &qat_dsession->encSession); + if (CPA_STATUS_SUCCESS != status) + device_printf(dev, "unable to remove encrypt session\n"); + status = qat_ocf_remove_session(dev, + qat_dsession->qatInstance->cyInstHandle, + &qat_dsession->decSession); + if (CPA_STATUS_SUCCESS != status) + device_printf(dev, "unable to remove decrypt session\n"); + mtx_unlock(&qat_instance->cyInstMtx); +} + +/* The QAT GCM/CCM FW APIs are the only algorithms which support separated AAD. 
*/ +static CpaStatus +qat_ocf_load_aad_gcm(struct cryptop *crp, struct qat_ocf_cookie *qat_cookie) +{ + CpaCySymDpOpData *pOpData; + + pOpData = &qat_cookie->pOpdata; + + if (NULL != crp->crp_aad) + memcpy(qat_cookie->qat_ocf_gcm_aad, + crp->crp_aad, + crp->crp_aad_length); + else + crypto_copydata(crp, + crp->crp_aad_start, + crp->crp_aad_length, + qat_cookie->qat_ocf_gcm_aad); + + pOpData->pAdditionalAuthData = qat_cookie->qat_ocf_gcm_aad; + pOpData->additionalAuthData = qat_cookie->qat_ocf_gcm_aad_paddr; + + return CPA_STATUS_SUCCESS; +} + +static CpaStatus +qat_ocf_load_aad(struct cryptop *crp, struct qat_ocf_cookie *qat_cookie) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + const struct crypto_session_params *csp; + CpaCySymDpOpData *pOpData; + struct qat_ocf_load_cb_arg args; + + pOpData = &qat_cookie->pOpdata; + pOpData->pAdditionalAuthData = NULL; + pOpData->additionalAuthData = 0UL; + + if (crp->crp_aad_length == 0) + return CPA_STATUS_SUCCESS; + + if (crp->crp_aad_length > ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX) + return CPA_STATUS_FAIL; + + csp = crypto_get_params(crp->crp_session); + + /* Handle GCM/CCM case */ + if (CPA_TRUE == is_sep_aad_supported(csp)) + return qat_ocf_load_aad_gcm(crp, qat_cookie); + + if (NULL == crp->crp_aad) { + /* AAD already embedded in source buffer */ + pOpData->messageLenToCipherInBytes = crp->crp_payload_length; + pOpData->cryptoStartSrcOffsetInBytes = crp->crp_payload_start; + + pOpData->messageLenToHashInBytes = + crp->crp_aad_length + crp->crp_payload_length; + pOpData->hashStartSrcOffsetInBytes = crp->crp_aad_start; + + return CPA_STATUS_SUCCESS; + } + + /* Separated AAD not supported by QAT - let's place the content + * of the AAD buffer at the very beginning of the source SGL */ + args.crp_op = crp; + args.qat_cookie = qat_cookie; + args.pOpData = pOpData; + args.error = 0; + status = bus_dmamap_load(qat_cookie->gcm_aad_dma_mem.dma_tag, + qat_cookie->gcm_aad_dma_mem.dma_map, + crp->crp_aad, + crp->crp_aad_length, + 
qat_ocf_crypto_load_aadbuf_cb, + &args, + BUS_DMA_NOWAIT); + qat_cookie->is_sep_aad_used = CPA_TRUE; + + /* Right after this step we have AAD placed in the first flat buffer + * in source SGL */ + pOpData->messageLenToCipherInBytes = crp->crp_payload_length; + pOpData->cryptoStartSrcOffsetInBytes = + crp->crp_aad_length + crp->crp_aad_start + crp->crp_payload_start; + + pOpData->messageLenToHashInBytes = + crp->crp_aad_length + crp->crp_payload_length; + pOpData->hashStartSrcOffsetInBytes = crp->crp_aad_start; + + return status; +} + +static CpaStatus +qat_ocf_load(struct cryptop *crp, struct qat_ocf_cookie *qat_cookie) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaCySymDpOpData *pOpData; + struct qat_ocf_load_cb_arg args; + /* cryptodev internals */ + const struct crypto_session_params *csp; + + pOpData = &qat_cookie->pOpdata; + + csp = crypto_get_params(crp->crp_session); + + /* Load IV buffer if present */ + if (csp->csp_ivlen > 0) { + memset(qat_cookie->qat_ocf_iv_buf, + 0, + sizeof(qat_cookie->qat_ocf_iv_buf)); + crypto_read_iv(crp, qat_cookie->qat_ocf_iv_buf); + pOpData->iv = qat_cookie->qat_ocf_iv_buf_paddr; + pOpData->pIv = qat_cookie->qat_ocf_iv_buf; + pOpData->ivLenInBytes = csp->csp_ivlen; + } + + /* GCM/CCM - load AAD to separated buffer + * AES+SHA - load AAD to first flat in SGL */ + status = qat_ocf_load_aad(crp, qat_cookie); + if (CPA_STATUS_SUCCESS != status) + goto fail; + + /* Load source buffer */ + args.crp_op = crp; + args.qat_cookie = qat_cookie; + args.pOpData = pOpData; + args.error = 0; + status = bus_dmamap_load_crp_buffer(qat_cookie->src_dma_mem.dma_tag, + qat_cookie->src_dma_mem.dma_map, + &crp->crp_buf, + qat_ocf_crypto_load_buf_cb, + &args, + BUS_DMA_NOWAIT); + if (CPA_STATUS_SUCCESS != status) + goto fail; + pOpData->srcBuffer = qat_cookie->src_buffer_list_paddr; + pOpData->srcBufferLen = CPA_DP_BUFLIST; + + /* Load destination buffer */ + if (CRYPTO_HAS_OUTPUT_BUFFER(crp)) { + status = + 
bus_dmamap_load_crp_buffer(qat_cookie->dst_dma_mem.dma_tag, + qat_cookie->dst_dma_mem.dma_map, + &crp->crp_obuf, + qat_ocf_crypto_load_obuf_cb, + &args, + BUS_DMA_NOWAIT); + if (CPA_STATUS_SUCCESS != status) + goto fail; + pOpData->dstBuffer = qat_cookie->dst_buffer_list_paddr; + pOpData->dstBufferLen = CPA_DP_BUFLIST; + } else { + pOpData->dstBuffer = pOpData->srcBuffer; + pOpData->dstBufferLen = pOpData->srcBufferLen; + } + + if (CPA_TRUE == is_use_sep_digest(csp)) + pOpData->digestResult = qat_cookie->qat_ocf_digest_paddr; + else + pOpData->digestResult = 0UL; + + /* GMAC - aka zero length buffer */ + if (CPA_TRUE == is_gmac_exception(csp)) + pOpData->messageLenToCipherInBytes = 0; + +fail: + return status; +} + +static int +qat_ocf_check_input(device_t dev, struct cryptop *crp) +{ + const struct crypto_session_params *csp; + csp = crypto_get_params(crp->crp_session); + + if (crypto_buffer_len(&crp->crp_buf) > QAT_OCF_MAX_LEN) + return E2BIG; + + if (CPA_TRUE == is_sep_aad_supported(csp) && + (crp->crp_aad_length > ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX)) + return EBADMSG; + + return 0; +} + +static int +qat_ocf_process(device_t dev, struct cryptop *crp, int hint) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + int rc = 0; + struct qat_ocf_dsession *qat_dsession = NULL; + struct qat_ocf_session *qat_session = NULL; + struct qat_ocf_instance *qat_instance = NULL; + CpaCySymDpOpData *pOpData = NULL; + struct qat_ocf_cookie *qat_cookie = NULL; + CpaBoolean memLoaded = CPA_FALSE; + + rc = qat_ocf_check_input(dev, crp); + if (rc) + goto fail; + + qat_dsession = crypto_get_driver_session(crp->crp_session); + + if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) + qat_session = &qat_dsession->encSession; + else + qat_session = &qat_dsession->decSession; + qat_instance = qat_dsession->qatInstance; + + status = qat_ocf_cookie_alloc(qat_instance, &qat_cookie); + if (CPA_STATUS_SUCCESS != status) { + rc = EAGAIN; + goto fail; + } + + qat_cookie->crp_op = crp; + + /* Common request fields */ + 
pOpData = &qat_cookie->pOpdata;
+	pOpData->instanceHandle = qat_instance->cyInstHandle;
+	pOpData->sessionCtx = NULL;
+
+	/* Cipher fields */
+	pOpData->cryptoStartSrcOffsetInBytes = crp->crp_payload_start;
+	pOpData->messageLenToCipherInBytes = crp->crp_payload_length;
+	/* Digest fields - any exceptions to these basic rules are handled
+	 * in qat_ocf_load */
+	pOpData->hashStartSrcOffsetInBytes = crp->crp_payload_start;
+	pOpData->messageLenToHashInBytes = crp->crp_payload_length;
+
+	status = qat_ocf_load(crp, qat_cookie);
+	if (CPA_STATUS_SUCCESS != status) {
+		device_printf(dev,
+			      "unable to load OCF buffers to QAT DMA "
+			      "transaction\n");
+		rc = EIO;
+		goto fail;
+	}
+	memLoaded = CPA_TRUE;
+
+	status = qat_ocf_cookie_dma_pre_sync(crp, pOpData);
+	if (CPA_STATUS_SUCCESS != status) {
+		device_printf(dev, "unable to sync DMA buffers\n");
+		rc = EIO;
+		goto fail;
+	}
+
+	mtx_lock(&qat_instance->cyInstMtx);
+	/* Session initialization is deferred until the first request so
+	 * that QAT-specific session data which is not known up front,
+	 * such as the AAD length, is available; a QAT session can only
+	 * be updated to a limited extent while handling traffic.
+ */ + if (NULL == qat_session->sessionCtx) { + status = + qat_ocf_session_init(dev, crp, qat_instance, qat_session); + if (CPA_STATUS_SUCCESS != status) { + mtx_unlock(&qat_instance->cyInstMtx); + device_printf(dev, "unable to init session\n"); + rc = EIO; + goto fail; + } + } else { + status = qat_ocf_handle_session_update(qat_dsession, crp); + if (CPA_STATUS_RESOURCE == status) { + mtx_unlock(&qat_instance->cyInstMtx); + rc = EAGAIN; + goto fail; + } else if (CPA_STATUS_SUCCESS != status) { + mtx_unlock(&qat_instance->cyInstMtx); + rc = EIO; + goto fail; + } + } + pOpData->sessionCtx = qat_session->sessionCtx; + status = cpaCySymDpEnqueueOp(pOpData, CPA_TRUE); + mtx_unlock(&qat_instance->cyInstMtx); + if (CPA_STATUS_SUCCESS != status) { + if (CPA_STATUS_RETRY == status) { + rc = EAGAIN; + goto fail; + } + device_printf(dev, + "unable to send request. Status: %d\n", + status); + rc = EIO; + goto fail; + } + + return 0; +fail: + if (qat_cookie) { + if (memLoaded) + qat_ocf_cookie_dma_unload(crp, pOpData); + qat_ocf_cookie_free(qat_instance, qat_cookie); + } + crp->crp_etype = rc; + crypto_done(crp); + + return 0; +} + +static void +qat_ocf_identify(driver_t *drv, device_t parent) +{ + if (device_find_child(parent, "qat_ocf", -1) == NULL && + BUS_ADD_CHILD(parent, 200, "qat_ocf", -1) == 0) + device_printf(parent, "qat_ocf: could not attach!"); +} + +static int +qat_ocf_probe(device_t dev) +{ + device_set_desc(dev, "QAT engine"); + return (BUS_PROBE_NOWILDCARD); +} + +static CpaStatus +qat_ocf_get_irq_instances(CpaInstanceHandle *cyInstHandles, + Cpa16U cyInstHandlesSize, + Cpa16U *foundInstances) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + icp_accel_dev_t **pAdfInsts = NULL; + icp_accel_dev_t *dev_addr = NULL; + sal_t *baseAddr = NULL; + sal_list_t *listTemp = NULL; + CpaInstanceHandle cyInstHandle; + CpaInstanceInfo2 info; + Cpa16U numDevices; + Cpa32U instCtr = 0; + Cpa32U i; + + /* Get the number of devices */ + status = icp_amgr_getNumInstances(&numDevices); 
+ if (CPA_STATUS_SUCCESS != status) + return status; + + /* Allocate memory to store addr of accel_devs */ + pAdfInsts = + malloc(numDevices * sizeof(icp_accel_dev_t *), M_QAT_OCF, M_WAITOK); + + /* Get ADF to return all accel_devs that support either + * symmetric or asymmetric crypto */ + status = icp_amgr_getAllAccelDevByCapabilities( + (ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC), pAdfInsts, &numDevices); + if (CPA_STATUS_SUCCESS != status) { + free(pAdfInsts, M_QAT_OCF); + return status; + } + + for (i = 0; i < numDevices; i++) { + dev_addr = (icp_accel_dev_t *)pAdfInsts[i]; + baseAddr = dev_addr->pSalHandle; + if (NULL == baseAddr) + continue; + listTemp = baseAddr->sym_services; + while (NULL != listTemp) { + cyInstHandle = SalList_getObject(listTemp); + status = cpaCyInstanceGetInfo2(cyInstHandle, &info); + if (CPA_STATUS_SUCCESS != status) + continue; + listTemp = SalList_next(listTemp); + if (CPA_TRUE == info.isPolled) + continue; + if (instCtr >= cyInstHandlesSize) + break; + cyInstHandles[instCtr++] = cyInstHandle; + } + } + free(pAdfInsts, M_QAT_OCF); + *foundInstances = instCtr; + + return CPA_STATUS_SUCCESS; +} + +static CpaStatus +qat_ocf_start_instances(struct qat_ocf_softc *qat_softc, device_t dev) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + Cpa16U numInstances = 0; + CpaInstanceHandle cyInstHandles[QAT_OCF_MAX_INSTANCES] = { 0 }; + CpaInstanceHandle cyInstHandle = NULL; + Cpa32U startedInstances = 0; + Cpa32U i; + + qat_softc->numCyInstances = 0; + status = qat_ocf_get_irq_instances(cyInstHandles, + QAT_OCF_MAX_INSTANCES, + &numInstances); + if (CPA_STATUS_SUCCESS != status) + return status; + if (0 == numInstances) + return CPA_STATUS_RESOURCE; + + for (i = 0; i < numInstances; i++) { + struct qat_ocf_instance *qat_ocf_instance; + + cyInstHandle = cyInstHandles[i]; + if (!cyInstHandle) + continue; + + /* Starting instance */ + status = cpaCyStartInstance(cyInstHandle); + if (CPA_STATUS_SUCCESS != status) { + device_printf(qat_softc->sc_dev, + 
"unable to start instance\n");
+			continue;
+		}
+
+		status =
+		    cpaCySetAddressTranslation(cyInstHandle, qatVirtToPhys);
+		if (CPA_STATUS_SUCCESS != status) {
+			device_printf(qat_softc->sc_dev,
+				      "unable to add virt to phys callback\n");
+			goto fail;
+		}
+
+		status = cpaCySymDpRegCbFunc(cyInstHandle, symDpCallback);
+		if (CPA_STATUS_SUCCESS != status) {
+			device_printf(qat_softc->sc_dev,
+				      "unable to add user callback\n");
+			goto fail;
+		}
+
+		qat_ocf_instance = &qat_softc->cyInstHandles[startedInstances];
+		qat_ocf_instance->cyInstHandle = cyInstHandle;
+		mtx_init(&qat_ocf_instance->cyInstMtx,
+			 "Instance MTX",
+			 NULL,
+			 MTX_DEF);
+
+		/* Initialize cookie pool */
+		status = qat_ocf_cookie_pool_init(qat_ocf_instance, dev);
+		if (CPA_STATUS_SUCCESS != status) {
+			device_printf(qat_softc->sc_dev,
+				      "unable to create cookie pool\n");
+			goto fail;
+		}
+
+		qat_ocf_instance->driver_id = qat_softc->cryptodev_id;
+
+		startedInstances++;
+		continue;
+	fail:
+		/* Stop instance */
+		status = cpaCyStopInstance(cyInstHandle);
+		if (CPA_STATUS_SUCCESS != status)
+			device_printf(qat_softc->sc_dev,
+				      "unable to stop the instance\n");
+		continue;
+	}
+	qat_softc->numCyInstances = startedInstances;
+
+	/* Success if at least one instance has been started */
+	if (!qat_softc->numCyInstances)
+		return CPA_STATUS_FAIL;
+
+	return CPA_STATUS_SUCCESS;
+}
+
+static CpaStatus
+qat_ocf_stop_instances(struct qat_ocf_softc *qat_softc)
+{
+	CpaStatus status = CPA_STATUS_SUCCESS;
+	int i;
+
+	for (i = 0; i < qat_softc->numCyInstances; i++) {
+		struct qat_ocf_instance *qat_instance;
+
+		qat_instance = &qat_softc->cyInstHandles[i];
+		status = cpaCyStopInstance(qat_instance->cyInstHandle);
+		if (CPA_STATUS_SUCCESS != status) {
+			pr_err("QAT: stopping instance id: %d failed\n", i);
+			mtx_unlock(&qat_instance->cyInstMtx);
+			continue;
+		}
+		qat_ocf_cookie_pool_deinit(qat_instance);
+		mtx_destroy(&qat_instance->cyInstMtx);
+	}
+
+	return status;
+}
+
+static int
+qat_ocf_attach(device_t dev)
+{
+	int
status;
+	struct qat_ocf_softc *qat_softc;
+	int32_t cryptodev_id;
+
+	qat_softc = device_get_softc(dev);
+	qat_softc->sc_dev = dev;
+
+	cryptodev_id = crypto_get_driverid(dev,
+					   sizeof(struct qat_ocf_dsession),
+					   CRYPTOCAP_F_HARDWARE);
+	if (cryptodev_id < 0) {
+		device_printf(dev, "cannot initialize!\n");
+		goto fail;
+	}
+	qat_softc->cryptodev_id = cryptodev_id;
+
+	/* Starting instances for OCF */
+	status = qat_ocf_start_instances(qat_softc, dev);
+	if (status) {
+		device_printf(dev, "no QAT IRQ instances available\n");
+		goto fail;
+	}
+
+	return 0;
+fail:
+	qat_ocf_detach(dev);
+
+	return (ENXIO);
+}
+
+static int
+qat_ocf_detach(device_t dev)
+{
+	struct qat_ocf_softc *qat_softc = NULL;
+	CpaStatus cpaStatus;
+	int status = 0;
+
+	qat_softc = device_get_softc(dev);
+
+	if (qat_softc->cryptodev_id >= 0) {
+		status = crypto_unregister_all(qat_softc->cryptodev_id);
+		if (status)
+			device_printf(dev,
+				      "unable to unregister QAT backend\n");
+	}
+
+	/* Stop QAT instances */
+	cpaStatus = qat_ocf_stop_instances(qat_softc);
+	if (CPA_STATUS_SUCCESS != cpaStatus) {
+		device_printf(dev, "unable to stop instances\n");
+		status = EIO;
+	}
+
+	return status;
+}
+
+static device_method_t qat_ocf_methods[] =
+    { DEVMETHOD(device_identify, qat_ocf_identify),
+      DEVMETHOD(device_probe, qat_ocf_probe),
+      DEVMETHOD(device_attach, qat_ocf_attach),
+      DEVMETHOD(device_detach, qat_ocf_detach),
+
+      /* Cryptodev interface */
+      DEVMETHOD(cryptodev_probesession, qat_ocf_probesession),
+      DEVMETHOD(cryptodev_newsession, qat_ocf_newsession),
+      DEVMETHOD(cryptodev_freesession, qat_ocf_freesession),
+      DEVMETHOD(cryptodev_process, qat_ocf_process),
+
+      DEVMETHOD_END };
+
+static driver_t qat_ocf_driver = {
+	.name = "qat_ocf",
+	.methods = qat_ocf_methods,
+	.size = sizeof(struct qat_ocf_softc),
+};
+
+
+DRIVER_MODULE_ORDERED(qat,
+		      nexus,
+		      qat_ocf_driver,
+		      NULL,
+		      NULL,
+		      SI_ORDER_ANY);
+MODULE_VERSION(qat, 1);
+MODULE_DEPEND(qat, qat_c62x, 1, 1, 1);
+MODULE_DEPEND(qat, qat_200xx, 1, 1,
1); +MODULE_DEPEND(qat, qat_c3xxx, 1, 1, 1); +MODULE_DEPEND(qat, qat_c4xxx, 1, 1, 1); +MODULE_DEPEND(qat, qat_dh895xcc, 1, 1, 1); +MODULE_DEPEND(qat, crypto, 1, 1, 1); +MODULE_DEPEND(qat, qat_common, 1, 1, 1); +MODULE_DEPEND(qat, qat_api, 1, 1, 1); +MODULE_DEPEND(qat, linuxkpi, 1, 1, 1); Index: sys/dev/qat/qat/qat_ocf_mem_pool.c =================================================================== --- /dev/null +++ sys/dev/qat/qat/qat_ocf_mem_pool.c @@ -0,0 +1,564 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/* System headers */ +#include +#include +#include +#include +#include +#include +#include + +/* Cryptodev headers */ +#include +#include + +/* QAT specific headers */ +#include "qat_ocf_mem_pool.h" +#include "qat_ocf_utils.h" +#include "cpa.h" + +/* Private functions */ +static void +qat_ocf_alloc_single_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error) +{ + struct qat_ocf_dma_mem *dma_mem; + + if (error != 0) + return; + + dma_mem = arg; + dma_mem->dma_seg = segs[0]; +} + +static int +qat_ocf_populate_buf_list_cb(struct qat_ocf_buffer_list *buffers, + bus_dma_segment_t *segs, + int niseg, + int skip_seg, + int skip_bytes) +{ + CpaPhysFlatBuffer *flatBuffer; + bus_addr_t segment_addr; + bus_size_t segment_len; + int iseg, oseg; + + for (iseg = 0, oseg = skip_seg; + iseg < niseg && oseg < QAT_OCF_MAX_FLATS; + iseg++) { + segment_addr = segs[iseg].ds_addr; + segment_len = segs[iseg].ds_len; + + if (skip_bytes > 0) { + if (skip_bytes < segment_len) { + segment_addr += skip_bytes; + segment_len -= skip_bytes; + skip_bytes = 0; + } else { + skip_bytes -= segment_len; + continue; + } + } + flatBuffer = &buffers->flatBuffers[oseg++]; + flatBuffer->dataLenInBytes = (Cpa32U)segment_len; + flatBuffer->bufferPhysAddr = (CpaPhysicalAddr)segment_addr; + }; + buffers->numBuffers = oseg; + + return iseg < niseg ? 
E2BIG : 0; +} + +void +qat_ocf_crypto_load_aadbuf_cb(void *_arg, + bus_dma_segment_t *segs, + int nseg, + int error) +{ + struct qat_ocf_load_cb_arg *arg; + struct qat_ocf_cookie *qat_cookie; + + arg = _arg; + if (error != 0) { + arg->error = error; + return; + } + + qat_cookie = arg->qat_cookie; + arg->error = qat_ocf_populate_buf_list_cb( + &qat_cookie->src_buffers, segs, nseg, 0, 0); +} + +void +qat_ocf_crypto_load_buf_cb(void *_arg, + bus_dma_segment_t *segs, + int nseg, + int error) +{ + struct qat_ocf_cookie *qat_cookie; + struct qat_ocf_load_cb_arg *arg; + int start_segment = 0, skip_bytes = 0; + + arg = _arg; + if (error != 0) { + arg->error = error; + return; + } + + qat_cookie = arg->qat_cookie; + + skip_bytes = 0; + start_segment = qat_cookie->src_buffers.numBuffers; + + arg->error = qat_ocf_populate_buf_list_cb( + &qat_cookie->src_buffers, segs, nseg, start_segment, skip_bytes); +} + +void +qat_ocf_crypto_load_obuf_cb(void *_arg, + bus_dma_segment_t *segs, + int nseg, + int error) +{ + struct qat_ocf_load_cb_arg *arg; + struct cryptop *crp; + struct qat_ocf_cookie *qat_cookie; + const struct crypto_session_params *csp; + int osegs = 0, to_copy = 0; + + arg = _arg; + if (error != 0) { + arg->error = error; + return; + } + + crp = arg->crp_op; + qat_cookie = arg->qat_cookie; + csp = crypto_get_params(crp->crp_session); + + /* + * The payload must start at the same offset in the output SG list as in + * the input SG list. Copy over SG entries from the input corresponding + * to the AAD buffer. 
+ */ + if (crp->crp_aad_length == 0 || + (CPA_TRUE == is_sep_aad_supported(csp) && crp->crp_aad)) { + arg->error = + qat_ocf_populate_buf_list_cb(&qat_cookie->dst_buffers, + segs, + nseg, + 0, + crp->crp_payload_output_start); + return; + } + + /* Copy AAD from source SGL to keep payload in the same position in + * destination buffers */ + if (NULL == crp->crp_aad) + to_copy = crp->crp_payload_start - crp->crp_aad_start; + else + to_copy = crp->crp_aad_length; + + for (; osegs < qat_cookie->src_buffers.numBuffers; osegs++) { + CpaPhysFlatBuffer *src_flat; + CpaPhysFlatBuffer *dst_flat; + int data_len; + + if (to_copy <= 0) + break; + + src_flat = &qat_cookie->src_buffers.flatBuffers[osegs]; + dst_flat = &qat_cookie->dst_buffers.flatBuffers[osegs]; + + dst_flat->bufferPhysAddr = src_flat->bufferPhysAddr; + data_len = imin(src_flat->dataLenInBytes, to_copy); + dst_flat->dataLenInBytes = data_len; + to_copy -= data_len; + } + + arg->error = + qat_ocf_populate_buf_list_cb(&qat_cookie->dst_buffers, + segs, + nseg, + osegs, + crp->crp_payload_output_start); +} + +static int +qat_ocf_alloc_dma_mem(device_t dev, + struct qat_ocf_dma_mem *dma_mem, + int nseg, + bus_size_t size, + bus_size_t alignment) +{ + int error; + + error = bus_dma_tag_create(bus_get_dma_tag(dev), + alignment, + 0, /* alignment, boundary */ + BUS_SPACE_MAXADDR, /* lowaddr */ + BUS_SPACE_MAXADDR, /* highaddr */ + NULL, + NULL, /* filter, filterarg */ + size, /* maxsize */ + nseg, /* nsegments */ + size, /* maxsegsize */ + BUS_DMA_COHERENT, /* flags */ + NULL, + NULL, /* lockfunc, lockarg */ + &dma_mem->dma_tag); + if (error != 0) { + device_printf(dev, + "couldn't create DMA tag, error = %d\n", + error); + return error; + } + + error = + bus_dmamem_alloc(dma_mem->dma_tag, + &dma_mem->dma_vaddr, + BUS_DMA_NOWAIT | BUS_DMA_ZERO | BUS_DMA_COHERENT, + &dma_mem->dma_map); + if (error != 0) { + device_printf(dev, + "couldn't allocate dmamem, error = %d\n", + error); + goto fail_0; + } + + error = 
bus_dmamap_load(dma_mem->dma_tag, + dma_mem->dma_map, + dma_mem->dma_vaddr, + size, + qat_ocf_alloc_single_cb, + dma_mem, + BUS_DMA_NOWAIT); + if (error) { + device_printf(dev, + "couldn't load dmamem map, error = %d\n", + error); + goto fail_1; + } + + return 0; +fail_1: + bus_dmamem_free(dma_mem->dma_tag, dma_mem->dma_vaddr, dma_mem->dma_map); +fail_0: + bus_dma_tag_destroy(dma_mem->dma_tag); + + return error; +} + +static void +qat_ocf_free_dma_mem(struct qat_ocf_dma_mem *qdm) +{ + if (qdm->dma_tag != NULL && qdm->dma_vaddr != NULL) { + bus_dmamap_unload(qdm->dma_tag, qdm->dma_map); + bus_dmamem_free(qdm->dma_tag, qdm->dma_vaddr, qdm->dma_map); + bus_dma_tag_destroy(qdm->dma_tag); + explicit_bzero(qdm, sizeof(*qdm)); + } +} + +static int +qat_ocf_dma_tag_and_map(device_t dev, + struct qat_ocf_dma_mem *dma_mem, + bus_size_t size, + bus_size_t segs) +{ + int error; + + error = bus_dma_tag_create(bus_get_dma_tag(dev), + 1, + 0, /* alignment, boundary */ + BUS_SPACE_MAXADDR, /* lowaddr */ + BUS_SPACE_MAXADDR, /* highaddr */ + NULL, + NULL, /* filter, filterarg */ + size, /* maxsize */ + segs, /* nsegments */ + size, /* maxsegsize */ + BUS_DMA_COHERENT, /* flags */ + NULL, + NULL, /* lockfunc, lockarg */ + &dma_mem->dma_tag); + if (error != 0) + return error; + + error = bus_dmamap_create(dma_mem->dma_tag, + BUS_DMA_COHERENT, + &dma_mem->dma_map); + if (error != 0) + return error; + + return 0; +} + +static void +qat_ocf_clear_cookie(struct qat_ocf_cookie *qat_cookie) +{ + qat_cookie->src_buffers.numBuffers = 0; + qat_cookie->dst_buffers.numBuffers = 0; + qat_cookie->is_sep_aad_used = CPA_FALSE; + explicit_bzero(qat_cookie->qat_ocf_iv_buf, + sizeof(qat_cookie->qat_ocf_iv_buf)); + explicit_bzero(qat_cookie->qat_ocf_digest, + sizeof(qat_cookie->qat_ocf_digest)); + explicit_bzero(qat_cookie->qat_ocf_gcm_aad, + sizeof(qat_cookie->qat_ocf_gcm_aad)); + qat_cookie->crp_op = NULL; +} + +/* Public functions */ +CpaStatus +qat_ocf_cookie_dma_pre_sync(struct cryptop *crp, 
CpaCySymDpOpData *pOpData) +{ + struct qat_ocf_cookie *qat_cookie; + + if (NULL == pOpData->pCallbackTag) + return CPA_STATUS_FAIL; + + qat_cookie = (struct qat_ocf_cookie *)pOpData->pCallbackTag; + + if (CPA_TRUE == qat_cookie->is_sep_aad_used) { + bus_dmamap_sync(qat_cookie->gcm_aad_dma_mem.dma_tag, + qat_cookie->gcm_aad_dma_mem.dma_map, + BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); + } + + bus_dmamap_sync(qat_cookie->src_dma_mem.dma_tag, + qat_cookie->src_dma_mem.dma_map, + BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); + if (CRYPTO_HAS_OUTPUT_BUFFER(crp)) { + bus_dmamap_sync(qat_cookie->dst_dma_mem.dma_tag, + qat_cookie->dst_dma_mem.dma_map, + BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); + } + bus_dmamap_sync(qat_cookie->dma_tag, + qat_cookie->dma_map, + BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qat_ocf_cookie_dma_post_sync(struct cryptop *crp, CpaCySymDpOpData *pOpData) +{ + struct qat_ocf_cookie *qat_cookie; + + if (NULL == pOpData->pCallbackTag) + return CPA_STATUS_FAIL; + + qat_cookie = (struct qat_ocf_cookie *)pOpData->pCallbackTag; + + bus_dmamap_sync(qat_cookie->src_dma_mem.dma_tag, + qat_cookie->src_dma_mem.dma_map, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + + if (CRYPTO_HAS_OUTPUT_BUFFER(crp)) { + bus_dmamap_sync(qat_cookie->dst_dma_mem.dma_tag, + qat_cookie->dst_dma_mem.dma_map, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + } + bus_dmamap_sync(qat_cookie->dma_tag, + qat_cookie->dma_map, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + + if (qat_cookie->is_sep_aad_used) + bus_dmamap_sync(qat_cookie->gcm_aad_dma_mem.dma_tag, + qat_cookie->gcm_aad_dma_mem.dma_map, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qat_ocf_cookie_dma_unload(struct cryptop *crp, CpaCySymDpOpData *pOpData) +{ + struct qat_ocf_cookie *qat_cookie; + + qat_cookie = pOpData->pCallbackTag; + + if (NULL == qat_cookie) + return CPA_STATUS_FAIL; + + 
bus_dmamap_unload(qat_cookie->src_dma_mem.dma_tag,
+			  qat_cookie->src_dma_mem.dma_map);
+	if (CRYPTO_HAS_OUTPUT_BUFFER(crp))
+		bus_dmamap_unload(qat_cookie->dst_dma_mem.dma_tag,
+				  qat_cookie->dst_dma_mem.dma_map);
+	if (qat_cookie->is_sep_aad_used)
+		bus_dmamap_unload(qat_cookie->gcm_aad_dma_mem.dma_tag,
+				  qat_cookie->gcm_aad_dma_mem.dma_map);
+
+	return CPA_STATUS_SUCCESS;
+}
+
+CpaStatus
+qat_ocf_cookie_pool_init(struct qat_ocf_instance *instance, device_t dev)
+{
+	int i, error = 0;
+
+	mtx_init(&instance->cookie_pool_mtx,
+		 "QAT cookie pool MTX",
+		 NULL,
+		 MTX_DEF);
+	instance->free_cookie_ptr = 0;
+	for (i = 0; i < QAT_OCF_MEM_POOL_SIZE; i++) {
+		struct qat_ocf_cookie *qat_cookie;
+		struct qat_ocf_dma_mem *entry_dma_mem;
+
+		entry_dma_mem = &instance->cookie_dmamem[i];
+
+		/* Allocate a DMA segment for the cookie. The cookie
+		 * has to be stored in DMA-able memory because it
+		 * contains, among other things, the src and dst flat
+		 * buffer lists.
+		 */
+		error = qat_ocf_alloc_dma_mem(dev,
+					      entry_dma_mem,
+					      1,
+					      sizeof(struct qat_ocf_cookie),
+					      (1 << 6));
+		if (error)
+			break;
+
+		qat_cookie = entry_dma_mem->dma_vaddr;
+		instance->cookie_pool[i] = qat_cookie;
+
+		qat_cookie->dma_map = entry_dma_mem->dma_map;
+		qat_cookie->dma_tag = entry_dma_mem->dma_tag;
+
+		qat_ocf_clear_cookie(qat_cookie);
+
+		/* Physical address of IV buffer */
+		qat_cookie->qat_ocf_iv_buf_paddr =
+		    entry_dma_mem->dma_seg.ds_addr +
+		    offsetof(struct qat_ocf_cookie, qat_ocf_iv_buf);
+
+		/* Physical address of digest buffer */
+		qat_cookie->qat_ocf_digest_paddr =
+		    entry_dma_mem->dma_seg.ds_addr +
+		    offsetof(struct qat_ocf_cookie, qat_ocf_digest);
+
+		/* Physical address of AAD buffer */
+		qat_cookie->qat_ocf_gcm_aad_paddr =
+		    entry_dma_mem->dma_seg.ds_addr +
+		    offsetof(struct qat_ocf_cookie, qat_ocf_gcm_aad);
+
+		/* We already have the physical address of the src and dst
+		 * SGL headers */
+		qat_cookie->src_buffer_list_paddr =
+		    entry_dma_mem->dma_seg.ds_addr +
+		    offsetof(struct qat_ocf_cookie, src_buffers);
+
qat_cookie->dst_buffer_list_paddr = + entry_dma_mem->dma_seg.ds_addr + + offsetof(struct qat_ocf_cookie, dst_buffers); + + /* We already have physical address of pOpdata */ + qat_cookie->pOpData_paddr = entry_dma_mem->dma_seg.ds_addr + + offsetof(struct qat_ocf_cookie, pOpdata); + /* Init QAT DP API OP data with const values */ + qat_cookie->pOpdata.pCallbackTag = (void *)qat_cookie; + qat_cookie->pOpdata.thisPhys = + (CpaPhysicalAddr)qat_cookie->pOpData_paddr; + + error = qat_ocf_dma_tag_and_map(dev, + &qat_cookie->src_dma_mem, + QAT_OCF_MAXLEN, + QAT_OCF_MAX_FLATS); + if (error) + break; + + error = qat_ocf_dma_tag_and_map(dev, + &qat_cookie->dst_dma_mem, + QAT_OCF_MAXLEN, + QAT_OCF_MAX_FLATS); + if (error) + break; + + /* Max one flat buffer for embedded AAD if provided as separated + * by OCF and it's not supported by QAT */ + error = qat_ocf_dma_tag_and_map(dev, + &qat_cookie->gcm_aad_dma_mem, + QAT_OCF_MAXLEN, + 1); + if (error) + break; + + instance->free_cookie[i] = qat_cookie; + instance->free_cookie_ptr++; + } + + return error; +} + +CpaStatus +qat_ocf_cookie_alloc(struct qat_ocf_instance *qat_instance, + struct qat_ocf_cookie **cookie_out) +{ + mtx_lock(&qat_instance->cookie_pool_mtx); + if (qat_instance->free_cookie_ptr == 0) { + mtx_unlock(&qat_instance->cookie_pool_mtx); + return CPA_STATUS_FAIL; + } + *cookie_out = + qat_instance->free_cookie[--qat_instance->free_cookie_ptr]; + mtx_unlock(&qat_instance->cookie_pool_mtx); + + return CPA_STATUS_SUCCESS; +} + +void +qat_ocf_cookie_free(struct qat_ocf_instance *qat_instance, + struct qat_ocf_cookie *cookie) +{ + qat_ocf_clear_cookie(cookie); + mtx_lock(&qat_instance->cookie_pool_mtx); + qat_instance->free_cookie[qat_instance->free_cookie_ptr++] = cookie; + mtx_unlock(&qat_instance->cookie_pool_mtx); +} + +void +qat_ocf_cookie_pool_deinit(struct qat_ocf_instance *qat_instance) +{ + int i; + + for (i = 0; i < QAT_OCF_MEM_POOL_SIZE; i++) { + struct qat_ocf_cookie *cookie; + struct qat_ocf_dma_mem 
*cookie_dma; + + cookie = qat_instance->cookie_pool[i]; + if (NULL == cookie) + continue; + + /* Destroy tag and map for source SGL */ + if (cookie->src_dma_mem.dma_tag) { + bus_dmamap_destroy(cookie->src_dma_mem.dma_tag, + cookie->src_dma_mem.dma_map); + bus_dma_tag_destroy(cookie->src_dma_mem.dma_tag); + } + + /* Destroy tag and map for dest SGL */ + if (cookie->dst_dma_mem.dma_tag) { + bus_dmamap_destroy(cookie->dst_dma_mem.dma_tag, + cookie->dst_dma_mem.dma_map); + bus_dma_tag_destroy(cookie->dst_dma_mem.dma_tag); + } + + /* Destroy tag and map for separated AAD */ + if (cookie->gcm_aad_dma_mem.dma_tag) { + bus_dmamap_destroy(cookie->gcm_aad_dma_mem.dma_tag, + cookie->gcm_aad_dma_mem.dma_map); + bus_dma_tag_destroy(cookie->gcm_aad_dma_mem.dma_tag); + } + + /* Free DMA memory */ + cookie_dma = &qat_instance->cookie_dmamem[i]; + qat_ocf_free_dma_mem(cookie_dma); + qat_instance->cookie_pool[i] = NULL; + } + mtx_destroy(&qat_instance->cookie_pool_mtx); + + return; +} Index: sys/dev/qat/qat/qat_ocf_utils.c =================================================================== --- /dev/null +++ sys/dev/qat/qat/qat_ocf_utils.c @@ -0,0 +1,172 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/* System headers */ +#include +#include +#include +#include +#include +#include + +/* QAT specific headers */ +#include "qat_ocf_utils.h" +#include "cpa.h" +#include "lac_common.h" +#include "lac_log.h" +#include "lac_mem.h" +#include "lac_mem_pools.h" +#include "lac_list.h" +#include "lac_sym.h" +#include "lac_sym_qat.h" +#include "lac_sal.h" +#include "lac_sal_ctrl.h" +#include "lac_session.h" +#include "lac_sym_cipher.h" +#include "lac_sym_hash.h" +#include "lac_sym_alg_chain.h" +#include "lac_sym_stats.h" +#include "lac_sym_partial.h" +#include "lac_sym_qat_hash_defs_lookup.h" + +#define QAT_OCF_AAD_NOCHANGE (-1) + +CpaStatus +qat_ocf_wait_for_session(CpaCySymSessionCtx sessionCtx, Cpa32U timeoutMS) +{ + CpaBoolean 
sessionInUse = CPA_TRUE;
+	CpaStatus status;
+	struct timespec start_ts;
+	struct timespec current_ts;
+	struct timespec delta;
+	u64 delta_ms;
+
+	nanotime(&start_ts);
+	for (;;) {
+		status = cpaCySymSessionInUse(sessionCtx, &sessionInUse);
+		if (CPA_STATUS_SUCCESS != status)
+			return CPA_STATUS_FAIL;
+		if (CPA_FALSE == sessionInUse)
+			break;
+		nanotime(&current_ts);
+		delta = timespec_sub(current_ts, start_ts);
+		delta_ms = (delta.tv_sec * 1000) +
+		    (delta.tv_nsec / NSEC_PER_MSEC);
+		if (delta_ms > timeoutMS)
+			return CPA_STATUS_RESOURCE;
+		qatUtilsYield();
+	}
+
+	return CPA_STATUS_SUCCESS;
+}
+
+static CpaStatus
+qat_ocf_session_update(struct qat_ocf_session *ocf_session,
+		       Cpa8U *newCipher,
+		       Cpa8U *newAuth,
+		       Cpa32U newAADLength)
+{
+	lac_session_desc_t *pSessionDesc = NULL;
+	CpaStatus status = CPA_STATUS_SUCCESS;
+	CpaBoolean sessionInUse = CPA_TRUE;
+
+	if (!ocf_session->sessionCtx)
+		return CPA_STATUS_SUCCESS;
+
+	status = cpaCySymSessionInUse(ocf_session->sessionCtx, &sessionInUse);
+	if (CPA_TRUE == sessionInUse)
+		return CPA_STATUS_RESOURCE;
+
+	pSessionDesc =
+	    LAC_SYM_SESSION_DESC_FROM_CTX_GET(ocf_session->sessionCtx);
+
+	if (newAADLength != QAT_OCF_AAD_NOCHANGE) {
+		ocf_session->aadLen = newAADLength;
+		status =
+		    LacAlgChain_SessionAADUpdate(pSessionDesc, newAADLength);
+		if (CPA_STATUS_SUCCESS != status)
+			return status;
+	}
+
+	if (newCipher) {
+		status =
+		    LacAlgChain_SessionCipherKeyUpdate(pSessionDesc, newCipher);
+		if (CPA_STATUS_SUCCESS != status)
+			return status;
+	}
+
+	if (newAuth) {
+		status =
+		    LacAlgChain_SessionAuthKeyUpdate(pSessionDesc, newAuth);
+		if (CPA_STATUS_SUCCESS != status)
+			return status;
+	}
+
+	return status;
+}
+
+CpaStatus
+qat_ocf_handle_session_update(struct qat_ocf_dsession *ocf_dsession,
+			      struct cryptop *crp)
+{
+	Cpa32U newAADLength = QAT_OCF_AAD_NOCHANGE;
+	Cpa8U *cipherKey;
+	Cpa8U *authKey;
+	crypto_session_t cses;
+	const struct crypto_session_params *csp;
+	CpaStatus status = CPA_STATUS_SUCCESS;
+
+	if
(!ocf_dsession)
+		return CPA_STATUS_FAIL;
+
+	cses = crp->crp_session;
+	if (!cses)
+		return CPA_STATUS_FAIL;
+	csp = crypto_get_params(cses);
+	if (!csp)
+		return CPA_STATUS_FAIL;
+
+	cipherKey = crp->crp_cipher_key;
+	authKey = crp->crp_auth_key;
+
+	if (is_sep_aad_supported(csp)) {
+		/* Determine if the AAD length has changed */
+		if ((ocf_dsession->encSession.sessionCtx &&
+		     ocf_dsession->encSession.aadLen != crp->crp_aad_length) ||
+		    (ocf_dsession->decSession.sessionCtx &&
+		     ocf_dsession->decSession.aadLen != crp->crp_aad_length)) {
+			newAADLength = crp->crp_aad_length;
+
+			/* Get the auth and cipher keys from the session if
+			 * they are not present in the request. Updating the
+			 * keys is required in order to update the AAD.
+			 */
+			if (!authKey)
+				authKey = csp->csp_auth_key;
+			if (!cipherKey)
+				cipherKey = csp->csp_cipher_key;
+		}
+		if (!authKey)
+			authKey = cipherKey;
+	}
+
+	if (crp->crp_cipher_key || crp->crp_auth_key ||
+	    newAADLength != QAT_OCF_AAD_NOCHANGE) {
+		/* Update encryption session */
+		status = qat_ocf_session_update(&ocf_dsession->encSession,
+						cipherKey,
+						authKey,
+						newAADLength);
+		if (CPA_STATUS_SUCCESS != status)
+			return status;
+		/* Update decryption session */
+		status = qat_ocf_session_update(&ocf_dsession->decSession,
+						cipherKey,
+						authKey,
+						newAADLength);
+		if (CPA_STATUS_SUCCESS != status)
+			return status;
+	}
+
+	return status;
+}
Index: sys/dev/qat/qat_ae.c
===================================================================
--- sys/dev/qat/qat_ae.c
+++ /dev/null
@@ -1,3445 +0,0 @@
-/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */
-/* $NetBSD: qat_ae.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */
-
-/*
- * Copyright (c) 2019 Internet Initiative Japan, Inc.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1.
Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2007-2019 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. 
- * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#include -__FBSDID("$FreeBSD$"); -#if 0 -__KERNEL_RCSID(0, "$NetBSD: qat_ae.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $"); -#endif - -#include -#include -#include -#include -#include - -#include - -#include -#include - -#include "qatreg.h" -#include "qatvar.h" -#include "qat_aevar.h" - -static int qat_ae_write_4(struct qat_softc *, u_char, bus_size_t, - uint32_t); -static int qat_ae_read_4(struct qat_softc *, u_char, bus_size_t, - uint32_t *); -static void qat_ae_ctx_indr_write(struct qat_softc *, u_char, uint32_t, - bus_size_t, uint32_t); -static int qat_ae_ctx_indr_read(struct qat_softc *, u_char, uint32_t, - bus_size_t, uint32_t *); - -static u_short qat_aereg_get_10bit_addr(enum aereg_type, u_short); -static int qat_aereg_rel_data_write(struct qat_softc *, u_char, u_char, - enum aereg_type, u_short, uint32_t); -static int qat_aereg_rel_data_read(struct qat_softc *, u_char, u_char, - enum aereg_type, u_short, uint32_t *); -static int qat_aereg_rel_rdxfer_write(struct qat_softc *, u_char, u_char, - enum aereg_type, u_short, uint32_t); -static int qat_aereg_rel_wrxfer_write(struct qat_softc *, u_char, u_char, - enum aereg_type, u_short, uint32_t); -static int 
qat_aereg_rel_nn_write(struct qat_softc *, u_char, u_char, - enum aereg_type, u_short, uint32_t); -static int qat_aereg_abs_to_rel(struct qat_softc *, u_char, u_short, - u_short *, u_char *); -static int qat_aereg_abs_data_write(struct qat_softc *, u_char, - enum aereg_type, u_short, uint32_t); - -static void qat_ae_enable_ctx(struct qat_softc *, u_char, u_int); -static void qat_ae_disable_ctx(struct qat_softc *, u_char, u_int); -static void qat_ae_write_ctx_mode(struct qat_softc *, u_char, u_char); -static void qat_ae_write_nn_mode(struct qat_softc *, u_char, u_char); -static void qat_ae_write_lm_mode(struct qat_softc *, u_char, - enum aereg_type, u_char); -static void qat_ae_write_shared_cs_mode0(struct qat_softc *, u_char, - u_char); -static void qat_ae_write_shared_cs_mode(struct qat_softc *, u_char, u_char); -static int qat_ae_set_reload_ustore(struct qat_softc *, u_char, u_int, int, - u_int); - -static enum qat_ae_status qat_ae_get_status(struct qat_softc *, u_char); -static int qat_ae_is_active(struct qat_softc *, u_char); -static int qat_ae_wait_num_cycles(struct qat_softc *, u_char, int, int); - -static int qat_ae_clear_reset(struct qat_softc *); -static int qat_ae_check(struct qat_softc *); -static int qat_ae_reset_timestamp(struct qat_softc *); -static void qat_ae_clear_xfer(struct qat_softc *); -static int qat_ae_clear_gprs(struct qat_softc *); - -static void qat_ae_get_shared_ustore_ae(u_char, u_char *); -static u_int qat_ae_ucode_parity64(uint64_t); -static uint64_t qat_ae_ucode_set_ecc(uint64_t); -static int qat_ae_ucode_write(struct qat_softc *, u_char, u_int, u_int, - const uint64_t *); -static int qat_ae_ucode_read(struct qat_softc *, u_char, u_int, u_int, - uint64_t *); -static u_int qat_ae_concat_ucode(uint64_t *, u_int, u_int, u_int, u_int *); -static int qat_ae_exec_ucode(struct qat_softc *, u_char, u_char, - uint64_t *, u_int, int, u_int, u_int *); -static int qat_ae_exec_ucode_init_lm(struct qat_softc *, u_char, u_char, - int *, uint64_t *, 
u_int, - u_int *, u_int *, u_int *, u_int *, u_int *); -static int qat_ae_restore_init_lm_gprs(struct qat_softc *, u_char, u_char, - u_int, u_int, u_int, u_int, u_int); -static int qat_ae_get_inst_num(int); -static int qat_ae_batch_put_lm(struct qat_softc *, u_char, - struct qat_ae_batch_init_list *, size_t); -static int qat_ae_write_pc(struct qat_softc *, u_char, u_int, u_int); - -static u_int qat_aefw_csum(char *, int); -static const char *qat_aefw_uof_string(struct qat_softc *, size_t); -static struct uof_chunk_hdr *qat_aefw_uof_find_chunk(struct qat_softc *, - const char *, struct uof_chunk_hdr *); - -static int qat_aefw_load_mof(struct qat_softc *); -static void qat_aefw_unload_mof(struct qat_softc *); -static int qat_aefw_load_mmp(struct qat_softc *); -static void qat_aefw_unload_mmp(struct qat_softc *); - -static int qat_aefw_mof_find_uof0(struct qat_softc *, - struct mof_uof_hdr *, struct mof_uof_chunk_hdr *, - u_int, size_t, const char *, - size_t *, void **); -static int qat_aefw_mof_find_uof(struct qat_softc *); -static int qat_aefw_mof_parse(struct qat_softc *); - -static int qat_aefw_uof_parse_image(struct qat_softc *, - struct qat_uof_image *, struct uof_chunk_hdr *uch); -static int qat_aefw_uof_parse_images(struct qat_softc *); -static int qat_aefw_uof_parse(struct qat_softc *); - -static int qat_aefw_alloc_auth_dmamem(struct qat_softc *, char *, size_t, - struct qat_dmamem *); -static int qat_aefw_auth(struct qat_softc *, struct qat_dmamem *); -static int qat_aefw_suof_load(struct qat_softc *sc, - struct qat_dmamem *dma); -static int qat_aefw_suof_parse_image(struct qat_softc *, - struct qat_suof_image *, struct suof_chunk_hdr *); -static int qat_aefw_suof_parse(struct qat_softc *); -static int qat_aefw_suof_write(struct qat_softc *); - -static int qat_aefw_uof_assign_image(struct qat_softc *, struct qat_ae *, - struct qat_uof_image *); -static int qat_aefw_uof_init_ae(struct qat_softc *, u_char); -static int qat_aefw_uof_init(struct qat_softc *); - 
-static int qat_aefw_init_memory_one(struct qat_softc *, - struct uof_init_mem *); -static void qat_aefw_free_lm_init(struct qat_softc *, u_char); -static int qat_aefw_init_ustore(struct qat_softc *); -static int qat_aefw_init_reg(struct qat_softc *, u_char, u_char, - enum aereg_type, u_short, u_int); -static int qat_aefw_init_reg_sym_expr(struct qat_softc *, u_char, - struct qat_uof_image *); -static int qat_aefw_init_memory(struct qat_softc *); -static int qat_aefw_init_globals(struct qat_softc *); -static uint64_t qat_aefw_get_uof_inst(struct qat_softc *, - struct qat_uof_page *, u_int); -static int qat_aefw_do_pagein(struct qat_softc *, u_char, - struct qat_uof_page *); -static int qat_aefw_uof_write_one(struct qat_softc *, - struct qat_uof_image *); -static int qat_aefw_uof_write(struct qat_softc *); - -static int -qat_ae_write_4(struct qat_softc *sc, u_char ae, bus_size_t offset, - uint32_t value) -{ - int times = TIMEOUT_AE_CSR; - - do { - qat_ae_local_write_4(sc, ae, offset, value); - if ((qat_ae_local_read_4(sc, ae, LOCAL_CSR_STATUS) & - LOCAL_CSR_STATUS_STATUS) == 0) - return 0; - - } while (times--); - - device_printf(sc->sc_dev, - "couldn't write AE CSR: ae 0x%hhx offset 0x%lx\n", ae, (long)offset); - return EFAULT; -} - -static int -qat_ae_read_4(struct qat_softc *sc, u_char ae, bus_size_t offset, - uint32_t *value) -{ - int times = TIMEOUT_AE_CSR; - uint32_t v; - - do { - v = qat_ae_local_read_4(sc, ae, offset); - if ((qat_ae_local_read_4(sc, ae, LOCAL_CSR_STATUS) & - LOCAL_CSR_STATUS_STATUS) == 0) { - *value = v; - return 0; - } - } while (times--); - - device_printf(sc->sc_dev, - "couldn't read AE CSR: ae 0x%hhx offset 0x%lx\n", ae, (long)offset); - return EFAULT; -} - -static void -qat_ae_ctx_indr_write(struct qat_softc *sc, u_char ae, uint32_t ctx_mask, - bus_size_t offset, uint32_t value) -{ - int ctx; - uint32_t ctxptr; - - MPASS(offset == CTX_FUTURE_COUNT_INDIRECT || - offset == FUTURE_COUNT_SIGNAL_INDIRECT || - offset == CTX_STS_INDIRECT || - 
offset == CTX_WAKEUP_EVENTS_INDIRECT || - offset == CTX_SIG_EVENTS_INDIRECT || - offset == LM_ADDR_0_INDIRECT || - offset == LM_ADDR_1_INDIRECT || - offset == INDIRECT_LM_ADDR_0_BYTE_INDEX || - offset == INDIRECT_LM_ADDR_1_BYTE_INDEX); - - qat_ae_read_4(sc, ae, CSR_CTX_POINTER, &ctxptr); - for (ctx = 0; ctx < MAX_AE_CTX; ctx++) { - if ((ctx_mask & (1 << ctx)) == 0) - continue; - qat_ae_write_4(sc, ae, CSR_CTX_POINTER, ctx); - qat_ae_write_4(sc, ae, offset, value); - } - qat_ae_write_4(sc, ae, CSR_CTX_POINTER, ctxptr); -} - -static int -qat_ae_ctx_indr_read(struct qat_softc *sc, u_char ae, uint32_t ctx, - bus_size_t offset, uint32_t *value) -{ - int error; - uint32_t ctxptr; - - MPASS(offset == CTX_FUTURE_COUNT_INDIRECT || - offset == FUTURE_COUNT_SIGNAL_INDIRECT || - offset == CTX_STS_INDIRECT || - offset == CTX_WAKEUP_EVENTS_INDIRECT || - offset == CTX_SIG_EVENTS_INDIRECT || - offset == LM_ADDR_0_INDIRECT || - offset == LM_ADDR_1_INDIRECT || - offset == INDIRECT_LM_ADDR_0_BYTE_INDEX || - offset == INDIRECT_LM_ADDR_1_BYTE_INDEX); - - /* save the ctx ptr */ - qat_ae_read_4(sc, ae, CSR_CTX_POINTER, &ctxptr); - if ((ctxptr & CSR_CTX_POINTER_CONTEXT) != - (ctx & CSR_CTX_POINTER_CONTEXT)) - qat_ae_write_4(sc, ae, CSR_CTX_POINTER, ctx); - - error = qat_ae_read_4(sc, ae, offset, value); - - /* restore ctx ptr */ - if ((ctxptr & CSR_CTX_POINTER_CONTEXT) != - (ctx & CSR_CTX_POINTER_CONTEXT)) - qat_ae_write_4(sc, ae, CSR_CTX_POINTER, ctxptr); - - return error; -} - -static u_short -qat_aereg_get_10bit_addr(enum aereg_type regtype, u_short reg) -{ - u_short addr; - - switch (regtype) { - case AEREG_GPA_ABS: - case AEREG_GPB_ABS: - addr = (reg & 0x7f) | 0x80; - break; - case AEREG_GPA_REL: - case AEREG_GPB_REL: - addr = reg & 0x1f; - break; - case AEREG_SR_RD_REL: - case AEREG_SR_WR_REL: - case AEREG_SR_REL: - addr = 0x180 | (reg & 0x1f); - break; - case AEREG_SR_INDX: - addr = 0x140 | ((reg & 0x3) << 1); - break; - case AEREG_DR_RD_REL: - case AEREG_DR_WR_REL: - case 
AEREG_DR_REL: - addr = 0x1c0 | (reg & 0x1f); - break; - case AEREG_DR_INDX: - addr = 0x100 | ((reg & 0x3) << 1); - break; - case AEREG_NEIGH_INDX: - addr = 0x241 | ((reg & 0x3) << 1); - break; - case AEREG_NEIGH_REL: - addr = 0x280 | (reg & 0x1f); - break; - case AEREG_LMEM0: - addr = 0x200; - break; - case AEREG_LMEM1: - addr = 0x220; - break; - case AEREG_NO_DEST: - addr = 0x300 | (reg & 0xff); - break; - default: - addr = AEREG_BAD_REGADDR; - break; - } - return (addr); -} - -static int -qat_aereg_rel_data_write(struct qat_softc *sc, u_char ae, u_char ctx, - enum aereg_type regtype, u_short relreg, uint32_t value) -{ - uint16_t srchi, srclo, destaddr, data16hi, data16lo; - uint64_t inst[] = { - 0x0F440000000ull, /* immed_w1[reg, val_hi16] */ - 0x0F040000000ull, /* immed_w0[reg, val_lo16] */ - 0x0F0000C0300ull, /* nop */ - 0x0E000010000ull /* ctx_arb[kill] */ - }; - const int ninst = nitems(inst); - const int imm_w1 = 0, imm_w0 = 1; - unsigned int ctxen; - uint16_t mask; - - /* This logic only works for GPRs and LM index registers, - not NN or XFER registers! 
*/ - MPASS(regtype == AEREG_GPA_REL || regtype == AEREG_GPB_REL || - regtype == AEREG_LMEM0 || regtype == AEREG_LMEM1); - - if ((regtype == AEREG_GPA_REL) || (regtype == AEREG_GPB_REL)) { - /* determine the context mode */ - qat_ae_read_4(sc, ae, CTX_ENABLES, &ctxen); - if (ctxen & CTX_ENABLES_INUSE_CONTEXTS) { - /* 4-ctx mode */ - if (ctx & 0x1) - return EINVAL; - mask = 0x1f; - } else { - /* 8-ctx mode */ - mask = 0x0f; - } - if (relreg & ~mask) - return EINVAL; - } - if ((destaddr = qat_aereg_get_10bit_addr(regtype, relreg)) == - AEREG_BAD_REGADDR) { - return EINVAL; - } - - data16lo = 0xffff & value; - data16hi = 0xffff & (value >> 16); - srchi = qat_aereg_get_10bit_addr(AEREG_NO_DEST, - (uint16_t)(0xff & data16hi)); - srclo = qat_aereg_get_10bit_addr(AEREG_NO_DEST, - (uint16_t)(0xff & data16lo)); - - switch (regtype) { - case AEREG_GPA_REL: /* A rel source */ - inst[imm_w1] = inst[imm_w1] | ((data16hi >> 8) << 20) | - ((srchi & 0x3ff) << 10) | (destaddr & 0x3ff); - inst[imm_w0] = inst[imm_w0] | ((data16lo >> 8) << 20) | - ((srclo & 0x3ff) << 10) | (destaddr & 0x3ff); - break; - default: - inst[imm_w1] = inst[imm_w1] | ((data16hi >> 8) << 20) | - ((destaddr & 0x3ff) << 10) | (srchi & 0x3ff); - inst[imm_w0] = inst[imm_w0] | ((data16lo >> 8) << 20) | - ((destaddr & 0x3ff) << 10) | (srclo & 0x3ff); - break; - } - - return qat_ae_exec_ucode(sc, ae, ctx, inst, ninst, 1, ninst * 5, NULL); -} - -static int -qat_aereg_rel_data_read(struct qat_softc *sc, u_char ae, u_char ctx, - enum aereg_type regtype, u_short relreg, uint32_t *value) -{ - uint64_t inst, savucode; - uint32_t ctxen, misc, nmisc, savctx, ctxarbctl, ulo, uhi; - u_int uaddr, ustore_addr; - int error; - u_short mask, regaddr; - u_char nae; - - MPASS(regtype == AEREG_GPA_REL || regtype == AEREG_GPB_REL || - regtype == AEREG_SR_REL || regtype == AEREG_SR_RD_REL || - regtype == AEREG_DR_REL || regtype == AEREG_DR_RD_REL || - regtype == AEREG_LMEM0 || regtype == AEREG_LMEM1); - - if ((regtype == AEREG_GPA_REL) 
|| (regtype == AEREG_GPB_REL) ||
-	    (regtype == AEREG_SR_REL) || (regtype == AEREG_SR_RD_REL) ||
-	    (regtype == AEREG_DR_REL) || (regtype == AEREG_DR_RD_REL))
-	{
-		/* determine the context mode */
-		qat_ae_read_4(sc, ae, CTX_ENABLES, &ctxen);
-		if (ctxen & CTX_ENABLES_INUSE_CONTEXTS) {
-			/* 4-ctx mode */
-			if (ctx & 0x1)
-				return EINVAL;
-			mask = 0x1f;
-		} else {
-			/* 8-ctx mode */
-			mask = 0x0f;
-		}
-		if (relreg & ~mask)
-			return EINVAL;
-	}
-	if ((regaddr = qat_aereg_get_10bit_addr(regtype, relreg)) ==
-	    AEREG_BAD_REGADDR) {
-		return EINVAL;
-	}
-
-	/* instruction -- alu[--, --, B, reg] */
-	switch (regtype) {
-	case AEREG_GPA_REL:
-		/* A rel source */
-		inst = 0xA070000000ull | (regaddr & 0x3ff);
-		break;
-	default:
-		inst = (0xA030000000ull | ((regaddr & 0x3ff) << 10));
-		break;
-	}
-
-	/* backup shared control store bit, and force AE to
-	 * non-shared mode before executing ucode snippet */
-	qat_ae_read_4(sc, ae, AE_MISC_CONTROL, &misc);
-	if (misc & AE_MISC_CONTROL_SHARE_CS) {
-		qat_ae_get_shared_ustore_ae(ae, &nae);
-		if ((1 << nae) & sc->sc_ae_mask && qat_ae_is_active(sc, nae))
-			return EBUSY;
-	}
-
-	nmisc = misc & ~AE_MISC_CONTROL_SHARE_CS;
-	qat_ae_write_4(sc, ae, AE_MISC_CONTROL, nmisc);
-
-	/* read current context */
-	qat_ae_read_4(sc, ae, ACTIVE_CTX_STATUS, &savctx);
-	qat_ae_read_4(sc, ae, CTX_ARB_CNTL, &ctxarbctl);
-
-	qat_ae_read_4(sc, ae, CTX_ENABLES, &ctxen);
-	/* prevent clearing the W1C bits: the breakpoint bit,
-	   ECC error bit, and Parity error bit */
-	ctxen &= CTX_ENABLES_IGNORE_W1C_MASK;
-
-	/* change the context */
-	if (ctx != (savctx & ACTIVE_CTX_STATUS_ACNO))
-		qat_ae_write_4(sc, ae, ACTIVE_CTX_STATUS,
-		    ctx & ACTIVE_CTX_STATUS_ACNO);
-	/* save a ustore location */
-	if ((error = qat_ae_ucode_read(sc, ae, 0, 1, &savucode)) != 0) {
-		/* restore AE_MISC_CONTROL csr */
-		qat_ae_write_4(sc, ae, AE_MISC_CONTROL, misc);
-
-		/* restore the context */
-		if (ctx != (savctx & ACTIVE_CTX_STATUS_ACNO)) {
-			qat_ae_write_4(sc, ae, ACTIVE_CTX_STATUS,
- savctx & ACTIVE_CTX_STATUS_ACNO); - } - qat_ae_write_4(sc, ae, CTX_ARB_CNTL, ctxarbctl); - - return (error); - } - - /* turn off ustore parity */ - qat_ae_write_4(sc, ae, CTX_ENABLES, - ctxen & (~CTX_ENABLES_CNTL_STORE_PARITY_ENABLE)); - - /* save ustore-addr csr */ - qat_ae_read_4(sc, ae, USTORE_ADDRESS, &ustore_addr); - - /* write the ALU instruction to ustore, enable ecs bit */ - uaddr = 0 | USTORE_ADDRESS_ECS; - - /* set the uaddress */ - qat_ae_write_4(sc, ae, USTORE_ADDRESS, uaddr); - inst = qat_ae_ucode_set_ecc(inst); - - ulo = (uint32_t)(inst & 0xffffffff); - uhi = (uint32_t)(inst >> 32); - - qat_ae_write_4(sc, ae, USTORE_DATA_LOWER, ulo); - - /* this will auto increment the address */ - qat_ae_write_4(sc, ae, USTORE_DATA_UPPER, uhi); - - /* set the uaddress */ - qat_ae_write_4(sc, ae, USTORE_ADDRESS, uaddr); - - /* delay for at least 8 cycles */ - qat_ae_wait_num_cycles(sc, ae, 0x8, 0); - - /* read ALU output -- the instruction should have been executed - prior to clearing the ECS in putUwords */ - qat_ae_read_4(sc, ae, ALU_OUT, value); - - /* restore ustore-addr csr */ - qat_ae_write_4(sc, ae, USTORE_ADDRESS, ustore_addr); - - /* restore the ustore */ - error = qat_ae_ucode_write(sc, ae, 0, 1, &savucode); - - /* restore the context */ - if (ctx != (savctx & ACTIVE_CTX_STATUS_ACNO)) { - qat_ae_write_4(sc, ae, ACTIVE_CTX_STATUS, - savctx & ACTIVE_CTX_STATUS_ACNO); - } - - qat_ae_write_4(sc, ae, CTX_ARB_CNTL, ctxarbctl); - - /* restore AE_MISC_CONTROL csr */ - qat_ae_write_4(sc, ae, AE_MISC_CONTROL, misc); - - qat_ae_write_4(sc, ae, CTX_ENABLES, ctxen); - - return error; -} - -static int -qat_aereg_rel_rdxfer_write(struct qat_softc *sc, u_char ae, u_char ctx, - enum aereg_type regtype, u_short relreg, uint32_t value) -{ - bus_size_t addr; - int error; - uint32_t ctxen; - u_short mask; - u_short dr_offset; - - MPASS(regtype == AEREG_SR_REL || regtype == AEREG_DR_REL || - regtype == AEREG_SR_RD_REL || regtype == AEREG_DR_RD_REL); - - error = 
qat_ae_read_4(sc, ae, CTX_ENABLES, &ctxen); - if (ctxen & CTX_ENABLES_INUSE_CONTEXTS) { - if (ctx & 0x1) { - device_printf(sc->sc_dev, - "bad ctx argument in 4-ctx mode,ctx=0x%x\n", ctx); - return EINVAL; - } - mask = 0x1f; - dr_offset = 0x20; - - } else { - mask = 0x0f; - dr_offset = 0x10; - } - - if (relreg & ~mask) - return EINVAL; - - addr = relreg + (ctx << 0x5); - - switch (regtype) { - case AEREG_SR_REL: - case AEREG_SR_RD_REL: - qat_ae_xfer_write_4(sc, ae, addr, value); - break; - case AEREG_DR_REL: - case AEREG_DR_RD_REL: - qat_ae_xfer_write_4(sc, ae, addr + dr_offset, value); - break; - default: - error = EINVAL; - } - - return error; -} - -static int -qat_aereg_rel_wrxfer_write(struct qat_softc *sc, u_char ae, u_char ctx, - enum aereg_type regtype, u_short relreg, uint32_t value) -{ - - panic("notyet"); - - return 0; -} - -static int -qat_aereg_rel_nn_write(struct qat_softc *sc, u_char ae, u_char ctx, - enum aereg_type regtype, u_short relreg, uint32_t value) -{ - - panic("notyet"); - - return 0; -} - -static int -qat_aereg_abs_to_rel(struct qat_softc *sc, u_char ae, - u_short absreg, u_short *relreg, u_char *ctx) -{ - uint32_t ctxen; - - qat_ae_read_4(sc, ae, CTX_ENABLES, &ctxen); - if (ctxen & CTX_ENABLES_INUSE_CONTEXTS) { - /* 4-ctx mode */ - *relreg = absreg & 0x1f; - *ctx = (absreg >> 0x4) & 0x6; - } else { - /* 8-ctx mode */ - *relreg = absreg & 0x0f; - *ctx = (absreg >> 0x4) & 0x7; - } - - return 0; -} - -static int -qat_aereg_abs_data_write(struct qat_softc *sc, u_char ae, - enum aereg_type regtype, u_short absreg, uint32_t value) -{ - int error; - u_short relreg; - u_char ctx; - - qat_aereg_abs_to_rel(sc, ae, absreg, &relreg, &ctx); - - switch (regtype) { - case AEREG_GPA_ABS: - MPASS(absreg < MAX_GPR_REG); - error = qat_aereg_rel_data_write(sc, ae, ctx, AEREG_GPA_REL, - relreg, value); - break; - case AEREG_GPB_ABS: - MPASS(absreg < MAX_GPR_REG); - error = qat_aereg_rel_data_write(sc, ae, ctx, AEREG_GPB_REL, - relreg, value); - break; - case 
AEREG_DR_RD_ABS: - MPASS(absreg < MAX_XFER_REG); - error = qat_aereg_rel_rdxfer_write(sc, ae, ctx, AEREG_DR_RD_REL, - relreg, value); - break; - case AEREG_SR_RD_ABS: - MPASS(absreg < MAX_XFER_REG); - error = qat_aereg_rel_rdxfer_write(sc, ae, ctx, AEREG_SR_RD_REL, - relreg, value); - break; - case AEREG_DR_WR_ABS: - MPASS(absreg < MAX_XFER_REG); - error = qat_aereg_rel_wrxfer_write(sc, ae, ctx, AEREG_DR_WR_REL, - relreg, value); - break; - case AEREG_SR_WR_ABS: - MPASS(absreg < MAX_XFER_REG); - error = qat_aereg_rel_wrxfer_write(sc, ae, ctx, AEREG_SR_WR_REL, - relreg, value); - break; - case AEREG_NEIGH_ABS: - MPASS(absreg < MAX_NN_REG); - if (absreg >= MAX_NN_REG) - return EINVAL; - error = qat_aereg_rel_nn_write(sc, ae, ctx, AEREG_NEIGH_REL, - relreg, value); - break; - default: - panic("Invalid Register Type"); - } - - return error; -} - -static void -qat_ae_enable_ctx(struct qat_softc *sc, u_char ae, u_int ctx_mask) -{ - uint32_t ctxen; - - qat_ae_read_4(sc, ae, CTX_ENABLES, &ctxen); - ctxen &= CTX_ENABLES_IGNORE_W1C_MASK; - - if (ctxen & CTX_ENABLES_INUSE_CONTEXTS) { - ctx_mask &= 0x55; - } else { - ctx_mask &= 0xff; - } - - ctxen |= __SHIFTIN(ctx_mask, CTX_ENABLES_ENABLE); - qat_ae_write_4(sc, ae, CTX_ENABLES, ctxen); -} - -static void -qat_ae_disable_ctx(struct qat_softc *sc, u_char ae, u_int ctx_mask) -{ - uint32_t ctxen; - - qat_ae_read_4(sc, ae, CTX_ENABLES, &ctxen); - ctxen &= CTX_ENABLES_IGNORE_W1C_MASK; - ctxen &= ~(__SHIFTIN(ctx_mask & AE_ALL_CTX, CTX_ENABLES_ENABLE)); - qat_ae_write_4(sc, ae, CTX_ENABLES, ctxen); -} - -static void -qat_ae_write_ctx_mode(struct qat_softc *sc, u_char ae, u_char mode) -{ - uint32_t val, nval; - - qat_ae_read_4(sc, ae, CTX_ENABLES, &val); - val &= CTX_ENABLES_IGNORE_W1C_MASK; - - if (mode == 4) - nval = val | CTX_ENABLES_INUSE_CONTEXTS; - else - nval = val & ~CTX_ENABLES_INUSE_CONTEXTS; - - if (val != nval) - qat_ae_write_4(sc, ae, CTX_ENABLES, nval); -} - -static void -qat_ae_write_nn_mode(struct qat_softc *sc, u_char 
ae, u_char mode) -{ - uint32_t val, nval; - - qat_ae_read_4(sc, ae, CTX_ENABLES, &val); - val &= CTX_ENABLES_IGNORE_W1C_MASK; - - if (mode) - nval = val | CTX_ENABLES_NN_MODE; - else - nval = val & ~CTX_ENABLES_NN_MODE; - - if (val != nval) - qat_ae_write_4(sc, ae, CTX_ENABLES, nval); -} - -static void -qat_ae_write_lm_mode(struct qat_softc *sc, u_char ae, - enum aereg_type lm, u_char mode) -{ - uint32_t val, nval; - uint32_t bit; - - qat_ae_read_4(sc, ae, CTX_ENABLES, &val); - val &= CTX_ENABLES_IGNORE_W1C_MASK; - - switch (lm) { - case AEREG_LMEM0: - bit = CTX_ENABLES_LMADDR_0_GLOBAL; - break; - case AEREG_LMEM1: - bit = CTX_ENABLES_LMADDR_1_GLOBAL; - break; - default: - panic("invalid lmem reg type"); - break; - } - - if (mode) - nval = val | bit; - else - nval = val & ~bit; - - if (val != nval) - qat_ae_write_4(sc, ae, CTX_ENABLES, nval); -} - -static void -qat_ae_write_shared_cs_mode0(struct qat_softc *sc, u_char ae, u_char mode) -{ - uint32_t val, nval; - - qat_ae_read_4(sc, ae, AE_MISC_CONTROL, &val); - - if (mode == 1) - nval = val | AE_MISC_CONTROL_SHARE_CS; - else - nval = val & ~AE_MISC_CONTROL_SHARE_CS; - - if (val != nval) - qat_ae_write_4(sc, ae, AE_MISC_CONTROL, nval); -} - -static void -qat_ae_write_shared_cs_mode(struct qat_softc *sc, u_char ae, u_char mode) -{ - u_char nae; - - qat_ae_get_shared_ustore_ae(ae, &nae); - - qat_ae_write_shared_cs_mode0(sc, ae, mode); - - if ((sc->sc_ae_mask & (1 << nae))) { - qat_ae_write_shared_cs_mode0(sc, nae, mode); - } -} - -static int -qat_ae_set_reload_ustore(struct qat_softc *sc, u_char ae, - u_int reload_size, int shared_mode, u_int ustore_dram_addr) -{ - uint32_t val, cs_reload; - - switch (reload_size) { - case 0: - cs_reload = 0x0; - break; - case QAT_2K: - cs_reload = 0x1; - break; - case QAT_4K: - cs_reload = 0x2; - break; - case QAT_8K: - cs_reload = 0x3; - break; - default: - return EINVAL; - } - - if (cs_reload) - QAT_AE(sc, ae).qae_ustore_dram_addr = ustore_dram_addr; - - QAT_AE(sc, 
ae).qae_reload_size = reload_size; - - qat_ae_read_4(sc, ae, AE_MISC_CONTROL, &val); - val &= ~(AE_MISC_CONTROL_ONE_CTX_RELOAD | - AE_MISC_CONTROL_CS_RELOAD | AE_MISC_CONTROL_SHARE_CS); - val |= __SHIFTIN(cs_reload, AE_MISC_CONTROL_CS_RELOAD) | - __SHIFTIN(shared_mode, AE_MISC_CONTROL_ONE_CTX_RELOAD); - qat_ae_write_4(sc, ae, AE_MISC_CONTROL, val); - - return 0; -} - -static enum qat_ae_status -qat_ae_get_status(struct qat_softc *sc, u_char ae) -{ - int error; - uint32_t val = 0; - - error = qat_ae_read_4(sc, ae, CTX_ENABLES, &val); - if (error || val & CTX_ENABLES_ENABLE) - return QAT_AE_ENABLED; - - qat_ae_read_4(sc, ae, ACTIVE_CTX_STATUS, &val); - if (val & ACTIVE_CTX_STATUS_ABO) - return QAT_AE_ACTIVE; - - return QAT_AE_DISABLED; -} - - -static int -qat_ae_is_active(struct qat_softc *sc, u_char ae) -{ - uint32_t val; - - if (qat_ae_get_status(sc, ae) != QAT_AE_DISABLED) - return 1; - - qat_ae_read_4(sc, ae, ACTIVE_CTX_STATUS, &val); - if (val & ACTIVE_CTX_STATUS_ABO) - return 1; - else - return 0; -} - -/* returns 1 if actually waited for specified number of cycles */ -static int -qat_ae_wait_num_cycles(struct qat_softc *sc, u_char ae, int cycles, int check) -{ - uint32_t cnt, actx; - int pcnt, ccnt, elapsed, times; - - qat_ae_read_4(sc, ae, PROFILE_COUNT, &cnt); - pcnt = cnt & 0xffff; - - times = TIMEOUT_AE_CHECK; - do { - qat_ae_read_4(sc, ae, PROFILE_COUNT, &cnt); - ccnt = cnt & 0xffff; - - elapsed = ccnt - pcnt; - if (elapsed == 0) { - times--; - } - if (times <= 0) { - device_printf(sc->sc_dev, - "qat_ae_wait_num_cycles timeout\n"); - return -1; - } - - if (elapsed < 0) - elapsed += 0x10000; - - if (elapsed >= CYCLES_FROM_READY2EXE && check) { - if (qat_ae_read_4(sc, ae, ACTIVE_CTX_STATUS, - &actx) == 0) { - if ((actx & ACTIVE_CTX_STATUS_ABO) == 0) - return 0; - } - } - } while (cycles > elapsed); - - if (check && qat_ae_read_4(sc, ae, ACTIVE_CTX_STATUS, &actx) == 0) { - if ((actx & ACTIVE_CTX_STATUS_ABO) == 0) - return 0; - } - - return 1; -} - -int 
-qat_ae_init(struct qat_softc *sc) -{ - int error; - uint32_t mask, val = 0; - u_char ae; - - /* XXX adf_initSysMemInfo */ - - /* XXX Disable clock gating for some chip if debug mode */ - - for (ae = 0, mask = sc->sc_ae_mask; mask; ae++, mask >>= 1) { - struct qat_ae *qae = &sc->sc_ae[ae]; - if (!(mask & 1)) - continue; - - qae->qae_ustore_size = USTORE_SIZE; - - qae->qae_free_addr = 0; - qae->qae_free_size = USTORE_SIZE; - qae->qae_live_ctx_mask = AE_ALL_CTX; - qae->qae_ustore_dram_addr = 0; - qae->qae_reload_size = 0; - } - - /* XXX Enable attention interrupt */ - - error = qat_ae_clear_reset(sc); - if (error) - return error; - - qat_ae_clear_xfer(sc); - - if (!sc->sc_hw.qhw_fw_auth) { - error = qat_ae_clear_gprs(sc); - if (error) - return error; - } - - /* Set SIGNATURE_ENABLE[0] to 0x1 in order to enable ALU_OUT csr */ - for (ae = 0, mask = sc->sc_ae_mask; mask; ae++, mask >>= 1) { - if (!(mask & 1)) - continue; - qat_ae_read_4(sc, ae, SIGNATURE_ENABLE, &val); - val |= 0x1; - qat_ae_write_4(sc, ae, SIGNATURE_ENABLE, val); - } - - error = qat_ae_clear_reset(sc); - if (error) - return error; - - /* XXX XXX XXX Clean MMP memory if mem scrub is supported */ - /* halMem_ScrubMMPMemory */ - - return 0; -} - -int -qat_ae_start(struct qat_softc *sc) -{ - int error; - u_char ae; - - for (ae = 0; ae < sc->sc_ae_num; ae++) { - if ((sc->sc_ae_mask & (1 << ae)) == 0) - continue; - - error = qat_aefw_start(sc, ae, 0xff); - if (error) - return error; - } - - return 0; -} - -void -qat_ae_cluster_intr(void *arg) -{ - /* Nothing to implement until we support SRIOV. 
*/ - printf("qat_ae_cluster_intr\n"); -} - -static int -qat_ae_clear_reset(struct qat_softc *sc) -{ - int error; - uint32_t times, reset, clock, reg, mask; - u_char ae; - - reset = qat_cap_global_read_4(sc, CAP_GLOBAL_CTL_RESET); - reset &= ~(__SHIFTIN(sc->sc_ae_mask, CAP_GLOBAL_CTL_RESET_AE_MASK)); - reset &= ~(__SHIFTIN(sc->sc_accel_mask, CAP_GLOBAL_CTL_RESET_ACCEL_MASK)); - times = TIMEOUT_AE_RESET; - do { - qat_cap_global_write_4(sc, CAP_GLOBAL_CTL_RESET, reset); - if ((times--) == 0) { - device_printf(sc->sc_dev, "couldn't reset AEs\n"); - return EBUSY; - } - reg = qat_cap_global_read_4(sc, CAP_GLOBAL_CTL_RESET); - } while ((__SHIFTIN(sc->sc_ae_mask, CAP_GLOBAL_CTL_RESET_AE_MASK) | - __SHIFTIN(sc->sc_accel_mask, CAP_GLOBAL_CTL_RESET_ACCEL_MASK)) - & reg); - - /* Enable clock for AE and QAT */ - clock = qat_cap_global_read_4(sc, CAP_GLOBAL_CTL_CLK_EN); - clock |= __SHIFTIN(sc->sc_ae_mask, CAP_GLOBAL_CTL_CLK_EN_AE_MASK); - clock |= __SHIFTIN(sc->sc_accel_mask, CAP_GLOBAL_CTL_CLK_EN_ACCEL_MASK); - qat_cap_global_write_4(sc, CAP_GLOBAL_CTL_CLK_EN, clock); - - error = qat_ae_check(sc); - if (error) - return error; - - /* - * Set undefined power-up/reset states to reasonable default values... 
- * just to make sure we're starting from a known point - */ - for (ae = 0, mask = sc->sc_ae_mask; mask; ae++, mask >>= 1) { - if (!(mask & 1)) - continue; - - /* init the ctx_enable */ - qat_ae_write_4(sc, ae, CTX_ENABLES, - CTX_ENABLES_INIT); - - /* initialize the PCs */ - qat_ae_ctx_indr_write(sc, ae, AE_ALL_CTX, - CTX_STS_INDIRECT, - UPC_MASK & CTX_STS_INDIRECT_UPC_INIT); - - /* init the ctx_arb */ - qat_ae_write_4(sc, ae, CTX_ARB_CNTL, - CTX_ARB_CNTL_INIT); - - /* enable cc */ - qat_ae_write_4(sc, ae, CC_ENABLE, - CC_ENABLE_INIT); - qat_ae_ctx_indr_write(sc, ae, AE_ALL_CTX, - CTX_WAKEUP_EVENTS_INDIRECT, - CTX_WAKEUP_EVENTS_INDIRECT_INIT); - qat_ae_ctx_indr_write(sc, ae, AE_ALL_CTX, - CTX_SIG_EVENTS_INDIRECT, - CTX_SIG_EVENTS_INDIRECT_INIT); - } - - if ((sc->sc_ae_mask != 0) && - sc->sc_flags & QAT_FLAG_ESRAM_ENABLE_AUTO_INIT) { - /* XXX XXX XXX init eSram only when this is boot time */ - } - - if ((sc->sc_ae_mask != 0) && - sc->sc_flags & QAT_FLAG_SHRAM_WAIT_READY) { - /* XXX XXX XXX wait shram to complete initialization */ - } - - qat_ae_reset_timestamp(sc); - - return 0; -} - -static int -qat_ae_check(struct qat_softc *sc) -{ - int error, times, ae; - uint32_t cnt, pcnt, mask; - - for (ae = 0, mask = sc->sc_ae_mask; mask; ae++, mask >>= 1) { - if (!(mask & 1)) - continue; - - times = TIMEOUT_AE_CHECK; - error = qat_ae_read_4(sc, ae, PROFILE_COUNT, &cnt); - if (error) { - device_printf(sc->sc_dev, - "couldn't access AE %d CSR\n", ae); - return error; - } - pcnt = cnt & 0xffff; - - while (1) { - error = qat_ae_read_4(sc, ae, - PROFILE_COUNT, &cnt); - if (error) { - device_printf(sc->sc_dev, - "couldn't access AE %d CSR\n", ae); - return error; - } - cnt &= 0xffff; - if (cnt == pcnt) - times--; - else - break; - if (times <= 0) { - device_printf(sc->sc_dev, - "AE %d CSR is useless\n", ae); - return EFAULT; - } - } - } - - return 0; -} - -static int -qat_ae_reset_timestamp(struct qat_softc *sc) -{ - uint32_t misc, mask; - u_char ae; - - /* stop the timestamp 
timers */ - misc = qat_cap_global_read_4(sc, CAP_GLOBAL_CTL_MISC); - if (misc & CAP_GLOBAL_CTL_MISC_TIMESTAMP_EN) { - qat_cap_global_write_4(sc, CAP_GLOBAL_CTL_MISC, - misc & (~CAP_GLOBAL_CTL_MISC_TIMESTAMP_EN)); - } - - for (ae = 0, mask = sc->sc_ae_mask; mask; ae++, mask >>= 1) { - if (!(mask & 1)) - continue; - qat_ae_write_4(sc, ae, TIMESTAMP_LOW, 0); - qat_ae_write_4(sc, ae, TIMESTAMP_HIGH, 0); - } - - /* start timestamp timers */ - qat_cap_global_write_4(sc, CAP_GLOBAL_CTL_MISC, - misc | CAP_GLOBAL_CTL_MISC_TIMESTAMP_EN); - - return 0; -} - -static void -qat_ae_clear_xfer(struct qat_softc *sc) -{ - u_int mask, reg; - u_char ae; - - for (ae = 0, mask = sc->sc_ae_mask; mask; ae++, mask >>= 1) { - if (!(mask & 1)) - continue; - - for (reg = 0; reg < MAX_GPR_REG; reg++) { - qat_aereg_abs_data_write(sc, ae, AEREG_SR_RD_ABS, - reg, 0); - qat_aereg_abs_data_write(sc, ae, AEREG_DR_RD_ABS, - reg, 0); - } - } -} - -static int -qat_ae_clear_gprs(struct qat_softc *sc) -{ - uint32_t val; - uint32_t saved_ctx = 0; - int times = TIMEOUT_AE_CHECK, rv; - u_char ae; - u_int mask; - - for (ae = 0, mask = sc->sc_ae_mask; mask; ae++, mask >>= 1) { - if (!(mask & 1)) - continue; - - /* turn off share control store bit */ - val = qat_ae_read_4(sc, ae, AE_MISC_CONTROL, &val); - val &= ~AE_MISC_CONTROL_SHARE_CS; - qat_ae_write_4(sc, ae, AE_MISC_CONTROL, val); - - /* turn off ucode parity */ - /* make sure nn_mode is set to self */ - qat_ae_read_4(sc, ae, CTX_ENABLES, &val); - val &= CTX_ENABLES_IGNORE_W1C_MASK; - val |= CTX_ENABLES_NN_MODE; - val &= ~CTX_ENABLES_CNTL_STORE_PARITY_ENABLE; - qat_ae_write_4(sc, ae, CTX_ENABLES, val); - - /* copy instructions to ustore */ - qat_ae_ucode_write(sc, ae, 0, nitems(ae_clear_gprs_inst), - ae_clear_gprs_inst); - - /* set PC */ - qat_ae_ctx_indr_write(sc, ae, AE_ALL_CTX, CTX_STS_INDIRECT, - UPC_MASK & CTX_STS_INDIRECT_UPC_INIT); - - /* save current context */ - qat_ae_read_4(sc, ae, ACTIVE_CTX_STATUS, &saved_ctx); - /* change the active context 
*/ - /* start the context from ctx 0 */ - qat_ae_write_4(sc, ae, ACTIVE_CTX_STATUS, 0); - - /* wakeup-event voluntary */ - qat_ae_ctx_indr_write(sc, ae, AE_ALL_CTX, - CTX_WAKEUP_EVENTS_INDIRECT, - CTX_WAKEUP_EVENTS_INDIRECT_VOLUNTARY); - /* clean signals */ - qat_ae_ctx_indr_write(sc, ae, AE_ALL_CTX, - CTX_SIG_EVENTS_INDIRECT, 0); - qat_ae_write_4(sc, ae, CTX_SIG_EVENTS_ACTIVE, 0); - - qat_ae_enable_ctx(sc, ae, AE_ALL_CTX); - } - - for (ae = 0, mask = sc->sc_ae_mask; mask; ae++, mask >>= 1) { - if (!(mask & 1)) - continue; - /* wait for AE to finish */ - do { - rv = qat_ae_wait_num_cycles(sc, ae, AE_EXEC_CYCLE, 1); - } while (rv && times--); - if (times <= 0) { - device_printf(sc->sc_dev, - "qat_ae_clear_gprs timeout"); - return ETIMEDOUT; - } - qat_ae_disable_ctx(sc, ae, AE_ALL_CTX); - /* change the active context */ - qat_ae_write_4(sc, ae, ACTIVE_CTX_STATUS, - saved_ctx & ACTIVE_CTX_STATUS_ACNO); - /* init the ctx_enable */ - qat_ae_write_4(sc, ae, CTX_ENABLES, CTX_ENABLES_INIT); - /* initialize the PCs */ - qat_ae_ctx_indr_write(sc, ae, AE_ALL_CTX, - CTX_STS_INDIRECT, UPC_MASK & CTX_STS_INDIRECT_UPC_INIT); - /* init the ctx_arb */ - qat_ae_write_4(sc, ae, CTX_ARB_CNTL, CTX_ARB_CNTL_INIT); - /* enable cc */ - qat_ae_write_4(sc, ae, CC_ENABLE, CC_ENABLE_INIT); - qat_ae_ctx_indr_write(sc, ae, AE_ALL_CTX, - CTX_WAKEUP_EVENTS_INDIRECT, CTX_WAKEUP_EVENTS_INDIRECT_INIT); - qat_ae_ctx_indr_write(sc, ae, AE_ALL_CTX, CTX_SIG_EVENTS_INDIRECT, - CTX_SIG_EVENTS_INDIRECT_INIT); - } - - return 0; -} - -static void -qat_ae_get_shared_ustore_ae(u_char ae, u_char *nae) -{ - if (ae & 0x1) - *nae = ae - 1; - else - *nae = ae + 1; -} - -static u_int -qat_ae_ucode_parity64(uint64_t ucode) -{ - - ucode ^= ucode >> 1; - ucode ^= ucode >> 2; - ucode ^= ucode >> 4; - ucode ^= ucode >> 8; - ucode ^= ucode >> 16; - ucode ^= ucode >> 32; - - return ((u_int)(ucode & 1)); -} - -static uint64_t -qat_ae_ucode_set_ecc(uint64_t ucode) -{ - static const uint64_t - bit0mask=0xff800007fffULL, 
bit1mask=0x1f801ff801fULL,
-	    bit2mask=0xe387e0781e1ULL, bit3mask=0x7cb8e388e22ULL,
-	    bit4mask=0xaf5b2c93244ULL, bit5mask=0xf56d5525488ULL,
-	    bit6mask=0xdaf69a46910ULL;
-
-	/* clear the ecc bits */
-	ucode &= ~(0x7fULL << USTORE_ECC_BIT_0);
-
-	ucode |= (uint64_t)qat_ae_ucode_parity64(bit0mask & ucode) <<
-	    USTORE_ECC_BIT_0;
-	ucode |= (uint64_t)qat_ae_ucode_parity64(bit1mask & ucode) <<
-	    USTORE_ECC_BIT_1;
-	ucode |= (uint64_t)qat_ae_ucode_parity64(bit2mask & ucode) <<
-	    USTORE_ECC_BIT_2;
-	ucode |= (uint64_t)qat_ae_ucode_parity64(bit3mask & ucode) <<
-	    USTORE_ECC_BIT_3;
-	ucode |= (uint64_t)qat_ae_ucode_parity64(bit4mask & ucode) <<
-	    USTORE_ECC_BIT_4;
-	ucode |= (uint64_t)qat_ae_ucode_parity64(bit5mask & ucode) <<
-	    USTORE_ECC_BIT_5;
-	ucode |= (uint64_t)qat_ae_ucode_parity64(bit6mask & ucode) <<
-	    USTORE_ECC_BIT_6;
-
-	return (ucode);
-}
-
-static int
-qat_ae_ucode_write(struct qat_softc *sc, u_char ae, u_int uaddr, u_int ninst,
-    const uint64_t *ucode)
-{
-	uint64_t tmp;
-	uint32_t ustore_addr, ulo, uhi;
-	int i;
-
-	qat_ae_read_4(sc, ae, USTORE_ADDRESS, &ustore_addr);
-	uaddr |= USTORE_ADDRESS_ECS;
-
-	qat_ae_write_4(sc, ae, USTORE_ADDRESS, uaddr);
-	for (i = 0; i < ninst; i++) {
-		tmp = qat_ae_ucode_set_ecc(ucode[i]);
-		ulo = (uint32_t)(tmp & 0xffffffff);
-		uhi = (uint32_t)(tmp >> 32);
-
-		qat_ae_write_4(sc, ae, USTORE_DATA_LOWER, ulo);
-		/* this will auto increment the address */
-		qat_ae_write_4(sc, ae, USTORE_DATA_UPPER, uhi);
-	}
-	qat_ae_write_4(sc, ae, USTORE_ADDRESS, ustore_addr);
-
-	return 0;
-}
-
-static int
-qat_ae_ucode_read(struct qat_softc *sc, u_char ae, u_int uaddr, u_int ninst,
-    uint64_t *ucode)
-{
-	uint32_t misc, ustore_addr, ulo, uhi;
-	u_int ii;
-	u_char nae;
-
-	if (qat_ae_get_status(sc, ae) != QAT_AE_DISABLED)
-		return EBUSY;
-
-	/* determine whether its neighbour AE runs in shared control store
-	 * status */
-	qat_ae_read_4(sc, ae, AE_MISC_CONTROL, &misc);
-	if (misc & AE_MISC_CONTROL_SHARE_CS) {
-		qat_ae_get_shared_ustore_ae(ae,
&nae); - if ((sc->sc_ae_mask & (1 << nae)) && qat_ae_is_active(sc, nae)) - return EBUSY; - } - - /* if reloadable, then get it all from dram-ustore */ - if (__SHIFTOUT(misc, AE_MISC_CONTROL_CS_RELOAD)) - panic("notyet"); /* XXX getReloadUwords */ - - /* disable SHARE_CS bit to workaround silicon bug */ - qat_ae_write_4(sc, ae, AE_MISC_CONTROL, misc & 0xfffffffb); - - MPASS(uaddr + ninst <= USTORE_SIZE); - - /* save ustore-addr csr */ - qat_ae_read_4(sc, ae, USTORE_ADDRESS, &ustore_addr); - - uaddr |= USTORE_ADDRESS_ECS; /* enable ecs bit */ - for (ii = 0; ii < ninst; ii++) { - qat_ae_write_4(sc, ae, USTORE_ADDRESS, uaddr); - - uaddr++; - qat_ae_read_4(sc, ae, USTORE_DATA_LOWER, &ulo); - qat_ae_read_4(sc, ae, USTORE_DATA_UPPER, &uhi); - ucode[ii] = uhi; - ucode[ii] = (ucode[ii] << 32) | ulo; - } - - /* restore SHARE_CS bit to workaround silicon bug */ - qat_ae_write_4(sc, ae, AE_MISC_CONTROL, misc); - qat_ae_write_4(sc, ae, USTORE_ADDRESS, ustore_addr); - - return 0; -} - -static u_int -qat_ae_concat_ucode(uint64_t *ucode, u_int ninst, u_int size, u_int addr, - u_int *value) -{ - const uint64_t *inst_arr; - u_int ninst0, curvalue; - int ii, vali, fixup, usize = 0; - - if (size == 0) - return 0; - - ninst0 = ninst; - vali = 0; - curvalue = value[vali++]; - - switch (size) { - case 0x1: - inst_arr = ae_inst_1b; - usize = nitems(ae_inst_1b); - break; - case 0x2: - inst_arr = ae_inst_2b; - usize = nitems(ae_inst_2b); - break; - case 0x3: - inst_arr = ae_inst_3b; - usize = nitems(ae_inst_3b); - break; - default: - inst_arr = ae_inst_4b; - usize = nitems(ae_inst_4b); - break; - } - - fixup = ninst; - for (ii = 0; ii < usize; ii++) - ucode[ninst++] = inst_arr[ii]; - - INSERT_IMMED_GPRA_CONST(ucode[fixup], (addr)); - fixup++; - INSERT_IMMED_GPRA_CONST(ucode[fixup], 0); - fixup++; - INSERT_IMMED_GPRB_CONST(ucode[fixup], (curvalue >> 0)); - fixup++; - INSERT_IMMED_GPRB_CONST(ucode[fixup], (curvalue >> 16)); - /* XXX fixup++ ? 
*/ - - if (size <= 0x4) - return (ninst - ninst0); - - size -= sizeof(u_int); - while (size >= sizeof(u_int)) { - curvalue = value[vali++]; - fixup = ninst; - ucode[ninst++] = ae_inst_4b[0x2]; - ucode[ninst++] = ae_inst_4b[0x3]; - ucode[ninst++] = ae_inst_4b[0x8]; - INSERT_IMMED_GPRB_CONST(ucode[fixup], (curvalue >> 16)); - fixup++; - INSERT_IMMED_GPRB_CONST(ucode[fixup], (curvalue >> 0)); - /* XXX fixup++ ? */ - - addr += sizeof(u_int); - size -= sizeof(u_int); - } - /* call this function recursively when the remaining size is less than 4 */ - ninst += - qat_ae_concat_ucode(ucode, ninst, size, addr, value + vali); - - return (ninst - ninst0); -} - -static int -qat_ae_exec_ucode(struct qat_softc *sc, u_char ae, u_char ctx, - uint64_t *ucode, u_int ninst, int cond_code_off, u_int max_cycles, - u_int *endpc) -{ - int error = 0, share_cs = 0; - uint64_t savucode[MAX_EXEC_INST]; - uint32_t indr_lm_addr_0, indr_lm_addr_1; - uint32_t indr_lm_addr_byte_0, indr_lm_addr_byte_1; - uint32_t indr_future_cnt_sig; - uint32_t indr_sig, active_sig; - uint32_t wakeup_ev, savpc, savcc, savctx, ctxarbctl; - uint32_t misc, nmisc, ctxen; - u_char nae; - - MPASS(ninst <= USTORE_SIZE); - - if (qat_ae_is_active(sc, ae)) - return EBUSY; - - /* save current LM addr */ - qat_ae_ctx_indr_read(sc, ae, ctx, LM_ADDR_0_INDIRECT, &indr_lm_addr_0); - qat_ae_ctx_indr_read(sc, ae, ctx, LM_ADDR_1_INDIRECT, &indr_lm_addr_1); - qat_ae_ctx_indr_read(sc, ae, ctx, INDIRECT_LM_ADDR_0_BYTE_INDEX, - &indr_lm_addr_byte_0); - qat_ae_ctx_indr_read(sc, ae, ctx, INDIRECT_LM_ADDR_1_BYTE_INDEX, - &indr_lm_addr_byte_1); - - /* backup shared control store bit, and force AE to - non-shared mode before executing ucode snippet */ - qat_ae_read_4(sc, ae, AE_MISC_CONTROL, &misc); - if (misc & AE_MISC_CONTROL_SHARE_CS) { - share_cs = 1; - qat_ae_get_shared_ustore_ae(ae, &nae); - if ((sc->sc_ae_mask & (1 << nae)) && qat_ae_is_active(sc, nae)) - return EBUSY; - } - nmisc = misc & ~AE_MISC_CONTROL_SHARE_CS; - qat_ae_write_4(sc, ae, 
AE_MISC_CONTROL, nmisc); - - /* save current states: */ - if (ninst <= MAX_EXEC_INST) { - error = qat_ae_ucode_read(sc, ae, 0, ninst, savucode); - if (error) { - qat_ae_write_4(sc, ae, AE_MISC_CONTROL, misc); - return error; - } - } - - /* save wakeup-events */ - qat_ae_ctx_indr_read(sc, ae, ctx, CTX_WAKEUP_EVENTS_INDIRECT, - &wakeup_ev); - /* save PC */ - qat_ae_ctx_indr_read(sc, ae, ctx, CTX_STS_INDIRECT, &savpc); - savpc &= UPC_MASK; - - /* save ctx enables */ - qat_ae_read_4(sc, ae, CTX_ENABLES, &ctxen); - ctxen &= CTX_ENABLES_IGNORE_W1C_MASK; - /* save conditional-code */ - qat_ae_read_4(sc, ae, CC_ENABLE, &savcc); - /* save current context */ - qat_ae_read_4(sc, ae, ACTIVE_CTX_STATUS, &savctx); - qat_ae_read_4(sc, ae, CTX_ARB_CNTL, &ctxarbctl); - - /* save indirect csrs */ - qat_ae_ctx_indr_read(sc, ae, ctx, FUTURE_COUNT_SIGNAL_INDIRECT, - &indr_future_cnt_sig); - qat_ae_ctx_indr_read(sc, ae, ctx, CTX_SIG_EVENTS_INDIRECT, &indr_sig); - qat_ae_read_4(sc, ae, CTX_SIG_EVENTS_ACTIVE, &active_sig); - - /* turn off ucode parity */ - qat_ae_write_4(sc, ae, CTX_ENABLES, - ctxen & ~CTX_ENABLES_CNTL_STORE_PARITY_ENABLE); - - /* copy instructions to ustore */ - qat_ae_ucode_write(sc, ae, 0, ninst, ucode); - /* set PC */ - qat_ae_ctx_indr_write(sc, ae, 1 << ctx, CTX_STS_INDIRECT, 0); - /* change the active context */ - qat_ae_write_4(sc, ae, ACTIVE_CTX_STATUS, - ctx & ACTIVE_CTX_STATUS_ACNO); - - if (cond_code_off) { - /* disable conditional-code*/ - qat_ae_write_4(sc, ae, CC_ENABLE, savcc & 0xffffdfff); - } - - /* wakeup-event voluntary */ - qat_ae_ctx_indr_write(sc, ae, 1 << ctx, - CTX_WAKEUP_EVENTS_INDIRECT, CTX_WAKEUP_EVENTS_INDIRECT_VOLUNTARY); - - /* clean signals */ - qat_ae_ctx_indr_write(sc, ae, 1 << ctx, CTX_SIG_EVENTS_INDIRECT, 0); - qat_ae_write_4(sc, ae, CTX_SIG_EVENTS_ACTIVE, 0); - - /* enable context */ - qat_ae_enable_ctx(sc, ae, 1 << ctx); - - /* wait for it to finish */ - if (qat_ae_wait_num_cycles(sc, ae, max_cycles, 1) != 0) - error = ETIMEDOUT; - - 
/* see if we need to get the current PC */ - if (endpc != NULL) { - uint32_t ctx_status; - - qat_ae_ctx_indr_read(sc, ae, ctx, CTX_STS_INDIRECT, - &ctx_status); - *endpc = ctx_status & UPC_MASK; - } -#if 0 - { - uint32_t ctx_status; - - qat_ae_ctx_indr_read(sc, ae, ctx, CTX_STS_INDIRECT, - &ctx_status); - printf("%s: endpc 0x%08x\n", __func__, - ctx_status & UPC_MASK); - } -#endif - - /* restore to previous states: */ - /* disable context */ - qat_ae_disable_ctx(sc, ae, 1 << ctx); - if (ninst <= MAX_EXEC_INST) { - /* instructions */ - qat_ae_ucode_write(sc, ae, 0, ninst, savucode); - } - /* wakeup-events */ - qat_ae_ctx_indr_write(sc, ae, 1 << ctx, CTX_WAKEUP_EVENTS_INDIRECT, - wakeup_ev); - qat_ae_ctx_indr_write(sc, ae, 1 << ctx, CTX_STS_INDIRECT, savpc); - - /* only restore shared control store bit, - other bits might be changed by AE code snippet */ - qat_ae_read_4(sc, ae, AE_MISC_CONTROL, &misc); - if (share_cs) - nmisc = misc | AE_MISC_CONTROL_SHARE_CS; - else - nmisc = misc & ~AE_MISC_CONTROL_SHARE_CS; - qat_ae_write_4(sc, ae, AE_MISC_CONTROL, nmisc); - /* conditional-code */ - qat_ae_write_4(sc, ae, CC_ENABLE, savcc); - /* change the active context */ - qat_ae_write_4(sc, ae, ACTIVE_CTX_STATUS, - savctx & ACTIVE_CTX_STATUS_ACNO); - /* restore the next ctx to run */ - qat_ae_write_4(sc, ae, CTX_ARB_CNTL, ctxarbctl); - /* restore current LM addr */ - qat_ae_ctx_indr_write(sc, ae, 1 << ctx, LM_ADDR_0_INDIRECT, - indr_lm_addr_0); - qat_ae_ctx_indr_write(sc, ae, 1 << ctx, LM_ADDR_1_INDIRECT, - indr_lm_addr_1); - qat_ae_ctx_indr_write(sc, ae, 1 << ctx, INDIRECT_LM_ADDR_0_BYTE_INDEX, - indr_lm_addr_byte_0); - qat_ae_ctx_indr_write(sc, ae, 1 << ctx, INDIRECT_LM_ADDR_1_BYTE_INDEX, - indr_lm_addr_byte_1); - - /* restore indirect csrs */ - qat_ae_ctx_indr_write(sc, ae, 1 << ctx, FUTURE_COUNT_SIGNAL_INDIRECT, - indr_future_cnt_sig); - qat_ae_ctx_indr_write(sc, ae, 1 << ctx, CTX_SIG_EVENTS_INDIRECT, - indr_sig); - qat_ae_write_4(sc, ae, CTX_SIG_EVENTS_ACTIVE, active_sig); - 
- /* ctx-enables */ - qat_ae_write_4(sc, ae, CTX_ENABLES, ctxen); - - return error; -} - -static int -qat_ae_exec_ucode_init_lm(struct qat_softc *sc, u_char ae, u_char ctx, - int *first_exec, uint64_t *ucode, u_int ninst, - u_int *gpr_a0, u_int *gpr_a1, u_int *gpr_a2, u_int *gpr_b0, u_int *gpr_b1) -{ - - if (*first_exec) { - qat_aereg_rel_data_read(sc, ae, ctx, AEREG_GPA_REL, 0, gpr_a0); - qat_aereg_rel_data_read(sc, ae, ctx, AEREG_GPA_REL, 1, gpr_a1); - qat_aereg_rel_data_read(sc, ae, ctx, AEREG_GPA_REL, 2, gpr_a2); - qat_aereg_rel_data_read(sc, ae, ctx, AEREG_GPB_REL, 0, gpr_b0); - qat_aereg_rel_data_read(sc, ae, ctx, AEREG_GPB_REL, 1, gpr_b1); - *first_exec = 0; - } - - return qat_ae_exec_ucode(sc, ae, ctx, ucode, ninst, 1, ninst * 5, NULL); -} - -static int -qat_ae_restore_init_lm_gprs(struct qat_softc *sc, u_char ae, u_char ctx, - u_int gpr_a0, u_int gpr_a1, u_int gpr_a2, u_int gpr_b0, u_int gpr_b1) -{ - qat_aereg_rel_data_write(sc, ae, ctx, AEREG_GPA_REL, 0, gpr_a0); - qat_aereg_rel_data_write(sc, ae, ctx, AEREG_GPA_REL, 1, gpr_a1); - qat_aereg_rel_data_write(sc, ae, ctx, AEREG_GPA_REL, 2, gpr_a2); - qat_aereg_rel_data_write(sc, ae, ctx, AEREG_GPB_REL, 0, gpr_b0); - qat_aereg_rel_data_write(sc, ae, ctx, AEREG_GPB_REL, 1, gpr_b1); - - return 0; -} - -static int -qat_ae_get_inst_num(int lmsize) -{ - int ninst, left; - - if (lmsize == 0) - return 0; - - left = lmsize % sizeof(u_int); - - if (left) { - ninst = nitems(ae_inst_1b) + - qat_ae_get_inst_num(lmsize - left); - } else { - /* 3 instructions are needed for further code */ - ninst = (lmsize - sizeof(u_int)) * 3 / 4 + nitems(ae_inst_4b); - } - - return (ninst); -} - -static int -qat_ae_batch_put_lm(struct qat_softc *sc, u_char ae, - struct qat_ae_batch_init_list *qabi_list, size_t nqabi) -{ - struct qat_ae_batch_init *qabi; - size_t alloc_ninst, ninst; - uint64_t *ucode; - u_int gpr_a0, gpr_a1, gpr_a2, gpr_b0, gpr_b1; - int insnsz, error = 0, execed = 0, first_exec = 1; - - if (STAILQ_FIRST(qabi_list) == NULL) 
- return 0; - - alloc_ninst = min(USTORE_SIZE, nqabi); - ucode = qat_alloc_mem(sizeof(uint64_t) * alloc_ninst); - - ninst = 0; - STAILQ_FOREACH(qabi, qabi_list, qabi_next) { - insnsz = qat_ae_get_inst_num(qabi->qabi_size); - if (insnsz + ninst > alloc_ninst) { - /* add ctx_arb[kill] */ - ucode[ninst++] = 0x0E000010000ull; - execed = 1; - - error = qat_ae_exec_ucode_init_lm(sc, ae, 0, - &first_exec, ucode, ninst, - &gpr_a0, &gpr_a1, &gpr_a2, &gpr_b0, &gpr_b1); - if (error) { - qat_ae_restore_init_lm_gprs(sc, ae, 0, - gpr_a0, gpr_a1, gpr_a2, gpr_b0, gpr_b1); - qat_free_mem(ucode); - return error; - } - /* run microExec to execute the microcode */ - ninst = 0; - } - ninst += qat_ae_concat_ucode(ucode, ninst, - qabi->qabi_size, qabi->qabi_addr, qabi->qabi_value); - } - - if (ninst > 0) { - ucode[ninst++] = 0x0E000010000ull; - execed = 1; - - error = qat_ae_exec_ucode_init_lm(sc, ae, 0, - &first_exec, ucode, ninst, - &gpr_a0, &gpr_a1, &gpr_a2, &gpr_b0, &gpr_b1); - } - if (execed) { - qat_ae_restore_init_lm_gprs(sc, ae, 0, - gpr_a0, gpr_a1, gpr_a2, gpr_b0, gpr_b1); - } - - qat_free_mem(ucode); - - return error; -} - -static int -qat_ae_write_pc(struct qat_softc *sc, u_char ae, u_int ctx_mask, u_int upc) -{ - - if (qat_ae_is_active(sc, ae)) - return EBUSY; - - qat_ae_ctx_indr_write(sc, ae, ctx_mask, CTX_STS_INDIRECT, - UPC_MASK & upc); - return 0; -} - -static inline u_int -qat_aefw_csum_calc(u_int reg, int ch) -{ - int i; - u_int topbit = CRC_BITMASK(CRC_WIDTH - 1); - u_int inbyte = (u_int)((reg >> 0x18) ^ ch); - - reg ^= inbyte << (CRC_WIDTH - 0x8); - for (i = 0; i < 0x8; i++) { - if (reg & topbit) - reg = (reg << 1) ^ CRC_POLY; - else - reg <<= 1; - } - - return (reg & CRC_WIDTHMASK(CRC_WIDTH)); -} - -static u_int -qat_aefw_csum(char *buf, int size) -{ - u_int csum = 0; - - while (size--) { - csum = qat_aefw_csum_calc(csum, *buf++); - } - - return csum; -} - -static const char * -qat_aefw_uof_string(struct qat_softc *sc, size_t offset) -{ - if (offset >= 
sc->sc_aefw_uof.qafu_str_tab_size) - return NULL; - if (sc->sc_aefw_uof.qafu_str_tab == NULL) - return NULL; - - return (const char *)((uintptr_t)sc->sc_aefw_uof.qafu_str_tab + offset); -} - -static struct uof_chunk_hdr * -qat_aefw_uof_find_chunk(struct qat_softc *sc, - const char *id, struct uof_chunk_hdr *cur) -{ - struct uof_obj_hdr *uoh = sc->sc_aefw_uof.qafu_obj_hdr; - struct uof_chunk_hdr *uch; - int i; - - uch = (struct uof_chunk_hdr *)(uoh + 1); - for (i = 0; i < uoh->uoh_num_chunks; i++, uch++) { - if (uch->uch_offset + uch->uch_size > sc->sc_aefw_uof.qafu_size) - return NULL; - - if (cur < uch && !strncmp(uch->uch_id, id, UOF_OBJ_ID_LEN)) - return uch; - } - - return NULL; -} - -static int -qat_aefw_load_mof(struct qat_softc *sc) -{ - const struct firmware *fw; - - fw = firmware_get(sc->sc_hw.qhw_mof_fwname); - if (fw == NULL) { - device_printf(sc->sc_dev, "couldn't load MOF firmware %s\n", - sc->sc_hw.qhw_mof_fwname); - return ENXIO; - } - - sc->sc_fw_mof = qat_alloc_mem(fw->datasize); - sc->sc_fw_mof_size = fw->datasize; - memcpy(sc->sc_fw_mof, fw->data, fw->datasize); - firmware_put(fw, FIRMWARE_UNLOAD); - return 0; -} - -static void -qat_aefw_unload_mof(struct qat_softc *sc) -{ - if (sc->sc_fw_mof != NULL) { - qat_free_mem(sc->sc_fw_mof); - sc->sc_fw_mof = NULL; - } -} - -static int -qat_aefw_load_mmp(struct qat_softc *sc) -{ - const struct firmware *fw; - - fw = firmware_get(sc->sc_hw.qhw_mmp_fwname); - if (fw == NULL) { - device_printf(sc->sc_dev, "couldn't load MMP firmware %s\n", - sc->sc_hw.qhw_mmp_fwname); - return ENXIO; - } - - sc->sc_fw_mmp = qat_alloc_mem(fw->datasize); - sc->sc_fw_mmp_size = fw->datasize; - memcpy(sc->sc_fw_mmp, fw->data, fw->datasize); - firmware_put(fw, FIRMWARE_UNLOAD); - return 0; -} - -static void -qat_aefw_unload_mmp(struct qat_softc *sc) -{ - if (sc->sc_fw_mmp != NULL) { - qat_free_mem(sc->sc_fw_mmp); - sc->sc_fw_mmp = NULL; - } -} - -static int -qat_aefw_mof_find_uof0(struct qat_softc *sc, - struct mof_uof_hdr *muh, 
struct mof_uof_chunk_hdr *head, - u_int nchunk, size_t size, const char *id, - size_t *fwsize, void **fwptr) -{ - int i; - char *uof_name; - - for (i = 0; i < nchunk; i++) { - struct mof_uof_chunk_hdr *much = &head[i]; - - if (strncmp(much->much_id, id, MOF_OBJ_ID_LEN)) - return EINVAL; - - if (much->much_offset + much->much_size > size) - return EINVAL; - - if (sc->sc_mof.qmf_sym_size <= much->much_name) - return EINVAL; - - uof_name = (char *)((uintptr_t)sc->sc_mof.qmf_sym + - much->much_name); - - if (!strcmp(uof_name, sc->sc_fw_uof_name)) { - *fwptr = (void *)((uintptr_t)muh + - (uintptr_t)much->much_offset); - *fwsize = (size_t)much->much_size; - return 0; - } - } - - return ENOENT; -} - -static int -qat_aefw_mof_find_uof(struct qat_softc *sc) -{ - struct mof_uof_hdr *uof_hdr, *suof_hdr; - u_int nuof_chunks = 0, nsuof_chunks = 0; - int error; - - uof_hdr = sc->sc_mof.qmf_uof_objs; - suof_hdr = sc->sc_mof.qmf_suof_objs; - - if (uof_hdr != NULL) { - if (uof_hdr->muh_max_chunks < uof_hdr->muh_num_chunks) { - return EINVAL; - } - nuof_chunks = uof_hdr->muh_num_chunks; - } - if (suof_hdr != NULL) { - if (suof_hdr->muh_max_chunks < suof_hdr->muh_num_chunks) - return EINVAL; - nsuof_chunks = suof_hdr->muh_num_chunks; - } - - if (nuof_chunks + nsuof_chunks == 0) - return EINVAL; - - if (uof_hdr != NULL) { - error = qat_aefw_mof_find_uof0(sc, uof_hdr, - (struct mof_uof_chunk_hdr *)(uof_hdr + 1), nuof_chunks, - sc->sc_mof.qmf_uof_objs_size, UOF_IMAG, - &sc->sc_fw_uof_size, &sc->sc_fw_uof); - if (error && error != ENOENT) - return error; - } - - if (suof_hdr != NULL) { - error = qat_aefw_mof_find_uof0(sc, suof_hdr, - (struct mof_uof_chunk_hdr *)(suof_hdr + 1), nsuof_chunks, - sc->sc_mof.qmf_suof_objs_size, SUOF_IMAG, - &sc->sc_fw_suof_size, &sc->sc_fw_suof); - if (error && error != ENOENT) - return error; - } - - if (sc->sc_fw_uof == NULL && sc->sc_fw_suof == NULL) - return ENOENT; - - return 0; -} - -static int -qat_aefw_mof_parse(struct qat_softc *sc) -{ - const struct 
mof_file_hdr *mfh; - const struct mof_file_chunk_hdr *mfch; - size_t size; - u_int csum; - int error, i; - - size = sc->sc_fw_mof_size; - - if (size < sizeof(struct mof_file_hdr)) - return EINVAL; - size -= sizeof(struct mof_file_hdr); - - mfh = sc->sc_fw_mof; - - if (mfh->mfh_fid != MOF_FID) - return EINVAL; - - csum = qat_aefw_csum((char *)((uintptr_t)sc->sc_fw_mof + - offsetof(struct mof_file_hdr, mfh_min_ver)), - sc->sc_fw_mof_size - - offsetof(struct mof_file_hdr, mfh_min_ver)); - if (mfh->mfh_csum != csum) - return EINVAL; - - if (mfh->mfh_min_ver != MOF_MIN_VER || - mfh->mfh_maj_ver != MOF_MAJ_VER) - return EINVAL; - - if (mfh->mfh_max_chunks < mfh->mfh_num_chunks) - return EINVAL; - - if (size < sizeof(struct mof_file_chunk_hdr) * mfh->mfh_num_chunks) - return EINVAL; - mfch = (const struct mof_file_chunk_hdr *)(mfh + 1); - - for (i = 0; i < mfh->mfh_num_chunks; i++, mfch++) { - if (mfch->mfch_offset + mfch->mfch_size > sc->sc_fw_mof_size) - return EINVAL; - - if (!strncmp(mfch->mfch_id, SYM_OBJS, MOF_OBJ_ID_LEN)) { - if (sc->sc_mof.qmf_sym != NULL) - return EINVAL; - - sc->sc_mof.qmf_sym = - (void *)((uintptr_t)sc->sc_fw_mof + - (uintptr_t)mfch->mfch_offset + sizeof(u_int)); - sc->sc_mof.qmf_sym_size = - *(u_int *)((uintptr_t)sc->sc_fw_mof + - (uintptr_t)mfch->mfch_offset); - - if (sc->sc_mof.qmf_sym_size % sizeof(u_int) != 0) - return EINVAL; - if (mfch->mfch_size != sc->sc_mof.qmf_sym_size + - sizeof(u_int) || mfch->mfch_size == 0) - return EINVAL; - if (*(char *)((uintptr_t)sc->sc_mof.qmf_sym + - sc->sc_mof.qmf_sym_size - 1) != '\0') - return EINVAL; - - } else if (!strncmp(mfch->mfch_id, UOF_OBJS, MOF_OBJ_ID_LEN)) { - if (sc->sc_mof.qmf_uof_objs != NULL) - return EINVAL; - - sc->sc_mof.qmf_uof_objs = - (void *)((uintptr_t)sc->sc_fw_mof + - (uintptr_t)mfch->mfch_offset); - sc->sc_mof.qmf_uof_objs_size = mfch->mfch_size; - - } else if (!strncmp(mfch->mfch_id, SUOF_OBJS, MOF_OBJ_ID_LEN)) { - if (sc->sc_mof.qmf_suof_objs != NULL) - return EINVAL; - - 
sc->sc_mof.qmf_suof_objs = - (void *)((uintptr_t)sc->sc_fw_mof + - (uintptr_t)mfch->mfch_offset); - sc->sc_mof.qmf_suof_objs_size = mfch->mfch_size; - } - } - - if (sc->sc_mof.qmf_sym == NULL || - (sc->sc_mof.qmf_uof_objs == NULL && - sc->sc_mof.qmf_suof_objs == NULL)) - return EINVAL; - - error = qat_aefw_mof_find_uof(sc); - if (error) - return error; - return 0; -} - -static int -qat_aefw_uof_parse_image(struct qat_softc *sc, - struct qat_uof_image *qui, struct uof_chunk_hdr *uch) -{ - struct uof_image *image; - struct uof_code_page *page; - uintptr_t base = (uintptr_t)sc->sc_aefw_uof.qafu_obj_hdr; - size_t lim = uch->uch_offset + uch->uch_size, size; - int i, p; - - size = uch->uch_size; - if (size < sizeof(struct uof_image)) - return EINVAL; - size -= sizeof(struct uof_image); - - qui->qui_image = image = - (struct uof_image *)(base + uch->uch_offset); - -#define ASSIGN_OBJ_TAB(np, typep, type, base, off, lim) \ -do { \ - u_int nent; \ - nent = ((struct uof_obj_table *)((base) + (off)))->uot_nentries;\ - if ((lim) < off + sizeof(struct uof_obj_table) + \ - sizeof(type) * nent) \ - return EINVAL; \ - *(np) = nent; \ - if (nent > 0) \ - *(typep) = (type)((struct uof_obj_table *) \ - ((base) + (off)) + 1); \ - else \ - *(typep) = NULL; \ -} while (0) - - ASSIGN_OBJ_TAB(&qui->qui_num_ae_reg, &qui->qui_ae_reg, - struct uof_ae_reg *, base, image->ui_reg_tab, lim); - ASSIGN_OBJ_TAB(&qui->qui_num_init_reg_sym, &qui->qui_init_reg_sym, - struct uof_init_reg_sym *, base, image->ui_init_reg_sym_tab, lim); - ASSIGN_OBJ_TAB(&qui->qui_num_sbreak, &qui->qui_sbreak, - struct qui_sbreak *, base, image->ui_sbreak_tab, lim); - - if (size < sizeof(struct uof_code_page) * image->ui_num_pages) - return EINVAL; - if (nitems(qui->qui_pages) < image->ui_num_pages) - return EINVAL; - - page = (struct uof_code_page *)(image + 1); - - for (p = 0; p < image->ui_num_pages; p++, page++) { - struct qat_uof_page *qup = &qui->qui_pages[p]; - struct uof_code_area *uca; - - qup->qup_page_num = 
page->ucp_page_num; - qup->qup_def_page = page->ucp_def_page; - qup->qup_page_region = page->ucp_page_region; - qup->qup_beg_vaddr = page->ucp_beg_vaddr; - qup->qup_beg_paddr = page->ucp_beg_paddr; - - ASSIGN_OBJ_TAB(&qup->qup_num_uc_var, &qup->qup_uc_var, - struct uof_uword_fixup *, base, - page->ucp_uc_var_tab, lim); - ASSIGN_OBJ_TAB(&qup->qup_num_imp_var, &qup->qup_imp_var, - struct uof_import_var *, base, - page->ucp_imp_var_tab, lim); - ASSIGN_OBJ_TAB(&qup->qup_num_imp_expr, &qup->qup_imp_expr, - struct uof_uword_fixup *, base, - page->ucp_imp_expr_tab, lim); - ASSIGN_OBJ_TAB(&qup->qup_num_neigh_reg, &qup->qup_neigh_reg, - struct uof_uword_fixup *, base, - page->ucp_neigh_reg_tab, lim); - - if (lim < page->ucp_code_area + sizeof(struct uof_code_area)) - return EINVAL; - - uca = (struct uof_code_area *)(base + page->ucp_code_area); - qup->qup_num_micro_words = uca->uca_num_micro_words; - - ASSIGN_OBJ_TAB(&qup->qup_num_uw_blocks, &qup->qup_uw_blocks, - struct qat_uof_uword_block *, base, - uca->uca_uword_block_tab, lim); - - for (i = 0; i < qup->qup_num_uw_blocks; i++) { - u_int uwordoff = ((struct uof_uword_block *)( - &qup->qup_uw_blocks[i]))->uub_uword_offset; - - if (lim < uwordoff) - return EINVAL; - - qup->qup_uw_blocks[i].quub_micro_words = - (base + uwordoff); - } - } - -#undef ASSIGN_OBJ_TAB - - return 0; -} - -static int -qat_aefw_uof_parse_images(struct qat_softc *sc) -{ - struct uof_chunk_hdr *uch = NULL; - int i, error; - - for (i = 0; i < MAX_NUM_AE * MAX_AE_CTX; i++) { - uch = qat_aefw_uof_find_chunk(sc, UOF_IMAG, uch); - if (uch == NULL) - break; - - if (i >= nitems(sc->sc_aefw_uof.qafu_imgs)) - return ENOENT; - - error = qat_aefw_uof_parse_image(sc, &sc->sc_aefw_uof.qafu_imgs[i], uch); - if (error) - return error; - - sc->sc_aefw_uof.qafu_num_imgs++; - } - - return 0; -} - -static int -qat_aefw_uof_parse(struct qat_softc *sc) -{ - struct uof_file_hdr *ufh; - struct uof_file_chunk_hdr *ufch; - struct uof_obj_hdr *uoh; - struct uof_chunk_hdr *uch; 
- void *uof = NULL; - size_t size, uof_size, hdr_size; - uintptr_t base; - u_int csum; - int i; - - size = sc->sc_fw_uof_size; - if (size < MIN_UOF_SIZE) - return EINVAL; - size -= sizeof(struct uof_file_hdr); - - ufh = sc->sc_fw_uof; - - if (ufh->ufh_id != UOF_FID) - return EINVAL; - if (ufh->ufh_min_ver != UOF_MIN_VER || ufh->ufh_maj_ver != UOF_MAJ_VER) - return EINVAL; - - if (ufh->ufh_max_chunks < ufh->ufh_num_chunks) - return EINVAL; - if (size < sizeof(struct uof_file_chunk_hdr) * ufh->ufh_num_chunks) - return EINVAL; - ufch = (struct uof_file_chunk_hdr *)(ufh + 1); - - uof_size = 0; - for (i = 0; i < ufh->ufh_num_chunks; i++, ufch++) { - if (ufch->ufch_offset + ufch->ufch_size > sc->sc_fw_uof_size) - return EINVAL; - - if (!strncmp(ufch->ufch_id, UOF_OBJS, UOF_OBJ_ID_LEN)) { - if (uof != NULL) - return EINVAL; - - uof = - (void *)((uintptr_t)sc->sc_fw_uof + - ufch->ufch_offset); - uof_size = ufch->ufch_size; - - csum = qat_aefw_csum(uof, uof_size); - if (csum != ufch->ufch_csum) - return EINVAL; - } - } - - if (uof == NULL) - return ENOENT; - - size = uof_size; - if (size < sizeof(struct uof_obj_hdr)) - return EINVAL; - size -= sizeof(struct uof_obj_hdr); - - uoh = uof; - - if (size < sizeof(struct uof_chunk_hdr) * uoh->uoh_num_chunks) - return EINVAL; - - /* Check if the UOF objects are compatible with the chip */ - if ((uoh->uoh_cpu_type & sc->sc_hw.qhw_prod_type) == 0) - return ENOTSUP; - - if (uoh->uoh_min_cpu_ver > sc->sc_rev || - uoh->uoh_max_cpu_ver < sc->sc_rev) - return ENOTSUP; - - sc->sc_aefw_uof.qafu_size = uof_size; - sc->sc_aefw_uof.qafu_obj_hdr = uoh; - - base = (uintptr_t)sc->sc_aefw_uof.qafu_obj_hdr; - - /* map uof string-table */ - uch = qat_aefw_uof_find_chunk(sc, UOF_STRT, NULL); - if (uch != NULL) { - hdr_size = offsetof(struct uof_str_tab, ust_strings); - sc->sc_aefw_uof.qafu_str_tab = - (void *)(base + uch->uch_offset + hdr_size); - sc->sc_aefw_uof.qafu_str_tab_size = uch->uch_size - hdr_size; - } - - /* get ustore mem inits table -- 
should be only one */ - uch = qat_aefw_uof_find_chunk(sc, UOF_IMEM, NULL); - if (uch != NULL) { - if (uch->uch_size < sizeof(struct uof_obj_table)) - return EINVAL; - sc->sc_aefw_uof.qafu_num_init_mem = ((struct uof_obj_table *)(base + - uch->uch_offset))->uot_nentries; - if (sc->sc_aefw_uof.qafu_num_init_mem) { - sc->sc_aefw_uof.qafu_init_mem = - (struct uof_init_mem *)(base + uch->uch_offset + - sizeof(struct uof_obj_table)); - sc->sc_aefw_uof.qafu_init_mem_size = - uch->uch_size - sizeof(struct uof_obj_table); - } - } - - uch = qat_aefw_uof_find_chunk(sc, UOF_MSEG, NULL); - if (uch != NULL) { - if (uch->uch_size < sizeof(struct uof_obj_table) + - sizeof(struct uof_var_mem_seg)) - return EINVAL; - sc->sc_aefw_uof.qafu_var_mem_seg = - (struct uof_var_mem_seg *)(base + uch->uch_offset + - sizeof(struct uof_obj_table)); - } - - return qat_aefw_uof_parse_images(sc); -} - -static int -qat_aefw_suof_parse_image(struct qat_softc *sc, struct qat_suof_image *qsi, - struct suof_chunk_hdr *sch) -{ - struct qat_aefw_suof *qafs = &sc->sc_aefw_suof; - struct simg_ae_mode *ae_mode; - u_int maj_ver; - - qsi->qsi_simg_buf = qafs->qafs_suof_buf + sch->sch_offset + - sizeof(struct suof_obj_hdr); - qsi->qsi_simg_len = - ((struct suof_obj_hdr *) - (qafs->qafs_suof_buf + sch->sch_offset))->soh_img_length; - - qsi->qsi_css_header = qsi->qsi_simg_buf; - qsi->qsi_css_key = qsi->qsi_css_header + sizeof(struct css_hdr); - qsi->qsi_css_signature = qsi->qsi_css_key + - CSS_FWSK_MODULUS_LEN + CSS_FWSK_EXPONENT_LEN; - qsi->qsi_css_simg = qsi->qsi_css_signature + CSS_SIGNATURE_LEN; - - ae_mode = (struct simg_ae_mode *)qsi->qsi_css_simg; - qsi->qsi_ae_mask = ae_mode->sam_ae_mask; - qsi->qsi_simg_name = (u_long)&ae_mode->sam_simg_name; - qsi->qsi_appmeta_data = (u_long)&ae_mode->sam_appmeta_data; - qsi->qsi_fw_type = ae_mode->sam_fw_type; - - if (ae_mode->sam_dev_type != sc->sc_hw.qhw_prod_type) - return EINVAL; - - maj_ver = (QAT_PID_MAJOR_REV | (sc->sc_rev & QAT_PID_MINOR_REV)) & 0xff; - if 
((maj_ver > ae_mode->sam_devmax_ver) || - (maj_ver < ae_mode->sam_devmin_ver)) { - return EINVAL; - } - - return 0; -} - -static int -qat_aefw_suof_parse(struct qat_softc *sc) -{ - struct suof_file_hdr *sfh; - struct suof_chunk_hdr *sch; - struct qat_aefw_suof *qafs = &sc->sc_aefw_suof; - struct qat_suof_image *qsi; - size_t size; - u_int csum; - int ae0_img = MAX_AE; - int i, error; - - size = sc->sc_fw_suof_size; - if (size < sizeof(struct suof_file_hdr)) - return EINVAL; - - sfh = sc->sc_fw_suof; - - if (sfh->sfh_file_id != SUOF_FID) - return EINVAL; - if (sfh->sfh_fw_type != 0) - return EINVAL; - if (sfh->sfh_num_chunks <= 1) - return EINVAL; - if (sfh->sfh_min_ver != SUOF_MIN_VER || - sfh->sfh_maj_ver != SUOF_MAJ_VER) - return EINVAL; - - csum = qat_aefw_csum((char *)&sfh->sfh_min_ver, - size - offsetof(struct suof_file_hdr, sfh_min_ver)); - if (csum != sfh->sfh_check_sum) - return EINVAL; - - size -= sizeof(struct suof_file_hdr); - - qafs->qafs_file_id = SUOF_FID; - qafs->qafs_suof_buf = sc->sc_fw_suof; - qafs->qafs_suof_size = sc->sc_fw_suof_size; - qafs->qafs_check_sum = sfh->sfh_check_sum; - qafs->qafs_min_ver = sfh->sfh_min_ver; - qafs->qafs_maj_ver = sfh->sfh_maj_ver; - qafs->qafs_fw_type = sfh->sfh_fw_type; - - if (size < sizeof(struct suof_chunk_hdr)) - return EINVAL; - sch = (struct suof_chunk_hdr *)(sfh + 1); - size -= sizeof(struct suof_chunk_hdr); - - if (size < sizeof(struct suof_str_tab)) - return EINVAL; - size -= offsetof(struct suof_str_tab, sst_strings); - - qafs->qafs_sym_size = ((struct suof_str_tab *) - (qafs->qafs_suof_buf + sch->sch_offset))->sst_tab_length; - if (size < qafs->qafs_sym_size) - return EINVAL; - qafs->qafs_sym_str = qafs->qafs_suof_buf + sch->sch_offset + - offsetof(struct suof_str_tab, sst_strings); - - qafs->qafs_num_simgs = sfh->sfh_num_chunks - 1; - if (qafs->qafs_num_simgs == 0) - return EINVAL; - - qsi = qat_alloc_mem( - sizeof(struct qat_suof_image) * qafs->qafs_num_simgs); - qafs->qafs_simg = qsi; - - for (i = 0; i 
< qafs->qafs_num_simgs; i++) { - error = qat_aefw_suof_parse_image(sc, &qsi[i], &sch[i + 1]); - if (error) - return error; - if ((qsi[i].qsi_ae_mask & 0x1) != 0) - ae0_img = i; - } - - if (ae0_img != qafs->qafs_num_simgs - 1) { - struct qat_suof_image last_qsi; - - memcpy(&last_qsi, &qsi[qafs->qafs_num_simgs - 1], - sizeof(struct qat_suof_image)); - memcpy(&qsi[qafs->qafs_num_simgs - 1], &qsi[ae0_img], - sizeof(struct qat_suof_image)); - memcpy(&qsi[ae0_img], &last_qsi, - sizeof(struct qat_suof_image)); - } - - return 0; -} - -static int -qat_aefw_alloc_auth_dmamem(struct qat_softc *sc, char *image, size_t size, - struct qat_dmamem *dma) -{ - struct css_hdr *css = (struct css_hdr *)image; - struct auth_chunk *auth_chunk; - struct fw_auth_desc *auth_desc; - size_t mapsize, simg_offset = sizeof(struct auth_chunk); - bus_size_t bus_addr; - uintptr_t virt_addr; - int error; - - if (size > AE_IMG_OFFSET + CSS_MAX_IMAGE_LEN) - return EINVAL; - - mapsize = (css->css_fw_type == CSS_AE_FIRMWARE) ? 
- CSS_AE_SIMG_LEN + simg_offset : - size + CSS_FWSK_PAD_LEN + simg_offset; - error = qat_alloc_dmamem(sc, dma, 1, mapsize, PAGE_SIZE); - if (error) - return error; - - memset(dma->qdm_dma_vaddr, 0, mapsize); - - auth_chunk = dma->qdm_dma_vaddr; - auth_chunk->ac_chunk_size = mapsize; - auth_chunk->ac_chunk_bus_addr = dma->qdm_dma_seg.ds_addr; - - virt_addr = (uintptr_t)dma->qdm_dma_vaddr; - virt_addr += simg_offset; - bus_addr = auth_chunk->ac_chunk_bus_addr; - bus_addr += simg_offset; - - auth_desc = &auth_chunk->ac_fw_auth_desc; - auth_desc->fad_css_hdr_high = (uint64_t)bus_addr >> 32; - auth_desc->fad_css_hdr_low = bus_addr; - - memcpy((void *)virt_addr, image, sizeof(struct css_hdr)); - /* pub key */ - virt_addr += sizeof(struct css_hdr); - bus_addr += sizeof(struct css_hdr); - image += sizeof(struct css_hdr); - - auth_desc->fad_fwsk_pub_high = (uint64_t)bus_addr >> 32; - auth_desc->fad_fwsk_pub_low = bus_addr; - - memcpy((void *)virt_addr, image, CSS_FWSK_MODULUS_LEN); - memset((void *)(virt_addr + CSS_FWSK_MODULUS_LEN), 0, CSS_FWSK_PAD_LEN); - memcpy((void *)(virt_addr + CSS_FWSK_MODULUS_LEN + CSS_FWSK_PAD_LEN), - image + CSS_FWSK_MODULUS_LEN, sizeof(uint32_t)); - - virt_addr += CSS_FWSK_PUB_LEN; - bus_addr += CSS_FWSK_PUB_LEN; - image += CSS_FWSK_MODULUS_LEN + CSS_FWSK_EXPONENT_LEN; - - auth_desc->fad_signature_high = (uint64_t)bus_addr >> 32; - auth_desc->fad_signature_low = bus_addr; - - memcpy((void *)virt_addr, image, CSS_SIGNATURE_LEN); - - virt_addr += CSS_SIGNATURE_LEN; - bus_addr += CSS_SIGNATURE_LEN; - image += CSS_SIGNATURE_LEN; - - auth_desc->fad_img_high = (uint64_t)bus_addr >> 32; - auth_desc->fad_img_low = bus_addr; - auth_desc->fad_img_len = size - AE_IMG_OFFSET; - - memcpy((void *)virt_addr, image, auth_desc->fad_img_len); - - if (css->css_fw_type == CSS_AE_FIRMWARE) { - auth_desc->fad_img_ae_mode_data_high = auth_desc->fad_img_high; - auth_desc->fad_img_ae_mode_data_low = auth_desc->fad_img_low; - - bus_addr += sizeof(struct simg_ae_mode); - 
- auth_desc->fad_img_ae_init_data_high = (uint64_t)bus_addr >> 32; - auth_desc->fad_img_ae_init_data_low = bus_addr; - - bus_addr += SIMG_AE_INIT_SEQ_LEN; - - auth_desc->fad_img_ae_insts_high = (uint64_t)bus_addr >> 32; - auth_desc->fad_img_ae_insts_low = bus_addr; - } else { - auth_desc->fad_img_ae_insts_high = auth_desc->fad_img_high; - auth_desc->fad_img_ae_insts_low = auth_desc->fad_img_low; - } - - bus_dmamap_sync(dma->qdm_dma_tag, dma->qdm_dma_map, - BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); - - return 0; -} - -static int -qat_aefw_auth(struct qat_softc *sc, struct qat_dmamem *dma) -{ - bus_addr_t addr; - uint32_t fcu, sts; - int retry = 0; - - addr = dma->qdm_dma_seg.ds_addr; - qat_cap_global_write_4(sc, FCU_DRAM_ADDR_HI, (uint64_t)addr >> 32); - qat_cap_global_write_4(sc, FCU_DRAM_ADDR_LO, addr); - qat_cap_global_write_4(sc, FCU_CTRL, FCU_CTRL_CMD_AUTH); - - do { - DELAY(FW_AUTH_WAIT_PERIOD * 1000); - fcu = qat_cap_global_read_4(sc, FCU_STATUS); - sts = __SHIFTOUT(fcu, FCU_STATUS_STS); - if (sts == FCU_STATUS_STS_VERI_FAIL) - goto fail; - if (fcu & FCU_STATUS_AUTHFWLD && - sts == FCU_STATUS_STS_VERI_DONE) { - return 0; - } - } while (retry++ < FW_AUTH_MAX_RETRY); - -fail: - device_printf(sc->sc_dev, - "firmware authentication error: status 0x%08x retry %d\n", - fcu, retry); - return EINVAL; -} - -static int -qat_aefw_suof_load(struct qat_softc *sc, struct qat_dmamem *dma) -{ - struct simg_ae_mode *ae_mode; - uint32_t fcu, sts, loaded; - u_int mask; - u_char ae; - int retry = 0; - - ae_mode = (struct simg_ae_mode *)((uintptr_t)dma->qdm_dma_vaddr + - sizeof(struct auth_chunk) + sizeof(struct css_hdr) + - CSS_FWSK_PUB_LEN + CSS_SIGNATURE_LEN); - - for (ae = 0, mask = sc->sc_ae_mask; mask; ae++, mask >>= 1) { - if (!(mask & 1)) - continue; - if (!((ae_mode->sam_ae_mask >> ae) & 0x1)) - continue; - if (qat_ae_is_active(sc, ae)) { - device_printf(sc->sc_dev, "AE %d is active\n", ae); - return EINVAL; - } - qat_cap_global_write_4(sc, FCU_CTRL, - 
FCU_CTRL_CMD_LOAD | __SHIFTIN(ae, FCU_CTRL_AE)); - do { - DELAY(FW_AUTH_WAIT_PERIOD * 1000); - fcu = qat_cap_global_read_4(sc, FCU_STATUS); - sts = __SHIFTOUT(fcu, FCU_STATUS_STS); - loaded = __SHIFTOUT(fcu, FCU_STATUS_LOADED_AE); - if (sts == FCU_STATUS_STS_LOAD_DONE && - (loaded & (1 << ae))) { - break; - } - } while (retry++ < FW_AUTH_MAX_RETRY); - - if (retry > FW_AUTH_MAX_RETRY) { - device_printf(sc->sc_dev, - "firmware load timeout: status %08x\n", fcu); - return EINVAL; - } - } - - return 0; -} - -static int -qat_aefw_suof_write(struct qat_softc *sc) -{ - struct qat_suof_image *qsi; - int i, error = 0; - - for (i = 0; i < sc->sc_aefw_suof.qafs_num_simgs; i++) { - qsi = &sc->sc_aefw_suof.qafs_simg[i]; - error = qat_aefw_alloc_auth_dmamem(sc, qsi->qsi_simg_buf, - qsi->qsi_simg_len, &qsi->qsi_dma); - if (error) - return error; - error = qat_aefw_auth(sc, &qsi->qsi_dma); - if (error) { - qat_free_dmamem(sc, &qsi->qsi_dma); - return error; - } - error = qat_aefw_suof_load(sc, &qsi->qsi_dma); - if (error) { - qat_free_dmamem(sc, &qsi->qsi_dma); - return error; - } - qat_free_dmamem(sc, &qsi->qsi_dma); - } - qat_free_mem(sc->sc_aefw_suof.qafs_simg); - - return 0; -} - -static int -qat_aefw_uof_assign_image(struct qat_softc *sc, struct qat_ae *qae, - struct qat_uof_image *qui) -{ - struct qat_ae_slice *slice; - int i, npages, nregions; - - if (qae->qae_num_slices >= nitems(qae->qae_slices)) - return ENOENT; - - if (qui->qui_image->ui_ae_mode & - (AE_MODE_RELOAD_CTX_SHARED | AE_MODE_SHARED_USTORE)) { - /* XXX */ - device_printf(sc->sc_dev, - "shared ae mode is not supported yet\n"); - return ENOTSUP; - } - - qae->qae_shareable_ustore = 0; /* XXX */ - qae->qae_effect_ustore_size = USTORE_SIZE; - - slice = &qae->qae_slices[qae->qae_num_slices]; - - slice->qas_image = qui; - slice->qas_assigned_ctx_mask = qui->qui_image->ui_ctx_assigned; - - nregions = qui->qui_image->ui_num_page_regions; - npages = qui->qui_image->ui_num_pages; - - if (nregions > 
nitems(slice->qas_regions)) - return ENOENT; - if (npages > nitems(slice->qas_pages)) - return ENOENT; - - for (i = 0; i < nregions; i++) { - STAILQ_INIT(&slice->qas_regions[i].qar_waiting_pages); - } - for (i = 0; i < npages; i++) { - struct qat_ae_page *page = &slice->qas_pages[i]; - int region; - - page->qap_page = &qui->qui_pages[i]; - region = page->qap_page->qup_page_region; - if (region >= nregions) - return EINVAL; - - page->qap_region = &slice->qas_regions[region]; - } - - qae->qae_num_slices++; - - return 0; -} - -static int -qat_aefw_uof_init_ae(struct qat_softc *sc, u_char ae) -{ - struct uof_image *image; - struct qat_ae *qae = &(QAT_AE(sc, ae)); - int s; - u_char nn_mode; - - for (s = 0; s < qae->qae_num_slices; s++) { - if (qae->qae_slices[s].qas_image == NULL) - continue; - - image = qae->qae_slices[s].qas_image->qui_image; - qat_ae_write_ctx_mode(sc, ae, - __SHIFTOUT(image->ui_ae_mode, AE_MODE_CTX_MODE)); - - nn_mode = __SHIFTOUT(image->ui_ae_mode, AE_MODE_NN_MODE); - if (nn_mode != AE_MODE_NN_MODE_DONTCARE) - qat_ae_write_nn_mode(sc, ae, nn_mode); - - qat_ae_write_lm_mode(sc, ae, AEREG_LMEM0, - __SHIFTOUT(image->ui_ae_mode, AE_MODE_LMEM0)); - qat_ae_write_lm_mode(sc, ae, AEREG_LMEM1, - __SHIFTOUT(image->ui_ae_mode, AE_MODE_LMEM1)); - - qat_ae_write_shared_cs_mode(sc, ae, - __SHIFTOUT(image->ui_ae_mode, AE_MODE_SHARED_USTORE)); - qat_ae_set_reload_ustore(sc, ae, image->ui_reloadable_size, - __SHIFTOUT(image->ui_ae_mode, AE_MODE_RELOAD_CTX_SHARED), - qae->qae_reloc_ustore_dram); - } - - return 0; -} - -static int -qat_aefw_uof_init(struct qat_softc *sc) -{ - int ae, i, error; - uint32_t mask; - - for (ae = 0, mask = sc->sc_ae_mask; mask; ae++, mask >>= 1) { - struct qat_ae *qae; - - if (!(mask & 1)) - continue; - - qae = &(QAT_AE(sc, ae)); - - for (i = 0; i < sc->sc_aefw_uof.qafu_num_imgs; i++) { - if ((sc->sc_aefw_uof.qafu_imgs[i].qui_image->ui_ae_assigned & - (1 << ae)) == 0) - continue; - - error = qat_aefw_uof_assign_image(sc, qae, - 
&sc->sc_aefw_uof.qafu_imgs[i]); - if (error) - return error; - } - - /* XXX UcLo_initNumUwordUsed */ - - qae->qae_reloc_ustore_dram = UINT_MAX; /* XXX */ - - error = qat_aefw_uof_init_ae(sc, ae); - if (error) - return error; - } - - return 0; -} - -int -qat_aefw_load(struct qat_softc *sc) -{ - int error; - - error = qat_aefw_load_mof(sc); - if (error) - return error; - - error = qat_aefw_load_mmp(sc); - if (error) - return error; - - error = qat_aefw_mof_parse(sc); - if (error) { - device_printf(sc->sc_dev, "couldn't parse mof: %d\n", error); - return error; - } - - if (sc->sc_hw.qhw_fw_auth) { - error = qat_aefw_suof_parse(sc); - if (error) { - device_printf(sc->sc_dev, "couldn't parse suof: %d\n", - error); - return error; - } - - error = qat_aefw_suof_write(sc); - if (error) { - device_printf(sc->sc_dev, - "could not write firmware: %d\n", error); - return error; - } - - } else { - error = qat_aefw_uof_parse(sc); - if (error) { - device_printf(sc->sc_dev, "couldn't parse uof: %d\n", - error); - return error; - } - - error = qat_aefw_uof_init(sc); - if (error) { - device_printf(sc->sc_dev, - "couldn't init for aefw: %d\n", error); - return error; - } - - error = qat_aefw_uof_write(sc); - if (error) { - device_printf(sc->sc_dev, - "Could not write firmware: %d\n", error); - return error; - } - } - - return 0; -} - -void -qat_aefw_unload(struct qat_softc *sc) -{ - qat_aefw_unload_mmp(sc); - qat_aefw_unload_mof(sc); -} - -int -qat_aefw_start(struct qat_softc *sc, u_char ae, u_int ctx_mask) -{ - uint32_t fcu; - int retry = 0; - - if (sc->sc_hw.qhw_fw_auth) { - qat_cap_global_write_4(sc, FCU_CTRL, FCU_CTRL_CMD_START); - do { - DELAY(FW_AUTH_WAIT_PERIOD * 1000); - fcu = qat_cap_global_read_4(sc, FCU_STATUS); - if (fcu & FCU_STATUS_DONE) - return 0; - } while (retry++ < FW_AUTH_MAX_RETRY); - - device_printf(sc->sc_dev, - "firmware start timeout: status %08x\n", fcu); - return EINVAL; - } else { - qat_ae_ctx_indr_write(sc, ae, (~ctx_mask) & AE_ALL_CTX, - 
CTX_WAKEUP_EVENTS_INDIRECT, - CTX_WAKEUP_EVENTS_INDIRECT_SLEEP); - qat_ae_enable_ctx(sc, ae, ctx_mask); - } - - return 0; -} - -static int -qat_aefw_init_memory_one(struct qat_softc *sc, struct uof_init_mem *uim) -{ - struct qat_aefw_uof *qafu = &sc->sc_aefw_uof; - struct qat_ae_batch_init_list *qabi_list; - struct uof_mem_val_attr *memattr; - size_t *curinit; - u_long ael; - int i; - const char *sym; - char *ep; - - memattr = (struct uof_mem_val_attr *)(uim + 1); - - switch (uim->uim_region) { - case LMEM_REGION: - if ((uim->uim_addr + uim->uim_num_bytes) > MAX_LMEM_REG * 4) { - device_printf(sc->sc_dev, - "Invalid lmem addr or bytes\n"); - return ENOBUFS; - } - if (uim->uim_scope != UOF_SCOPE_LOCAL) - return EINVAL; - sym = qat_aefw_uof_string(sc, uim->uim_sym_name); - ael = strtoul(sym, &ep, 10); - if (ep == sym || ael > MAX_AE) - return EINVAL; - if ((sc->sc_ae_mask & (1 << ael)) == 0) - return 0; /* ae is fused out */ - - curinit = &qafu->qafu_num_lm_init[ael]; - qabi_list = &qafu->qafu_lm_init[ael]; - - for (i = 0; i < uim->uim_num_val_attr; i++, memattr++) { - struct qat_ae_batch_init *qabi; - - qabi = qat_alloc_mem(sizeof(struct qat_ae_batch_init)); - if (*curinit == 0) - STAILQ_INIT(qabi_list); - STAILQ_INSERT_TAIL(qabi_list, qabi, qabi_next); - - qabi->qabi_ae = (u_int)ael; - qabi->qabi_addr = - uim->uim_addr + memattr->umva_byte_offset; - qabi->qabi_value = &memattr->umva_value; - qabi->qabi_size = 4; - qafu->qafu_num_lm_init_inst[ael] += - qat_ae_get_inst_num(qabi->qabi_size); - (*curinit)++; - if (*curinit >= MAX_LMEM_REG) { - device_printf(sc->sc_dev, - "Invalid lmem val attr\n"); - return ENOBUFS; - } - } - break; - case SRAM_REGION: - case DRAM_REGION: - case DRAM1_REGION: - case SCRATCH_REGION: - case UMEM_REGION: - /* XXX */ - /* fallthrough */ - default: - device_printf(sc->sc_dev, - "unsupported memory region to init: %d\n", - uim->uim_region); - return ENOTSUP; - } - - return 0; -} - -static void -qat_aefw_free_lm_init(struct qat_softc *sc, 
u_char ae) -{ - struct qat_aefw_uof *qafu = &sc->sc_aefw_uof; - struct qat_ae_batch_init *qabi; - - while ((qabi = STAILQ_FIRST(&qafu->qafu_lm_init[ae])) != NULL) { - STAILQ_REMOVE_HEAD(&qafu->qafu_lm_init[ae], qabi_next); - qat_free_mem(qabi); - } - - qafu->qafu_num_lm_init[ae] = 0; - qafu->qafu_num_lm_init_inst[ae] = 0; -} - -static int -qat_aefw_init_ustore(struct qat_softc *sc) -{ - uint64_t *fill; - uint32_t dont_init; - int a, i, p; - int error = 0; - int usz, end, start; - u_char ae, nae; - - fill = qat_alloc_mem(MAX_USTORE * sizeof(uint64_t)); - - for (a = 0; a < sc->sc_aefw_uof.qafu_num_imgs; a++) { - struct qat_uof_image *qui = &sc->sc_aefw_uof.qafu_imgs[a]; - struct uof_image *ui = qui->qui_image; - - for (i = 0; i < MAX_USTORE; i++) - memcpy(&fill[i], ui->ui_fill_pattern, sizeof(uint64_t)); - /* - * Compute do_not_init value as a value that will not be equal - * to fill data when cast to an int - */ - dont_init = 0; - if (dont_init == (uint32_t)fill[0]) - dont_init = 0xffffffff; - - for (p = 0; p < ui->ui_num_pages; p++) { - struct qat_uof_page *qup = &qui->qui_pages[p]; - if (!qup->qup_def_page) - continue; - - for (i = qup->qup_beg_paddr; - i < qup->qup_beg_paddr + qup->qup_num_micro_words; - i++ ) { - fill[i] = (uint64_t)dont_init; - } - } - - for (ae = 0; ae < sc->sc_ae_num; ae++) { - MPASS(ae < UOF_MAX_NUM_OF_AE); - if ((ui->ui_ae_assigned & (1 << ae)) == 0) - continue; - - if (QAT_AE(sc, ae).qae_shareable_ustore && (ae & 1)) { - qat_ae_get_shared_ustore_ae(ae, &nae); - if (ui->ui_ae_assigned & (1 << ae)) - continue; - } - usz = QAT_AE(sc, ae).qae_effect_ustore_size; - - /* initialize the areas not going to be overwritten */ - end = -1; - do { - /* find next uword that needs to be initialized */ - for (start = end + 1; start < usz; start++) { - if ((uint32_t)fill[start] != dont_init) - break; - } - /* see if there are no more such uwords */ - if (start >= usz) - break; - for (end = start + 1; end < usz; end++) { - if ((uint32_t)fill[end] == 
dont_init) - break; - } - if (QAT_AE(sc, ae).qae_shareable_ustore) { - error = ENOTSUP; /* XXX */ - goto out; - } else { - error = qat_ae_ucode_write(sc, ae, - start, end - start, &fill[start]); - if (error) { - goto out; - } - } - - } while (end < usz); - } - } - -out: - qat_free_mem(fill); - return error; -} - -static int -qat_aefw_init_reg(struct qat_softc *sc, u_char ae, u_char ctx_mask, - enum aereg_type regtype, u_short regaddr, u_int value) -{ - int error = 0; - u_char ctx; - - switch (regtype) { - case AEREG_GPA_REL: - case AEREG_GPB_REL: - case AEREG_SR_REL: - case AEREG_SR_RD_REL: - case AEREG_SR_WR_REL: - case AEREG_DR_REL: - case AEREG_DR_RD_REL: - case AEREG_DR_WR_REL: - case AEREG_NEIGH_REL: - /* init for all valid ctx */ - for (ctx = 0; ctx < MAX_AE_CTX; ctx++) { - if ((ctx_mask & (1 << ctx)) == 0) - continue; - error = qat_aereg_rel_data_write(sc, ae, ctx, regtype, - regaddr, value); - } - break; - case AEREG_GPA_ABS: - case AEREG_GPB_ABS: - case AEREG_SR_ABS: - case AEREG_SR_RD_ABS: - case AEREG_SR_WR_ABS: - case AEREG_DR_ABS: - case AEREG_DR_RD_ABS: - case AEREG_DR_WR_ABS: - error = qat_aereg_abs_data_write(sc, ae, regtype, - regaddr, value); - break; - default: - error = EINVAL; - break; - } - - return error; -} - -static int -qat_aefw_init_reg_sym_expr(struct qat_softc *sc, u_char ae, - struct qat_uof_image *qui) -{ - u_int i, expres; - u_char ctx_mask; - - for (i = 0; i < qui->qui_num_init_reg_sym; i++) { - struct uof_init_reg_sym *uirs = &qui->qui_init_reg_sym[i]; - - if (uirs->uirs_value_type == EXPR_VAL) { - /* XXX */ - device_printf(sc->sc_dev, - "does not support initializing EXPR_VAL\n"); - return ENOTSUP; - } else { - expres = uirs->uirs_value; - } - - switch (uirs->uirs_init_type) { - case INIT_REG: - if (__SHIFTOUT(qui->qui_image->ui_ae_mode, - AE_MODE_CTX_MODE) == MAX_AE_CTX) { - ctx_mask = 0xff; /* 8-ctx mode */ - } else { - ctx_mask = 0x55; /* 4-ctx mode */ - } - qat_aefw_init_reg(sc, ae, ctx_mask, - (enum 
aereg_type)uirs->uirs_reg_type, - (u_short)uirs->uirs_addr_offset, expres); - break; - case INIT_REG_CTX: - if (__SHIFTOUT(qui->qui_image->ui_ae_mode, - AE_MODE_CTX_MODE) == MAX_AE_CTX) { - ctx_mask = 0xff; /* 8-ctx mode */ - } else { - ctx_mask = 0x55; /* 4-ctx mode */ - } - if (((1 << uirs->uirs_ctx) & ctx_mask) == 0) - return EINVAL; - qat_aefw_init_reg(sc, ae, 1 << uirs->uirs_ctx, - (enum aereg_type)uirs->uirs_reg_type, - (u_short)uirs->uirs_addr_offset, expres); - break; - case INIT_EXPR: - case INIT_EXPR_ENDIAN_SWAP: - default: - device_printf(sc->sc_dev, - "does not support initializing init_type %d\n", - uirs->uirs_init_type); - return ENOTSUP; - } - } - - return 0; -} - -static int -qat_aefw_init_memory(struct qat_softc *sc) -{ - struct qat_aefw_uof *qafu = &sc->sc_aefw_uof; - size_t uimsz, initmemsz = qafu->qafu_init_mem_size; - struct uof_init_mem *uim; - int error, i; - u_char ae; - - uim = qafu->qafu_init_mem; - for (i = 0; i < qafu->qafu_num_init_mem; i++) { - uimsz = sizeof(struct uof_init_mem) + - sizeof(struct uof_mem_val_attr) * uim->uim_num_val_attr; - if (uimsz > initmemsz) { - device_printf(sc->sc_dev, - "invalid uof_init_mem or uof_mem_val_attr size\n"); - return EINVAL; - } - - if (uim->uim_num_bytes > 0) { - error = qat_aefw_init_memory_one(sc, uim); - if (error) { - device_printf(sc->sc_dev, - "Could not init ae memory: %d\n", error); - return error; - } - } - uim = (struct uof_init_mem *)((uintptr_t)uim + uimsz); - initmemsz -= uimsz; - } - - /* run Batch put LM API */ - for (ae = 0; ae < MAX_AE; ae++) { - error = qat_ae_batch_put_lm(sc, ae, &qafu->qafu_lm_init[ae], - qafu->qafu_num_lm_init_inst[ae]); - if (error) - device_printf(sc->sc_dev, "Could not put lm\n"); - - qat_aefw_free_lm_init(sc, ae); - } - - error = qat_aefw_init_ustore(sc); - - /* XXX run Batch put LM API */ - - return error; -} - -static int -qat_aefw_init_globals(struct qat_softc *sc) -{ - struct qat_aefw_uof *qafu = &sc->sc_aefw_uof; - int error, i, p, s; - u_char ae; - 
- /* initialize the memory segments */ - if (qafu->qafu_num_init_mem > 0) { - error = qat_aefw_init_memory(sc); - if (error) - return error; - } else { - error = qat_aefw_init_ustore(sc); - if (error) - return error; - } - - /* XXX bind import variables with ivd values */ - - /* XXX bind the uC global variables - * local variables will be done on-the-fly */ - for (i = 0; i < sc->sc_aefw_uof.qafu_num_imgs; i++) { - for (p = 0; p < sc->sc_aefw_uof.qafu_imgs[i].qui_image->ui_num_pages; p++) { - struct qat_uof_page *qup = - &sc->sc_aefw_uof.qafu_imgs[i].qui_pages[p]; - if (qup->qup_num_uw_blocks && - (qup->qup_num_uc_var || qup->qup_num_imp_var)) { - device_printf(sc->sc_dev, - "not support uC global variables\n"); - return ENOTSUP; - } - } - } - - for (ae = 0; ae < sc->sc_ae_num; ae++) { - struct qat_ae *qae = &(QAT_AE(sc, ae)); - - for (s = 0; s < qae->qae_num_slices; s++) { - struct qat_ae_slice *qas = &qae->qae_slices[s]; - - if (qas->qas_image == NULL) - continue; - - error = - qat_aefw_init_reg_sym_expr(sc, ae, qas->qas_image); - if (error) - return error; - } - } - - return 0; -} - -static uint64_t -qat_aefw_get_uof_inst(struct qat_softc *sc, struct qat_uof_page *qup, - u_int addr) -{ - uint64_t uinst = 0; - u_int i; - - /* find the block */ - for (i = 0; i < qup->qup_num_uw_blocks; i++) { - struct qat_uof_uword_block *quub = &qup->qup_uw_blocks[i]; - - if ((addr >= quub->quub_start_addr) && - (addr <= (quub->quub_start_addr + - (quub->quub_num_words - 1)))) { - /* unpack n bytes and assign to the 64-bit uword value. note: the microwords are stored as packed bytes. 
- */ - addr -= quub->quub_start_addr; - addr *= AEV2_PACKED_UWORD_BYTES; - memcpy(&uinst, - (void *)((uintptr_t)quub->quub_micro_words + addr), - AEV2_PACKED_UWORD_BYTES); - uinst = uinst & UWORD_MASK; - - return uinst; - } - } - - return INVLD_UWORD; -} - -static int -qat_aefw_do_pagein(struct qat_softc *sc, u_char ae, struct qat_uof_page *qup) -{ - struct qat_ae *qae = &(QAT_AE(sc, ae)); - uint64_t fill, *ucode_cpybuf; - u_int error, i, upaddr, ninst, cpylen; - - if (qup->qup_num_uc_var || qup->qup_num_neigh_reg || - qup->qup_num_imp_var || qup->qup_num_imp_expr) { - device_printf(sc->sc_dev, - "does not support fixup locals\n"); - return ENOTSUP; - } - - ucode_cpybuf = qat_alloc_mem(UWORD_CPYBUF_SIZE * sizeof(uint64_t)); - - /* XXX get fill-pattern from an image -- they are all the same */ - memcpy(&fill, sc->sc_aefw_uof.qafu_imgs[0].qui_image->ui_fill_pattern, - sizeof(uint64_t)); - - upaddr = qup->qup_beg_paddr; - ninst = qup->qup_num_micro_words; - while (ninst > 0) { - cpylen = min(ninst, UWORD_CPYBUF_SIZE); - - /* load the buffer */ - for (i = 0; i < cpylen; i++) { - /* keep below code structure in case there is - * different handling for shared scenarios */ - if (!qae->qae_shareable_ustore) { - /* qat_aefw_get_uof_inst() takes an address that - * is relative to the start of the page. - * So we don't need to add in the physical - * offset of the page. 
*/ - if (qup->qup_page_region != 0) { - /* XXX */ - device_printf(sc->sc_dev, - "region != 0 is not supported\n"); - qat_free_mem(ucode_cpybuf); - return ENOTSUP; - } else { - /* for mixing case, it should take - * physical address */ - ucode_cpybuf[i] = qat_aefw_get_uof_inst( - sc, qup, upaddr + i); - if (ucode_cpybuf[i] == INVLD_UWORD) { - /* fill hole in the uof */ - ucode_cpybuf[i] = fill; - } - } - } else { - /* XXX */ - qat_free_mem(ucode_cpybuf); - return ENOTSUP; - } - } - - /* copy the buffer to ustore */ - if (!qae->qae_shareable_ustore) { - error = qat_ae_ucode_write(sc, ae, upaddr, cpylen, - ucode_cpybuf); - if (error) - return error; - } else { - /* XXX */ - qat_free_mem(ucode_cpybuf); - return ENOTSUP; - } - upaddr += cpylen; - ninst -= cpylen; - } - - qat_free_mem(ucode_cpybuf); - - return 0; -} - -static int -qat_aefw_uof_write_one(struct qat_softc *sc, struct qat_uof_image *qui) -{ - struct uof_image *ui = qui->qui_image; - struct qat_ae_page *qap; - u_int s, p, c; - int error; - u_char ae, ctx_mask; - - if (__SHIFTOUT(ui->ui_ae_mode, AE_MODE_CTX_MODE) == MAX_AE_CTX) - ctx_mask = 0xff; /* 8-ctx mode */ - else - ctx_mask = 0x55; /* 4-ctx mode */ - - /* load the default page and set assigned CTX PC - * to the entrypoint address */ - for (ae = 0; ae < sc->sc_ae_num; ae++) { - struct qat_ae *qae = &(QAT_AE(sc, ae)); - struct qat_ae_slice *qas; - u_int metadata; - - MPASS(ae < UOF_MAX_NUM_OF_AE); - - if ((ui->ui_ae_assigned & (1 << ae)) == 0) - continue; - - /* find the slice to which this image is assigned */ - for (s = 0; s < qae->qae_num_slices; s++) { - qas = &qae->qae_slices[s]; - if (ui->ui_ctx_assigned & qas->qas_assigned_ctx_mask) - break; - } - if (s >= qae->qae_num_slices) - continue; - - qas = &qae->qae_slices[s]; - - for (p = 0; p < ui->ui_num_pages; p++) { - qap = &qas->qas_pages[p]; - - /* Only load pages loaded by default */ - if (!qap->qap_page->qup_def_page) - continue; - - error = qat_aefw_do_pagein(sc, ae, qap->qap_page); - if (error) 
- return error; - } - - metadata = qas->qas_image->qui_image->ui_app_metadata; - if (metadata != 0xffffffff && bootverbose) { - device_printf(sc->sc_dev, - "loaded firmware: %s\n", - qat_aefw_uof_string(sc, metadata)); - } - - /* Assume starting page is page 0 */ - qap = &qas->qas_pages[0]; - for (c = 0; c < MAX_AE_CTX; c++) { - if (ctx_mask & (1 << c)) - qas->qas_cur_pages[c] = qap; - else - qas->qas_cur_pages[c] = NULL; - } - - /* set the live context */ - qae->qae_live_ctx_mask = ui->ui_ctx_assigned; - - /* set context PC to the image entrypoint address */ - error = qat_ae_write_pc(sc, ae, ui->ui_ctx_assigned, - ui->ui_entry_address); - if (error) - return error; - } - - /* XXX store the checksum for convenience */ - - return 0; -} - -static int -qat_aefw_uof_write(struct qat_softc *sc) -{ - int error = 0; - int i; - - error = qat_aefw_init_globals(sc); - if (error) { - device_printf(sc->sc_dev, - "Could not initialize globals\n"); - return error; - } - - for (i = 0; i < sc->sc_aefw_uof.qafu_num_imgs; i++) { - error = qat_aefw_uof_write_one(sc, - &sc->sc_aefw_uof.qafu_imgs[i]); - if (error) - break; - } - - /* XXX UcLo_computeFreeUstore */ - - return error; -} Index: sys/dev/qat/qat_aevar.h =================================================================== --- sys/dev/qat/qat_aevar.h +++ /dev/null @@ -1,73 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat_aevar.h,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. 
Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2007-2019 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. 
- * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -/* $FreeBSD$ */ - -#ifndef _DEV_PCI_QAT_AEVAR_H_ -#define _DEV_PCI_QAT_AEVAR_H_ - -int qat_ae_init(struct qat_softc *); -int qat_ae_start(struct qat_softc *); -void qat_ae_cluster_intr(void *); - -int qat_aefw_load(struct qat_softc *); -void qat_aefw_unload(struct qat_softc *); -int qat_aefw_start(struct qat_softc *, u_char, u_int); - -#endif Index: sys/dev/qat/qat_api/common/compression/dc_buffers.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/compression/dc_buffers.c @@ -0,0 +1,116 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_buffers.c + * + * @defgroup Dc_DataCompression DC Data Compression + * + * @ingroup Dc_DataCompression + * + * @description + * Implementation of the buffer management operations for + * Data Compression service. 
+ * + *****************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" +#include "cpa_dc.h" +#include "cpa_dc_bp.h" + +#include "sal_types_compression.h" +#include "icp_qat_fw_comp.h" + +#define CPA_DC_CEIL_DIV(x, y) (((x) + (y)-1) / (y)) +#define DC_DEST_BUFF_EXTRA_DEFLATE_GEN2 (55) + +CpaStatus +cpaDcBufferListGetMetaSize(const CpaInstanceHandle instanceHandle, + Cpa32U numBuffers, + Cpa32U *pSizeInBytes) +{ + CpaInstanceHandle insHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = instanceHandle; + } + + LAC_CHECK_INSTANCE_HANDLE(insHandle); + LAC_CHECK_NULL_PARAM(pSizeInBytes); + + /* Ensure this is a compression instance */ + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + if (0 == numBuffers) { + QAT_UTILS_LOG("Number of buffers is 0.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + *pSizeInBytes = (sizeof(icp_buffer_list_desc_t) + + (sizeof(icp_flat_buffer_desc_t) * (numBuffers + 1)) + + ICP_DESCRIPTOR_ALIGNMENT_BYTES); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcBnpBufferListGetMetaSize(const CpaInstanceHandle instanceHandle, + Cpa32U numJobs, + Cpa32U *pSizeInBytes) +{ + return CPA_STATUS_UNSUPPORTED; +} + +static inline CpaStatus +dcDeflateBoundGen2(CpaDcHuffType huffType, Cpa32U inputSize, Cpa32U *outputSize) +{ + /* Formula for GEN2 deflate: + * ceil(9 * Total input bytes / 8) + 55 bytes. + * 55 bytes is the skid pad value for GEN2 devices. 
+ */ + *outputSize = + CPA_DC_CEIL_DIV(9 * inputSize, 8) + DC_DEST_BUFF_EXTRA_DEFLATE_GEN2; + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcDeflateCompressBound(const CpaInstanceHandle dcInstance, + CpaDcHuffType huffType, + Cpa32U inputSize, + Cpa32U *outputSize) +{ + CpaInstanceHandle insHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + LAC_CHECK_INSTANCE_HANDLE(insHandle); + LAC_CHECK_NULL_PARAM(outputSize); + /* Ensure this is a compression instance */ + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + if (!inputSize) { + QAT_UTILS_LOG( + "The input size needs to be greater than zero.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((CPA_DC_HT_STATIC != huffType) && + (CPA_DC_HT_FULL_DYNAMIC != huffType)) { + QAT_UTILS_LOG("Invalid huffType value.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + return dcDeflateBoundGen2(huffType, inputSize, outputSize); +} Index: sys/dev/qat/qat_api/common/compression/dc_datapath.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/compression/dc_datapath.c @@ -0,0 +1,1790 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_datapath.c + * + * @defgroup Dc_DataCompression DC Data Compression + * + * @ingroup Dc_DataCompression + * + * @description + * Implementation of the Data Compression datapath operations. 
+ * + *****************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" +#include "cpa_dc.h" +#include "cpa_dc_dp.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "dc_session.h" +#include "dc_datapath.h" +#include "sal_statistics.h" +#include "lac_common.h" +#include "lac_mem.h" +#include "lac_mem_pools.h" +#include "sal_types_compression.h" +#include "dc_stats.h" +#include "lac_buffer_desc.h" +#include "lac_sal.h" +#include "lac_log.h" +#include "lac_sync.h" +#include "sal_service_state.h" +#include "sal_qat_cmn_msg.h" +#include "dc_error_counter.h" +#define DC_COMP_MAX_BUFF_SIZE (1024 * 64) + +static QatUtilsAtomic dcErrorCount[MAX_DC_ERROR_TYPE]; + +void +dcErrorLog(CpaDcReqStatus dcError) +{ + Cpa32U absError = 0; + + absError = abs(dcError); + if ((dcError < CPA_DC_OK) && (absError < MAX_DC_ERROR_TYPE)) { + qatUtilsAtomicInc(&(dcErrorCount[absError])); + } +} + +Cpa64U +getDcErrorCounter(CpaDcReqStatus dcError) +{ + Cpa32U absError = 0; + + absError = abs(dcError); + if (!(dcError >= CPA_DC_OK || dcError < CPA_DC_EMPTY_DYM_BLK)) { + return (Cpa64U)qatUtilsAtomicGet(&dcErrorCount[absError]); + } + + return 0; +} + +void +dcCompression_ProcessCallback(void *pRespMsg) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + icp_qat_fw_comp_resp_t *pCompRespMsg = NULL; + void *callbackTag = NULL; + Cpa64U *pReqData = NULL; + CpaDcDpOpData *pResponse = NULL; + CpaDcRqResults *pResults = NULL; + CpaDcCallbackFn pCbFunc = NULL; + dc_session_desc_t *pSessionDesc = NULL; + sal_compression_service_t *pService = NULL; + dc_compression_cookie_t *pCookie = NULL; + CpaDcOpData 
*pOpData = NULL; + CpaBoolean cmpPass = CPA_TRUE, xlatPass = CPA_TRUE; + CpaBoolean verifyHwIntegrityCrcs = CPA_FALSE; + Cpa8U cmpErr = ERR_CODE_NO_ERROR, xlatErr = ERR_CODE_NO_ERROR; + dc_request_dir_t compDecomp = DC_COMPRESSION_REQUEST; + Cpa8U opStatus = ICP_QAT_FW_COMN_STATUS_FLAG_OK; + Cpa8U hdrFlags = 0; + + /* Cast response message to compression response message type */ + pCompRespMsg = (icp_qat_fw_comp_resp_t *)pRespMsg; + + /* Extract request data pointer from the opaque data */ + LAC_MEM_SHARED_READ_TO_PTR(pCompRespMsg->opaque_data, pReqData); + + /* Extract fields from the request data structure */ + pCookie = (dc_compression_cookie_t *)pReqData; + if (!pCookie) + return; + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pCookie->pSessionHandle); + + if (CPA_TRUE == pSessionDesc->isDcDp) { + pResponse = (CpaDcDpOpData *)pReqData; + pResults = &(pResponse->results); + + if (CPA_DC_DIR_DECOMPRESS == pSessionDesc->sessDirection) { + compDecomp = DC_DECOMPRESSION_REQUEST; + } + } else { + pSessionDesc = pCookie->pSessionDesc; + pResults = pCookie->pResults; + callbackTag = pCookie->callbackTag; + pCbFunc = pCookie->pSessionDesc->pCompressionCb; + compDecomp = pCookie->compDecomp; + pOpData = pCookie->pDcOpData; + } + + pService = (sal_compression_service_t *)(pCookie->dcInstance); + + opStatus = pCompRespMsg->comn_resp.comn_status; + + if (NULL != pOpData) { + verifyHwIntegrityCrcs = pOpData->verifyHwIntegrityCrcs; + } + + hdrFlags = pCompRespMsg->comn_resp.hdr_flags; + + /* Get the cmp error code */ + cmpErr = pCompRespMsg->comn_resp.comn_error.s1.cmp_err_code; + if (ICP_QAT_FW_COMN_RESP_UNSUPPORTED_REQUEST_STAT_GET(opStatus)) { + /* Compression not supported by firmware, set produced/consumed + to zero + and call the cb function with status CPA_STATUS_UNSUPPORTED + */ + QAT_UTILS_LOG("Compression feature not supported\n"); + status = CPA_STATUS_UNSUPPORTED; + pResults->status = (Cpa8S)cmpErr; + pResults->consumed = 0; + pResults->produced = 0; + if (CPA_TRUE 
== pSessionDesc->isDcDp) { + if (pResponse) + pResponse->responseStatus = + CPA_STATUS_UNSUPPORTED; + (pService->pDcDpCb)(pResponse); + } else { + /* Free the memory pool */ + Lac_MemPoolEntryFree(pCookie); + pCookie = NULL; + if (NULL != pCbFunc) { + pCbFunc(callbackTag, status); + } + } + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC(numCompCompletedErrors, pService); + } else { + COMPRESSION_STAT_INC(numDecompCompletedErrors, + pService); + } + return; + } else { + /* Check compression response status */ + cmpPass = + (CpaBoolean)(ICP_QAT_FW_COMN_STATUS_FLAG_OK == + ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(opStatus)); + } + + if (CPA_DC_INCOMPLETE_FILE_ERR == (Cpa8S)cmpErr) { + cmpPass = CPA_TRUE; + cmpErr = ERR_CODE_NO_ERROR; + } + /* log the slice hang and endpoint push/pull error inside the response + */ + if (ERR_CODE_SSM_ERROR == (Cpa8S)cmpErr) { + QAT_UTILS_LOG( + "Slice hang detected on the compression slice.\n"); + } else if (ERR_CODE_ENDPOINT_ERROR == (Cpa8S)cmpErr) { + QAT_UTILS_LOG( + "PCIe End Point Push/Pull or TI/RI Parity error detected.\n"); + } + + /* We return the compression error code for now. 
We would need to update + * the API if we decide to return both error codes */ + pResults->status = (Cpa8S)cmpErr; + + /* Check the translator status */ + if ((DC_COMPRESSION_REQUEST == compDecomp) && + (CPA_DC_HT_FULL_DYNAMIC == pSessionDesc->huffType)) { + /* Check translator response status */ + xlatPass = + (CpaBoolean)(ICP_QAT_FW_COMN_STATUS_FLAG_OK == + ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(opStatus)); + + /* Get the translator error code */ + xlatErr = pCompRespMsg->comn_resp.comn_error.s1.xlat_err_code; + + /* Return a fatal error or a potential error in the translator + * slice + * if the compression slice did not return any error */ + if ((CPA_DC_OK == pResults->status) || + (CPA_DC_FATALERR == (Cpa8S)xlatErr)) { + pResults->status = (Cpa8S)xlatErr; + } + } + /* Update dc error counter */ + dcErrorLog(pResults->status); + + if (CPA_FALSE == pSessionDesc->isDcDp) { + /* In case of any error for an end of packet request, we need to + * update + * the request type for the following request */ + if (CPA_DC_FLUSH_FINAL == pCookie->flushFlag && cmpPass && + xlatPass) { + pSessionDesc->requestType = DC_REQUEST_FIRST; + } else { + pSessionDesc->requestType = DC_REQUEST_SUBSEQUENT; + } + if ((CPA_DC_STATEFUL == pSessionDesc->sessState) || + ((CPA_DC_STATELESS == pSessionDesc->sessState) && + (DC_COMPRESSION_REQUEST == compDecomp))) { + /* Overflow is a valid use case for Traditional API + * only. + * Stateful Overflow is supported in both compression + * and + * decompression direction. + * Stateless Overflow is supported only in compression + * direction. 
+ */ + if (CPA_DC_OVERFLOW == (Cpa8S)cmpErr) + cmpPass = CPA_TRUE; + + if (CPA_DC_OVERFLOW == (Cpa8S)xlatErr) { + xlatPass = CPA_TRUE; + } + } + } else { + if (CPA_DC_OVERFLOW == (Cpa8S)cmpErr) { + cmpPass = CPA_FALSE; + } + if (CPA_DC_OVERFLOW == (Cpa8S)xlatErr) { + xlatPass = CPA_FALSE; + } + } + + if ((CPA_TRUE == cmpPass) && (CPA_TRUE == xlatPass)) { + /* Extract the response from the firmware */ + pResults->consumed = + pCompRespMsg->comp_resp_pars.input_byte_counter; + pResults->produced = + pCompRespMsg->comp_resp_pars.output_byte_counter; + pSessionDesc->cumulativeConsumedBytes += pResults->consumed; + + if (CPA_DC_OVERFLOW != (Cpa8S)xlatErr) { + if (CPA_DC_CRC32 == pSessionDesc->checksumType) { + pResults->checksum = + pCompRespMsg->comp_resp_pars.crc.legacy + .curr_crc32; + } else if (CPA_DC_ADLER32 == + pSessionDesc->checksumType) { + pResults->checksum = + pCompRespMsg->comp_resp_pars.crc.legacy + .curr_adler_32; + } + pSessionDesc->previousChecksum = pResults->checksum; + } + + if (DC_DECOMPRESSION_REQUEST == compDecomp) { + pResults->endOfLastBlock = + (ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET == + ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET( + opStatus)); + } + + /* Save the checksum for the next request */ + if ((CPA_DC_OVERFLOW != (Cpa8S)xlatErr) && + (CPA_TRUE == verifyHwIntegrityCrcs)) { + pSessionDesc->previousChecksum = + pSessionDesc->seedSwCrc.swCrcI; + } + + /* Check if a CNV recovery happened and + * increase stats counter + */ + if ((DC_COMPRESSION_REQUEST == compDecomp) && + ICP_QAT_FW_COMN_HDR_CNV_FLAG_GET(hdrFlags) && + ICP_QAT_FW_COMN_HDR_CNVNR_FLAG_GET(hdrFlags)) { + COMPRESSION_STAT_INC(numCompCnvErrorsRecovered, + pService); + } + + if (CPA_TRUE == pSessionDesc->isDcDp) { + if (pResponse) + pResponse->responseStatus = CPA_STATUS_SUCCESS; + } else { + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC(numCompCompleted, + pService); + } else { + COMPRESSION_STAT_INC(numDecompCompleted, + pService); + } 
+ } + } else { + pResults->consumed = 0; + pResults->produced = 0; + if (CPA_DC_OVERFLOW == pResults->status && + CPA_DC_STATELESS == pSessionDesc->sessState) { + /* This error message will be returned by Data Plane API + * in both + * compression and decompression direction. With + * Traditional API + * this error message will be returned only in stateless + * decompression direction */ + QAT_UTILS_LOG( + "Unrecoverable error: stateless overflow. You may need to increase the size of your destination buffer.\n"); + } + + if (CPA_TRUE == pSessionDesc->isDcDp) { + if (pResponse) + pResponse->responseStatus = CPA_STATUS_FAIL; + } else { + if (CPA_DC_OK != pResults->status && + CPA_DC_INCOMPLETE_FILE_ERR != pResults->status) { + status = CPA_STATUS_FAIL; + } + + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC(numCompCompletedErrors, + pService); + } else { + COMPRESSION_STAT_INC(numDecompCompletedErrors, + pService); + } + } + } + + if (CPA_TRUE == pSessionDesc->isDcDp) { + /* Decrement number of stateless pending callbacks for session + */ + pSessionDesc->pendingDpStatelessCbCount--; + (pService->pDcDpCb)(pResponse); + } else { + /* Decrement number of pending callbacks for session */ + if (CPA_DC_STATELESS == pSessionDesc->sessState) { + qatUtilsAtomicDec( + &(pCookie->pSessionDesc->pendingStatelessCbCount)); + } else if (0 != + qatUtilsAtomicGet(&pCookie->pSessionDesc + ->pendingStatefulCbCount)) { + qatUtilsAtomicDec( + &(pCookie->pSessionDesc->pendingStatefulCbCount)); + } + + /* Free the memory pool */ + if (NULL != pCookie) { + Lac_MemPoolEntryFree(pCookie); + pCookie = NULL; + } + + if (NULL != pCbFunc) { + pCbFunc(callbackTag, status); + } + } +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Check that all the parameters in the pOpData structure are valid + * + * @description + * Check that all the parameters in the pOpData structure are valid + * + * @param[in] 
pService Pointer to the compression service + * @param[in] pOpData Pointer to request information structure + * holding parameters for cpaDcCompress2 and + * CpaDcDecompressData2 + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +dcCheckOpData(sal_compression_service_t *pService, CpaDcOpData *pOpData) +{ + CpaDcSkipMode skipMode = 0; + + if ((pOpData->flushFlag < CPA_DC_FLUSH_NONE) || + (pOpData->flushFlag > CPA_DC_FLUSH_FULL)) { + LAC_INVALID_PARAM_LOG("Invalid flushFlag value"); + return CPA_STATUS_INVALID_PARAM; + } + + skipMode = pOpData->inputSkipData.skipMode; + if ((skipMode < CPA_DC_SKIP_DISABLED) || + (skipMode > CPA_DC_SKIP_STRIDE)) { + LAC_INVALID_PARAM_LOG("Invalid input skip mode value"); + return CPA_STATUS_INVALID_PARAM; + } + + skipMode = pOpData->outputSkipData.skipMode; + if ((skipMode < CPA_DC_SKIP_DISABLED) || + (skipMode > CPA_DC_SKIP_STRIDE)) { + LAC_INVALID_PARAM_LOG("Invalid output skip mode value"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData->integrityCrcCheck == CPA_FALSE && + pOpData->verifyHwIntegrityCrcs == CPA_TRUE) { + LAC_INVALID_PARAM_LOG( + "integrityCrcCheck must be set to true" + "in order to enable verifyHwIntegrityCrcs"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData->integrityCrcCheck != CPA_TRUE && + pOpData->integrityCrcCheck != CPA_FALSE) { + LAC_INVALID_PARAM_LOG("Invalid integrityCrcCheck value"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData->verifyHwIntegrityCrcs != CPA_TRUE && + pOpData->verifyHwIntegrityCrcs != CPA_FALSE) { + LAC_INVALID_PARAM_LOG("Invalid verifyHwIntegrityCrcs value"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData->compressAndVerify != CPA_TRUE && + pOpData->compressAndVerify != CPA_FALSE) { + LAC_INVALID_PARAM_LOG("Invalid cnv decompress check value"); + return 
CPA_STATUS_INVALID_PARAM; + } + + if (CPA_TRUE == pOpData->integrityCrcCheck && + CPA_FALSE == pService->generic_service_info.integrityCrcCheck) { + LAC_INVALID_PARAM_LOG("Integrity CRC check is not " + "supported on this device"); + return CPA_STATUS_INVALID_PARAM; + } + return CPA_STATUS_SUCCESS; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Check the compression source buffer for Batch and Pack API. + * + * @description + * Check that all the parameters used for Pack compression + * request are valid. This function essentially checks the source buffer + * parameters and results structure parameters. + * + * @param[in] pSessionHandle Session handle + * @param[in] pSrcBuff Pointer to data buffer for compression + * @param[in] pDestBuff Pointer to buffer space allocated for + * output data + * @param[in] pResults Pointer to results structure + * @param[in] flushFlag Indicates the type of flush to be + * performed + * @param[in] srcBuffSize Size of the source buffer + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +dcCheckSourceData(CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + Cpa64U srcBuffSize, + CpaDcSkipData *skipData) +{ + dc_session_desc_t *pSessionDesc = NULL; + + LAC_CHECK_NULL_PARAM(pSessionHandle); + LAC_CHECK_NULL_PARAM(pSrcBuff); + LAC_CHECK_NULL_PARAM(pDestBuff); + LAC_CHECK_NULL_PARAM(pResults); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + if (NULL == pSessionDesc) { + LAC_INVALID_PARAM_LOG("Session handle not as expected"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((flushFlag < CPA_DC_FLUSH_NONE) || + (flushFlag > CPA_DC_FLUSH_FULL)) { + LAC_INVALID_PARAM_LOG("Invalid 
flushFlag value"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pSrcBuff == pDestBuff) { + LAC_INVALID_PARAM_LOG("In place operation not supported"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Compressing zero bytes is not supported for stateless sessions + * for non Batch and Pack requests */ + if ((CPA_DC_STATELESS == pSessionDesc->sessState) && + (0 == srcBuffSize) && (NULL == skipData)) { + LAC_INVALID_PARAM_LOG( + "The source buffer size needs to be greater than " + "zero bytes for stateless sessions"); + return CPA_STATUS_INVALID_PARAM; + } + + if (srcBuffSize > DC_BUFFER_MAX_SIZE) { + LAC_INVALID_PARAM_LOG( + "The source buffer size needs to be less than or " + "equal to 2^32-1 bytes"); + return CPA_STATUS_INVALID_PARAM; + } + + return CPA_STATUS_SUCCESS; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Check the compression or decompression function parameters. + * + * @description + * Check that all the parameters used for a Batch and Pack compression + * request are valid. This function essentially checks the destination + * buffer parameters and intermediate buffer parameters. 
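dcCheckSourceData above boils down to three range checks: the flush flag must be a known value, stateless sessions must supply at least one input byte, and the total length must fit the firmware's 32-bit counters. A compressed sketch of that logic, with illustrative enum ordering and constants standing in for the CPA_DC_* definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for CPA_DC_FLUSH_* and DC_BUFFER_MAX_SIZE. */
enum demo_flush {
	DEMO_FLUSH_NONE,
	DEMO_FLUSH_SYNC,
	DEMO_FLUSH_FULL,
	DEMO_FLUSH_FINAL
};
#define DEMO_BUFFER_MAX_SIZE 0xFFFFFFFFULL	/* 2^32 - 1 */

/* Returns 0 when the source parameters pass the three range checks. */
static int
demo_check_source(int stateless, int flush, uint64_t src_size)
{
	if (flush < DEMO_FLUSH_NONE || flush > DEMO_FLUSH_FINAL)
		return (-1);	/* unknown flush flag */
	if (stateless && src_size == 0)
		return (-1);	/* stateless sessions need input bytes */
	if (src_size > DEMO_BUFFER_MAX_SIZE)
		return (-1);	/* firmware length fields are 32-bit */
	return (0);
}
```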
+ * + * @param[in] pService Pointer to the compression service + * @param[in] pSessionHandle Session handle + * @param[in] pDestBuff Pointer to buffer space allocated for + * output data + * @param[in] compDecomp Direction of the operation + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +dcCheckDestinationData(sal_compression_service_t *pService, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pDestBuff, + dc_request_dir_t compDecomp) +{ + dc_session_desc_t *pSessionDesc = NULL; + Cpa64U destBuffSize = 0; + + LAC_CHECK_NULL_PARAM(pSessionHandle); + LAC_CHECK_NULL_PARAM(pDestBuff); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + if (NULL == pSessionDesc) { + LAC_INVALID_PARAM_LOG("Session handle not as expected"); + return CPA_STATUS_INVALID_PARAM; + } + + if (LacBuffDesc_BufferListVerify(pDestBuff, + &destBuffSize, + LAC_NO_ALIGNMENT_SHIFT) != + CPA_STATUS_SUCCESS) { + LAC_INVALID_PARAM_LOG( + "Invalid destination buffer list parameter"); + return CPA_STATUS_INVALID_PARAM; + } + + if (destBuffSize > DC_BUFFER_MAX_SIZE) { + LAC_INVALID_PARAM_LOG( + "The destination buffer size needs to be less " + "than or equal to 2^32-1 bytes"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_TRUE == pSessionDesc->isDcDp) { + LAC_INVALID_PARAM_LOG( + "The session type should not be data plane"); + return CPA_STATUS_INVALID_PARAM; + } + + if (DC_COMPRESSION_REQUEST == compDecomp) { + if (CPA_DC_HT_FULL_DYNAMIC == pSessionDesc->huffType) { + + /* Check if intermediate buffers are supported */ + if ((0 == pService->pInterBuffPtrsArrayPhyAddr) || + (NULL == pService->pInterBuffPtrsArray)) { + LAC_LOG_ERROR( + "No intermediate buffer defined for this instance " + "- see cpaDcStartInstance"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Ensure that the destination buffer size 
is greater or + * equal to 128B */ + if (destBuffSize < DC_DEST_BUFFER_DYN_MIN_SIZE) { + LAC_INVALID_PARAM_LOG( + "Destination buffer size should be " + "greater or equal to 128B"); + return CPA_STATUS_INVALID_PARAM; + } + } else + { + /* Ensure that the destination buffer size is greater or + * equal to devices min output buff size */ + if (destBuffSize < + pService->comp_device_data.minOutputBuffSize) { + LAC_INVALID_PARAM_LOG1( + "Destination buffer size should be " + "greater or equal to %d bytes", + pService->comp_device_data + .minOutputBuffSize); + return CPA_STATUS_INVALID_PARAM; + } + } + } else { + /* Ensure that the destination buffer size is greater than + * 0 bytes */ + if (destBuffSize < DC_DEST_BUFFER_DEC_MIN_SIZE) { + LAC_INVALID_PARAM_LOG( + "Destination buffer size should be " + "greater than 0 bytes"); + return CPA_STATUS_INVALID_PARAM; + } + } + return CPA_STATUS_SUCCESS; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Populate the compression request parameters + * + * @description + * This function will populate the compression request parameters + * + * @param[out] pCompReqParams Pointer to the compression request parameters + * @param[in] pCookie Pointer to the compression cookie + * + *****************************************************************************/ +static void +dcCompRequestParamsPopulate(icp_qat_fw_comp_req_params_t *pCompReqParams, + dc_compression_cookie_t *pCookie) +{ + pCompReqParams->comp_len = pCookie->srcTotalDataLenInBytes; + pCompReqParams->out_buffer_sz = pCookie->dstTotalDataLenInBytes; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Create the requests for compression or decompression + * + * @description + * Create the requests for compression or decompression. This function + * will update the cookie with all required information.
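Further down, dcCreateRequest condenses the SOP/EOP/BFINAL markers, the CNV and CNV-recovery selection, and the CRC mode into one 32-bit command word via ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD. The sketch below shows that style of flag packing; the bit positions are hypothetical, not the firmware's actual layout:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical bit positions for a request-flags word, in the spirit of
 * ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD.  The real firmware interface
 * defines its own layout.
 */
#define DEMO_SOP_SHIFT		0	/* start of packet */
#define DEMO_EOP_SHIFT		1	/* end of packet */
#define DEMO_BFINAL_SHIFT	2	/* final deflate block */
#define DEMO_CNV_SHIFT		3	/* compress-and-verify */
#define DEMO_CNVNR_SHIFT	4	/* CNV with recovery */
#define DEMO_CRC_MODE_SHIFT	5	/* legacy vs. end-to-end CRC */

static uint32_t
demo_build_req_flags(unsigned sop, unsigned eop, unsigned bfinal,
    unsigned cnv, unsigned cnvnr, unsigned crc_mode)
{
	return (((sop & 1u) << DEMO_SOP_SHIFT) |
	    ((eop & 1u) << DEMO_EOP_SHIFT) |
	    ((bfinal & 1u) << DEMO_BFINAL_SHIFT) |
	    ((cnv & 1u) << DEMO_CNV_SHIFT) |
	    ((cnvnr & 1u) << DEMO_CNVNR_SHIFT) |
	    ((crc_mode & 1u) << DEMO_CRC_MODE_SHIFT));
}
```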
+ * + * @param[out] pCookie Pointer to the compression cookie + * @param[in] pService Pointer to the compression service + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in] pSessionHandle Session handle + * @param[in] pSrcBuff Pointer to data buffer for compression + * @param[in] pDestBuff Pointer to buffer space for data after + * compression + * @param[in] pResults Pointer to results structure + * @param[in] flushFlag Indicates the type of flush to be + * performed + * @param[in] pOpData Pointer to request information structure + * holding parameters for cpaDcCompressData2 + * and cpaDcDecompressData2 + * @param[in] callbackTag Pointer to the callback tag + * @param[in] compDecomp Direction of the operation + * @param[in] cnvMode CNV Mode + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +dcCreateRequest(dc_compression_cookie_t *pCookie, + sal_compression_service_t *pService, + dc_session_desc_t *pSessionDesc, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + CpaDcOpData *pOpData, + void *callbackTag, + dc_request_dir_t compDecomp, + dc_cnv_mode_t cnvMode) +{ + icp_qat_fw_comp_req_t *pMsg = NULL; + icp_qat_fw_comp_req_params_t *pCompReqParams = NULL; + Cpa64U srcAddrPhys = 0, dstAddrPhys = 0; + Cpa64U srcTotalDataLenInBytes = 0, dstTotalDataLenInBytes = 0; + + Cpa32U rpCmdFlags = 0; + Cpa8U sop = ICP_QAT_FW_COMP_SOP; + Cpa8U eop = ICP_QAT_FW_COMP_EOP; + Cpa8U bFinal = ICP_QAT_FW_COMP_NOT_BFINAL; + Cpa8U crcMode = ICP_QAT_FW_COMP_CRC_MODE_LEGACY; + Cpa8U cnvDecompReq = ICP_QAT_FW_COMP_NO_CNV; + Cpa8U cnvRecovery = ICP_QAT_FW_COMP_NO_CNV_RECOVERY; + CpaBoolean integrityCrcCheck = CPA_FALSE; + CpaStatus status = CPA_STATUS_SUCCESS; + CpaDcFlush flush
= CPA_DC_FLUSH_NONE; + Cpa32U initial_adler = 1; + Cpa32U initial_crc32 = 0; + icp_qat_fw_comp_req_t *pReqCache = NULL; + + /* Write the buffer descriptors */ + status = LacBuffDesc_BufferListDescWriteAndGetSize( + pSrcBuff, + &srcAddrPhys, + CPA_FALSE, + &srcTotalDataLenInBytes, + &(pService->generic_service_info)); + if (status != CPA_STATUS_SUCCESS) { + return status; + } + + status = LacBuffDesc_BufferListDescWriteAndGetSize( + pDestBuff, + &dstAddrPhys, + CPA_FALSE, + &dstTotalDataLenInBytes, + &(pService->generic_service_info)); + if (status != CPA_STATUS_SUCCESS) { + return status; + } + + /* Populate the compression cookie */ + pCookie->dcInstance = pService; + pCookie->pSessionHandle = pSessionHandle; + pCookie->callbackTag = callbackTag; + pCookie->pSessionDesc = pSessionDesc; + pCookie->pDcOpData = pOpData; + pCookie->pResults = pResults; + pCookie->compDecomp = compDecomp; + pCookie->pUserSrcBuff = NULL; + pCookie->pUserDestBuff = NULL; + + /* Extract flush flag from either the opData or from the + * parameter. OpData has been introduced with APIs + * cpaDcCompressData2 and cpaDcDecompressData2 */ + if (NULL != pOpData) { + flush = pOpData->flushFlag; + integrityCrcCheck = pOpData->integrityCrcCheck; + } else { + flush = flushFlag; + } + pCookie->flushFlag = flush; + + /* The firmware expects the length in bytes for source and destination + * to be Cpa32U parameters. However the total data length could be + * bigger as allocated by the user.
We ensure that this is not the case + * in dcCheckSourceData and cast the values to Cpa32U here */ + pCookie->srcTotalDataLenInBytes = (Cpa32U)srcTotalDataLenInBytes; + if ((DC_COMPRESSION_REQUEST == compDecomp) && + (CPA_DC_HT_FULL_DYNAMIC == pSessionDesc->huffType)) { + if (pService->minInterBuffSizeInBytes < + (Cpa32U)dstTotalDataLenInBytes) { + pCookie->dstTotalDataLenInBytes = + (Cpa32U)(pService->minInterBuffSizeInBytes); + } else { + pCookie->dstTotalDataLenInBytes = + (Cpa32U)dstTotalDataLenInBytes; + } + } else + { + pCookie->dstTotalDataLenInBytes = + (Cpa32U)dstTotalDataLenInBytes; + } + + /* Device can not decompress an odd byte decompression request + * if bFinal is not set + */ + if (CPA_TRUE != pService->comp_device_data.oddByteDecompNobFinal) { + if ((CPA_DC_STATEFUL == pSessionDesc->sessState) && + (CPA_DC_FLUSH_FINAL != flushFlag) && + (DC_DECOMPRESSION_REQUEST == compDecomp) && + (pCookie->srcTotalDataLenInBytes & 0x1)) { + pCookie->srcTotalDataLenInBytes--; + } + } + /* Device can not decompress odd byte interim requests */ + if (CPA_TRUE != pService->comp_device_data.oddByteDecompInterim) { + if ((CPA_DC_STATEFUL == pSessionDesc->sessState) && + (CPA_DC_FLUSH_FINAL != flushFlag) && + (CPA_DC_FLUSH_FULL != flushFlag) && + (DC_DECOMPRESSION_REQUEST == compDecomp) && + (pCookie->srcTotalDataLenInBytes & 0x1)) { + pCookie->srcTotalDataLenInBytes--; + } + } + + pMsg = (icp_qat_fw_comp_req_t *)&pCookie->request; + + if (DC_COMPRESSION_REQUEST == compDecomp) { + pReqCache = &(pSessionDesc->reqCacheComp); + } else { + pReqCache = &(pSessionDesc->reqCacheDecomp); + } + + /* Fills the msg from the template cached in the session descriptor */ + memcpy((void *)pMsg, + (void *)(pReqCache), + LAC_QAT_DC_REQ_SZ_LW * LAC_LONG_WORD_IN_BYTES); + + if (DC_REQUEST_FIRST == pSessionDesc->requestType) { + initial_adler = 1; + initial_crc32 = 0; + + if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + pSessionDesc->previousChecksum = 1; + } else { + 
pSessionDesc->previousChecksum = 0; + } + } else if (CPA_DC_STATELESS == pSessionDesc->sessState) { + pSessionDesc->previousChecksum = pResults->checksum; + + if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + initial_adler = pSessionDesc->previousChecksum; + } else { + initial_crc32 = pSessionDesc->previousChecksum; + } + } + + /* Backup source and destination buffer addresses, + * CRC calculations both for CNV and translator overflow + * will be performed on them in the callback function. + */ + pCookie->pUserSrcBuff = pSrcBuff; + pCookie->pUserDestBuff = pDestBuff; + + /* + * Due to implementation of CNV support and need for backwards + * compatibility certain fields in the request and response structs had + * been changed, moved or placed in unions cnvMode flag signifies fields + * to be selected from req/res + * + * Doing extended crc checks makes sense only when we want to do the + * actual CNV + */ + if (CPA_TRUE == pService->generic_service_info.integrityCrcCheck && + CPA_TRUE == integrityCrcCheck) { + pMsg->comp_pars.crc.crc_data_addr = + pSessionDesc->physDataIntegrityCrcs; + crcMode = ICP_QAT_FW_COMP_CRC_MODE_E2E; + } else { + /* Legacy request structure */ + pMsg->comp_pars.crc.legacy.initial_adler = initial_adler; + pMsg->comp_pars.crc.legacy.initial_crc32 = initial_crc32; + crcMode = ICP_QAT_FW_COMP_CRC_MODE_LEGACY; + } + + /* Populate the cmdFlags */ + if (CPA_DC_STATEFUL == pSessionDesc->sessState) { + pSessionDesc->previousRequestType = pSessionDesc->requestType; + + if (DC_REQUEST_FIRST == pSessionDesc->requestType) { + /* Update the request type for following requests */ + pSessionDesc->requestType = DC_REQUEST_SUBSEQUENT; + + /* Reinitialise the cumulative amount of consumed bytes + */ + pSessionDesc->cumulativeConsumedBytes = 0; + + if (DC_COMPRESSION_REQUEST == compDecomp) { + pSessionDesc->isSopForCompressionProcessed = + CPA_TRUE; + } else if (DC_DECOMPRESSION_REQUEST == compDecomp) { + pSessionDesc->isSopForDecompressionProcessed = + 
CPA_TRUE; + } + } else { + if (DC_COMPRESSION_REQUEST == compDecomp) { + if (CPA_TRUE == + pSessionDesc + ->isSopForCompressionProcessed) { + sop = ICP_QAT_FW_COMP_NOT_SOP; + } else { + pSessionDesc + ->isSopForCompressionProcessed = + CPA_TRUE; + } + } else if (DC_DECOMPRESSION_REQUEST == compDecomp) { + if (CPA_TRUE == + pSessionDesc + ->isSopForDecompressionProcessed) { + sop = ICP_QAT_FW_COMP_NOT_SOP; + } else { + pSessionDesc + ->isSopForDecompressionProcessed = + CPA_TRUE; + } + } + } + + if ((CPA_DC_FLUSH_FINAL == flush) || + (CPA_DC_FLUSH_FULL == flush)) { + /* Update the request type for following requests */ + pSessionDesc->requestType = DC_REQUEST_FIRST; + } else { + eop = ICP_QAT_FW_COMP_NOT_EOP; + } + } else { + + if (DC_REQUEST_FIRST == pSessionDesc->requestType) { + /* Reinitialise the cumulative amount of consumed bytes + */ + pSessionDesc->cumulativeConsumedBytes = 0; + } + } + + /* (LW 14 - 15) */ + pCompReqParams = &(pMsg->comp_pars); + dcCompRequestParamsPopulate(pCompReqParams, pCookie); + if (CPA_DC_FLUSH_FINAL == flush) { + bFinal = ICP_QAT_FW_COMP_BFINAL; + } + + switch (cnvMode) { + case DC_CNVNR: + cnvRecovery = ICP_QAT_FW_COMP_CNV_RECOVERY; + /* Fall through is intended here, because for CNVNR + * cnvDecompReq also needs to be set */ + case DC_CNV: + cnvDecompReq = ICP_QAT_FW_COMP_CNV; + break; + case DC_NO_CNV: + cnvDecompReq = ICP_QAT_FW_COMP_NO_CNV; + cnvRecovery = ICP_QAT_FW_COMP_NO_CNV_RECOVERY; + break; + } + + /* LW 18 */ + rpCmdFlags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD( + sop, eop, bFinal, cnvDecompReq, cnvRecovery, crcMode); + pMsg->comp_pars.req_par_flags = rpCmdFlags; + + /* Populates the QAT common request middle part of the message + * (LW 6 to 11) */ + SalQatMsg_CmnMidWrite((icp_qat_fw_la_bulk_req_t *)pMsg, + pCookie, + DC_DEFAULT_QAT_PTR_TYPE, + srcAddrPhys, + dstAddrPhys, + 0, + 0); + + return CPA_STATUS_SUCCESS; +} + +/** + ***************************************************************************** + * @ingroup 
Dc_DataCompression + * Send a compression request to QAT + * + * @description + * Send the requests for compression or decompression to QAT + * + * @param[in] pCookie Pointer to the compression cookie + * @param[in] pService Pointer to the compression service + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in] compDecomp Direction of the operation + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +dcSendRequest(dc_compression_cookie_t *pCookie, + sal_compression_service_t *pService, + dc_session_desc_t *pSessionDesc, + dc_request_dir_t compDecomp) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + /* Send to QAT */ + status = icp_adf_transPutMsg(pService->trans_handle_compression_tx, + (void *)&(pCookie->request), + LAC_QAT_DC_REQ_SZ_LW); + + if ((CPA_DC_STATEFUL == pSessionDesc->sessState) && + (CPA_STATUS_RETRY == status)) { + /* reset requestType after receiving a retry on + * the stateful request */ + pSessionDesc->requestType = pSessionDesc->previousRequestType; + } + + return status; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Process the synchronous and asynchronous case for compression or + * decompression + * + * @description + * Process the synchronous and asynchronous case for compression or + * decompression. This function will then create and send the request to + * the firmware.
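A subtle point in dcSendRequest above: when the transport ring is full, icp_adf_transPutMsg returns CPA_STATUS_RETRY and, for stateful sessions, requestType is rolled back to previousRequestType so a resubmission builds an identical request. A small stand-alone model of that rollback (status codes and request types are illustrative, not the driver's):

```c
#include <assert.h>

/* Illustrative status codes and request types (the driver uses
 * CPA_STATUS_* and DC_REQUEST_*). */
enum { DEMO_STATUS_SUCCESS = 0, DEMO_STATUS_RETRY = 1 };
enum { DEMO_REQ_FIRST = 0, DEMO_REQ_SUBSEQUENT = 1 };

struct demo_session {
	int request_type;
	int previous_request_type;
	int stateful;
};

/*
 * Model of dcSendRequest's retry handling: if the ring put is rejected
 * with RETRY, a stateful session rolls requestType back so the request
 * can be rebuilt and resubmitted identically.
 */
static int
demo_send(struct demo_session *s, int put_status)
{
	if (s->stateful && put_status == DEMO_STATUS_RETRY)
		s->request_type = s->previous_request_type;
	return (put_status);
}
```

Without the rollback, a retried stateful request would be built as a subsequent packet even though the original first packet never reached the device.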
+ * + * @param[in] pService Pointer to the compression service + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in] dcInstance Instance handle derived from discovery + * functions + * @param[in] pSessionHandle Session handle + * @param[in] numRequests Number of operations in the batch request + * @param[in] pBatchOpData Address of the list of jobs to be processed + * @param[in] pSrcBuff Pointer to data buffer for compression + * @param[in] pDestBuff Pointer to buffer space for data after + * compression + * @param[in] pResults Pointer to results structure + * @param[in] flushFlag Indicates the type of flush to be + * performed + * @param[in] pOpData Pointer to request information structure + * holding parameters for cpaDcCompress2 and + * CpaDcDecompressData2 + * @param[in] callbackTag Pointer to the callback tag + * @param[in] compDecomp Direction of the operation + * @param[in] isAsyncMode Used to know if synchronous or asynchronous + * mode + * @param[in] cnvMode CNV Mode + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_RETRY Retry operation + * @retval CPA_STATUS_FAIL Function failed + * @retval CPA_STATUS_RESOURCE Resource error + * + *****************************************************************************/ +static CpaStatus +dcCompDecompData(sal_compression_service_t *pService, + dc_session_desc_t *pSessionDesc, + CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + CpaDcOpData *pOpData, + void *callbackTag, + dc_request_dir_t compDecomp, + CpaBoolean isAsyncMode, + dc_cnv_mode_t cnvMode) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + dc_compression_cookie_t *pCookie = NULL; + + if ((LacSync_GenWakeupSyncCaller == pSessionDesc->pCompressionCb) && + isAsyncMode == CPA_TRUE) { + lac_sync_op_data_t *pSyncCallbackData = NULL; + + status = 
LacSync_CreateSyncCookie(&pSyncCallbackData); + + if (CPA_STATUS_SUCCESS == status) { + status = dcCompDecompData(pService, + pSessionDesc, + dcInstance, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + pOpData, + pSyncCallbackData, + compDecomp, + CPA_FALSE, + cnvMode); + } else { + return status; + } + + if (CPA_STATUS_SUCCESS == status) { + CpaStatus syncStatus = CPA_STATUS_SUCCESS; + + syncStatus = + LacSync_WaitForCallback(pSyncCallbackData, + DC_SYNC_CALLBACK_TIMEOUT, + &status, + NULL); + + /* If callback doesn't come back */ + if (CPA_STATUS_SUCCESS != syncStatus) { + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC( + numCompCompletedErrors, pService); + } else { + COMPRESSION_STAT_INC( + numDecompCompletedErrors, pService); + } + LAC_LOG_ERROR("Callback timed out"); + status = syncStatus; + } + } else { + /* As the Request was not sent the Callback will never + * be called, so need to indicate that we're finished + * with cookie so it can be destroyed. 
*/ + LacSync_SetSyncCookieComplete(pSyncCallbackData); + } + + LacSync_DestroySyncCookie(&pSyncCallbackData); + return status; + } + + /* Allocate the compression cookie + * The memory is freed in callback or in sendRequest if an error occurs + */ + pCookie = (dc_compression_cookie_t *)Lac_MemPoolEntryAlloc( + pService->compression_mem_pool); + if (NULL == pCookie) { + LAC_LOG_ERROR("Cannot get mem pool entry for compression"); + status = CPA_STATUS_RESOURCE; + } else if ((void *)CPA_STATUS_RETRY == pCookie) { + pCookie = NULL; + status = CPA_STATUS_RETRY; + } + + if (CPA_STATUS_SUCCESS == status) { + status = dcCreateRequest(pCookie, + pService, + pSessionDesc, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + pOpData, + callbackTag, + compDecomp, + cnvMode); + } + + if (CPA_STATUS_SUCCESS == status) { + /* Increment number of pending callbacks for session */ + if (CPA_DC_STATELESS == pSessionDesc->sessState) { + qatUtilsAtomicInc( + &(pSessionDesc->pendingStatelessCbCount)); + } + status = + dcSendRequest(pCookie, pService, pSessionDesc, compDecomp); + } + + if (CPA_STATUS_SUCCESS == status) { + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC(numCompRequests, pService); + } else { + COMPRESSION_STAT_INC(numDecompRequests, pService); + } + } else { + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC(numCompRequestsErrors, pService); + } else { + COMPRESSION_STAT_INC(numDecompRequestsErrors, pService); + } + + /* Decrement number of pending callbacks for session */ + if (CPA_DC_STATELESS == pSessionDesc->sessState) { + qatUtilsAtomicDec( + &(pSessionDesc->pendingStatelessCbCount)); + } else { + qatUtilsAtomicDec( + &(pSessionDesc->pendingStatefulCbCount)); + } + + /* Free the memory pool */ + if (NULL != pCookie) { + if (status != CPA_STATUS_UNSUPPORTED) { + /* Free the memory pool */ + Lac_MemPoolEntryFree(pCookie); + pCookie = NULL; + } + } + } + + return status; +} + +/** + 
***************************************************************************** + * @ingroup Dc_DataCompression + * Handle zero length compression or decompression requests + * + * @description + * Handle zero length compression or decompression requests + * + * @param[in] pService Pointer to the compression service + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in] pResults Pointer to results structure + * @param[in] flushFlag Indicates the type of flush to be + * performed + * @param[in] callbackTag User supplied value to help correlate + * the callback with its associated request + * @param[in] compDecomp Direction of the operation + * + * @retval CPA_TRUE Zero length SOP or MOP processed + * @retval CPA_FALSE Zero length EOP + * + *****************************************************************************/ +static CpaStatus +dcZeroLengthRequests(sal_compression_service_t *pService, + dc_session_desc_t *pSessionDesc, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + void *callbackTag, + dc_request_dir_t compDecomp) +{ + CpaBoolean status = CPA_FALSE; + CpaDcCallbackFn pCbFunc = pSessionDesc->pCompressionCb; + + if (DC_REQUEST_FIRST == pSessionDesc->requestType) { + /* Reinitialise the cumulative amount of consumed bytes */ + pSessionDesc->cumulativeConsumedBytes = 0; + + /* Zero length SOP */ + if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + pResults->checksum = 1; + } else { + pResults->checksum = 0; + } + + status = CPA_TRUE; + } else if ((CPA_DC_FLUSH_NONE == flushFlag) || + (CPA_DC_FLUSH_SYNC == flushFlag)) { + /* Zero length MOP */ + pResults->checksum = pSessionDesc->previousChecksum; + status = CPA_TRUE; + } + + if (CPA_TRUE == status) { + pResults->status = CPA_DC_OK; + pResults->produced = 0; + pResults->consumed = 0; + + /* Increment statistics */ + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC(numCompRequests, pService); + COMPRESSION_STAT_INC(numCompCompleted, pService); + } else { + 
COMPRESSION_STAT_INC(numDecompRequests, pService); + COMPRESSION_STAT_INC(numDecompCompleted, pService); + } + + if (CPA_STATUS_SUCCESS != + LAC_SPINUNLOCK(&(pSessionDesc->sessionLock))) { + LAC_LOG_ERROR("Cannot unlock session lock"); + } + + if ((NULL != pCbFunc) && + (LacSync_GenWakeupSyncCaller != pCbFunc)) { + pCbFunc(callbackTag, CPA_STATUS_SUCCESS); + } + + return CPA_TRUE; + } + + return CPA_FALSE; +} + +static CpaStatus +dcParamCheck(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + sal_compression_service_t *pService, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + dc_session_desc_t *pSessionDesc, + CpaDcFlush flushFlag, + Cpa64U srcBuffSize) +{ + + if (dcCheckSourceData(pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + srcBuffSize, + NULL) != CPA_STATUS_SUCCESS) { + return CPA_STATUS_INVALID_PARAM; + } + if (dcCheckDestinationData( + pService, pSessionHandle, pDestBuff, DC_COMPRESSION_REQUEST) != + CPA_STATUS_SUCCESS) { + return CPA_STATUS_INVALID_PARAM; + } + if (CPA_DC_DIR_DECOMPRESS == pSessionDesc->sessDirection) { + LAC_INVALID_PARAM_LOG("Invalid sessDirection value"); + return CPA_STATUS_INVALID_PARAM; + } + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcCompressData(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + void *callbackTag) +{ + sal_compression_service_t *pService = NULL; + dc_session_desc_t *pSessionDesc = NULL; + CpaInstanceHandle insHandle = NULL; + Cpa64U srcBuffSize = 0; + + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + pService = (sal_compression_service_t *)insHandle; + + LAC_CHECK_NULL_PARAM(insHandle); + LAC_CHECK_NULL_PARAM(pSessionHandle); + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(insHandle); + + /* This 
check is outside the parameter checking as it is needed to + * manage zero length requests */ + if (LacBuffDesc_BufferListVerifyNull(pSrcBuff, + &srcBuffSize, + LAC_NO_ALIGNMENT_SHIFT) != + CPA_STATUS_SUCCESS) { + LAC_INVALID_PARAM_LOG("Invalid source buffer list parameter"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Ensure this is a compression instance */ + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + if (CPA_STATUS_SUCCESS != + dcParamCheck(insHandle, + pSessionHandle, + pService, + pSrcBuff, + pDestBuff, + pResults, + pSessionDesc, + flushFlag, + srcBuffSize)) { + return CPA_STATUS_INVALID_PARAM; + } + if (CPA_DC_STATEFUL == pSessionDesc->sessState) { + LAC_INVALID_PARAM_LOG( + "Invalid session state, stateful sessions " + "are not supported"); + return CPA_STATUS_UNSUPPORTED; + } + + if (!(pService->generic_service_info.dcExtendedFeatures & + DC_CNV_EXTENDED_CAPABILITY)) { + LAC_INVALID_PARAM_LOG( + "CompressAndVerify feature not supported"); + return CPA_STATUS_UNSUPPORTED; + } + + if (!(pService->generic_service_info.dcExtendedFeatures & + DC_CNVNR_EXTENDED_CAPABILITY)) { + LAC_INVALID_PARAM_LOG( + "CompressAndVerifyAndRecovery feature not supported"); + return CPA_STATUS_UNSUPPORTED; + } + + return dcCompDecompData(pService, + pSessionDesc, + dcInstance, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + NULL, + callbackTag, + DC_COMPRESSION_REQUEST, + CPA_TRUE, + DC_CNVNR); +} + +CpaStatus +cpaDcCompressData2(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcOpData *pOpData, + CpaDcRqResults *pResults, + void *callbackTag) +{ + sal_compression_service_t *pService = NULL; + dc_session_desc_t *pSessionDesc = NULL; + CpaInstanceHandle insHandle = NULL; + Cpa64U srcBuffSize = 0; + dc_cnv_mode_t cnvMode = DC_NO_CNV; + + LAC_CHECK_NULL_PARAM(pOpData); + + if (((CPA_TRUE != 
pOpData->compressAndVerify) && + (CPA_FALSE != pOpData->compressAndVerify)) || + ((CPA_FALSE != pOpData->compressAndVerifyAndRecover) && + (CPA_TRUE != pOpData->compressAndVerifyAndRecover))) { + return CPA_STATUS_INVALID_PARAM; + } + + if ((CPA_FALSE == pOpData->compressAndVerify) && + (CPA_TRUE == pOpData->compressAndVerifyAndRecover)) { + return CPA_STATUS_INVALID_PARAM; + } + + + if ((CPA_TRUE == pOpData->compressAndVerify) && + (CPA_TRUE == pOpData->compressAndVerifyAndRecover) && + (CPA_FALSE == pOpData->integrityCrcCheck)) { + return cpaDcCompressData(dcInstance, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + pOpData->flushFlag, + callbackTag); + } + + if (CPA_FALSE == pOpData->compressAndVerify) { + LAC_INVALID_PARAM_LOG( + "Data compression without verification not allowed"); + return CPA_STATUS_UNSUPPORTED; + } + + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + pService = (sal_compression_service_t *)insHandle; + + LAC_CHECK_NULL_PARAM(insHandle); + LAC_CHECK_NULL_PARAM(pSessionHandle); + LAC_CHECK_NULL_PARAM(pOpData); + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(insHandle); + + /* This check is outside the parameter checking as it is needed to + * manage zero length requests */ + if (LacBuffDesc_BufferListVerifyNull(pSrcBuff, + &srcBuffSize, + LAC_NO_ALIGNMENT_SHIFT) != + CPA_STATUS_SUCCESS) { + LAC_INVALID_PARAM_LOG("Invalid source buffer list parameter"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Ensure this is a compression instance */ + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + + if (CPA_TRUE == pOpData->compressAndVerify && + CPA_DC_STATEFUL == pSessionDesc->sessState) { + LAC_INVALID_PARAM_LOG( + "Invalid session state, stateful sessions " + "not supported with CNV"); + return CPA_STATUS_UNSUPPORTED; + } + + if 
(!(pService->generic_service_info.dcExtendedFeatures &
+ DC_CNV_EXTENDED_CAPABILITY) &&
+ (CPA_TRUE == pOpData->compressAndVerify)) {
+ LAC_INVALID_PARAM_LOG(
+ "CompressAndVerify feature not supported");
+ return CPA_STATUS_UNSUPPORTED;
+ }
+
+ if (CPA_STATUS_SUCCESS !=
+ dcParamCheck(insHandle,
+ pSessionHandle,
+ pService,
+ pSrcBuff,
+ pDestBuff,
+ pResults,
+ pSessionDesc,
+ pOpData->flushFlag,
+ srcBuffSize)) {
+ return CPA_STATUS_INVALID_PARAM;
+ }
+ if (CPA_STATUS_SUCCESS != dcCheckOpData(pService, pOpData)) {
+ return CPA_STATUS_INVALID_PARAM;
+ }
+ if (CPA_TRUE != pOpData->compressAndVerify) {
+ if (srcBuffSize > DC_COMP_MAX_BUFF_SIZE) {
+ LAC_LOG_ERROR(
+ "Compression payload greater than 64KB is "
+ "unsupported when CnV is disabled\n");
+ return CPA_STATUS_UNSUPPORTED;
+ }
+ }
+
+ if (CPA_DC_STATEFUL == pSessionDesc->sessState) {
+ /* Lock the session to check if there are in-flight stateful
+ * requests */
+ if (CPA_STATUS_SUCCESS !=
+ LAC_SPINLOCK(&(pSessionDesc->sessionLock))) {
+ LAC_LOG_ERROR("Cannot lock session lock");
+ }
+
+ /* Check if there is already one in-flight stateful request */
+ if (0 !=
+ qatUtilsAtomicGet(
+ &(pSessionDesc->pendingStatefulCbCount))) {
+ LAC_LOG_ERROR(
+ "Only one in-flight stateful request supported");
+ if (CPA_STATUS_SUCCESS !=
+ LAC_SPINUNLOCK(&(pSessionDesc->sessionLock))) {
+ LAC_LOG_ERROR("Cannot unlock session lock");
+ }
+ return CPA_STATUS_RETRY;
+ }
+
+ if (0 == srcBuffSize) {
+ if (CPA_TRUE ==
+ dcZeroLengthRequests(pService,
+ pSessionDesc,
+ pResults,
+ pOpData->flushFlag,
+ callbackTag,
+ DC_COMPRESSION_REQUEST)) {
+ return CPA_STATUS_SUCCESS;
+ }
+ }
+
+ qatUtilsAtomicInc(&(pSessionDesc->pendingStatefulCbCount));
+ if (CPA_STATUS_SUCCESS !=
+ LAC_SPINUNLOCK(&(pSessionDesc->sessionLock))) {
+ LAC_LOG_ERROR("Cannot unlock session lock");
+ }
+ }
+
+ if (CPA_TRUE == pOpData->compressAndVerify) {
+ cnvMode = DC_CNV;
+ }
+
+ return dcCompDecompData(pService,
+ pSessionDesc,
+ dcInstance,
+
pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + pOpData->flushFlag, + pOpData, + callbackTag, + DC_COMPRESSION_REQUEST, + CPA_TRUE, + cnvMode); +} + +static CpaStatus +dcDecompressDataCheck(CpaInstanceHandle insHandle, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + Cpa64U *srcBufferSize) +{ + sal_compression_service_t *pService = NULL; + dc_session_desc_t *pSessionDesc = NULL; + Cpa64U srcBuffSize = 0; + + pService = (sal_compression_service_t *)insHandle; + + LAC_CHECK_NULL_PARAM(insHandle); + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(insHandle); + + /* This check is outside the parameter checking as it is needed to + * manage zero length requests */ + if (LacBuffDesc_BufferListVerifyNull(pSrcBuff, + &srcBuffSize, + LAC_NO_ALIGNMENT_SHIFT) != + CPA_STATUS_SUCCESS) { + LAC_INVALID_PARAM_LOG("Invalid source buffer list parameter"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Ensure this is a compression instance */ + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + if (dcCheckSourceData(pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + srcBuffSize, + NULL) != CPA_STATUS_SUCCESS) { + return CPA_STATUS_INVALID_PARAM; + } + if (dcCheckDestinationData(pService, + pSessionHandle, + pDestBuff, + DC_DECOMPRESSION_REQUEST) != + CPA_STATUS_SUCCESS) { + return CPA_STATUS_INVALID_PARAM; + } + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + + if (CPA_DC_DIR_COMPRESS == pSessionDesc->sessDirection) { + LAC_INVALID_PARAM_LOG("Invalid sessDirection value"); + return CPA_STATUS_INVALID_PARAM; + } + + + *srcBufferSize = srcBuffSize; + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcDecompressData(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + void 
*callbackTag) +{ + sal_compression_service_t *pService = NULL; + dc_session_desc_t *pSessionDesc = NULL; + CpaInstanceHandle insHandle = NULL; + Cpa64U srcBuffSize = 0; + CpaStatus status = CPA_STATUS_SUCCESS; + + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + status = dcDecompressDataCheck(insHandle, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + &srcBuffSize); + if (CPA_STATUS_SUCCESS != status) { + return status; + } + + pService = (sal_compression_service_t *)insHandle; + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + + if (CPA_DC_STATEFUL == pSessionDesc->sessState) { + /* Lock the session to check if there are in-flight stateful + * requests */ + if (CPA_STATUS_SUCCESS != + LAC_SPINLOCK(&(pSessionDesc->sessionLock))) { + LAC_LOG_ERROR("Cannot lock session lock"); + return CPA_STATUS_RESOURCE; + } + + /* Check if there is already one in-flight stateful request */ + if (0 != + qatUtilsAtomicGet( + &(pSessionDesc->pendingStatefulCbCount))) { + LAC_LOG_ERROR( + "Only one in-flight stateful request supported"); + if (CPA_STATUS_SUCCESS != + LAC_SPINUNLOCK(&(pSessionDesc->sessionLock))) { + LAC_LOG_ERROR("Cannot unlock session lock"); + } + return CPA_STATUS_RETRY; + } + + if ((0 == srcBuffSize) || + ((1 == srcBuffSize) && (CPA_DC_FLUSH_FINAL != flushFlag) && + (CPA_DC_FLUSH_FULL != flushFlag))) { + if (CPA_TRUE == + dcZeroLengthRequests(pService, + pSessionDesc, + pResults, + flushFlag, + callbackTag, + DC_DECOMPRESSION_REQUEST)) { + return CPA_STATUS_SUCCESS; + } + } + + qatUtilsAtomicInc(&(pSessionDesc->pendingStatefulCbCount)); + if (CPA_STATUS_SUCCESS != + LAC_SPINUNLOCK(&(pSessionDesc->sessionLock))) { + LAC_LOG_ERROR("Cannot unlock session lock"); + } + } + + return dcCompDecompData(pService, + pSessionDesc, + dcInstance, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + NULL, + callbackTag, + 
DC_DECOMPRESSION_REQUEST, + CPA_TRUE, + DC_NO_CNV); +} + +CpaStatus +cpaDcDecompressData2(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcOpData *pOpData, + CpaDcRqResults *pResults, + void *callbackTag) +{ + sal_compression_service_t *pService = NULL; + dc_session_desc_t *pSessionDesc = NULL; + CpaInstanceHandle insHandle = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + Cpa64U srcBuffSize = 0; + LAC_CHECK_NULL_PARAM(pOpData); + + if (CPA_FALSE == pOpData->integrityCrcCheck) { + + return cpaDcDecompressData(dcInstance, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + pOpData->flushFlag, + callbackTag); + } + + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + status = dcDecompressDataCheck(insHandle, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + pOpData->flushFlag, + &srcBuffSize); + if (CPA_STATUS_SUCCESS != status) { + return status; + } + + pService = (sal_compression_service_t *)insHandle; + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + + if (CPA_DC_STATEFUL == pSessionDesc->sessState) { + LAC_INVALID_PARAM_LOG("Invalid session: Stateful session is " + "not supported"); + return CPA_STATUS_INVALID_PARAM; + } + + return dcCompDecompData(pService, + pSessionDesc, + insHandle, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + pOpData->flushFlag, + pOpData, + callbackTag, + DC_DECOMPRESSION_REQUEST, + CPA_TRUE, + DC_NO_CNV); +} Index: sys/dev/qat/qat_api/common/compression/dc_dp.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/compression/dc_dp.c @@ -0,0 +1,545 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_dp.c + * + * @defgroup 
cpaDcDp Data Compression Data Plane API + * + * @ingroup cpaDcDp + * + * @description + * Implementation of the Data Compression DP operations. + * + *****************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" +#include "cpa_dc.h" +#include "cpa_dc_dp.h" + +#include "icp_qat_fw_comp.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "dc_session.h" +#include "dc_datapath.h" +#include "lac_common.h" +#include "lac_mem.h" +#include "lac_mem_pools.h" +#include "sal_types_compression.h" +#include "lac_sal.h" +#include "lac_sync.h" +#include "sal_service_state.h" +#include "sal_qat_cmn_msg.h" +#include "icp_sal_poll.h" + +/** + ***************************************************************************** + * @ingroup cpaDcDp + * Check that pOpData is valid + * + * @description + * Check that all the parameters defined in the pOpData are valid + * + * @param[in] pOpData Pointer to a structure containing the + * request parameters + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +dcDataPlaneParamCheck(const CpaDcDpOpData *pOpData) +{ + sal_compression_service_t *pService = NULL; + dc_session_desc_t *pSessionDesc = NULL; + + LAC_CHECK_NULL_PARAM(pOpData); + LAC_CHECK_NULL_PARAM(pOpData->dcInstance); + LAC_CHECK_NULL_PARAM(pOpData->pSessionHandle); + + /* Ensure this is a compression instance */ + SAL_CHECK_INSTANCE_TYPE(pOpData->dcInstance, + SAL_SERVICE_TYPE_COMPRESSION); + + pService = 
(sal_compression_service_t *)(pOpData->dcInstance); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pOpData->pSessionHandle); + if (NULL == pSessionDesc) { + QAT_UTILS_LOG("Session handle not as expected.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_FALSE == pSessionDesc->isDcDp) { + QAT_UTILS_LOG("The session type should be data plane.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Compressing zero byte is not supported */ + if ((CPA_DC_DIR_COMPRESS == pSessionDesc->sessDirection) && + (0 == pOpData->bufferLenToCompress)) { + QAT_UTILS_LOG( + "The source buffer length to compress needs to be greater than zero byte.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData->sessDirection > CPA_DC_DIR_DECOMPRESS) { + QAT_UTILS_LOG("Invalid direction of operation.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (0 == pOpData->srcBuffer) { + QAT_UTILS_LOG("Invalid srcBuffer\n"); + return CPA_STATUS_INVALID_PARAM; + } + if (0 == pOpData->destBuffer) { + QAT_UTILS_LOG("Invalid destBuffer\n"); + return CPA_STATUS_INVALID_PARAM; + } + if (pOpData->srcBuffer == pOpData->destBuffer) { + QAT_UTILS_LOG("In place operation is not supported.\n"); + return CPA_STATUS_INVALID_PARAM; + } + if (0 == pOpData->thisPhys) { + QAT_UTILS_LOG("Invalid thisPhys\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((CPA_TRUE != pOpData->compressAndVerify) && + (CPA_FALSE != pOpData->compressAndVerify)) { + QAT_UTILS_LOG("Invalid compressAndVerify\n"); + return CPA_STATUS_INVALID_PARAM; + } + if ((CPA_TRUE == pOpData->compressAndVerify) && + !(pService->generic_service_info.dcExtendedFeatures & + DC_CNV_EXTENDED_CAPABILITY)) { + QAT_UTILS_LOG("Invalid compressAndVerify, no CNV capability\n"); + return CPA_STATUS_UNSUPPORTED; + } + if ((CPA_TRUE != pOpData->compressAndVerifyAndRecover) && + (CPA_FALSE != pOpData->compressAndVerifyAndRecover)) { + QAT_UTILS_LOG("Invalid compressAndVerifyAndRecover\n"); + return CPA_STATUS_INVALID_PARAM; + } + if ((CPA_TRUE == 
pOpData->compressAndVerifyAndRecover) && + (CPA_FALSE == pOpData->compressAndVerify)) { + QAT_UTILS_LOG("CnVnR option set without setting CnV\n"); + return CPA_STATUS_INVALID_PARAM; + } + if ((CPA_TRUE == pOpData->compressAndVerifyAndRecover) && + !(pService->generic_service_info.dcExtendedFeatures & + DC_CNVNR_EXTENDED_CAPABILITY)) { + QAT_UTILS_LOG( + "Invalid CnVnR option set and no CnVnR capability.\n"); + return CPA_STATUS_UNSUPPORTED; + } + + if ((CPA_DP_BUFLIST == pOpData->srcBufferLen) && + (CPA_DP_BUFLIST != pOpData->destBufferLen)) { + QAT_UTILS_LOG( + "The source and destination buffers need to be of the same type (both flat buffers or buffer lists).\n"); + return CPA_STATUS_INVALID_PARAM; + } + if ((CPA_DP_BUFLIST != pOpData->srcBufferLen) && + (CPA_DP_BUFLIST == pOpData->destBufferLen)) { + QAT_UTILS_LOG( + "The source and destination buffers need to be of the same type (both flat buffers or buffer lists).\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_DP_BUFLIST != pOpData->srcBufferLen) { + if (pOpData->srcBufferLen < pOpData->bufferLenToCompress) { + QAT_UTILS_LOG( + "srcBufferLen is smaller than bufferLenToCompress.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData->destBufferLen < pOpData->bufferLenForData) { + QAT_UTILS_LOG( + "destBufferLen is smaller than bufferLenForData.\n"); + return CPA_STATUS_INVALID_PARAM; + } + } else { + /* We are assuming that there is enough memory in the source and + * destination buffer lists. 
We only receive physical addresses + * of the + * buffers so we are unable to test it here */ + LAC_CHECK_8_BYTE_ALIGNMENT(pOpData->srcBuffer); + LAC_CHECK_8_BYTE_ALIGNMENT(pOpData->destBuffer); + } + + LAC_CHECK_8_BYTE_ALIGNMENT(pOpData->thisPhys); + + if ((CPA_DC_DIR_COMPRESS == pSessionDesc->sessDirection) || + (CPA_DC_DIR_COMBINED == pSessionDesc->sessDirection)) { + if (CPA_DC_HT_FULL_DYNAMIC == pSessionDesc->huffType) { + /* Check if Intermediate Buffer Array pointer is NULL */ + if ((0 == pService->pInterBuffPtrsArrayPhyAddr) || + (NULL == pService->pInterBuffPtrsArray)) { + QAT_UTILS_LOG( + "No intermediate buffer defined for this instance - see cpaDcStartInstance.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Ensure that the destination buffer length for data is + * greater + * or equal to 128B */ + if (pOpData->bufferLenForData < + DC_DEST_BUFFER_DYN_MIN_SIZE) { + QAT_UTILS_LOG( + "Destination buffer length for data should be greater or equal to 128B.\n"); + return CPA_STATUS_INVALID_PARAM; + } + } else { + /* Ensure that the destination buffer length for data is + * greater + * or equal to min output buffsize */ + if (pOpData->bufferLenForData < + pService->comp_device_data.minOutputBuffSize) { + QAT_UTILS_LOG( + "Destination buffer size should be greater or equal to %d bytes.\n", + pService->comp_device_data + .minOutputBuffSize); + return CPA_STATUS_INVALID_PARAM; + } + } + } + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcDpGetSessionSize(CpaInstanceHandle dcInstance, + CpaDcSessionSetupData *pSessionData, + Cpa32U *pSessionSize) +{ + return dcGetSessionSize(dcInstance, pSessionData, pSessionSize, NULL); +} + +CpaStatus +cpaDcDpInitSession(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaDcSessionSetupData *pSessionData) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + dc_session_desc_t *pSessionDesc = NULL; + sal_compression_service_t *pService = NULL; + + LAC_CHECK_INSTANCE_HANDLE(dcInstance); + 
SAL_CHECK_INSTANCE_TYPE(dcInstance, SAL_SERVICE_TYPE_COMPRESSION); + + pService = (sal_compression_service_t *)dcInstance; + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(pService); + + /* Stateful is not supported */ + if (CPA_DC_STATELESS != pSessionData->sessState) { + QAT_UTILS_LOG("Invalid sessState value\n"); + return CPA_STATUS_INVALID_PARAM; + } + + status = + dcInitSession(dcInstance, pSessionHandle, pSessionData, NULL, NULL); + if (CPA_STATUS_SUCCESS == status) { + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + pSessionDesc->isDcDp = CPA_TRUE; + + ICP_QAT_FW_COMN_PTR_TYPE_SET( + pSessionDesc->reqCacheDecomp.comn_hdr.comn_req_flags, + DC_DP_QAT_PTR_TYPE); + ICP_QAT_FW_COMN_PTR_TYPE_SET( + pSessionDesc->reqCacheComp.comn_hdr.comn_req_flags, + DC_DP_QAT_PTR_TYPE); + } + + return status; +} + +CpaStatus +cpaDcDpRemoveSession(const CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle) +{ + return cpaDcRemoveSession(dcInstance, pSessionHandle); +} + +CpaStatus +cpaDcDpRegCbFunc(const CpaInstanceHandle dcInstance, + const CpaDcDpCallbackFn pNewCb) +{ + sal_compression_service_t *pService = NULL; + + LAC_CHECK_NULL_PARAM(dcInstance); + SAL_CHECK_INSTANCE_TYPE(dcInstance, SAL_SERVICE_TYPE_COMPRESSION); + LAC_CHECK_NULL_PARAM(pNewCb); + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(dcInstance); + + pService = (sal_compression_service_t *)dcInstance; + pService->pDcDpCb = pNewCb; + + return CPA_STATUS_SUCCESS; +} + +/** + ***************************************************************************** + * @ingroup cpaDcDp + * + * @description + * Writes the message to the ring + * + * @param[in] pOpData Pointer to a structure containing the + * request parameters + * @param[in] pCurrentQatMsg Pointer to current QAT message on the ring + * + *****************************************************************************/ +static void +dcDpWriteRingMsg(CpaDcDpOpData 
*pOpData, icp_qat_fw_comp_req_t *pCurrentQatMsg) +{ + icp_qat_fw_comp_req_t *pReqCache = NULL; + dc_session_desc_t *pSessionDesc = NULL; + Cpa8U bufferFormat; + + Cpa8U cnvDecompReq = ICP_QAT_FW_COMP_NO_CNV; + Cpa8U cnvnrCompReq = ICP_QAT_FW_COMP_NO_CNV_RECOVERY; + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pOpData->pSessionHandle); + + if (CPA_DC_DIR_COMPRESS == pOpData->sessDirection) { + pReqCache = &(pSessionDesc->reqCacheComp); + /* CNV check */ + if (CPA_TRUE == pOpData->compressAndVerify) { + cnvDecompReq = ICP_QAT_FW_COMP_CNV; + /* CNVNR check */ + if (CPA_TRUE == pOpData->compressAndVerifyAndRecover) { + cnvnrCompReq = ICP_QAT_FW_COMP_CNV_RECOVERY; + } + } + } else { + pReqCache = &(pSessionDesc->reqCacheDecomp); + } + + /* Fills in the template DC ET ring message - cached from the + * session descriptor */ + memcpy((void *)pCurrentQatMsg, + (void *)(pReqCache), + (LAC_QAT_DC_REQ_SZ_LW * LAC_LONG_WORD_IN_BYTES)); + + if (CPA_DP_BUFLIST == pOpData->srcBufferLen) { + bufferFormat = QAT_COMN_PTR_TYPE_SGL; + } else { + bufferFormat = QAT_COMN_PTR_TYPE_FLAT; + } + + pCurrentQatMsg->comp_pars.req_par_flags |= + ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD( + 0, 0, 0, cnvDecompReq, cnvnrCompReq, 0); + + SalQatMsg_CmnMidWrite((icp_qat_fw_la_bulk_req_t *)pCurrentQatMsg, + pOpData, + bufferFormat, + pOpData->srcBuffer, + pOpData->destBuffer, + pOpData->srcBufferLen, + pOpData->destBufferLen); + + pCurrentQatMsg->comp_pars.comp_len = pOpData->bufferLenToCompress; + pCurrentQatMsg->comp_pars.out_buffer_sz = pOpData->bufferLenForData; +} + +CpaStatus +cpaDcDpEnqueueOp(CpaDcDpOpData *pOpData, const CpaBoolean performOpNow) +{ + icp_qat_fw_comp_req_t *pCurrentQatMsg = NULL; + icp_comms_trans_handle trans_handle = NULL; + dc_session_desc_t *pSessionDesc = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + + status = dcDataPlaneParamCheck(pOpData); + if (CPA_STATUS_SUCCESS != status) { + return status; + } + + if ((CPA_FALSE == pOpData->compressAndVerify) && + 
(CPA_DC_DIR_COMPRESS == pOpData->sessDirection)) { + return CPA_STATUS_UNSUPPORTED; + } + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(pOpData->dcInstance); + + trans_handle = ((sal_compression_service_t *)pOpData->dcInstance) + ->trans_handle_compression_tx; + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pOpData->pSessionHandle); + + if ((CPA_DC_DIR_COMPRESS == pOpData->sessDirection) && + (CPA_DC_DIR_DECOMPRESS == pSessionDesc->sessDirection)) { + QAT_UTILS_LOG( + "The session does not support this direction of operation.\n"); + return CPA_STATUS_INVALID_PARAM; + } else if ((CPA_DC_DIR_DECOMPRESS == pOpData->sessDirection) && + (CPA_DC_DIR_COMPRESS == pSessionDesc->sessDirection)) { + QAT_UTILS_LOG( + "The session does not support this direction of operation.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + icp_adf_getSingleQueueAddr(trans_handle, (void **)&pCurrentQatMsg); + if (NULL == pCurrentQatMsg) { + return CPA_STATUS_RETRY; + } + + dcDpWriteRingMsg(pOpData, pCurrentQatMsg); + pSessionDesc->pendingDpStatelessCbCount++; + + if (CPA_TRUE == performOpNow) { + SalQatMsg_updateQueueTail(trans_handle); + } + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcDpEnqueueOpBatch(const Cpa32U numberRequests, + CpaDcDpOpData *pOpData[], + const CpaBoolean performOpNow) +{ + icp_qat_fw_comp_req_t *pCurrentQatMsg = NULL; + icp_comms_trans_handle trans_handle = NULL; + dc_session_desc_t *pSessionDesc = NULL; + Cpa32U i = 0; + CpaStatus status = CPA_STATUS_SUCCESS; + sal_compression_service_t *pService = NULL; + + LAC_CHECK_NULL_PARAM(pOpData); + LAC_CHECK_NULL_PARAM(pOpData[0]); + LAC_CHECK_NULL_PARAM(pOpData[0]->dcInstance); + + pService = (sal_compression_service_t *)(pOpData[0]->dcInstance); + if ((numberRequests == 0) || + (numberRequests > pService->maxNumCompConcurrentReq)) { + QAT_UTILS_LOG( + "The number of requests needs to be between 1 and %d.\n", + pService->maxNumCompConcurrentReq); + return CPA_STATUS_INVALID_PARAM; + } + + 
for (i = 0; i < numberRequests; i++) { + status = dcDataPlaneParamCheck(pOpData[i]); + if (CPA_STATUS_SUCCESS != status) { + return status; + } + + /* Check that all instance handles and session handles are the + * same */ + if (pOpData[i]->dcInstance != pOpData[0]->dcInstance) { + QAT_UTILS_LOG( + "All instance handles should be the same in the pOpData.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData[i]->pSessionHandle != pOpData[0]->pSessionHandle) { + QAT_UTILS_LOG( + "All session handles should be the same in the pOpData.\n"); + return CPA_STATUS_INVALID_PARAM; + } + } + + for (i = 0; i < numberRequests; i++) { + if ((CPA_FALSE == pOpData[i]->compressAndVerify) && + (CPA_DC_DIR_COMPRESS == pOpData[i]->sessDirection)) { + return CPA_STATUS_UNSUPPORTED; + } + } + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(pOpData[0]->dcInstance); + + trans_handle = ((sal_compression_service_t *)pOpData[0]->dcInstance) + ->trans_handle_compression_tx; + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pOpData[0]->pSessionHandle); + + for (i = 0; i < numberRequests; i++) { + if ((CPA_DC_DIR_COMPRESS == pOpData[i]->sessDirection) && + (CPA_DC_DIR_DECOMPRESS == pSessionDesc->sessDirection)) { + QAT_UTILS_LOG( + "The session does not support this direction of operation.\n"); + return CPA_STATUS_INVALID_PARAM; + } else if ((CPA_DC_DIR_DECOMPRESS == + pOpData[i]->sessDirection) && + (CPA_DC_DIR_COMPRESS == + pSessionDesc->sessDirection)) { + QAT_UTILS_LOG( + "The session does not support this direction of operation.\n"); + return CPA_STATUS_INVALID_PARAM; + } + } + + icp_adf_getQueueMemory(trans_handle, + numberRequests, + (void **)&pCurrentQatMsg); + if (NULL == pCurrentQatMsg) { + return CPA_STATUS_RETRY; + } + + for (i = 0; i < numberRequests; i++) { + dcDpWriteRingMsg(pOpData[i], pCurrentQatMsg); + icp_adf_getQueueNext(trans_handle, (void **)&pCurrentQatMsg); + } + + pSessionDesc->pendingDpStatelessCbCount += numberRequests; + + if 
(CPA_TRUE == performOpNow) { + SalQatMsg_updateQueueTail(trans_handle); + } + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +icp_sal_DcPollDpInstance(CpaInstanceHandle dcInstance, Cpa32U responseQuota) +{ + icp_comms_trans_handle trans_handle = NULL; + + LAC_CHECK_INSTANCE_HANDLE(dcInstance); + SAL_CHECK_INSTANCE_TYPE(dcInstance, SAL_SERVICE_TYPE_COMPRESSION); + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(dcInstance); + + trans_handle = ((sal_compression_service_t *)dcInstance) + ->trans_handle_compression_rx; + + return icp_adf_pollQueue(trans_handle, responseQuota); +} + +CpaStatus +cpaDcDpPerformOpNow(CpaInstanceHandle dcInstance) +{ + icp_comms_trans_handle trans_handle = NULL; + + LAC_CHECK_NULL_PARAM(dcInstance); + SAL_CHECK_INSTANCE_TYPE(dcInstance, SAL_SERVICE_TYPE_COMPRESSION); + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(dcInstance); + + trans_handle = ((sal_compression_service_t *)dcInstance) + ->trans_handle_compression_tx; + + if (CPA_TRUE == icp_adf_queueDataToSend(trans_handle)) { + SalQatMsg_updateQueueTail(trans_handle); + } + + return CPA_STATUS_SUCCESS; +} Index: sys/dev/qat/qat_api/common/compression/dc_header_footer.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/compression/dc_header_footer.c @@ -0,0 +1,237 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_header_footer.c + * + * @ingroup Dc_DataCompression + * + * @description + * Implementation of the Data Compression header and footer operations. 
+ * + *****************************************************************************/ + +/* + ******************************************************************************* + * Include public/global header files + ******************************************************************************* + */ +#include "cpa.h" +#include "cpa_dc.h" +#include "icp_adf_init.h" + +/* + ******************************************************************************* + * Include private header files + ******************************************************************************* + */ +#include "dc_header_footer.h" +#include "dc_session.h" +#include "dc_datapath.h" + +CpaStatus +cpaDcGenerateHeader(CpaDcSessionHandle pSessionHandle, + CpaFlatBuffer *pDestBuff, + Cpa32U *count) +{ + dc_session_desc_t *pSessionDesc = NULL; + + LAC_CHECK_NULL_PARAM(pSessionHandle); + LAC_CHECK_NULL_PARAM(pDestBuff); + LAC_CHECK_NULL_PARAM(pDestBuff->pData); + LAC_CHECK_NULL_PARAM(count); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + + if (NULL == pSessionDesc) { + QAT_UTILS_LOG("Session handle not as expected\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_DC_DIR_DECOMPRESS == pSessionDesc->sessDirection) { + QAT_UTILS_LOG("Invalid session direction\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_DC_DEFLATE == pSessionDesc->compType) { + /* Adding a Gzip header */ + if (CPA_DC_CRC32 == pSessionDesc->checksumType) { + Cpa8U *pDest = pDestBuff->pData; + + if (pDestBuff->dataLenInBytes < DC_GZIP_HEADER_SIZE) { + QAT_UTILS_LOG( + "The dataLenInBytes of the dest buffer is too small.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + pDest[0] = DC_GZIP_ID1; /* ID1 */ + pDest[1] = DC_GZIP_ID2; /* ID2 */ + pDest[2] = + 0x08; /* CM = 8 denotes "deflate" compression */ + pDest[3] = 0x00; /* FLG = 0 denotes "No extra fields" */ + pDest[4] = 0x00; + pDest[5] = 0x00; + pDest[6] = 0x00; + pDest[7] = 0x00; /* MTIME = 0x00 means time stamp not + available */ + + /* XFL = 4 - compressor 
used fastest compression, */ + /* XFL = 2 - compressor used maximum compression. */ + pDest[8] = 0; + if (CPA_DC_L1 == pSessionDesc->compLevel) + pDest[8] = DC_GZIP_FAST_COMP; + else if (CPA_DC_L4 >= pSessionDesc->compLevel) + pDest[8] = DC_GZIP_MAX_COMP; + + pDest[9] = + DC_GZIP_FILESYSTYPE; /* OS = 0 means FAT filesystem + (MS-DOS, OS/2, NT/Win32), 3 - Unix */ + + /* Set to the number of bytes added to the buffer */ + *count = DC_GZIP_HEADER_SIZE; + } + + /* Adding a Zlib header */ + else if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + Cpa8U *pDest = pDestBuff->pData; + Cpa16U header = 0, level = 0; + + if (pDestBuff->dataLenInBytes < DC_ZLIB_HEADER_SIZE) { + QAT_UTILS_LOG( + "The dataLenInBytes of the dest buffer is too small.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + /* CMF = CM | CMINFO. + CM = 8 denotes "deflate" compression, + CMINFO = 7 indicates a 32K window size */ + /* Depending on the device, at compression levels above + L1, the + window size can be 8 or 16K bytes. + The file will decompress ok if a greater window size + is specified + in the header. 
*/ + header = + (DC_ZLIB_CM_DEFLATE + + (DC_32K_WINDOW_SIZE << DC_ZLIB_WINDOWSIZE_OFFSET)) + << LAC_NUM_BITS_IN_BYTE; + + switch (pSessionDesc->compLevel) { + case CPA_DC_L1: + level = DC_ZLIB_LEVEL_0; + break; + case CPA_DC_L2: + level = DC_ZLIB_LEVEL_1; + break; + case CPA_DC_L3: + level = DC_ZLIB_LEVEL_2; + break; + default: + level = DC_ZLIB_LEVEL_3; + } + + /* Bits 6 - 7: FLEVEL, compression level */ + header |= level << DC_ZLIB_FLEVEL_OFFSET; + + /* The header has to be a multiple of 31 */ + header += DC_ZLIB_HEADER_OFFSET - + (header % DC_ZLIB_HEADER_OFFSET); + + pDest[0] = (Cpa8U)(header >> LAC_NUM_BITS_IN_BYTE); + pDest[1] = (Cpa8U)header; + + /* Set to the number of bytes added to the buffer */ + *count = DC_ZLIB_HEADER_SIZE; + } + + /* If deflate but no checksum required */ + else { + *count = 0; + } + } else { + /* There is no header for other compressed data */ + *count = 0; + } + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcGenerateFooter(CpaDcSessionHandle pSessionHandle, + CpaFlatBuffer *pDestBuff, + CpaDcRqResults *pRes) +{ + dc_session_desc_t *pSessionDesc = NULL; + + LAC_CHECK_NULL_PARAM(pSessionHandle); + LAC_CHECK_NULL_PARAM(pDestBuff); + LAC_CHECK_NULL_PARAM(pDestBuff->pData); + LAC_CHECK_NULL_PARAM(pRes); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + + if (NULL == pSessionDesc) { + QAT_UTILS_LOG("Session handle not as expected\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_DC_DIR_DECOMPRESS == pSessionDesc->sessDirection) { + QAT_UTILS_LOG("Invalid session direction\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_DC_DEFLATE == pSessionDesc->compType) { + if (CPA_DC_CRC32 == pSessionDesc->checksumType) { + Cpa8U *pDest = pDestBuff->pData; + Cpa32U crc32 = pRes->checksum; + Cpa64U totalLenBeforeCompress = + pSessionDesc->cumulativeConsumedBytes; + + if (pDestBuff->dataLenInBytes < DC_GZIP_FOOTER_SIZE) { + QAT_UTILS_LOG( + "The dataLenInBytes of the dest buffer is too small.\n"); + return 
CPA_STATUS_INVALID_PARAM; + } + + /* Crc32 of the uncompressed data */ + pDest[0] = (Cpa8U)crc32; + pDest[1] = (Cpa8U)(crc32 >> LAC_NUM_BITS_IN_BYTE); + pDest[2] = (Cpa8U)(crc32 >> 2 * LAC_NUM_BITS_IN_BYTE); + pDest[3] = (Cpa8U)(crc32 >> 3 * LAC_NUM_BITS_IN_BYTE); + + /* Length of the uncompressed data */ + pDest[4] = (Cpa8U)totalLenBeforeCompress; + pDest[5] = (Cpa8U)(totalLenBeforeCompress >> + LAC_NUM_BITS_IN_BYTE); + pDest[6] = (Cpa8U)(totalLenBeforeCompress >> + 2 * LAC_NUM_BITS_IN_BYTE); + pDest[7] = (Cpa8U)(totalLenBeforeCompress >> + 3 * LAC_NUM_BITS_IN_BYTE); + + /* Increment produced by the number of bytes added to + * the buffer */ + pRes->produced += DC_GZIP_FOOTER_SIZE; + } else if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + Cpa8U *pDest = pDestBuff->pData; + Cpa32U adler32 = pRes->checksum; + + if (pDestBuff->dataLenInBytes < DC_ZLIB_FOOTER_SIZE) { + QAT_UTILS_LOG( + "The dataLenInBytes of the dest buffer is too small.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Adler32 of the uncompressed data */ + pDest[0] = (Cpa8U)(adler32 >> 3 * LAC_NUM_BITS_IN_BYTE); + pDest[1] = (Cpa8U)(adler32 >> 2 * LAC_NUM_BITS_IN_BYTE); + pDest[2] = (Cpa8U)(adler32 >> LAC_NUM_BITS_IN_BYTE); + pDest[3] = (Cpa8U)adler32; + + /* Increment produced by the number of bytes added to + * the buffer */ + pRes->produced += DC_ZLIB_FOOTER_SIZE; + } + } + + return CPA_STATUS_SUCCESS; +} Index: sys/dev/qat/qat_api/common/compression/dc_session.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/compression/dc_session.c @@ -0,0 +1,957 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_session.c + * + * @ingroup Dc_DataCompression + * + * @description + * Implementation of the Data Compression session operations. 
+ * + *****************************************************************************/ + +/* + ******************************************************************************* + * Include public/global header files + ******************************************************************************* + */ +#include "cpa.h" +#include "cpa_dc.h" + +#include "icp_qat_fw.h" +#include "icp_qat_fw_comp.h" +#include "icp_qat_hw.h" + +/* + ******************************************************************************* + * Include private header files + ******************************************************************************* + */ +#include "dc_session.h" +#include "dc_datapath.h" +#include "lac_mem_pools.h" +#include "sal_types_compression.h" +#include "lac_buffer_desc.h" +#include "sal_service_state.h" +#include "sal_qat_cmn_msg.h" + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Check that pSessionData is valid + * + * @description + * Check that all the parameters defined in the pSessionData are valid + * + * @param[in] pSessionData Pointer to a user instantiated structure + * containing session data + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_FAIL Function failed to find device + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * @retval CPA_STATUS_UNSUPPORTED Unsupported algorithm/feature + * + *****************************************************************************/ +static CpaStatus +dcCheckSessionData(const CpaDcSessionSetupData *pSessionData, + CpaInstanceHandle dcInstance) +{ + CpaDcInstanceCapabilities instanceCapabilities = { 0 }; + + cpaDcQueryCapabilities(dcInstance, &instanceCapabilities); + + if ((pSessionData->compLevel < CPA_DC_L1) || + (pSessionData->compLevel > CPA_DC_L9)) { + QAT_UTILS_LOG("Invalid compLevel value\n"); + return CPA_STATUS_INVALID_PARAM; + } + if ((pSessionData->autoSelectBestHuffmanTree < 
CPA_DC_ASB_DISABLED) || + (pSessionData->autoSelectBestHuffmanTree > + CPA_DC_ASB_UNCOMP_STATIC_DYNAMIC_WITH_NO_HDRS)) { + QAT_UTILS_LOG("Invalid autoSelectBestHuffmanTree value\n"); + return CPA_STATUS_INVALID_PARAM; + } + if (pSessionData->compType != CPA_DC_DEFLATE) { + QAT_UTILS_LOG("Invalid compType value\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((pSessionData->huffType < CPA_DC_HT_STATIC) || + (pSessionData->huffType > CPA_DC_HT_FULL_DYNAMIC) || + (CPA_DC_HT_PRECOMP == pSessionData->huffType)) { + QAT_UTILS_LOG("Invalid huffType value\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((pSessionData->sessDirection < CPA_DC_DIR_COMPRESS) || + (pSessionData->sessDirection > CPA_DC_DIR_COMBINED)) { + QAT_UTILS_LOG("Invalid sessDirection value\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((pSessionData->sessState < CPA_DC_STATEFUL) || + (pSessionData->sessState > CPA_DC_STATELESS)) { + QAT_UTILS_LOG("Invalid sessState value\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((pSessionData->checksum < CPA_DC_NONE) || + (pSessionData->checksum > CPA_DC_ADLER32)) { + QAT_UTILS_LOG("Invalid checksum value\n"); + return CPA_STATUS_INVALID_PARAM; + } + + return CPA_STATUS_SUCCESS; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Populate the compression hardware block + * + * @description + * This function will populate the compression hardware block and update + * the size in bytes of the block + * + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in] pCompConfig Pointer to slice config word + * @param[in] compDecomp Direction of the operation + * @param[in] enableDmm Delayed Match Mode + * + *****************************************************************************/ +static void +dcCompHwBlockPopulate(dc_session_desc_t *pSessionDesc, + icp_qat_hw_compression_config_t *pCompConfig, + dc_request_dir_t compDecomp, + 
icp_qat_hw_compression_delayed_match_t enableDmm) +{ + icp_qat_hw_compression_direction_t dir = + ICP_QAT_HW_COMPRESSION_DIR_COMPRESS; + icp_qat_hw_compression_algo_t algo = + ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE; + icp_qat_hw_compression_depth_t depth = ICP_QAT_HW_COMPRESSION_DEPTH_1; + icp_qat_hw_compression_file_type_t filetype = + ICP_QAT_HW_COMPRESSION_FILE_TYPE_0; + + /* Set the direction */ + if (DC_COMPRESSION_REQUEST == compDecomp) { + dir = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS; + } else { + dir = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS; + } + + if (CPA_DC_DEFLATE == pSessionDesc->compType) { + algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE; + } else { + QAT_UTILS_LOG("Algorithm not supported for Compression\n"); + } + + /* Set the depth */ + if (DC_DECOMPRESSION_REQUEST == compDecomp) { + depth = ICP_QAT_HW_COMPRESSION_DEPTH_1; + } else { + switch (pSessionDesc->compLevel) { + case CPA_DC_L1: + depth = ICP_QAT_HW_COMPRESSION_DEPTH_1; + break; + case CPA_DC_L2: + depth = ICP_QAT_HW_COMPRESSION_DEPTH_4; + break; + case CPA_DC_L3: + depth = ICP_QAT_HW_COMPRESSION_DEPTH_8; + break; + default: + depth = ICP_QAT_HW_COMPRESSION_DEPTH_16; + } + } + + /* The file type is set to ICP_QAT_HW_COMPRESSION_FILE_TYPE_0. 
The other + * modes will be used in the future for precompiled huffman trees */ + filetype = ICP_QAT_HW_COMPRESSION_FILE_TYPE_0; + + pCompConfig->val = ICP_QAT_HW_COMPRESSION_CONFIG_BUILD( + dir, enableDmm, algo, depth, filetype); + + pCompConfig->reserved = 0; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Populate the compression content descriptor + * + * @description + * This function will populate the compression content descriptor + * + * @param[in] pService Pointer to the service + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in] contextBufferAddrPhys Physical address of the context buffer + * @param[out] pMsg Pointer to the compression message + * @param[in] nextSlice Next slice + * @param[in] compDecomp Direction of the operation + * + *****************************************************************************/ +static void +dcCompContentDescPopulate(sal_compression_service_t *pService, + dc_session_desc_t *pSessionDesc, + CpaPhysicalAddr contextBufferAddrPhys, + icp_qat_fw_comp_req_t *pMsg, + icp_qat_fw_slice_t nextSlice, + dc_request_dir_t compDecomp) +{ + + icp_qat_fw_comp_cd_hdr_t *pCompControlBlock = NULL; + icp_qat_hw_compression_config_t *pCompConfig = NULL; + CpaBoolean bankEnabled = CPA_FALSE; + + pCompControlBlock = (icp_qat_fw_comp_cd_hdr_t *)&(pMsg->comp_cd_ctrl); + pCompConfig = + (icp_qat_hw_compression_config_t *)(pMsg->cd_pars.sl + .comp_slice_cfg_word); + + ICP_QAT_FW_COMN_NEXT_ID_SET(pCompControlBlock, nextSlice); + ICP_QAT_FW_COMN_CURR_ID_SET(pCompControlBlock, ICP_QAT_FW_SLICE_COMP); + + pCompControlBlock->comp_cfg_offset = 0; + + if ((CPA_DC_STATEFUL == pSessionDesc->sessState) && + (CPA_DC_DEFLATE == pSessionDesc->compType) && + (DC_DECOMPRESSION_REQUEST == compDecomp)) { + /* Enable A, B, C, D, and E (CAMs). 
*/ + pCompControlBlock->ram_bank_flags = + ICP_QAT_FW_COMP_RAM_FLAGS_BUILD( + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */ + ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank E */ + ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank D */ + ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank C */ + ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank B */ + ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */ + bankEnabled = CPA_TRUE; + } else { + /* Disable all banks */ + pCompControlBlock->ram_bank_flags = + ICP_QAT_FW_COMP_RAM_FLAGS_BUILD( + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank E */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank D */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank C */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank B */ + ICP_QAT_FW_COMP_BANK_DISABLED); /* Bank A */ + } + + if (DC_COMPRESSION_REQUEST == compDecomp) { + LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + pService->generic_service_info, + pCompControlBlock->comp_state_addr, + pSessionDesc->stateRegistersComp); + } else { + LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + pService->generic_service_info, + pCompControlBlock->comp_state_addr, + pSessionDesc->stateRegistersDecomp); + } + + if (CPA_TRUE == bankEnabled) { + pCompControlBlock->ram_banks_addr = contextBufferAddrPhys; + } else { + pCompControlBlock->ram_banks_addr = 0; + } + + pCompControlBlock->resrvd = 0; + + /* Populate Compression Hardware Setup Block */ + dcCompHwBlockPopulate(pSessionDesc, + pCompConfig, + compDecomp, + pService->comp_device_data.enableDmm); +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Populate the translator content descriptor + * + * @description + * This function will populate the 
translator content descriptor + * + * @param[out] pMsg Pointer to the compression message + * @param[in] nextSlice Next slice + * + *****************************************************************************/ +static void +dcTransContentDescPopulate(icp_qat_fw_comp_req_t *pMsg, + icp_qat_fw_slice_t nextSlice) +{ + + icp_qat_fw_xlt_cd_hdr_t *pTransControlBlock = NULL; + pTransControlBlock = (icp_qat_fw_xlt_cd_hdr_t *)&(pMsg->u2.xlt_cd_ctrl); + + ICP_QAT_FW_COMN_NEXT_ID_SET(pTransControlBlock, nextSlice); + ICP_QAT_FW_COMN_CURR_ID_SET(pTransControlBlock, ICP_QAT_FW_SLICE_XLAT); + + pTransControlBlock->resrvd1 = 0; + pTransControlBlock->resrvd2 = 0; + pTransControlBlock->resrvd3 = 0; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Get the context size and the history size + * + * @description + * This function will get the size of the context buffer and the history + * buffer. The history buffer is a subset of the context buffer and its + * size is needed for stateful compression. 
+ + * @param[in] dcInstance DC Instance Handle + * + * @param[in] pSessionData Pointer to a user instantiated + * structure containing session data + * @param[out] pContextSize Pointer to the context size + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * + * + *****************************************************************************/ +static CpaStatus +dcGetContextSize(CpaInstanceHandle dcInstance, + CpaDcSessionSetupData *pSessionData, + Cpa32U *pContextSize) +{ + sal_compression_service_t *pCompService = NULL; + + pCompService = (sal_compression_service_t *)dcInstance; + + *pContextSize = 0; + if ((CPA_DC_STATEFUL == pSessionData->sessState) && + (CPA_DC_DEFLATE == pSessionData->compType) && + (CPA_DC_DIR_COMPRESS != pSessionData->sessDirection)) { + *pContextSize = + pCompService->comp_device_data.inflateContextSize; + } + return CPA_STATUS_SUCCESS; +} + +CpaStatus +dcInitSession(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaDcSessionSetupData *pSessionData, + CpaBufferList *pContextBuffer, + CpaDcCallbackFn callbackFn) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_compression_service_t *pService = NULL; + icp_qat_fw_comp_req_t *pReqCache = NULL; + dc_session_desc_t *pSessionDesc = NULL; + CpaPhysicalAddr contextAddrPhys = 0; + CpaPhysicalAddr physAddress = 0; + CpaPhysicalAddr physAddressAligned = 0; + Cpa32U minContextSize = 0, historySize = 0; + Cpa32U rpCmdFlags = 0; + icp_qat_fw_serv_specif_flags cmdFlags = 0; + Cpa8U secureRam = ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF; + Cpa8U sessType = ICP_QAT_FW_COMP_STATELESS_SESSION; + Cpa8U autoSelectBest = ICP_QAT_FW_COMP_NOT_AUTO_SELECT_BEST; + Cpa8U enhancedAutoSelectBest = ICP_QAT_FW_COMP_NOT_ENH_AUTO_SELECT_BEST; + Cpa8U disableType0EnhancedAutoSelectBest = + ICP_QAT_FW_COMP_NOT_DISABLE_TYPE0_ENH_AUTO_SELECT_BEST; + icp_qat_fw_la_cmd_id_t dcCmdId = + (icp_qat_fw_la_cmd_id_t)ICP_QAT_FW_COMP_CMD_STATIC; + icp_qat_fw_comn_flags cmnRequestFlags = 0; + 
dc_integrity_crc_fw_t *pDataIntegrityCrcs = NULL; + + cmnRequestFlags = + ICP_QAT_FW_COMN_FLAGS_BUILD(DC_DEFAULT_QAT_PTR_TYPE, + QAT_COMN_CD_FLD_TYPE_16BYTE_DATA); + + pService = (sal_compression_service_t *)dcInstance; + + secureRam = pService->comp_device_data.useDevRam; + + LAC_CHECK_NULL_PARAM(pSessionHandle); + LAC_CHECK_NULL_PARAM(pSessionData); + + /* Check that the parameters defined in the pSessionData are valid for + * the + * device */ + if (CPA_STATUS_SUCCESS != + dcCheckSessionData(pSessionData, dcInstance)) { + return CPA_STATUS_INVALID_PARAM; + } + + if ((CPA_DC_STATEFUL == pSessionData->sessState) && + (CPA_DC_DIR_DECOMPRESS != pSessionData->sessDirection)) { + QAT_UTILS_LOG("Stateful sessions are not supported.\n"); + return CPA_STATUS_UNSUPPORTED; + } + + if (CPA_DC_HT_FULL_DYNAMIC == pSessionData->huffType) { + /* Test if DRAM is available for the intermediate buffers */ + if ((NULL == pService->pInterBuffPtrsArray) && + (0 == pService->pInterBuffPtrsArrayPhyAddr)) { + if (CPA_DC_ASB_STATIC_DYNAMIC == + pSessionData->autoSelectBestHuffmanTree) { + /* Define the Huffman tree as static */ + pSessionData->huffType = CPA_DC_HT_STATIC; + } else { + QAT_UTILS_LOG( + "No buffer defined for this instance - see cpaDcStartInstance.\n"); + return CPA_STATUS_RESOURCE; + } + } + } + + if ((CPA_DC_STATEFUL == pSessionData->sessState) && + (CPA_DC_DEFLATE == pSessionData->compType)) { + /* Get the size of the context buffer */ + status = + dcGetContextSize(dcInstance, pSessionData, &minContextSize); + + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG( + "Unable to get the context size of the session.\n"); + return CPA_STATUS_FAIL; + } + + /* If the minContextSize is zero it means we will not save or + * restore + * any history */ + if (0 != minContextSize) { + Cpa64U contextBuffSize = 0; + + LAC_CHECK_NULL_PARAM(pContextBuffer); + + if (LacBuffDesc_BufferListVerify( + pContextBuffer, + &contextBuffSize, + LAC_NO_ALIGNMENT_SHIFT) != CPA_STATUS_SUCCESS) { + 
return CPA_STATUS_INVALID_PARAM; + } + + /* Ensure that the context buffer size is greater or + * equal + * to minContextSize */ + if (contextBuffSize < minContextSize) { + QAT_UTILS_LOG( + "Context buffer size should be greater or equal to %d.\n", + minContextSize); + return CPA_STATUS_INVALID_PARAM; + } + } + } + + /* Re-align the session structure to 64 byte alignment */ + physAddress = + LAC_OS_VIRT_TO_PHYS_EXTERNAL(pService->generic_service_info, + (Cpa8U *)pSessionHandle + + sizeof(void *)); + + if (physAddress == 0) { + QAT_UTILS_LOG( + "Unable to get the physical address of the session.\n"); + return CPA_STATUS_FAIL; + } + + physAddressAligned = + (CpaPhysicalAddr)LAC_ALIGN_POW2_ROUNDUP(physAddress, + LAC_64BYTE_ALIGNMENT); + + pSessionDesc = (dc_session_desc_t *) + /* Move the session pointer by the physical offset + between aligned and unaligned memory */ + ((Cpa8U *)pSessionHandle + sizeof(void *) + + (physAddressAligned - physAddress)); + + /* Save the aligned pointer in the first bytes (size of LAC_ARCH_UINT) + * of the session memory */ + *((LAC_ARCH_UINT *)pSessionHandle) = (LAC_ARCH_UINT)pSessionDesc; + + /* Zero the compression session */ + LAC_OS_BZERO(pSessionDesc, sizeof(dc_session_desc_t)); + + /* Write the buffer descriptor for context/history */ + if (0 != minContextSize) { + status = LacBuffDesc_BufferListDescWrite( + pContextBuffer, + &contextAddrPhys, + CPA_FALSE, + &(pService->generic_service_info)); + + if (status != CPA_STATUS_SUCCESS) { + return status; + } + + pSessionDesc->pContextBuffer = pContextBuffer; + pSessionDesc->historyBuffSize = historySize; + } + + pSessionDesc->cumulativeConsumedBytes = 0; + + /* Initialise pSessionDesc */ + pSessionDesc->requestType = DC_REQUEST_FIRST; + pSessionDesc->huffType = pSessionData->huffType; + pSessionDesc->compType = pSessionData->compType; + pSessionDesc->checksumType = pSessionData->checksum; + pSessionDesc->autoSelectBestHuffmanTree = + pSessionData->autoSelectBestHuffmanTree; + 
pSessionDesc->sessDirection = pSessionData->sessDirection; + pSessionDesc->sessState = pSessionData->sessState; + pSessionDesc->compLevel = pSessionData->compLevel; + pSessionDesc->isDcDp = CPA_FALSE; + pSessionDesc->minContextSize = minContextSize; + pSessionDesc->isSopForCompressionProcessed = CPA_FALSE; + pSessionDesc->isSopForDecompressionProcessed = CPA_FALSE; + + if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + pSessionDesc->previousChecksum = 1; + } else { + pSessionDesc->previousChecksum = 0; + } + + if (CPA_DC_STATEFUL == pSessionData->sessState) { + /* Init the spinlock used to lock the access to the number of + * stateful + * in-flight requests */ + status = LAC_SPINLOCK_INIT(&(pSessionDesc->sessionLock)); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG( + "Spinlock init failed for sessionLock.\n"); + return CPA_STATUS_RESOURCE; + } + } + + /* For asynchronous - use the user supplied callback + * for synchronous - use the internal synchronous callback */ + pSessionDesc->pCompressionCb = ((void *)NULL != (void *)callbackFn) ? 
+ callbackFn : + LacSync_GenWakeupSyncCaller; + + /* Reset the pending callback counters */ + qatUtilsAtomicSet(0, &pSessionDesc->pendingStatelessCbCount); + qatUtilsAtomicSet(0, &pSessionDesc->pendingStatefulCbCount); + pSessionDesc->pendingDpStatelessCbCount = 0; + + if (CPA_DC_DIR_DECOMPRESS != pSessionData->sessDirection) { + if (CPA_DC_HT_FULL_DYNAMIC == pSessionData->huffType) { + /* Populate the compression section of the content + * descriptor */ + dcCompContentDescPopulate(pService, + pSessionDesc, + contextAddrPhys, + &(pSessionDesc->reqCacheComp), + ICP_QAT_FW_SLICE_XLAT, + DC_COMPRESSION_REQUEST); + + /* Populate the translator section of the content + * descriptor */ + dcTransContentDescPopulate( + &(pSessionDesc->reqCacheComp), + ICP_QAT_FW_SLICE_DRAM_WR); + + if (0 != pService->pInterBuffPtrsArrayPhyAddr) { + pReqCache = &(pSessionDesc->reqCacheComp); + + pReqCache->u1.xlt_pars.inter_buff_ptr = + pService->pInterBuffPtrsArrayPhyAddr; + } + } else { + dcCompContentDescPopulate(pService, + pSessionDesc, + contextAddrPhys, + &(pSessionDesc->reqCacheComp), + ICP_QAT_FW_SLICE_DRAM_WR, + DC_COMPRESSION_REQUEST); + } + } + + /* Populate the compression section of the content descriptor for + * the decompression case or combined */ + if (CPA_DC_DIR_COMPRESS != pSessionData->sessDirection) { + dcCompContentDescPopulate(pService, + pSessionDesc, + contextAddrPhys, + &(pSessionDesc->reqCacheDecomp), + ICP_QAT_FW_SLICE_DRAM_WR, + DC_DECOMPRESSION_REQUEST); + } + + if (CPA_DC_STATEFUL == pSessionData->sessState) { + sessType = ICP_QAT_FW_COMP_STATEFUL_SESSION; + + LAC_OS_BZERO(&pSessionDesc->stateRegistersComp, + sizeof(pSessionDesc->stateRegistersComp)); + + LAC_OS_BZERO(&pSessionDesc->stateRegistersDecomp, + sizeof(pSessionDesc->stateRegistersDecomp)); + } + + /* Get physical address of E2E CRC buffer */ + pSessionDesc->physDataIntegrityCrcs = (icp_qat_addr_width_t) + LAC_OS_VIRT_TO_PHYS_EXTERNAL(pService->generic_service_info, + 
&pSessionDesc->dataIntegrityCrcs); + if (0 == pSessionDesc->physDataIntegrityCrcs) { + QAT_UTILS_LOG( + "Unable to get the physical address of Data Integrity buffer.\n"); + return CPA_STATUS_FAIL; + } + /* Initialize default CRC parameters */ + pDataIntegrityCrcs = &pSessionDesc->dataIntegrityCrcs; + pDataIntegrityCrcs->crc32 = 0; + pDataIntegrityCrcs->adler32 = 1; + pDataIntegrityCrcs->oCrc32Cpr = DC_INVALID_CRC; + pDataIntegrityCrcs->iCrc32Cpr = DC_INVALID_CRC; + pDataIntegrityCrcs->oCrc32Xlt = DC_INVALID_CRC; + pDataIntegrityCrcs->iCrc32Xlt = DC_INVALID_CRC; + pDataIntegrityCrcs->xorFlags = DC_XOR_FLAGS_DEFAULT; + pDataIntegrityCrcs->crcPoly = DC_CRC_POLY_DEFAULT; + pDataIntegrityCrcs->xorOut = DC_XOR_OUT_DEFAULT; + + /* Initialise seed checksums */ + pSessionDesc->seedSwCrc.swCrcI = 0; + pSessionDesc->seedSwCrc.swCrcO = 0; + + /* Populate the cmdFlags */ + switch (pSessionDesc->autoSelectBestHuffmanTree) { + case CPA_DC_ASB_DISABLED: + break; + case CPA_DC_ASB_STATIC_DYNAMIC: + autoSelectBest = ICP_QAT_FW_COMP_AUTO_SELECT_BEST; + break; + case CPA_DC_ASB_UNCOMP_STATIC_DYNAMIC_WITH_STORED_HDRS: + autoSelectBest = ICP_QAT_FW_COMP_AUTO_SELECT_BEST; + enhancedAutoSelectBest = ICP_QAT_FW_COMP_ENH_AUTO_SELECT_BEST; + break; + case CPA_DC_ASB_UNCOMP_STATIC_DYNAMIC_WITH_NO_HDRS: + autoSelectBest = ICP_QAT_FW_COMP_AUTO_SELECT_BEST; + enhancedAutoSelectBest = ICP_QAT_FW_COMP_ENH_AUTO_SELECT_BEST; + disableType0EnhancedAutoSelectBest = + ICP_QAT_FW_COMP_DISABLE_TYPE0_ENH_AUTO_SELECT_BEST; + break; + default: + break; + } + + rpCmdFlags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD( + ICP_QAT_FW_COMP_SOP, + ICP_QAT_FW_COMP_EOP, + ICP_QAT_FW_COMP_BFINAL, + ICP_QAT_FW_COMP_NO_CNV, + ICP_QAT_FW_COMP_NO_CNV_RECOVERY, + ICP_QAT_FW_COMP_CRC_MODE_LEGACY); + + cmdFlags = + ICP_QAT_FW_COMP_FLAGS_BUILD(sessType, + autoSelectBest, + enhancedAutoSelectBest, + disableType0EnhancedAutoSelectBest, + secureRam); + + if (CPA_DC_DIR_DECOMPRESS != pSessionData->sessDirection) { + if 
(CPA_DC_HT_FULL_DYNAMIC == pSessionDesc->huffType) { + dcCmdId = (icp_qat_fw_la_cmd_id_t)( + ICP_QAT_FW_COMP_CMD_DYNAMIC); + } + + pReqCache = &(pSessionDesc->reqCacheComp); + pReqCache->comp_pars.req_par_flags = rpCmdFlags; + pReqCache->comp_pars.crc.legacy.initial_adler = 1; + pReqCache->comp_pars.crc.legacy.initial_crc32 = 0; + + /* Populate header of the common request message */ + SalQatMsg_CmnHdrWrite((icp_qat_fw_comn_req_t *)pReqCache, + ICP_QAT_FW_COMN_REQ_CPM_FW_COMP, + (uint8_t)dcCmdId, + cmnRequestFlags, + cmdFlags); + } + + if (CPA_DC_DIR_COMPRESS != pSessionData->sessDirection) { + dcCmdId = + (icp_qat_fw_la_cmd_id_t)(ICP_QAT_FW_COMP_CMD_DECOMPRESS); + pReqCache = &(pSessionDesc->reqCacheDecomp); + pReqCache->comp_pars.req_par_flags = rpCmdFlags; + pReqCache->comp_pars.crc.legacy.initial_adler = 1; + pReqCache->comp_pars.crc.legacy.initial_crc32 = 0; + + /* Populate header of the common request message */ + SalQatMsg_CmnHdrWrite((icp_qat_fw_comn_req_t *)pReqCache, + ICP_QAT_FW_COMN_REQ_CPM_FW_COMP, + (uint8_t)dcCmdId, + cmnRequestFlags, + cmdFlags); + } + + return status; +} + +CpaStatus +cpaDcInitSession(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaDcSessionSetupData *pSessionData, + CpaBufferList *pContextBuffer, + CpaDcCallbackFn callbackFn) +{ + CpaInstanceHandle insHandle = NULL; + sal_compression_service_t *pService = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + LAC_CHECK_INSTANCE_HANDLE(insHandle); + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + pService = (sal_compression_service_t *)insHandle; + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(pService); + + return dcInitSession(insHandle, + pSessionHandle, + pSessionData, + pContextBuffer, + callbackFn); +} + +CpaStatus +cpaDcResetSession(const CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle) +{ + 
CpaStatus status = CPA_STATUS_SUCCESS; + CpaInstanceHandle insHandle = NULL; + dc_session_desc_t *pSessionDesc = NULL; + Cpa64U numPendingStateless = 0; + Cpa64U numPendingStateful = 0; + icp_comms_trans_handle trans_handle = NULL; + LAC_CHECK_NULL_PARAM(pSessionHandle); + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + LAC_CHECK_NULL_PARAM(pSessionDesc); + + if (CPA_TRUE == pSessionDesc->isDcDp) { + insHandle = dcInstance; + } else { + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + } + LAC_CHECK_NULL_PARAM(insHandle); + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + /* Check if SAL is running otherwise return an error */ + SAL_RUNNING_CHECK(insHandle); + if (CPA_TRUE == pSessionDesc->isDcDp) { + trans_handle = ((sal_compression_service_t *)dcInstance) + ->trans_handle_compression_tx; + if (CPA_TRUE == icp_adf_queueDataToSend(trans_handle)) { + /* Process the remaining messages on the ring */ + SalQatMsg_updateQueueTail(trans_handle); + QAT_UTILS_LOG( + "There are remaining messages on the ring\n"); + return CPA_STATUS_RETRY; + } + + /* Check if there are stateless pending requests */ + if (0 != pSessionDesc->pendingDpStatelessCbCount) { + QAT_UTILS_LOG( + "There are %llu stateless DP requests pending.\n", + (unsigned long long) + pSessionDesc->pendingDpStatelessCbCount); + return CPA_STATUS_RETRY; + } + } else { + numPendingStateless = + qatUtilsAtomicGet(&(pSessionDesc->pendingStatelessCbCount)); + numPendingStateful = + qatUtilsAtomicGet(&(pSessionDesc->pendingStatefulCbCount)); + /* Check if there are stateless pending requests */ + if (0 != numPendingStateless) { + QAT_UTILS_LOG( + "There are %llu stateless requests pending.\n", + (unsigned long long)numPendingStateless); + return CPA_STATUS_RETRY; + } + /* Check if there are stateful pending requests */ + if (0 != numPendingStateful) { + QAT_UTILS_LOG( + "There are %llu stateful requests 
pending.\n", + (unsigned long long)numPendingStateful); + return CPA_STATUS_RETRY; + } + + /* Reset pSessionDesc */ + pSessionDesc->requestType = DC_REQUEST_FIRST; + pSessionDesc->cumulativeConsumedBytes = 0; + if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + pSessionDesc->previousChecksum = 1; + } else { + pSessionDesc->previousChecksum = 0; + } + } + /* Reset the pending callback counters */ + qatUtilsAtomicSet(0, &pSessionDesc->pendingStatelessCbCount); + qatUtilsAtomicSet(0, &pSessionDesc->pendingStatefulCbCount); + pSessionDesc->pendingDpStatelessCbCount = 0; + if (CPA_DC_STATEFUL == pSessionDesc->sessState) { + LAC_OS_BZERO(&pSessionDesc->stateRegistersComp, + sizeof(pSessionDesc->stateRegistersComp)); + LAC_OS_BZERO(&pSessionDesc->stateRegistersDecomp, + sizeof(pSessionDesc->stateRegistersDecomp)); + } + return status; +} + +CpaStatus +cpaDcRemoveSession(const CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaInstanceHandle insHandle = NULL; + dc_session_desc_t *pSessionDesc = NULL; + Cpa64U numPendingStateless = 0; + Cpa64U numPendingStateful = 0; + icp_comms_trans_handle trans_handle = NULL; + + LAC_CHECK_NULL_PARAM(pSessionHandle); + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + LAC_CHECK_NULL_PARAM(pSessionDesc); + + if (CPA_TRUE == pSessionDesc->isDcDp) { + insHandle = dcInstance; + } else { + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + } + + LAC_CHECK_NULL_PARAM(insHandle); + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + /* Check if SAL is running otherwise return an error */ + SAL_RUNNING_CHECK(insHandle); + + if (CPA_TRUE == pSessionDesc->isDcDp) { + trans_handle = ((sal_compression_service_t *)insHandle) + ->trans_handle_compression_tx; + + if (CPA_TRUE == icp_adf_queueDataToSend(trans_handle)) { + /* Process the remaining messages on the ring */ + 
SalQatMsg_updateQueueTail(trans_handle); + QAT_UTILS_LOG( + "There are remaining messages on the ring.\n"); + return CPA_STATUS_RETRY; + } + + /* Check if there are stateless pending requests */ + if (0 != pSessionDesc->pendingDpStatelessCbCount) { + QAT_UTILS_LOG( + "There are %llu stateless DP requests pending.\n", + (unsigned long long) + pSessionDesc->pendingDpStatelessCbCount); + return CPA_STATUS_RETRY; + } + } else { + numPendingStateless = + qatUtilsAtomicGet(&(pSessionDesc->pendingStatelessCbCount)); + numPendingStateful = + qatUtilsAtomicGet(&(pSessionDesc->pendingStatefulCbCount)); + + /* Check if there are stateless pending requests */ + if (0 != numPendingStateless) { + QAT_UTILS_LOG( + "There are %llu stateless requests pending.\n", + (unsigned long long)numPendingStateless); + status = CPA_STATUS_RETRY; + } + + /* Check if there are stateful pending requests */ + if (0 != numPendingStateful) { + QAT_UTILS_LOG( + "There are %llu stateful requests pending.\n", + (unsigned long long)numPendingStateful); + status = CPA_STATUS_RETRY; + } + if ((CPA_DC_STATEFUL == pSessionDesc->sessState) && + (CPA_STATUS_SUCCESS == status)) { + if (CPA_STATUS_SUCCESS != + LAC_SPINLOCK_DESTROY( + &(pSessionDesc->sessionLock))) { + QAT_UTILS_LOG( + "Failed to destroy session lock.\n"); + } + } + } + + return status; +} + +CpaStatus +dcGetSessionSize(CpaInstanceHandle dcInstance, + CpaDcSessionSetupData *pSessionData, + Cpa32U *pSessionSize, + Cpa32U *pContextSize) +{ + + CpaStatus status = CPA_STATUS_SUCCESS; + CpaInstanceHandle insHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + /* Check parameters */ + LAC_CHECK_NULL_PARAM(insHandle); + LAC_CHECK_NULL_PARAM(pSessionData); + LAC_CHECK_NULL_PARAM(pSessionSize); + + if (dcCheckSessionData(pSessionData, insHandle) != CPA_STATUS_SUCCESS) { + return CPA_STATUS_INVALID_PARAM; + } + + /* Get session size for session data */ + 
*pSessionSize = sizeof(dc_session_desc_t) + LAC_64BYTE_ALIGNMENT + + sizeof(LAC_ARCH_UINT); + + if (NULL != pContextSize) { + status = + dcGetContextSize(insHandle, pSessionData, pContextSize); + + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG( + "Unable to get the context size of the session.\n"); + return CPA_STATUS_FAIL; + } + } + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcGetSessionSize(CpaInstanceHandle dcInstance, + CpaDcSessionSetupData *pSessionData, + Cpa32U *pSessionSize, + Cpa32U *pContextSize) +{ + + LAC_CHECK_NULL_PARAM(pContextSize); + + return dcGetSessionSize(dcInstance, + pSessionData, + pSessionSize, + pContextSize); +} Index: sys/dev/qat/qat_api/common/compression/dc_stats.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/compression/dc_stats.c @@ -0,0 +1,90 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_stats.c + * + * @ingroup Dc_DataCompression + * + * @description + * Implementation of the Data Compression stats operations. 
+ * + *****************************************************************************/ + +/* + ******************************************************************************* + * Include public/global header files + ******************************************************************************* + */ +#include "cpa.h" +#include "cpa_dc.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +/* + ******************************************************************************* + * Include private header files + ******************************************************************************* + */ +#include "lac_common.h" +#include "icp_accel_devices.h" +#include "sal_statistics.h" +#include "dc_session.h" +#include "dc_datapath.h" +#include "lac_mem_pools.h" +#include "sal_service_state.h" +#include "sal_types_compression.h" +#include "dc_stats.h" + +CpaStatus +dcStatsInit(sal_compression_service_t *pService) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + pService->pCompStatsArr = + LAC_OS_MALLOC(COMPRESSION_NUM_STATS * sizeof(QatUtilsAtomic)); + + if (pService->pCompStatsArr == NULL) { + status = CPA_STATUS_RESOURCE; + } + + if (CPA_STATUS_SUCCESS == status) { + COMPRESSION_STATS_RESET(pService); + } + + return status; +} + +void +dcStatsFree(sal_compression_service_t *pService) +{ + if (NULL != pService->pCompStatsArr) { + LAC_OS_FREE(pService->pCompStatsArr); + } +} + +CpaStatus +cpaDcGetStats(CpaInstanceHandle dcInstance, CpaDcStats *pStatistics) +{ + sal_compression_service_t *pService = NULL; + CpaInstanceHandle insHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + pService = (sal_compression_service_t *)insHandle; + + LAC_CHECK_NULL_PARAM(insHandle); + LAC_CHECK_NULL_PARAM(pStatistics); + SAL_RUNNING_CHECK(insHandle); + + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + /* Retrieves the statistics for compression */ + 
COMPRESSION_STATS_GET(pStatistics, pService); + + return CPA_STATUS_SUCCESS; +} Index: sys/dev/qat/qat_api/common/compression/icp_sal_dc_err.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/compression/icp_sal_dc_err.c @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file icp_sal_dc_err.c + * + * @defgroup SalCommon + * + * @ingroup SalCommon + * + *****************************************************************************/ + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ +#include "cpa.h" +#include "icp_sal.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "dc_error_counter.h" + +Cpa64U +icp_sal_get_dc_error(Cpa8S dcError) +{ + return getDcErrorCounter(dcError); +} Index: sys/dev/qat/qat_api/common/compression/include/dc_datapath.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/compression/include/dc_datapath.h @@ -0,0 +1,186 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_datapath.h + * + * @ingroup Dc_DataCompression + * + * @description + * Definition of the Data Compression datapath parameters. 
+ * + *****************************************************************************/ +#ifndef DC_DATAPATH_H_ +#define DC_DATAPATH_H_ + +#define LAC_QAT_DC_REQ_SZ_LW 32 +#define LAC_QAT_DC_RESP_SZ_LW 8 + +/* Restriction on the source buffer size for compression due to the firmware + * processing */ +#define DC_SRC_BUFFER_MIN_SIZE (15) + +/* Restriction on the destination buffer size for compression due to + * the management of skid buffers in the firmware */ +#define DC_DEST_BUFFER_DYN_MIN_SIZE (128) +#define DC_DEST_BUFFER_STA_MIN_SIZE (64) +/* C62x and C3xxx pcie rev0 devices require an additional 32bytes */ +#define DC_DEST_BUFFER_STA_ADDITIONAL_SIZE (32) + +/* C4xxx device only requires 47 bytes */ +#define DC_DEST_BUFFER_MIN_SIZE (47) + +/* Minimum destination buffer size for decompression */ +#define DC_DEST_BUFFER_DEC_MIN_SIZE (1) + +/* Restriction on the source and destination buffer sizes for compression due + * to the firmware taking 32 bits parameters. The max size is 2^32-1 */ +#define DC_BUFFER_MAX_SIZE (0xFFFFFFFF) + +/* DC Source & Destination buffer type (FLAT/SGL) */ +#define DC_DEFAULT_QAT_PTR_TYPE QAT_COMN_PTR_TYPE_SGL +#define DC_DP_QAT_PTR_TYPE QAT_COMN_PTR_TYPE_FLAT + +/* Offset to first byte of Input Byte Counter (IBC) in state register */ +#define DC_STATE_IBC_OFFSET (8) +/* Size in bytes of input byte counter (IBC) in state register */ +#define DC_IBC_SIZE_IN_BYTES (4) + +/* Offset to first byte to CRC32 in state register */ +#define DC_STATE_CRC32_OFFSET (40) +/* Offset to first byte to output CRC32 in state register */ +#define DC_STATE_OUTPUT_CRC32_OFFSET (48) +/* Offset to first byte to input CRC32 in state register */ +#define DC_STATE_INPUT_CRC32_OFFSET (52) + +/* Offset to first byte of ADLER32 in state register */ +#define DC_STATE_ADLER32_OFFSET (48) + +/* 8 bit mask value */ +#define DC_8_BIT_MASK (0xff) + +/* 8 bit shift position */ +#define DC_8_BIT_SHIFT_POS (8) + +/* Size in bytes of checksum */ +#define 
DC_CHECKSUM_SIZE_IN_BYTES (4) + +/* Mask used to set the most significant bit to zero */ +#define DC_STATE_REGISTER_ZERO_MSB_MASK (0x7F) + +/* Mask used to keep only the most significant bit and set the others to zero */ +#define DC_STATE_REGISTER_KEEP_MSB_MASK (0x80) + +/* Compression state register word containing the parity bit */ +#define DC_STATE_REGISTER_PARITY_BIT_WORD (5) + +/* Location of the parity bit within the compression state register word */ +#define DC_STATE_REGISTER_PARITY_BIT (7) + +/* size which needs to be reserved before the results field to + * align the results field with the API struct */ +#define DC_API_ALIGNMENT_OFFSET (offsetof(CpaDcDpOpData, results)) + +/* Mask used to check the CompressAndVerify capability bit */ +#define DC_CNV_EXTENDED_CAPABILITY (0x01) + +/* Mask used to check the CompressAndVerifyAndRecover capability bit */ +#define DC_CNVNR_EXTENDED_CAPABILITY (0x100) + +/* Default values for CNV integrity checks, + * those are used to inform hardware of specifying CRC parameters to be used + * when calculating CRCs */ +#define DC_CRC_POLY_DEFAULT 0x04c11db7 +#define DC_XOR_FLAGS_DEFAULT 0xe0000 +#define DC_XOR_OUT_DEFAULT 0xffffffff +#define DC_INVALID_CRC 0x0 + +/** +******************************************************************************* +* @ingroup cpaDc Data Compression +* Compression cookie +* @description +* This cookie stores information for a particular compression perform op. +* This includes various user-supplied parameters for the operation which +* will be needed in our callback function. +* A pointer to this cookie is stored in the opaque data field of the QAT +* message so that it can be accessed in the asynchronous callback. +* @note +* The order of the parameters within this structure is important. It needs +* to match the order of the parameters in CpaDcDpOpData up to the +* pSessionHandle. This allows the correct processing of the callback. 
+*****************************************************************************/ +typedef struct dc_compression_cookie_s { + Cpa8U dcReqParamsBuffer[DC_API_ALIGNMENT_OFFSET]; + /**< Memory block - was previously reserved for request parameters. + * Now size maintained so following members align with API struct, + * but no longer used for request parameters */ + CpaDcRqResults reserved; + /**< This is reserved for results to correctly align the structure + * to match the one from the data plane API */ + CpaInstanceHandle dcInstance; + /**< Compression instance handle */ + CpaDcSessionHandle pSessionHandle; + /**< Pointer to the session handle */ + icp_qat_fw_comp_req_t request; + /**< Compression request */ + void *callbackTag; + /**< Opaque data supplied by the client */ + dc_session_desc_t *pSessionDesc; + /**< Pointer to the session descriptor */ + CpaDcFlush flushFlag; + /**< Flush flag */ + CpaDcOpData *pDcOpData; + /**< struct containing flags and CRC related data for this session */ + CpaDcRqResults *pResults; + /**< Pointer to result buffer holding consumed and produced data */ + Cpa32U srcTotalDataLenInBytes; + /**< Total length of the source data */ + Cpa32U dstTotalDataLenInBytes; + /**< Total length of the destination data */ + dc_request_dir_t compDecomp; + /**< Used to know whether the request is compression or decompression. + * Useful when defining the session as combined */ + CpaBufferList *pUserSrcBuff; + /**< virtual userspace ptr to source SGL */ + CpaBufferList *pUserDestBuff; + /**< virtual userspace ptr to destination SGL */ +} dc_compression_cookie_t; + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Callback function called for compression and decompression requests in + * asynchronous mode + * + * @description + * Called to process compression and decompression response messages. 
This + * callback will check for errors, update the statistics and will call the + * user callback + * + * @param[in] pRespMsg Response message + * + *****************************************************************************/ +void dcCompression_ProcessCallback(void *pRespMsg); + +/** +***************************************************************************** +* @ingroup Dc_DataCompression +* Describes CNV and CNVNR modes +* +* @description +* This enum is used to indicate the CNV modes. +* +*****************************************************************************/ +typedef enum dc_cnv_mode_s { + DC_NO_CNV = 0, + /* CNV = FALSE, CNVNR = FALSE */ + DC_CNV, + /* CNV = TRUE, CNVNR = FALSE */ + DC_CNVNR, + /* CNV = TRUE, CNVNR = TRUE */ +} dc_cnv_mode_t; + +#endif /* DC_DATAPATH_H_ */ Index: sys/dev/qat/qat_api/common/compression/include/dc_error_counter.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/compression/include/dc_error_counter.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_error_counter.h + * + * @ingroup Dc_DataCompression + * + * @description + * Definition of the Data Compression Error Counter parameters. 
+ * + *****************************************************************************/ +#ifndef DC_ERROR_COUNTER_H +#define DC_ERROR_COUNTER_H + +#include "cpa_types.h" +#include "cpa_dc.h" + +#define MAX_DC_ERROR_TYPE 20 + +void dcErrorLog(CpaDcReqStatus dcError); +Cpa64U getDcErrorCounter(CpaDcReqStatus dcError); + +#endif /* DC_ERROR_COUNTER_H */ Index: sys/dev/qat/qat_api/common/compression/include/dc_header_footer.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/compression/include/dc_header_footer.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_header_footer.h + * + * @ingroup Dc_DataCompression + * + * @description + * Definition of the Data Compression header and footer parameters. + * + *****************************************************************************/ +#ifndef DC_HEADER_FOOTER_H_ +#define DC_HEADER_FOOTER_H_ + +/* Header and footer sizes for Zlib and Gzip */ +#define DC_ZLIB_HEADER_SIZE (2) +#define DC_GZIP_HEADER_SIZE (10) +#define DC_ZLIB_FOOTER_SIZE (4) +#define DC_GZIP_FOOTER_SIZE (8) + +/* Values used to build the headers for Zlib and Gzip */ +#define DC_GZIP_ID1 (0x1f) +#define DC_GZIP_ID2 (0x8b) +#define DC_GZIP_FILESYSTYPE (0x03) +#define DC_ZLIB_WINDOWSIZE_OFFSET (4) +#define DC_ZLIB_FLEVEL_OFFSET (6) +#define DC_ZLIB_HEADER_OFFSET (31) + +/* Compression level for Zlib */ +#define DC_ZLIB_LEVEL_0 (0) +#define DC_ZLIB_LEVEL_1 (1) +#define DC_ZLIB_LEVEL_2 (2) +#define DC_ZLIB_LEVEL_3 (3) + +/* CM parameter for Zlib */ +#define DC_ZLIB_CM_DEFLATE (8) + +/* Type of Gzip compression */ +#define DC_GZIP_FAST_COMP (4) +#define DC_GZIP_MAX_COMP (2) + +#endif /* DC_HEADER_FOOTER_H_ */ Index: sys/dev/qat/qat_api/common/compression/include/dc_session.h 
=================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/compression/include/dc_session.h @@ -0,0 +1,278 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_session.h + * + * @ingroup Dc_DataCompression + * + * @description + * Definition of the Data Compression session parameters. + * + *****************************************************************************/ +#ifndef DC_SESSION_H +#define DC_SESSION_H + +#include "cpa_dc_dp.h" +#include "icp_qat_fw_comp.h" +#include "sal_qat_cmn_msg.h" + +/* Maximum number of intermediate buffers SGLs for devices + * with a maximum of 6 compression slices */ +#define DC_QAT_MAX_NUM_INTER_BUFFERS_6COMP_SLICES (12) + +/* Maximum number of intermediate buffers SGLs for devices + * with a maximum of 10 max compression slices */ +#define DC_QAT_MAX_NUM_INTER_BUFFERS_10COMP_SLICES (20) + +/* Maximum number of intermediate buffers SGLs for devices + * with a maximum of 24 max compression slices and 32 MEs */ +#define DC_QAT_MAX_NUM_INTER_BUFFERS_24COMP_SLICES (64) + +/* Maximum size of the state registers 64 bytes */ +#define DC_QAT_STATE_REGISTERS_MAX_SIZE (64) + +/* Size of the history window. + * Base 2 logarithm of maximum window size minus 8 */ +#define DC_8K_WINDOW_SIZE (5) +#define DC_16K_WINDOW_SIZE (6) +#define DC_32K_WINDOW_SIZE (7) + +/* Context size */ +#define DC_DEFLATE_MAX_CONTEXT_SIZE (49152) +#define DC_INFLATE_CONTEXT_SIZE (36864) + +#define DC_DEFLATE_EH_MAX_CONTEXT_SIZE (65536) +#define DC_DEFLATE_EH_MIN_CONTEXT_SIZE (49152) +#define DC_INFLATE_EH_CONTEXT_SIZE (34032) + +/* Retrieve the session descriptor pointer from the session context structure + * that the user allocates. 
The pointer to the internally realigned address + * is stored at the start of the session context that the user allocates */ +#define DC_SESSION_DESC_FROM_CTX_GET(pSession) \ + (dc_session_desc_t *)(*(LAC_ARCH_UINT *)pSession) + +/* Maximum size for the compression part of the content descriptor */ +#define DC_QAT_COMP_CONTENT_DESC_SIZE sizeof(icp_qat_fw_comp_cd_hdr_t) + +/* Maximum size for the translator part of the content descriptor */ +#define DC_QAT_TRANS_CONTENT_DESC_SIZE \ + (sizeof(icp_qat_fw_xlt_cd_hdr_t) + DC_QAT_MAX_TRANS_SETUP_BLK_SZ) + +/* Maximum size of the decompression content descriptor */ +#define DC_QAT_CONTENT_DESC_DECOMP_MAX_SIZE \ + LAC_ALIGN_POW2_ROUNDUP(DC_QAT_COMP_CONTENT_DESC_SIZE, \ + (1 << LAC_64BYTE_ALIGNMENT_SHIFT)) + +/* Maximum size of the compression content descriptor */ +#define DC_QAT_CONTENT_DESC_COMP_MAX_SIZE \ + LAC_ALIGN_POW2_ROUNDUP(DC_QAT_COMP_CONTENT_DESC_SIZE + \ + DC_QAT_TRANS_CONTENT_DESC_SIZE, \ + (1 << LAC_64BYTE_ALIGNMENT_SHIFT)) + +/* Direction of the request */ +typedef enum dc_request_dir_e { + DC_COMPRESSION_REQUEST = 1, + DC_DECOMPRESSION_REQUEST +} dc_request_dir_t; + +/* Type of the compression request */ +typedef enum dc_request_type_e { + DC_REQUEST_FIRST = 1, + DC_REQUEST_SUBSEQUENT +} dc_request_type_t; + +typedef enum dc_block_type_e { + DC_CLEARTEXT_TYPE = 0, + DC_STATIC_TYPE, + DC_DYNAMIC_TYPE +} dc_block_type_t; + +/* Internal data structure supporting end to end data integrity checks. 
*/ +typedef struct dc_integrity_crc_fw_s { + Cpa32U crc32; + /* CRC32 checksum returned for compressed data */ + Cpa32U adler32; + /* ADLER32 checksum returned for compressed data */ + Cpa32U oCrc32Cpr; + /* CRC32 checksum returned for data output by compression accelerator */ + Cpa32U iCrc32Cpr; + /* CRC32 checksum returned for input data to compression accelerator */ + Cpa32U oCrc32Xlt; + /* CRC32 checksum returned for data output by translator accelerator */ + Cpa32U iCrc32Xlt; + /* CRC32 checksum returned for input data to translator accelerator */ + Cpa32U xorFlags; + /* Initialise transactor pCRC controls in state register */ + Cpa32U crcPoly; + /* CRC32 polynomial used by hardware */ + Cpa32U xorOut; + /* CRC32 from XOR stage (Input CRC is xor'ed with value in the state) */ + Cpa32U deflateBlockType; + /* Bit 1 - Bit 0 + * 0 0 -> RAW DATA + Deflate header. + * This will not produced any CRC check because + * the output will not come from the slices. + * It will be a simple copy from input to output + * buffers list. + * 0 1 -> Static deflate block type + * 1 0 -> Dynamic deflate block type + * 1 1 -> Invalid type */ +} dc_integrity_crc_fw_t; + +typedef struct dc_sw_checksums_s { + Cpa32U swCrcI; + Cpa32U swCrcO; +} dc_sw_checksums_t; + +/* Session descriptor structure for compression */ +typedef struct dc_session_desc_s { + Cpa8U stateRegistersComp[DC_QAT_STATE_REGISTERS_MAX_SIZE]; + /**< State registers for compression */ + Cpa8U stateRegistersDecomp[DC_QAT_STATE_REGISTERS_MAX_SIZE]; + /**< State registers for decompression */ + icp_qat_fw_comp_req_t reqCacheComp; + /**< Cache as much as possible of the compression request in a pre-built + * request */ + icp_qat_fw_comp_req_t reqCacheDecomp; + /**< Cache as much as possible of the decompression request in a + * pre-built + * request */ + dc_request_type_t requestType; + /**< Type of the compression request. 
As stateful mode do not support + * more + * than one in-flight request there is no need to use spinlocks */ + dc_request_type_t previousRequestType; + /**< Type of the previous compression request. Used in cases where there + * the + * stateful operation needs to be resubmitted */ + CpaDcHuffType huffType; + /**< Huffman tree type */ + CpaDcCompType compType; + /**< Compression type */ + CpaDcChecksum checksumType; + /**< Type of checksum */ + CpaDcAutoSelectBest autoSelectBestHuffmanTree; + /**< Indicates if the implementation selects the best Huffman encoding + */ + CpaDcSessionDir sessDirection; + /**< Session direction */ + CpaDcSessionState sessState; + /**< Session state */ + Cpa32U deflateWindowSize; + /**< Window size */ + CpaDcCompLvl compLevel; + /**< Compression level */ + CpaDcCallbackFn pCompressionCb; + /**< Callback function defined for the traditional compression session + */ + QatUtilsAtomic pendingStatelessCbCount; + /**< Keeps track of number of pending requests on stateless session */ + QatUtilsAtomic pendingStatefulCbCount; + /**< Keeps track of number of pending requests on stateful session */ + Cpa64U pendingDpStatelessCbCount; + /**< Keeps track of number of data plane pending requests on stateless + * session */ + struct mtx sessionLock; + /**< Lock used to provide exclusive access for number of stateful + * in-flight + * requests update */ + CpaBoolean isDcDp; + /**< Indicates if the data plane API is used */ + Cpa32U minContextSize; + /**< Indicates the minimum size required to allocate the context buffer + */ + CpaBufferList *pContextBuffer; + /**< Context buffer */ + Cpa32U historyBuffSize; + /**< Size of the history buffer */ + Cpa64U cumulativeConsumedBytes; + /**< Cumulative amount of consumed bytes. Used to build the footer in + * the + * stateful case */ + Cpa32U previousChecksum; + /**< Save the previous value of the checksum. 
Used to process zero byte + * stateful compression or decompression requests */ + CpaBoolean isSopForCompressionProcessed; + /**< Indicates whether a Compression Request is received in this session + */ + CpaBoolean isSopForDecompressionProcessed; + /**< Indicates whether a Decompression Request is received in this + * session + */ + /**< Data integrity table */ + dc_integrity_crc_fw_t dataIntegrityCrcs; + /**< Physical address of Data integrity buffer */ + CpaPhysicalAddr physDataIntegrityCrcs; + /* Seed checksums structure used to calculate software calculated + * checksums. + */ + dc_sw_checksums_t seedSwCrc; + /* Driver calculated integrity software CRC */ + dc_sw_checksums_t integritySwCrc; +} dc_session_desc_t; + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Initialise a compression session + * + * @description + * This function will initialise a compression session + * + * @param[in] dcInstance Instance handle derived from discovery + * functions + * @param[in,out] pSessionHandle Pointer to a session handle + * @param[in,out] pSessionData Pointer to a user instantiated structure + * containing session data + * @param[in] pContextBuffer Pointer to context buffer + * + * @param[in] callbackFn For synchronous operation this callback + * shall be a null pointer + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_FAIL Function failed + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * @retval CPA_STATUS_RESOURCE Error related to system resources + *****************************************************************************/ +CpaStatus dcInitSession(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaDcSessionSetupData *pSessionData, + CpaBufferList *pContextBuffer, + CpaDcCallbackFn callbackFn); + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * 
Get the size of the memory required to hold the session information + * + * @description + * This function will get the size of the memory required to hold the + * session information + * + * @param[in] dcInstance Instance handle derived from discovery + * functions + * @param[in] pSessionData Pointer to a user instantiated structure + * containing session data + * @param[out] pSessionSize On return, this parameter will be the size + * of the memory that will be + * required by cpaDcInitSession() for session + * data. + * @param[out] pContextSize On return, this parameter will be the size + * of the memory that will be required + * for context data. Context data is + * save/restore data including history and + * any implementation specific data that is + * required for a save/restore operation. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_FAIL Function failed + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + *****************************************************************************/ +CpaStatus dcGetSessionSize(CpaInstanceHandle dcInstance, + CpaDcSessionSetupData *pSessionData, + Cpa32U *pSessionSize, + Cpa32U *pContextSize); + +#endif /* DC_SESSION_H */ Index: sys/dev/qat/qat_api/common/compression/include/dc_stats.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/compression/include/dc_stats.h @@ -0,0 +1,81 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_stats.h + * + * @ingroup Dc_DataCompression + * + * @description + * Definition of the Data Compression stats parameters. 
+ * + *****************************************************************************/ +#ifndef DC_STATS_H_ +#define DC_STATS_H_ + +/* Number of Compression statistics */ +#define COMPRESSION_NUM_STATS (sizeof(CpaDcStats) / sizeof(Cpa64U)) + +#define COMPRESSION_STAT_INC(statistic, pService) \ + do { \ + if (CPA_TRUE == \ + pService->generic_service_info.stats->bDcStatsEnabled) { \ + qatUtilsAtomicInc( \ + &pService->pCompStatsArr[offsetof(CpaDcStats, \ + statistic) / \ + sizeof(Cpa64U)]); \ + } \ + } while (0) + +/* Macro to get all Compression stats (from internal array of atomics) */ +#define COMPRESSION_STATS_GET(compStats, pService) \ + do { \ + int i; \ + for (i = 0; i < COMPRESSION_NUM_STATS; i++) { \ + ((Cpa64U *)compStats)[i] = \ + qatUtilsAtomicGet(&pService->pCompStatsArr[i]); \ + } \ + } while (0) + +/* Macro to reset all Compression stats */ +#define COMPRESSION_STATS_RESET(pService) \ + do { \ + int i; \ + for (i = 0; i < COMPRESSION_NUM_STATS; i++) { \ + qatUtilsAtomicSet(0, &pService->pCompStatsArr[i]); \ + } \ + } while (0) + +/** +******************************************************************************* +* @ingroup Dc_DataCompression +* Initialises the compression stats +* +* @description +* This function allocates and initialises the stats array to 0 +* +* @param[in] pService Pointer to a compression service structure +* +* @retval CPA_STATUS_SUCCESS initialisation successful +* @retval CPA_STATUS_RESOURCE array allocation failed +* +*****************************************************************************/ +CpaStatus dcStatsInit(sal_compression_service_t *pService); + +/** +******************************************************************************* +* @ingroup Dc_DataCompression +* Frees the compression stats +* +* @description +* This function frees the stats array +* +* @param[in] pService Pointer to a compression service structure +* +* @retval None +* +*****************************************************************************/ 
+void dcStatsFree(sal_compression_service_t *pService); + +#endif /* DC_STATS_H_ */ Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_session.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_session.h @@ -0,0 +1,622 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_session.h + * + * @defgroup LacSym_Session Session + * + * @ingroup LacSym + * + * Definition of symmetric session descriptor structure + * + * @lld_start + * + * @lld_overview + * A session is required for each symmetric operation. The session descriptor + * holds information about the session from when the session is initialised to + * when the session is removed. The session descriptor is used in the + * subsequent perform operations in the paths for both sending the request and + * receiving the response. The session descriptor and any other state + * information required for processing responses from the QAT are stored in an + * internal cookie. A pointer to this cookie is stored in the opaque data + * field of the QAT request. + * + * The user allocates the memory for the session using the size returned from + * \ref cpaCySymSessionCtxGetSize(). Internally this memory is re-aligned on a + * 64 byte boundary for use by the QAT engine. The aligned pointer is saved in + * the first bytes (size of void *) of the session memory. This address + * is then dereferenced in subsequent performs to get access to the session + * descriptor. + * + * LAC Session Init\n The session descriptor is re-aligned and + * populated. This includes populating the content descriptor which contains + * the hardware setup for the QAT engine. 
The content descriptor is a read + * only structure after session init and a pointer to it is sent to the QAT + * for each perform operation. + * + * LAC Perform \n + * The address for the session descriptor is obtained by dereferencing the first + * bytes of the session memory (size of void *). For each successful + * request put on the ring, the pendingCbCount for the session is incremented. + * + * LAC Callback \n + * For each successful response the pendingCbCount for the session is + * decremented. See \ref LacSymCb_ProcessCallbackInternal() + * + * LAC Session Remove \n + * The address for the session descriptor is obtained by dereferencing the first + * bytes of the session memory (size of void *). + * The pendingCbCount for the session is checked to see if it is 0. If it is + * non-zero then there are requests in flight. An error is returned to the user. + * + * Concurrency\n + * A reference count is used to prevent the descriptor being removed + * while there are requests in flight. + * + * Reference Count\n + * - The perform function increments the reference count for the session. + * - The callback function decrements the reference count for the session. + * - The Remove function checks the reference count to ensure that it is 0. + * + * @lld_dependencies + * - \ref LacMem "Memory" - Inline memory functions + * - QatUtils: logging, locking & virt to phys translations. 
+ * + * @lld_initialisation + * + * @lld_module_algorithms + * + * @lld_process_context + * + * @lld_end + * + *****************************************************************************/ + +/***************************************************************************/ + +#ifndef LAC_SYM_SESSION_H +#define LAC_SYM_SESSION_H + +/* + * Common alignment attributes to ensure + * hashStatePrefixBuffer is 64-byte aligned + */ +#define ALIGN_START(x) +#define ALIGN_END(x) __attribute__((__aligned__(x))) +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "icp_accel_devices.h" +#include "lac_list.h" +#include "lac_sal_types.h" +#include "sal_qat_cmn_msg.h" +#include "lac_sym_cipher_defs.h" +#include "lac_sym.h" +#include "lac_sym_hash_defs.h" +#include "lac_sym_qat_hash.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +/** +******************************************************************************* +* @ingroup LacSym_Session +* Symmetric session descriptor +* @description +* This structure stores information about a session +* Note: struct types lac_session_d1_s and lac_session_d2_s are subsets of +* this structure. Elements in all three should retain the same order +* Only this structure is used in the session init call. The other two are +* for determining the size of memory to allocate. +* The comments section of each of the other two structures below show +* the conditions that determine which session context memory size to use. 
+*****************************************************************************/
+typedef struct lac_session_desc_s {
+	Cpa8U contentDescriptor[LAC_SYM_QAT_CONTENT_DESC_MAX_SIZE];
+	/**< QAT Content Descriptor for this session.
+	 * NOTE: Field must be correctly aligned in memory for access by QAT
+	 * engine
+	 */
+	Cpa8U contentDescriptorOptimised[LAC_SYM_OPTIMISED_CD_SIZE];
+	/**< QAT Optimised Content Descriptor for this session.
+	 * NOTE: Field must be correctly aligned in memory for access by QAT
+	 * engine
+	 */
+	CpaCySymOp symOperation;
+	/**< type of command to be performed */
+	sal_qat_content_desc_info_t contentDescInfo;
+	/**< info on the content descriptor */
+	sal_qat_content_desc_info_t contentDescOptimisedInfo;
+	/**< info on the optimised content descriptor */
+	icp_qat_fw_la_cmd_id_t laCmdId;
+ /**
+ *
+ ***************************************************************************/
+
+/**
+ *****************************************************************************
+ * @file lac_sym_alg_chain.h
+ *
+ * @defgroup LacAlgChain Algorithm Chaining
+ *
+ * @ingroup LacSym
+ *
+ * Interfaces exposed by the Algorithm Chaining Component
+ *
+ * @lld_start
+ *
+ * @lld_overview
+ * This is the LAC Algorithm-Chaining feature component. This component
+ * implements session registration and cleanup functions, and a perform
+ * function. Statistics are maintained to track requests issued and completed,
+ * errors incurred, and authentication verification failures. For each
+ * function the parameters supplied by the client are checked, and then the
+ * function proceeds if all the parameters are valid. This component also
+ * incorporates support for Authenticated-Encryption (CCM and GCM) which
+ * essentially comprises a cipher operation and a hash operation combined.
+ *
+ * This component can combine a cipher operation with a hash operation, or
+ * simply create a hash-only or cipher-only operation, and is called from the
+ * LAC Symmetric API component.
In turn it calls the LAC Cipher, LAC Hash, and + * LAC Symmetric QAT components. The goal here is to duplicate as little code + * as possible from the Cipher and Hash components. + * + * The cipher and hash operations can be combined in either order, i.e. cipher + * first then hash or hash first then cipher. The client specifies this via + * the algChainOrder field in the session context. This ordering choice is + * stored as part of the session descriptor, so that it is known when a + * perform request is issued. In the case of Authenticated-Encryption, the + * ordering is an implicit part of the CCM or GCM protocol. + * + * When building a content descriptor, as part of session registration, this + * component asks the Cipher and Hash components to build their respective + * parts of the session descriptor. The key aspect here is to provide the + * correct offsets to the Cipher and Hash components for where in the content + * descriptor to write their Config and Hardware Setup blocks. Also the + * Config block in each case must specify the appropriate next slice. + * + * When building request parameters, as part of a perform operation, this + * component asks the Cipher and Hash components to build their respective + * parts of the request parameters block. Again the key aspect here is to + * provide the correct offsets to the Cipher and Hash components for where in + * the request parameters block to write their parameters. Also the request + * parameters block in each case must specify the appropriate next slice. + * + * Parameter checking for session registration and for operation perform is + * mostly delegated to the Cipher and Hash components. There are a few + * extra checks that this component must perform: check the algChainOrder + * parameter, ensure that CCM/GCM are specified for hash/cipher algorithms + * as appropriate, and ensure that requests are for full packets (partial + * packets are not supported for Algorithm-Chaining). 
+ * + * The perform operation allocates a cookie to capture information required + * in the request callback. This cookie is then freed in the callback. + * + * @lld_dependencies + * - \ref LacCipher "Cipher" : For taking care of the cipher aspects of + * session registration and operation perform + * - \ref LacHash "Hash" : For taking care of the hash aspects of session + * registration and operation perform + * - \ref LacSymCommon "Symmetric Common" : statistics. + * - \ref LacSymQat "Symmetric QAT": To build the QAT request message, + * request param structure, and populate the content descriptor. Also + * for registering a callback function to process the QAT response. + * - \ref QatComms "QAT Comms" : For sending messages to the QAT, and for + * setting the response callback + * - \ref LacMem "Mem" : For memory allocation and freeing, virtual/physical + * address translation, and translating between scalar and pointer types + * - OSAL : For atomics and locking + * + * @lld_module_algorithms + * This component builds up a chain of slices at session init time + * and stores it in the session descriptor. This is used for building up the + * content descriptor at session init time and the request parameters structure + * in the perform operation. + * + * The offsets for the first slice are updated so that the second slice adds + * its configuration information after that of the first slice. The first + * slice also configures the next slice appropriately. + * + * This component is very much hard-coded to just support cipher+hash or + * hash+cipher. It should be quite possible to extend this idea to support + * an arbitrary chain of commands, by building up a command chain that can + * be traversed in order to build up the appropriate configuration for the + * QAT. This notion should be looked at in the future if other forms of + * Algorithm-Chaining are desired. 
+ *
+ * @lld_process_context
+ *
+ * @lld_end
+ *
+ *****************************************************************************/
+
+/*****************************************************************************/
+
+#ifndef LAC_SYM_ALG_CHAIN_H
+#define LAC_SYM_ALG_CHAIN_H
+
+/*
+******************************************************************************
+* Include public/global header files
+******************************************************************************
+*/
+
+#include "cpa.h"
+#include "cpa_cy_sym.h"
+#include "lac_session.h"
+
+/*
+*******************************************************************************
+* Include private header files
+*******************************************************************************
+*/
+
+/* Macro for checking whether zero-length buffers are supported; they are
+ * supported only when the cipher is AES-GCM and the hash is
+ * AES-GCM/AES-GMAC */
+#define IS_ZERO_LENGTH_BUFFER_SUPPORTED(cipherAlgo, hashAlgo) \
+	(CPA_CY_SYM_CIPHER_AES_GCM == cipherAlgo && \
+	 (CPA_CY_SYM_HASH_AES_GMAC == hashAlgo || \
+	  CPA_CY_SYM_HASH_AES_GCM == hashAlgo))
+
+/**
+*******************************************************************************
+* @ingroup LacAlgChain
+* This function registers a session for an Algorithm-Chaining operation.
+*
+* @description
+* This function is called from the LAC session register API function for
+* Algorithm-Chaining operations. It validates all input parameters. If
+* an invalid parameter is passed, an error is returned to the calling
+* function. If all parameters are valid an Algorithm-Chaining session is
+* registered.
+*
+* @param[in] instanceHandle Instance Handle
+*
+* @param[in] pSessionCtx Pointer to session context which contains
+* parameters which are static for a given
+* cryptographic session such as operation type,
+* mechanisms, and keys for cipher and/or digest
+* operations.
+* @param[out] pSessionDesc Pointer to session descriptor
+*
+* @retval CPA_STATUS_SUCCESS Function executed successfully.
+* @retval CPA_STATUS_FAIL Function failed. +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. +* @retval CPA_STATUS_RESOURCE Error related to system resources. +* +* @see cpaCySymInitSession() +* +*****************************************************************************/ +CpaStatus LacAlgChain_SessionInit(const CpaInstanceHandle instanceHandle, + const CpaCySymSessionSetupData *pSessionCtx, + lac_session_desc_t *pSessionDesc); + +/** +******************************************************************************* +* @ingroup LacAlgChain +* Data path function for the Algorithm-Chaining component +* +* @description +* This function gets called from cpaCySymPerformOp() which is the +* symmetric LAC API function. It is the data path function for the +* Algorithm-Chaining component. It does the parameter checking on the +* client supplied parameters and if the parameters are valid, the +* operation is performed and a request sent to the QAT, otherwise an +* error is returned to the client. +* +* @param[in] instanceHandle Instance Handle +* +* @param[in] pSessionDesc Pointer to session descriptor +* @param[in] pCallbackTag The application's context for this call +* @param[in] pOpData Pointer to a structure containing request +* parameters. The client code allocates the memory for +* this structure. This component takes ownership of +* the memory until it is returned in the callback. +* +* @param[in] pSrcBuffer Source Buffer List +* @param[out] pDstBuffer Destination Buffer List +* @param[out] pVerifyResult Verify Result +* +* @retval CPA_STATUS_SUCCESS Function executed successfully. +* @retval CPA_STATUS_FAIL Function failed. +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. +* @retval CPA_STATUS_RESOURCE Error related to system resource. 
+* +* @see cpaCySymPerformOp() +* +*****************************************************************************/ +CpaStatus LacAlgChain_Perform(const CpaInstanceHandle instanceHandle, + lac_session_desc_t *pSessionDesc, + void *pCallbackTag, + const CpaCySymOpData *pOpData, + const CpaBufferList *pSrcBuffer, + CpaBufferList *pDstBuffer, + CpaBoolean *pVerifyResult); + +/** +******************************************************************************* +* @ingroup LacAlgChain +* This function is used to update cipher key, as specified in provided +* input. +* +* @description +* This function is called from the LAC session register API function for +* Algorithm-Chaining operations. It validates all input parameters. If +* an invalid parameter is passed, an error is returned to the calling +* function. If all parameters are valid an Algorithm-Chaining session is +* updated. +* +* @threadSafe +* No +* +* @param[in] pSessionDesc Pointer to session descriptor +* @param[in] pCipherKey Pointer to new cipher key. +* +* @retval CPA_STATUS_SUCCESS Function executed successfully. +* @retval CPA_STATUS_FAIL Function failed. +* @retval CPA_STATUS_RETRY Resubmit the request. +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. +* @retval CPA_STATUS_UNSUPPORTED Function is not supported. +* +*****************************************************************************/ +CpaStatus LacAlgChain_SessionCipherKeyUpdate(lac_session_desc_t *pSessionDesc, + Cpa8U *pCipherKey); + +/** +******************************************************************************* +* @ingroup LacAlgChain +* This function is used to update authentication key, as specified in +* provided input. +* +* @description +* This function is called from the LAC session register API function for +* Algorithm-Chaining operations. It validates all input parameters. If +* an invalid parameter is passed, an error is returned to the calling +* function. 
If all parameters are valid an Algorithm-Chaining session is
+* updated.
+*
+* @threadSafe
+* No
+*
+* @param[in] pSessionDesc Pointer to session descriptor
+* @param[in] pAuthKey Pointer to new authentication key.
+*
+* @retval CPA_STATUS_SUCCESS Function executed successfully.
+* @retval CPA_STATUS_FAIL Function failed.
+* @retval CPA_STATUS_RETRY Resubmit the request.
+* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in.
+* @retval CPA_STATUS_UNSUPPORTED Function is not supported.
+*
+*****************************************************************************/
+CpaStatus LacAlgChain_SessionAuthKeyUpdate(lac_session_desc_t *pSessionDesc,
+					   Cpa8U *pAuthKey);
+
+/**
+*******************************************************************************
+* @ingroup LacAlgChain
+* This function is used to update AAD length as specified in provided
+* input.
+*
+* @description
+* This function is called from the LAC session register API function for
+* Algorithm-Chaining operations. It validates all input parameters. If
+* an invalid parameter is passed, an error is returned to the calling
+* function. If all parameters are valid an Algorithm-Chaining session is
+* updated.
+*
+* @threadSafe
+* No
+*
+* @param[in] pSessionDesc Pointer to session descriptor
+* @param[in] newAADLength New AAD length.
+*
+* @retval CPA_STATUS_SUCCESS Function executed successfully.
+* @retval CPA_STATUS_FAIL Function failed.
+* @retval CPA_STATUS_RETRY Resubmit the request.
+* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in.
+* @retval CPA_STATUS_UNSUPPORTED Function is not supported.
+*
+*****************************************************************************/
+CpaStatus LacAlgChain_SessionAADUpdate(lac_session_desc_t *pSessionDesc,
+				       Cpa32U newAADLength);
+
+#endif /* LAC_SYM_ALG_CHAIN_H */ Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_auth_enc.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_auth_enc.h @@ -0,0 +1,87 @@ +/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+
+/**
+ *****************************************************************************
+ * @file lac_sym_auth_enc.h
+ *
+ * @defgroup LacAuthEnc Authenticated Encryption
+ *
+ * @ingroup LacSym
+ *
+ * @description
+ * Authenticated encryption specific functionality.
+ * For CCM related code NIST SP 800-38C is followed.
+ * For GCM related code NIST SP 800-38D is followed.
+ *
+ ***************************************************************************/
+#ifndef LAC_SYM_AUTH_ENC_H_
+#define LAC_SYM_AUTH_ENC_H_
+
+/* This define for CCM describes the constant sum of n and q */
+#define LAC_ALG_CHAIN_CCM_NQ_CONST 15
+
+/* These defines for CCM describe maximum and minimum
+ * length of nonce in bytes */
+#define LAC_ALG_CHAIN_CCM_N_LEN_IN_BYTES_MAX 13
+#define LAC_ALG_CHAIN_CCM_N_LEN_IN_BYTES_MIN 7
+
+/**
+ * @ingroup LacAuthEnc
+ * This function applies any necessary padding to additional authentication data
+ * pointed to by pAdditionalAuthData field of pOpData as described in
+ * NIST SP 800-38D
+ *
+ * @param[in] pSessionDesc Pointer to the session descriptor
+ * @param[in,out] pAdditionalAuthData Pointer to AAD
+ *
+ * @retval CPA_STATUS_SUCCESS Operation finished successfully
+ *
+ * @pre pAdditionalAuthData has been param checked
+ *
+ */
+void LacSymAlgChain_PrepareGCMData(lac_session_desc_t *pSessionDesc,
+				   Cpa8U *pAdditionalAuthData);
+
+/**
+ * @ingroup LacAuthEnc
+ * This function performs param checks on the IV
and AAD for CCM
+ *
+ * @param[in,out] pAdditionalAuthData Pointer to AAD
+ * @param[in,out] pIv Pointer to IV
+ * @param[in] messageLenToCipherInBytes Size of the message to cipher
+ * @param[in] ivLenInBytes Size of the IV
+ *
+ * @retval CPA_STATUS_SUCCESS Operation finished successfully
+ * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed
+ *
+ */
+CpaStatus LacSymAlgChain_CheckCCMData(Cpa8U *pAdditionalAuthData,
+				      Cpa8U *pIv,
+				      Cpa32U messageLenToCipherInBytes,
+				      Cpa32U ivLenInBytes);
+
+/**
+ * @ingroup LacAuthEnc
+ * This function prepares Ctr0 and B0-Bn blocks for CCM algorithm as described
+ * in NIST SP 800-38C. Ctr0 block is placed in pIv field of pOpData and B0-Bn
+ * blocks are placed in pAdditionalAuthData.
+ *
+ * @param[in] pSessionDesc Pointer to the session descriptor
+ * @param[in,out] pAdditionalAuthData Pointer to AAD
+ * @param[in,out] pIv Pointer to IV
+ * @param[in] messageLenToCipherInBytes Size of the message to cipher
+ * @param[in] ivLenInBytes Size of the IV
+ *
+ * @retval none
+ *
+ * @pre parameters have been checked using LacSymAlgChain_CheckCCMData()
+ */
+void LacSymAlgChain_PrepareCCMData(lac_session_desc_t *pSessionDesc,
+				   Cpa8U *pAdditionalAuthData,
+				   Cpa8U *pIv,
+				   Cpa32U messageLenToCipherInBytes,
+				   Cpa32U ivLenInBytes);
+
+#endif /* LAC_SYM_AUTH_ENC_H_ */ Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cb.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cb.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+
+/**
+ ***************************************************************************
+ * @file lac_sym_cb.h
+ *
+ * @defgroup LacSymCb Symmetric callback functions
+ *
+ * @ingroup LacSym
+ *
+ * Functions to assist with callback processing for the symmetric component
+ 
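As a rough illustration of the Ctr0 block that LacSymAlgChain_PrepareCCMData() writes into the pIv field, the sketch below follows NIST SP 800-38C counter-block formatting (n + q = 15, with n between 7 and 13). `ccm_build_ctr0` is a hypothetical helper, not the driver's implementation.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CCM_NQ_CONST 15	/* n + q is always 15 (SP 800-38C) */
#define CCM_N_MIN 7
#define CCM_N_MAX 13

/* Build the initial counter block Ctr0 (NIST SP 800-38C, A.3):
 * byte 0 holds the flags octet [q - 1], bytes 1..n hold the nonce, and
 * the remaining q bytes are the counter, initialised to zero.
 * Returns 0 on success, -1 for an out-of-range nonce length. */
static int ccm_build_ctr0(uint8_t ctr0[16], const uint8_t *nonce, size_t n)
{
	size_t q;

	if (n < CCM_N_MIN || n > CCM_N_MAX)
		return -1;
	q = CCM_NQ_CONST - n;

	memset(ctr0, 0, 16);
	ctr0[0] = (uint8_t)(q - 1);	/* flags: [q - 1] in the low bits */
	memcpy(&ctr0[1], nonce, n);
	return 0;
}
```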
***************************************************************************/
+
+#ifndef LAC_SYM_CB_H
+#define LAC_SYM_CB_H
+
+/**
+ *****************************************************************************
+ * @ingroup LacSym
+ * Dequeue pending requests
+ * @description
+ * This function is called by a callback function of a blocking
+ * operation (either a partial packet or a hash precompute operation)
+ * in softIRQ context. It dequeues requests for the following reasons:
+ * 1. All pre-computes that happened when initialising a session
+ * have completed. Dequeue any requests that were queued on the
+ * session while waiting for the precompute operations to complete.
+ * 2. A partial packet request has completed. Dequeue any partials
+ * that were queued for this session while waiting for a previous
+ * partial to complete.
+ *
+ * @param[in] pSessionDesc Pointer to the session descriptor
+ *
+ * @return CpaStatus
+ *
+ ****************************************************************************/
+CpaStatus LacSymCb_PendingReqsDequeue(lac_session_desc_t *pSessionDesc);
+
+/**
+ *****************************************************************************
+ * @ingroup LacSym
+ * Register symmetric callback function handlers
+ *
+ * @description
+ * This function registers the symmetric callback handler functions with
+ * the main symmetric callback handler function
+ *
+ * @return None
+ *
+ ****************************************************************************/
+void LacSymCb_CallbacksRegister(void);
+
+#endif /* LAC_SYM_CB_H */ Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cipher.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cipher.h @@ -0,0 +1,312 @@ +/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+
+/**
+ 
*****************************************************************************
+ * @file lac_sym_cipher.h
+ *
+ * @defgroup LacCipher Cipher
+ *
+ * @ingroup LacSym
+ *
+ * API functions of the cipher component
+ *
+ * @lld_start
+ * @lld_overview
+ * There is a single \ref icp_LacSym "Symmetric LAC API" for hash, cipher,
+ * auth encryption and algorithm chaining. This API is implemented by the
+ * \ref LacSym "Symmetric" module. It demultiplexes calls to this API into
+ * their basic operation and does some common parameter checking and deals
+ * with accesses to the session table.
+ *
+ * The cipher component supports data encryption/decryption using the AES, DES,
+ * and Triple-DES cipher algorithms, in ECB, CBC and CTR modes. The ARC4 stream
+ * cipher algorithm is also supported. Data may be provided as a full packet,
+ * or as a sequence of partial packets. The result of the operation can be
+ * written back to the source buffer (in-place) or to a separate output buffer
+ * (out-of-place). Data must be encapsulated in ICP buffers.
+ *
+ * The cipher component is responsible for implementing the cipher-specific
+ * functionality for registering and de-registering a session, for the perform
+ * operation and for processing the QAT responses to cipher requests. Statistics
+ * are maintained for cipher in the symmetric \ref CpaCySymStats64 "stats"
+ * structure. This module has been separated out into two. The cipher QAT module
+ * deals entirely with QAT data structures. The cipher module itself has minimal
+ * exposure to the QAT data structures.
+ *
+ * @lld_dependencies
+ * - \ref LacCommon
+ * - \ref LacSymQat "Symmetric QAT": Hash uses the lookup table provided by
+ * this module to validate user input. Hash also uses this module to build
+ * the hash QAT request message, request param structure, populate the
+ * content descriptor, allocate and populate the hash state prefix buffer.
+ * Hash also registers its function to process the QAT response with this
+ * module.
+ * - OSAL : For memory functions, atomics and locking
+ *
+ * @lld_module_algorithms
+ * In general, all the cipher algorithms supported by this component are
+ * implemented entirely by the QAT. However, in the case of the ARC4 algorithm,
+ * it was deemed more efficient to carry out some processing on IA. During
+ * session registration, an initial state is derived from the base key provided
+ * by the user, using a simple ARC4 Key Scheduling Algorithm (KSA). Then the
+ * base key is discarded, but the state is maintained for the duration of the
+ * session.
+ *
+ * The ARC4 key scheduling algorithm (KSA) is specified as follows
+ * (taken from http://en.wikipedia.org/wiki/RC4_(cipher)):
+ * \code
+ * for i from 0 to 255
+ * S[i] := i
+ * endfor
+ * j := 0
+ * for i from 0 to 255
+ * j := (j + S[i] + key[i mod keylength]) mod 256
+ * swap(S[i],S[j])
+ * endfor
+ * \endcode
+ *
+ * On registration of a new ARC4 session, the user provides a base key of any
+ * length from 1 to 256 bytes. This algorithm produces the initial ARC4 state
+ * (key matrix + i & j index values) from that base key. This ARC4 state is
+ * used as input for each ARC4 cipher operation in that session, and is updated
+ * by the QAT after each operation. The ARC4 state is stored in a session
+ * descriptor, and its memory is freed when the session is deregistered.
+ *
+ * Block Vs. Stream Ciphers\n
+ * Block ciphers are treated slightly differently than Stream ciphers by this
+ * cipher component. Supported stream ciphers consist of AES and
+ * TripleDES algorithms in CTR mode, and ARC4. The 2 primary differences are:
+ * - Data buffers for block ciphers are required to be a multiple of the
+ * block size defined for the algorithm (e.g. 8 bytes for DES). For stream
+ * ciphers, there is no such restriction.
+ * - For stream ciphers, decryption is performed by setting the QAT hardware
+ * to encryption mode.
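The KSA pseudocode above translates directly into C. This sketch is purely illustrative of the state derivation the driver performs on IA at session registration; `arc4_ksa` is a hypothetical name.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* ARC4 Key Scheduling Algorithm: derive the initial 256-byte state
 * permutation from a base key of 1..256 bytes, exactly as in the
 * pseudocode above. */
static void arc4_ksa(uint8_t S[256], const uint8_t *key, size_t keylen)
{
	unsigned i, j = 0;

	for (i = 0; i < 256; i++)
		S[i] = (uint8_t)i;
	for (i = 0; i < 256; i++) {
		uint8_t tmp;

		j = (j + S[i] + key[i % keylen]) & 0xff;
		tmp = S[i];
		S[i] = S[j];
		S[j] = tmp;
	}
}
```

After session init the i and j indices start at 0 and the state is handed to the hardware, which updates it on every perform.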
+ * + * Memory address alignment of data buffers \n + * The QAT requires that most data buffers are aligned on an 8-byte memory + * address boundary (64-byte boundary for optimum performance). For Cipher, + * this applies to the cipher key buffer passed in the Content Descriptor, + * and the IV/State buffer passed in the Request Parameters block in each + * request. Both of these buffers are provided by the user. It does not + * apply to the cipher source/destination data buffers. + * Alignment of the key buffer is ensured because the key is always copied + * from the user provided buffer into a new (aligned) buffer for the QAT + * (the hardware setup block, which configures the QAT slice). This is done + * once only during session registration, and the user's key buffer can be + * effectively discarded after that. + * The IV/State buffer is provided per-request by the user, so it is recommended + * to the user to provide aligned buffers for optimal performance. In the case + * where an unaligned buffer is provided, a new temporary buffer is allocated + * and the user's IV/State data is copied into this buffer. The aligned buffer + * is then passed to the QAT in the request. In the response callback, if the + * IV was updated by the QAT, the contents are copied back to the user's buffer + * and the temporary buffer is freed. 
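The align-or-copy decision for the per-request IV/State buffer can be sketched as below; `iv_get_aligned` and the 8-byte constant reflect the description above, but the helper itself is hypothetical (the driver also copies updated IV contents back and frees the temporary buffer in the response callback).

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define QAT_ADDR_ALIGN 8

/* If the caller's IV buffer is already 8-byte aligned it can be handed
 * to the hardware directly; otherwise copy it into an aligned scratch
 * buffer. Returns the pointer to use in the request. */
static uint8_t *iv_get_aligned(uint8_t *pUserIv, size_t ivLen,
			       uint8_t *pScratch /* 8-byte aligned */)
{
	if (((uintptr_t)pUserIv & (QAT_ADDR_ALIGN - 1)) == 0)
		return pUserIv;
	memcpy(pScratch, pUserIv, ivLen);
	return pScratch;
}
```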
+ * + * @lld_process_context + * + * Session Register Sequence Diagram: For ARC4 cipher algorithm + * \msc + * APP [label="Application"], SYM [label="Symmetric LAC"], + * Achain [label="Alg chain"], Cipher, SQAT [label="Symmetric QAT"]; + * + * APP=>SYM [ label = "cpaCySymInitSession(cbFunc)", + * URL="\ref cpaCySymInitSession()"] ; + * SYM=>SYM [ label = "LacSymSession_ParamCheck()", + * URL="\ref LacSymSession_ParamCheck()"]; + * SYM=>Achain [ label = "LacAlgChain_SessionInit()", + * URL="\ref LacAlgChain_SessionInit()"]; + * Achain=>Cipher [ label = "LacCipher_SessionSetupDataCheck()", + * URL="\ref LacCipher_SessionSetupDataCheck()"]; + * Achain<SQAT [ label = "LacSymQat_CipherContentDescPopulate()", + * URL="\ref LacSymQat_CipherContentDescPopulate()"]; + * Achain<SQAT [ label = "LacSymQat_CipherArc4StateInit()", + * URL="\ref LacSymQat_CipherArc4StateInit()"]; + * Achain<SYM [label = "LAC_SYM_STAT_INC", URL="\ref LAC_SYM_STAT_INC"]; + * APP<SYM [ label = "cpaCySymPerformOp()", + * URL="\ref cpaCySymPerformOp()"] ; + * SYM=>SYM [ label = "LacSym_Perform()", + * URL="\ref LacSym_Perform()"]; + * SYM=>SYM [ label = "LacSymPerform_BufferParamCheck()", + * URL="\ref LacSymPerform_BufferParamCheck()"]; + * SYM<Achain [ label = "LacAlgChain_Perform()", + * URL="\ref LacCipher()"]; + * Achain=>Cipher [ label = "LacCipher_PerformParamCheck()", + * URL="\ref LacCipher_PerformParamCheck()"]; + * Achain<LMP [label="Lac_MemPoolEntryAlloc()", + * URL="\ref Lac_MemPoolEntryAlloc()"]; + * Achain<Cipher [ label = "LacCipher_PerformIvCheckAndAlign()", + * URL="\ref LacCipher_PerformIvCheckAndAlign()"]; + * Achain<SQAT [ label = "LacSymQat_CipherRequestParamsPopulate()", + * URL="\ref LacSymQat_CipherRequestParamsPopulate()"]; + * Achain<BUF [ label = "LacBuffDesc_BufferListDescWrite()", + * URL = "\ref LacBuffDesc_BufferListDescWrite()"]; + * Achain<SQAT [ label = "SalQatMsg_CmnMsgAndReqParamsPopulate()", + * URL="\ref SalQatMsg_CmnMsgAndReqParamsPopulate()"]; + * Achain<SYMQ 
[ label = "LacSymQueue_RequestSend()",
 * URL="\ref LacSymQueue_RequestSend()"];
 * SYMQ=>QATCOMMS [ label = "QatComms_MsgSend()",
 * URL="\ref QatComms_MsgSend()"];
 * SYMQ<SYM [ label = "LacSym_PartialPacketStateUpdate()",
 * URL="\ref LacSym_PartialPacketStateUpdate()"];
 * SYM<SC [label = "LAC_SYM_STAT_INC", URL="\ref LAC_SYM_STAT_INC"];
 * SYM<QATCOMMS [label ="QatComms_ResponseMsgHandler()",
 * URL="\ref QatComms_ResponseMsgHandler()"];
 * QATCOMMS=>SQAT [label ="LacSymQat_SymRespHandler()",
 * URL="\ref LacSymQat_SymRespHandler()"];
 * SQAT=>SYMCB [label="LacSymCb_ProcessCallback()",
 * URL="\ref LacSymCb_ProcessCallback()"];
 * SYMCB=>SYMCB [label="LacSymCb_ProcessCallbackInternal()",
 * URL="\ref LacSymCb_ProcessCallbackInternal()"];
 * SYMCB=>LMP [label="Lac_MemPoolEntryFree()",
 * URL="\ref Lac_MemPoolEntryFree()"];
 * SYMCB<SC [label = "LAC_SYM_STAT_INC", URL="\ref LAC_SYM_STAT_INC"];
 * SYMCB<APP [label="cbFunc"];
 * SYMCB<
 *
 ***************************************************************************/
+
+/**
+ *****************************************************************************
+ * @file lac_sym_hash.h
+ *
+ * @defgroup LacHash Hash
+ *
+ * @ingroup LacSym
+ *
+ * API functions of the Hash component
+ *
+ * @lld_start
+ * @lld_overview
+ * There is a single \ref cpaCySym "Symmetric LAC API" for hash, cipher,
+ * auth encryption and algorithm chaining. This API is implemented by the
+ * \ref LacSym "Symmetric" module. It demultiplexes calls to this API into
+ * their basic operation and does some common parameter checking and deals
+ * with accesses to the session table.
+ *
+ * The hash component supports hashing in 3 modes: PLAIN, AUTH and NESTED.
+ * Plain mode is used to provide data integrity while auth mode is used to
+ * provide integrity as well as its authenticity. Nested mode is intended
+ * for use by non-standard HMAC-like algorithms such as for the SSL master
+ * key secret.
Partial packets are supported for both plain and auth modes.
+ * In-place and out-of-place processing is supported for all modes. The
+ * verify operation is supported for PLAIN and AUTH modes only.
+ *
+ * The hash component is responsible for implementing the hash specific
+ * functionality for initialising a session and for a perform operation.
+ * Statistics are maintained in the symmetric \ref CpaCySymStats64 "stats"
+ * structure. This module has been separated out into two. The hash QAT module
+ * deals entirely with QAT data structures. The hash module itself has minimal
+ * exposure to the QAT data structures.
+ *
+ * @lld_dependencies
+ * - \ref LacCommon
+ * - \ref LacSymQat "Symmetric QAT": Hash uses the lookup table provided by
+ * this module to validate user input. Hash also uses this module to build
+ * the hash QAT request message, request param structure, populate the
+ * content descriptor, allocate and populate the hash state prefix buffer.
+ * Hash also registers its function to process the QAT response with this
+ * module.
+ * - OSAL : For memory functions, atomics and locking
+ *
+ * @lld_module_algorithms
+ * a. HMAC Precomputes\n
+ * HMAC algorithm is specified as follows:
+ * \f$ HMAC(msg) = hash((key \oplus opad) \parallel
+ * hash((key \oplus ipad) \parallel msg ))\f$.
+ * The key is fixed per session, and is padded up to the block size of the
+ * algorithm if necessary and xored with the ipad/opad. The following portion
+ * of the operation can be precomputed: \f$ hash(key \oplus ipad) \f$ as the
+ * output of this intermediate hash will be the same for every perform
+ * operation. This intermediate state is the intermediate state of a
+ * partial packet. It is used as the initialiser state to \f$ hash(msg) \f$.
+ * The same applies to \f$ hash(key \oplus opad) \f$. There is a saving in
+ * the data path by the length of time it takes to do two hashes on a block
+ * size of data.
Note: a partial packet operation generates an intermediate + * state. The final operation on a partial packet or when a full packet is + * used applies padding and gives the final hash result. Essentially for the + * inner hash, a partial packet final is issued on the data, using the + * precomputed intermediate state, and returns the digest. + * + * For the HMAC precomputes, \ref LacSymHash_HmacPreCompute(), there are two + * hash operations done using an internal content descriptor to configure the + * QAT. A first partial packet is specified as the packet type for the + * pre-computes as we need the state that uses the initialiser constants + * specific to the algorithm. The resulting output is copied from the hash + * state prefix buffer into the QAT content descriptor for the session being + * initialised. The state is used in each perform operation as the + * initialiser to the algorithm. + * + * b. AES XCBC Precomputes\n + * A similar technique to HMAC will be used to generate the precomputes for + * AES XCBC. In this case a cipher operation will be used to generate the + * precomputed result. The pre-compute operation involves deriving 3 128-bit + * keys (K1, K2 and K3) from the 128-bit secret key K. + * + * - K1 = 0x01010101010101010101010101010101 encrypted with Key K + * - K2 = 0x02020202020202020202020202020202 encrypted with Key K + * - K3 = 0x03030303030303030303030303030303 encrypted with Key K + * + * A content descriptor is created with the cipher algorithm set to AES + * in ECB mode and with the key size set to 128 bits. The 3 constants, + * 16 bytes each, are copied into the src buffer and an in-place cipher + * operation is performed on the 48 bytes. ECB mode does not maintain state, + * therefore the 3 keys can be encrypted in one perform. The encrypted + * result is stored in the state2 field in the hash setup block of the + * content descriptor.
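The pad-splitting that makes the HMAC precompute worthwhile can be checked with Python's standard library (a sketch, not driver code: SHA-1 and the sample key are arbitrary, the key is assumed no longer than the block size, and `hashlib`'s `copy()` stands in for the intermediate state the driver stores in the content descriptor):

```python
import hashlib
import hmac

BLOCK = 64  # SHA-1 block size in bytes (cf. LAC_HASH_SHA1_BLOCK_SIZE)

def hmac_precompute(key: bytes):
    # Pad the session key to the block size, then XOR with ipad/opad.
    # Keys longer than the block size must first be hashed (RFC 2104);
    # this sketch assumes a short key.
    key = key.ljust(BLOCK, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)  # cf. LAC_HASH_IPAD_BYTE
    opad = bytes(b ^ 0x5c for b in key)  # cf. LAC_HASH_OPAD_BYTE
    # One-off "first partial packet": hash the XORed pads and keep the states.
    return hashlib.sha1(ipad), hashlib.sha1(opad)

def hmac_perform(inner_pre, outer_pre, msg: bytes) -> bytes:
    # Per-message path: resume from the precomputed states; the key is
    # never touched again, saving two block-size hashes per request.
    inner = inner_pre.copy()
    inner.update(msg)
    outer = outer_pre.copy()
    outer.update(inner.digest())
    return outer.digest()

key = b"session-key"
inner_pre, outer_pre = hmac_precompute(key)        # once per session
tag = hmac_perform(inner_pre, outer_pre, b"payload")
assert tag == hmac.new(key, b"payload", hashlib.sha1).digest()
```

The final assertion is exactly the equivalence the precompute relies on: resuming from the stored ipad/opad states yields the same result as computing full HMAC from scratch.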
+ * + * The precompute operations use a different LAC command ID and thus have a + * different route in the response path to the symmetric code. In this + * precompute callback function, the output of the precompute operation is + * copied into the content descriptor for the session being registered. + * + * c. AES CCM Precomputes\n + * The precomputes for AES CCM are trivial, i.e. there is no need to perform + * a cipher or a digest operation. Instead, the key is stored directly in + * the state2 field. + * + * d. AES GCM Precomputes\n + * As with AES XCBC precomputes, a cipher operation will be used to generate + * the precomputed result for AES GCM. In this case the Galois Hash + * Multiplier (H) must be derived and stored in the state2 field. H is + * derived by encrypting a 16-byte block of zeroes with the + * cipher/authentication key, using AES in ECB mode. + * + * Key size for Auth algorithms\n + * Min Size\n + * RFC 2104 states "The key for HMAC can be of any length. However, less than + * L bytes is strongly discouraged as it would decrease the security strength + * of the function." + * + * FIPS 198a states "The size of the key, K, shall be equal to or greater than + * L/2, where L is the size of the hash function output." + * + * RFC 4434 states "If the key has fewer than 128 bits, lengthen it to exactly + * 128 bits by padding it on the right with zero bits." + * + * A key length of 0 upwards is accepted. It is up to the client to pass in a + * key that complies with the standard they wish to support. + * + * Max Size\n + * RFC 2104 section 2 states: "Applications that use keys longer than B bytes + * will first hash the key using H and then use the resultant L byte string + * as the actual key to HMAC." + * + * RFC 4434 section 2 states: + * "If the key is 129 bits or longer, shorten it to exactly 128 bits + * by performing the steps in AES-XCBC-PRF-128 (that is, the + * algorithm described in this document).
In that re-application of + * this algorithm, the key is 128 zero bits; the message is the + * too-long current key." + * + * We push this up to the client. They need to do the hash operation through + * the LAC API if the key is greater than the block size of the algorithm. This + * will reduce the key size to the digest size of the algorithm. + * + * RFC 3566 section 4 states: + * AES-XCBC-MAC-96 is a secret key algorithm. For use with either ESP or + * AH a fixed key length of 128-bits MUST be supported. Key lengths + * other than 128-bits MUST NOT be supported (i.e., only 128-bit keys are + * to be used by AES-XCBC-MAC-96). + * + * In this case it is up to the client to provide a key that complies with + * the standards, i.e. exactly 128 bits in size. + * + * + * HMAC-MD5-96 and HMAC-SHA1-96\n + * HMAC-MD5-96 and HMAC-SHA1-96 are defined as requirements by Look Aside + * IPsec. The differences between HMAC-SHA1 and HMAC-SHA1-96 are that the + * digest produced is truncated and there are strict requirements on the + * size of the key that is used. + * + * They are supported in LAC by HMAC-MD5 and HMAC-SHA1. The field + * \ref CpaCySymHashSetupData::digestResultLenInBytes in the LAC API + * needs to be set to 12 bytes. There are also requirements regarding + * the key size. It is up to the client to ensure the key size meets the + * requirements of the standards they are using. + * + * RFC 2403: HMAC-MD5-96 Key lengths other than 128-bits MUST NOT be supported. + * HMAC-MD5-96 produces a 128-bit authenticator value. For use with either + * ESP or AH, a truncated value using the first 96 bits MUST be supported. + * + * RFC 2404: HMAC-SHA1-96 Key lengths other than 160 bits MUST NOT be + * supported. HMAC-SHA-1-96 produces a 160-bit authenticator value. For use + * with either ESP or AH, a truncated value using the first 96 bits MUST be + * supported.
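The truncated variants differ from plain HMAC only in keeping the first 96 bits (12 bytes) of the digest, which is what setting digestResultLenInBytes to 12 corresponds to. A small Python sketch (illustrative only; the key and message are made up, and the key-length check mirrors the RFC 2404 text rather than any driver logic):

```python
import hashlib
import hmac

def hmac_sha1_96(key: bytes, msg: bytes) -> bytes:
    # RFC 2404: key lengths other than 160 bits MUST NOT be supported.
    if len(key) != 20:
        raise ValueError("HMAC-SHA1-96 requires a 160-bit key")
    # Full HMAC-SHA1, then keep the first 96 bits (12 bytes).
    return hmac.new(key, msg, hashlib.sha1).digest()[:12]

tag = hmac_sha1_96(b"0123456789abcdefghij", b"esp packet")
assert len(tag) == 12
```

Nothing hash-specific changes on the device side; the truncation is purely a matter of how many digest bytes the caller asks for and the key-size policy is enforced by the client.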
+ * + * Out of place operations + * When verify is disabled, the digest will be written to the destination + * buffer. When verify is enabled, the digest calculated is compared to the + * digest stored in the source buffer. + * + * Partial Packets + * Partial packets are handled in the \ref LacSym "Symmetric" component for + * the request. The hash callback function handles the update of the state + * in the callback. + * + * + * @lld_process_context + * + * Session Register Sequence Diagram: For hash modes plain and nested. + * \msc + * APP [label="Application"], SYM [label="Symmetric LAC"], + * Achain [label="Alg chain"], Hash, SQAT [label="Symmetric QAT"]; + * + * APP=>SYM [ label = "cpaCySymInitSession(cbFunc)", + * URL="\ref cpaCySymInitSession()"] ; + * SYM=>SYM [ label = "LacSymSession_ParamCheck()", + * URL="\ref LacSymSession_ParamCheck()"]; + * SYM=>Achain [ label = "LacAlgChain_SessionInit()", + * URL="\ref LacAlgChain_SessionInit()"]; + * Achain=>Hash [ label = "LacHash_HashContextCheck()", + * URL="\ref LacHash_HashContextCheck()"]; + * Achain<SQAT [ label = "LacSymQat_HashContentDescInit()", + * URL="\ref LacSymQat_HashContentDescInit()"]; + * Achain<Hash [ label = "LacHash_StatePrefixAadBufferInit()", + * URL="\ref LacHash_StatePrefixAadBufferInit()"]; + * Hash=>SQAT [ label = "LacSymQat_HashStatePrefixAadBufferSizeGet()", + * URL="\ref LacSymQat_HashStatePrefixAadBufferSizeGet()"]; + * Hash<SQAT [ label = "LacSymQat_HashStatePrefixAadBufferPopulate()", + * URL="\ref LacSymQat_HashStatePrefixAadBufferPopulate()"]; + * Hash<SYM [label = "LAC_SYM_STAT_INC", URL="\ref LAC_SYM_STAT_INC"]; + * APP<SYM [ label = "cpaCySymPerformOp()", + * URL="\ref cpaCySymPerformOp()"] ; + * SYM=>SYM [ label = "LacSymPerform_BufferParamCheck()", + * URL="\ref LacSymPerform_BufferParamCheck()"]; + * SYM=>Achain [ label = "LacAlgChain_Perform()", + * URL="\ref LacAlgChain_Perform()"]; + * Achain=>Achain [ label = "Lac_MemPoolEntryAlloc()", + * URL="\ref 
Lac_MemPoolEntryAlloc()"]; + * Achain=>SQAT [ label = "LacSymQat_packetTypeGet()", + * URL="\ref LacSymQat_packetTypeGet()"]; + * Achain<Achain [ label = "LacBuffDesc_BufferListTotalSizeGet()", + * URL="\ref LacBuffDesc_BufferListTotalSizeGet()"]; + * Achain=>Hash [ label = "LacHash_PerformParamCheck()", + * URL = "\ref LacHash_PerformParamCheck()"]; + * Achain<SQAT [ label = "LacSymQat_HashRequestParamsPopulate()", + * URL="\ref LacSymQat_HashRequestParamsPopulate()"]; + * Achain<Achain [ label = "LacBuffDesc_BufferListDescWrite()", + * URL="\ref LacBuffDesc_BufferListDescWrite()"]; + * Achain=>SQAT [ label = "SalQatMsg_CmnMsgAndReqParamsPopulate()", + * URL="\ref SalQatMsg_CmnMsgAndReqParamsPopulate()"]; + * Achain<SYM [ label = "LacSymQueue_RequestSend()", + * URL="\ref LacSymQueue_RequestSend()"]; + * SYM=>QATCOMMS [ label = "QatComms_MsgSend()", + * URL="\ref QatComms_MsgSend()"]; + * SYM<SYM [label = "LAC_SYM_STAT_INC", URL="\ref LAC_SYM_STAT_INC"]; + * APP<QATCOMMS [label ="QatComms_ResponseMsgHandler()", + * URL="\ref QatComms_ResponseMsgHandler()"]; + * QATCOMMS=>SQAT [label ="LacSymQat_SymRespHandler()", + * URL="\ref LacSymQat_SymRespHandler()"]; + * SQAT=>SYM [label="LacSymCb_ProcessCallback()", + * URL="\ref LacSymCb_ProcessCallback()"]; + * SYM=>SYM [label = "LacSymCb_ProcessCallbackInternal()", + * URL="\ref LacSymCb_ProcessCallbackInternal()"]; + * SYM=>SYM [label = "Lac_MemPoolEntryFree()", + * URL="\ref Lac_MemPoolEntryFree()"]; + * SYM=>SYM [label = "LAC_SYM_STAT_INC", URL="\ref LAC_SYM_STAT_INC"]; + * SYM=>APP [label="cbFunc"]; + * APP>>SYM [label="return"]; + * SYM>>SQAT [label="return"]; + * SQAT>>QATCOMMS [label="return"]; + * \endmsc + * + * @lld_end + * + *****************************************************************************/ + +/*****************************************************************************/ + +#ifndef LAC_SYM_HASH_H +#define LAC_SYM_HASH_H + +/* 
+****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ + +#include "lac_session.h" +#include "lac_buffer_desc.h" + +/** + ***************************************************************************** + * @ingroup LacHash + * Definition of callback function. + * + * @description + * This is the callback function prototype. The callback function is + * invoked when a hash precompute operation completes. + * + * @param[in] pCallbackTag Opaque value provided by user while making + * individual function call. + * + * @retval + * None + *****************************************************************************/ +typedef void (*lac_hash_precompute_done_cb_t)(void *pCallbackTag); + +/* + * WARNING: There are no checks done on the parameters of the functions in + * this file. The expected values of the parameters are documented and it is + * up to the caller to provide valid values. + */ + +/** +******************************************************************************* +* @ingroup LacHash +* validate the hash context +* +* @description +* The client populates the hash context in the session context structure. +* This is passed as a parameter to the session register API function and +* needs to be validated.
+* +* @param[in] pHashSetupData pointer to hash context structure +* +* @retval CPA_STATUS_SUCCESS Success +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter +* +*****************************************************************************/ +CpaStatus LacHash_HashContextCheck(CpaInstanceHandle instanceHandle, + const CpaCySymHashSetupData *pHashSetupData); + +/** + ****************************************************************************** + * @ingroup LacHash + * Populate the hash pre-compute data. + * + * @description + * This function populates the state1 and state2 fields with the hash + * pre-computes. This is only done for authentication. The state1 + * and state2 pointers must be set to point to the correct locations + * in the content descriptor where the precompute result(s) will be + * written, before this function is called. + * + * @param[in] instanceHandle Instance Handle + * @param[in] pSessionSetup pointer to session setup data + * @param[in] callbackFn Callback function which is invoked when + * the precompute operation is completed + * @param[in] pCallbackTag Opaque data which is passed back to the user + * as a parameter in the callback function + * @param[out] pWorkingBuffer Pointer to working buffer, sufficient memory + * must be allocated by the caller for this. + * Assumption that this is 8 byte aligned. + * @param[out] pState1 pointer to State 1 in content descriptor + * @param[out] pState2 pointer to State 2 in content descriptor + * + * @retval CPA_STATUS_SUCCESS Success + * @retval CPA_STATUS_RETRY Retry the operation. 
+ * @retval CPA_STATUS_RESOURCE Error Allocating memory + * @retval CPA_STATUS_FAIL Operation Failed + * + *****************************************************************************/ +CpaStatus LacHash_PrecomputeDataCreate(const CpaInstanceHandle instanceHandle, + CpaCySymSessionSetupData *pSessionSetup, + lac_hash_precompute_done_cb_t callbackFn, + void *pCallbackTag, + Cpa8U *pWorkingBuffer, + Cpa8U *pState1, + Cpa8U *pState2); + +/** + ****************************************************************************** + * @ingroup LacHash + * populate the hash state prefix aad buffer. + * + * @description + * This function populates the hash state prefix aad buffer. This function + * is not called for CCM/GCM operations as the AAD data varies per request + * and is stored in the cookie as opposed to the session descriptor. + * + * @param[in] pHashSetupData pointer to hash setup structure + * @param[in] pHashControlBlock pointer to hash control block + * @param[in] qatHashMode QAT Mode for hash + * @param[in] pHashStateBuffer pointer to hash state prefix aad buffer + * @param[in] pHashStateBufferInfo Pointer to hash state prefix buffer info + * + * @retval CPA_STATUS_SUCCESS Success + * @retval CPA_STATUS_FAIL Operation Failed + * + *****************************************************************************/ +CpaStatus LacHash_StatePrefixAadBufferInit( + sal_service_t *pService, + const CpaCySymHashSetupData *pHashSetupData, + icp_qat_la_bulk_req_ftr_t *pHashControlBlock, + icp_qat_hw_auth_mode_t qatHashMode, + Cpa8U *pHashStateBuffer, + lac_sym_qat_hash_state_buffer_info_t *pHashStateBufferInfo); + +/** +******************************************************************************* +* @ingroup LacHash +* Check parameters for a hash perform operation +* +* @description +* This function checks the parameters for a hash perform operation. +* +* @param[in] pSessionDesc Pointer to session descriptor. +* @param[in] pOpData Pointer to request parameters. 
+* @param[in] srcPktSize Total size of the Buffer List +* @param[in] pVerifyResult Pointer to user flag +* +* @retval CPA_STATUS_SUCCESS Success +* @retval CPA_STATUS_INVALID_PARAM Invalid Parameter +* +*****************************************************************************/ +CpaStatus LacHash_PerformParamCheck(CpaInstanceHandle instanceHandle, + lac_session_desc_t *pSessionDesc, + const CpaCySymOpData *pOpData, + Cpa64U srcPktSize, + const CpaBoolean *pVerifyResult); + +/** +******************************************************************************* +* @ingroup LacHash +* Perform hash precompute operation for HMAC +* +* @description +* This function sends 2 requests to the CPM for the HMAC precompute +* operations. The results of the ipad and opad state calculations +* are copied into pState1 and pState2 (e.g. these may be the state1 and +* state2 buffers in a hash content descriptor) and when +* the final operation has completed the condition passed as a param to +* this function is set to true. +* +* This function performs the XORing of the IPAD and OPAD constants with +* the key (which was padded to the block size of the algorithm). +* +* @param[in] instanceHandle Instance Handle +* @param[in] hashAlgorithm Hash Algorithm +* @param[in] authKeyLenInBytes Length of Auth Key +* @param[in] pAuthKey Pointer to Auth Key +* @param[out] pWorkingMemory Pointer to working memory that is carved +* up and used in the pre-compute operations. +* Assumption that this is 8 byte aligned. +* @param[out] pState1 Pointer to State 1 in content descriptor +* @param[out] pState2 Pointer to State 2 in content descriptor +* @param[in] callbackFn Callback function which is invoked when +* the precompute operation is completed +* @param[in] pCallbackTag Opaque data which is passed back to the user +* as a parameter in the callback function +* +* @retval CPA_STATUS_SUCCESS Success +* @retval CPA_STATUS_RETRY Retry the operation.
+* @retval CPA_STATUS_FAIL Operation Failed +* +*****************************************************************************/ +CpaStatus LacSymHash_HmacPreComputes(CpaInstanceHandle instanceHandle, + CpaCySymHashAlgorithm hashAlgorithm, + Cpa32U authKeyLenInBytes, + Cpa8U *pAuthKey, + Cpa8U *pWorkingMemory, + Cpa8U *pState1, + Cpa8U *pState2, + lac_hash_precompute_done_cb_t callbackFn, + void *pCallbackTag); + +/** +******************************************************************************* + * @ingroup LacHash + * Perform hash precompute operation for XCBC MAC and GCM + * + * @description + * This function sends 1 request to the CPM for the precompute operation + * based on an AES ECB cipher. The result of the calculation is copied + * into pState (this may be a pointer to the State 2 buffer in a Hash + * content descriptor) and when the operation has completed the condition + * passed as a param to this function is set to true. + * + * @param[in] instanceHandle Instance Handle + * @param[in] hashAlgorithm Hash Algorithm + * @param[in] authKeyLenInBytes Length of Auth Key + * @param[in] pAuthKey Auth Key + * @param[out] pWorkingMemory Pointer to working memory that is carved + * up and used in the pre-compute operations. + * Assumption that this is 8 byte aligned. + * @param[out] pState Pointer to output state + * @param[in] callbackFn Callback function which is invoked when + * the precompute operation is completed + * @param[in] pCallbackTag Opaque data which is passed back to the user + * as a parameter in the callback function + * + * @retval CPA_STATUS_SUCCESS Success + * @retval CPA_STATUS_RETRY Retry the operation.
+ * @retval CPA_STATUS_FAIL Operation Failed + * + *****************************************************************************/ +CpaStatus LacSymHash_AesECBPreCompute(CpaInstanceHandle instanceHandle, + CpaCySymHashAlgorithm hashAlgorithm, + Cpa32U authKeyLenInBytes, + Cpa8U *pAuthKey, + Cpa8U *pWorkingMemory, + Cpa8U *pState, + lac_hash_precompute_done_cb_t callbackFn, + void *pCallbackTag); + +/** +******************************************************************************* +* @ingroup LacHash +* initialise data structures for the hash precompute operations +* +* @description +* This function registers the precompute callback handler function, which +* is different from the default one used by symmetric. Content descriptors +* are preallocated for the HMAC precomputes as they are constant for these +* operations. +* +* @retval CPA_STATUS_SUCCESS Success +* @retval CPA_STATUS_RESOURCE Error allocating memory +* +*****************************************************************************/ +CpaStatus LacSymHash_HmacPrecompInit(CpaInstanceHandle instanceHandle); + +/** +******************************************************************************* +* @ingroup LacHash +* free resources allocated for the precompute operations +* +* @description +* Free up the memory allocated at init time for the content descriptors +* that were allocated for the HMAC precompute operations.
+* +* @return none +* +*****************************************************************************/ +void LacSymHash_HmacPrecompShutdown(CpaInstanceHandle instanceHandle); + +void LacSync_GenBufListVerifyCb(void *pCallbackTag, + CpaStatus status, + CpaCySymOp operationType, + void *pOpData, + CpaBufferList *pDstBuffer, + CpaBoolean opResult); + +#endif /* LAC_SYM_HASH_H */ Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash_defs.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash_defs.h @@ -0,0 +1,344 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_hash_defs.h + * + * @defgroup LacHashDefs Hash Definitions + * + * @ingroup LacHash + * + * Constants for hash algorithms + * + ***************************************************************************/ + +#ifndef LAC_SYM_HASH_DEFS_H +#define LAC_SYM_HASH_DEFS_H + +/* Constant for MD5 algorithm */ +#define LAC_HASH_MD5_BLOCK_SIZE 64 +/**< @ingroup LacHashDefs + * MD5 block size in bytes */ +#define LAC_HASH_MD5_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * MD5 digest length in bytes */ +#define LAC_HASH_MD5_STATE_SIZE 16 +/**< @ingroup LacHashDefs + * MD5 state size */ + +/* Constants for SHA1 algorithm */ +#define LAC_HASH_SHA1_BLOCK_SIZE 64 +/**< @ingroup LacHashDefs + * SHA1 Block size in bytes */ +#define LAC_HASH_SHA1_DIGEST_SIZE 20 +/**< @ingroup LacHashDefs + * SHA1 digest length in bytes */ +#define LAC_HASH_SHA1_STATE_SIZE 20 +/**< @ingroup LacHashDefs + * SHA1 state size */ + +/* Constants for SHA224 algorithm */ +#define LAC_HASH_SHA224_BLOCK_SIZE 64 +/**< @ingroup LacHashDefs + * SHA224 block size in bytes */ +#define LAC_HASH_SHA224_DIGEST_SIZE 28 +/**< @ingroup LacHashDefs + * SHA224 digest length in bytes */ +#define 
LAC_HASH_SHA224_STATE_SIZE 32 +/**< @ingroup LacHashDefs + * SHA224 state size */ + +/* Constants for SHA256 algorithm */ +#define LAC_HASH_SHA256_BLOCK_SIZE 64 +/**< @ingroup LacHashDefs + * SHA256 block size in bytes */ +#define LAC_HASH_SHA256_DIGEST_SIZE 32 +/**< @ingroup LacHashDefs + * SHA256 digest length */ +#define LAC_HASH_SHA256_STATE_SIZE 32 +/**< @ingroup LacHashDefs + * SHA256 state size */ + +/* Constants for SHA384 algorithm */ +#define LAC_HASH_SHA384_BLOCK_SIZE 128 +/**< @ingroup LacHashDefs + * SHA384 block size in bytes */ +#define LAC_HASH_SHA384_DIGEST_SIZE 48 +/**< @ingroup LacHashDefs + * SHA384 digest length in bytes */ +#define LAC_HASH_SHA384_STATE_SIZE 64 +/**< @ingroup LacHashDefs + * SHA384 state size */ + +/* Constants for SHA512 algorithm */ +#define LAC_HASH_SHA512_BLOCK_SIZE 128 +/**< @ingroup LacHashDefs + * SHA512 block size in bytes */ +#define LAC_HASH_SHA512_DIGEST_SIZE 64 +/**< @ingroup LacHashDefs + * SHA512 digest length in bytes */ +#define LAC_HASH_SHA512_STATE_SIZE 64 +/**< @ingroup LacHashDefs + * SHA512 state size */ + +/* Constants for SHA3_224 algorithm */ +#define LAC_HASH_SHA3_224_BLOCK_SIZE 144 +/**< @ingroup LacHashDefs + * SHA3_224 block size in bytes */ +#define LAC_HASH_SHA3_224_DIGEST_SIZE 28 +/**< @ingroup LacHashDefs + * SHA3_224 digest length in bytes */ +#define LAC_HASH_SHA3_224_STATE_SIZE 28 +/**< @ingroup LacHashDefs + * SHA3_224 state size */ + +/* Constants for SHA3_256 algorithm */ +#define LAC_HASH_SHA3_256_BLOCK_SIZE 136 +/**< @ingroup LacHashDefs + * SHA3_256 block size in bytes */ +#define LAC_HASH_SHA3_256_DIGEST_SIZE 32 +/**< @ingroup LacHashDefs + * SHA3_256 digest length in bytes */ +#define LAC_HASH_SHA3_256_STATE_SIZE 32 +/**< @ingroup LacHashDefs + * SHA3_256 state size */ + +/* Constants for SHA3_384 algorithm */ +#define LAC_HASH_SHA3_384_BLOCK_SIZE 104 +/**< @ingroup LacHashDefs + * SHA3_384 block size in bytes */ +#define LAC_HASH_SHA3_384_DIGEST_SIZE 48 +/**< @ingroup LacHashDefs + * SHA3_384 digest length in bytes */ +#define LAC_HASH_SHA3_384_STATE_SIZE 48 +/**< @ingroup LacHashDefs + * SHA3_384 state size */ + +/* Constants for SHA3_512 algorithm */ +#define LAC_HASH_SHA3_512_BLOCK_SIZE 72 +/**< @ingroup LacHashDefs + * SHA3_512 block size in bytes */ +#define LAC_HASH_SHA3_512_DIGEST_SIZE 64 +/**< @ingroup LacHashDefs + * SHA3_512 digest length in bytes */ +#define LAC_HASH_SHA3_512_STATE_SIZE 64 +/**< @ingroup LacHashDefs + * SHA3_512 state size */ + +/* Constants for SHAKE_128 algorithm */ +#define LAC_HASH_SHAKE_128_BLOCK_SIZE 168 +/**< @ingroup LacHashDefs + * SHAKE_128 block size in bytes */ +#define LAC_HASH_SHAKE_128_DIGEST_SIZE 0xFFFFFFFF +/**< @ingroup LacHashDefs + * SHAKE_128 digest length in bytes ((2^32)-1) */ + +/* Constants for SHAKE_256 algorithm */ +#define LAC_HASH_SHAKE_256_BLOCK_SIZE 136 +/**< @ingroup LacHashDefs + * SHAKE_256 block size in bytes */ +#define LAC_HASH_SHAKE_256_DIGEST_SIZE 0xFFFFFFFF +/**< @ingroup LacHashDefs + * SHAKE_256 digest length in bytes ((2^32)-1) */ + +/* Constants for POLY algorithm */ +#define LAC_HASH_POLY_BLOCK_SIZE 64 +/**< @ingroup LacHashDefs + * POLY block size in bytes */ +#define LAC_HASH_POLY_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * POLY digest length */ +#define LAC_HASH_POLY_STATE_SIZE 0 +/**< @ingroup LacHashDefs + * POLY state size */ + +/* Constants for SM3 algorithm */ +#define LAC_HASH_SM3_BLOCK_SIZE 64 +/**< @ingroup LacHashDefs + * SM3 block size in bytes */ +#define LAC_HASH_SM3_DIGEST_SIZE 32 +/**< @ingroup LacHashDefs + * SM3 digest length */ +#define LAC_HASH_SM3_STATE_SIZE 32 +/**< @ingroup LacHashDefs + * SM3 state size */ + +/* Constants for XCBC precompute algorithm */ +#define LAC_HASH_XCBC_PRECOMP_KEY_NUM 3 +/**< @ingroup LacHashDefs + * The Pre-compute operation involves deriving 3 128-bit + * keys (K1, K2 and K3) */ + +/* Constants for XCBC MAC algorithm */ +#define LAC_HASH_XCBC_MAC_BLOCK_SIZE 16 +/**< @ingroup LacHashDefs
+ * XCBC_MAC block size in bytes */ +#define LAC_HASH_XCBC_MAC_128_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * XCBC_MAC_PRF_128 digest length in bytes */ + +/* Constants for AES CMAC algorithm */ +#define LAC_HASH_CMAC_BLOCK_SIZE 16 +/**< @ingroup LacHashDefs + * AES CMAC block size in bytes */ +#define LAC_HASH_CMAC_128_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * AES CMAC digest length in bytes */ + +/* constants for AES CCM */ +#define LAC_HASH_AES_CCM_BLOCK_SIZE 16 +/**< @ingroup LacHashDefs + * block size for CBC-MAC part of CCM */ +#define LAC_HASH_AES_CCM_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * untruncated size of authentication field */ + +/* constants for AES GCM */ +#define LAC_HASH_AES_GCM_BLOCK_SIZE 16 +/**< @ingroup LacHashDefs + * block size for Galois Hash 128 part of GCM */ +#define LAC_HASH_AES_GCM_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * untruncated size of authentication field */ + +/* constants for KASUMI F9 */ +#define LAC_HASH_KASUMI_F9_BLOCK_SIZE 8 +/**< @ingroup LacHashDefs + * KASUMI_F9 block size in bytes */ +#define LAC_HASH_KASUMI_F9_DIGEST_SIZE 4 +/**< @ingroup LacHashDefs + * KASUMI_F9 digest size in bytes */ + +/* constants for SNOW3G UIA2 */ +#define LAC_HASH_SNOW3G_UIA2_BLOCK_SIZE 8 +/**< @ingroup LacHashDefs + * SNOW3G UIA2 block size in bytes */ +#define LAC_HASH_SNOW3G_UIA2_DIGEST_SIZE 4 +/**< @ingroup LacHashDefs + * SNOW3G UIA2 digest size in bytes */ + +/* constants for AES CBC MAC */ +#define LAC_HASH_AES_CBC_MAC_BLOCK_SIZE 16 +/**< @ingroup LacHashDefs + * AES CBC MAC block size in bytes */ +#define LAC_HASH_AES_CBC_MAC_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * AES CBC MAC digest size in bytes */ + +#define LAC_HASH_ZUC_EIA3_BLOCK_SIZE 4 +/**< @ingroup LacHashDefs + * ZUC EIA3 block size in bytes */ +#define LAC_HASH_ZUC_EIA3_DIGEST_SIZE 4 +/**< @ingroup LacHashDefs + * ZUC EIA3 digest size in bytes */ + +/* constants for AES GCM ICV allowed sizes */ +#define LAC_HASH_AES_GCM_ICV_SIZE_8 8 +#define 
LAC_HASH_AES_GCM_ICV_SIZE_12 12 +#define LAC_HASH_AES_GCM_ICV_SIZE_16 16 + +/* constants for AES CCM ICV allowed sizes */ +#define LAC_HASH_AES_CCM_ICV_SIZE_MIN 4 +#define LAC_HASH_AES_CCM_ICV_SIZE_MAX 16 + +/* constants for authentication algorithms */ +#define LAC_HASH_IPAD_BYTE 0x36 +/**< @ingroup LacHashDefs + * Ipad Byte */ +#define LAC_HASH_OPAD_BYTE 0x5c +/**< @ingroup LacHashDefs + * Opad Byte */ + +#define LAC_HASH_IPAD_4_BYTES 0x36363636 +/**< @ingroup LacHashDefs + * Ipad for 4 Bytes */ +#define LAC_HASH_OPAD_4_BYTES 0x5c5c5c5c +/**< @ingroup LacHashDefs + * Opad for 4 Bytes */ + +/* Key Modifier (KM) value used in Kasumi algorithm in F9 mode to XOR + * Integrity Key (IK) */ +#define LAC_HASH_KASUMI_F9_KEY_MODIFIER_4_BYTES 0xAAAAAAAA +/**< @ingroup LacHashDefs + * Kasumi F9 Key Modifier for 4 bytes */ + +#define LAC_SYM_QAT_HASH_IV_REQ_MAX_SIZE_QW 2 +/**< @ingroup LacSymQatHash + * Maximum size of IV embedded in the request. + * This is set to 2, namely 4 LONGWORDS. */ + +#define LAC_SYM_QAT_HASH_STATE1_MAX_SIZE_BYTES LAC_HASH_SHA512_BLOCK_SIZE +/**< @ingroup LacSymQatHash + * Maximum size of state1 in the hash setup block of the content descriptor. + * This is set to the block size of SHA512. */ + +#define LAC_SYM_QAT_HASH_STATE2_MAX_SIZE_BYTES LAC_HASH_SHA512_BLOCK_SIZE +/**< @ingroup LacSymQatHash + * Maximum size of state2 in the hash setup block of the content descriptor. + * This is set to the block size of SHA512. */ + +#define LAC_MAX_INNER_OUTER_PREFIX_SIZE_BYTES 255 +/**< Maximum size of the inner and outer prefix for nested hashing operations. 
+ * This comes from the maximum size supported by the accelerator, which stores + * the size in an 8-bit field */ + +#define LAC_MAX_HASH_STATE_STORAGE_SIZE \ + (sizeof(icp_qat_hw_auth_counter_t) + LAC_HASH_SHA512_STATE_SIZE) +/**< Maximum size of the hash state storage section of the hash state prefix + * buffer */ + +#define LAC_MAX_HASH_STATE_BUFFER_SIZE_BYTES \ + LAC_MAX_HASH_STATE_STORAGE_SIZE + \ + (LAC_ALIGN_POW2_ROUNDUP(LAC_MAX_INNER_OUTER_PREFIX_SIZE_BYTES, \ + LAC_QUAD_WORD_IN_BYTES) * \ + 2) +/**< Maximum size of the hash state prefix buffer for a nested hash with the + * maximum sized inner prefix and outer prefix */ + +#define LAC_MAX_AAD_SIZE_BYTES 256 +/**< Maximum size of AAD in bytes */ + +#define IS_HMAC_ALG(algorithm) \ + ((algorithm == CPA_CY_SYM_HASH_MD5) || \ + (algorithm == CPA_CY_SYM_HASH_SHA1) || \ + (algorithm == CPA_CY_SYM_HASH_SHA224) || \ + (algorithm == CPA_CY_SYM_HASH_SHA256) || \ + (algorithm == CPA_CY_SYM_HASH_SHA384) || \ + (algorithm == CPA_CY_SYM_HASH_SHA512) || \ + (algorithm == CPA_CY_SYM_HASH_SHA3_224) || \ + (algorithm == CPA_CY_SYM_HASH_SHA3_256) || \ + (algorithm == CPA_CY_SYM_HASH_SHA3_384) || \ + (algorithm == CPA_CY_SYM_HASH_SHA3_512) || \ + (algorithm == CPA_CY_SYM_HASH_SM3)) +/**< @ingroup LacSymQatHash + * Macro to detect if the hash algorithm is an HMAC algorithm */ + +#define IS_HASH_MODE_1(qatHashMode) (ICP_QAT_HW_AUTH_MODE1 == qatHashMode) +/**< @ingroup LacSymQatHash + * Macro to detect if the QAT hash mode is set to 1 (precompute mode); + * only used with algorithms in hash mode CPA_CY_SYM_HASH_MODE_AUTH */ + +#define IS_HASH_MODE_2(qatHashMode) (ICP_QAT_HW_AUTH_MODE2 == qatHashMode) +/**< @ingroup LacSymQatHash + * Macro to detect if the QAT hash mode is set to 2. 
This is used for TLS and + * mode 2 HMAC (no precompute mode) */ + +#define IS_HASH_MODE_2_AUTH(qatHashMode, hashMode) \ + ((IS_HASH_MODE_2(qatHashMode)) && \ + (CPA_CY_SYM_HASH_MODE_AUTH == hashMode)) +/**< @ingroup LacSymQatHash + * Macro to check whether the QAT hash mode is set to 2 and the hash mode is + * Auth. This applies to HMAC algorithms (no precompute). This is used + * to differentiate between TLS and HMAC */ + +#define IS_HASH_MODE_2_NESTED(qatHashMode, hashMode) \ + ((IS_HASH_MODE_2(qatHashMode)) && \ + (CPA_CY_SYM_HASH_MODE_NESTED == hashMode)) +/**< @ingroup LacSymQatHash + * Macro to check whether the QAT hash mode is set to 2 and the LAC hash mode + * is Nested. This applies to TLS. This is used to differentiate between + * TLS and HMAC */ + +#endif /* LAC_SYM_HASH_DEFS_H */ Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash_precomputes.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash_precomputes.h @@ -0,0 +1,176 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_hash_precomputes.h + * + * @defgroup LacHashDefs Hash Definitions + * + * @ingroup LacHash + * + * Constants for hash algorithms + * + ***************************************************************************/ +#ifndef LAC_SYM_HASH_PRECOMPUTES_H +#define LAC_SYM_HASH_PRECOMPUTES_H + +#include "lac_sym_hash.h" + +#define LAC_SYM_AES_CMAC_RB_128 0x87 /* constant used for */ + /* CMAC calculation */ + +#define LAC_SYM_HASH_MSBIT_MASK 0x80 /* Mask to check MSB top bit */ + /* zero or one */ + +#define LAC_SINGLE_BUFFER_HW_META_SIZE \ + (sizeof(icp_buffer_list_desc_t) + sizeof(icp_flat_buffer_desc_t)) +/**< size of memory to allocate for the HW buffer list that is sent to the + * QAT */ + +#define 
LAC_SYM_HASH_PRECOMP_MAX_WORKING_BUFFER \ + ((sizeof(lac_sym_hash_precomp_op_data_t) * 2) + \ + sizeof(lac_sym_hash_precomp_op_t)) +/**< maximum size of the working data for the HMAC precompute operations + * + * Maximum size of lac_sym_hash_precomp_op_data_t is 264 bytes. For hash + * precomputes there are 2 of these structures and a further + * lac_sym_hash_precomp_op_t structure required. This comes to a total of 536 + * bytes. + * For the asynchronous version of the precomputes, the memory for the hash + * state prefix buffer is used as the working memory. There are 584 bytes + * allocated for the hash state prefix buffer, which is enough to + * carve up for the precomputes. + */ + +#define LAC_SYM_HASH_PRECOMP_MAX_AES_ECB_DATA \ + ((ICP_QAT_HW_AES_128_KEY_SZ) * (3)) +/**< Maximum size for the data that an AES ECB precompute is generated on */ + +/** + ***************************************************************************** + * @ingroup LacHashDefs + * Precompute type enum + * @description + * Enum used to distinguish between precompute types + * + *****************************************************************************/ +typedef enum { + LAC_SYM_HASH_PRECOMP_HMAC = 1, + /**< HMAC precompute operation.
Copy state from hash state buffer */ + LAC_SYM_HASH_PRECOMP_AES_ECB, + /**< XCBC/GCM precompute. Copy state from data buffer */ +} lac_sym_hash_precomp_type_t; + +/** + ***************************************************************************** + * @ingroup LacHashDefs + * overall precompute management structure + * @description + * structure used to manage the precompute operations for a session + * + *****************************************************************************/ +typedef struct lac_sym_hash_precomp_op_s { + lac_hash_precompute_done_cb_t callbackFn; + /**< Callback function to be invoked when the final precompute completes + */ + + void *pCallbackTag; + /**< Opaque data to be passed back as a parameter in the callback */ + + QatUtilsAtomic opsPending; + /**< counter used to determine if the current precompute is the + * final one. */ + +} lac_sym_hash_precomp_op_t; + +/** + ***************************************************************************** + * @ingroup LacHashDefs + * HMAC precompute structure as used by the QAT + * @description + * data used by the QAT for HMAC precomputes + * + * Must be allocated on an 8-byte aligned memory address.
+ * + *****************************************************************************/ +typedef struct lac_sym_hash_hmac_precomp_qat_s { + Cpa8U data[LAC_HASH_SHA512_BLOCK_SIZE]; + /**< data to be hashed - block size of data for the algorithm */ + /* NOTE: to save space we could have had the QAT overwrite + * this with the hash state storage */ + icp_qat_fw_la_auth_req_params_t hashReqParams; + /**< Request parameters as read in by the QAT */ + Cpa8U bufferDesc[LAC_SINGLE_BUFFER_HW_META_SIZE]; + /**< Buffer descriptor structure */ + Cpa8U hashStateStorage[LAC_MAX_HASH_STATE_STORAGE_SIZE]; + /**< Internal buffer where QAT writes the intermediate partial + * state that is used in the precompute */ +} lac_sym_hash_hmac_precomp_qat_t; + +/** + ***************************************************************************** + * @ingroup LacHashDefs + * AES ECB precompute structure as used by the QAT + * @description + * data used by the QAT for AES ECB precomputes + * + * Must be allocated on an 8-byte aligned memory address. + * + *****************************************************************************/ +typedef struct lac_sym_hash_aes_precomp_qat_s { + Cpa8U contentDesc[LAC_SYM_QAT_MAX_CIPHER_SETUP_BLK_SZ]; + /**< Content descriptor for a cipher operation */ + Cpa8U data[LAC_SYM_HASH_PRECOMP_MAX_AES_ECB_DATA]; + /**< The data to be ciphered is contained here and the result is + * written in place back into this buffer */ + icp_qat_fw_la_cipher_req_params_t cipherReqParams; + /**< Request parameters as read in by the QAT */ + Cpa8U bufferDesc[LAC_SINGLE_BUFFER_HW_META_SIZE]; + /**< Buffer descriptor structure */ +} lac_sym_hash_aes_precomp_qat_t; + +/** + ***************************************************************************** + * @ingroup LacHashDefs + * overall structure for managing a single precompute operation + * @description + * overall structure for managing a single precompute operation + * + * Must be allocated on an 8-byte aligned memory address.
+ * + *****************************************************************************/ +typedef struct lac_sym_hash_precomp_op_data_s { + sal_crypto_service_t *pInstance; + /**< Instance handle for the operation */ + Cpa8U reserved[4]; + /**< padding to align later structures on minimum 8-Byte address */ + lac_sym_hash_precomp_type_t opType; + /**< operation type to determine the precompute type in the callback */ + lac_sym_hash_precomp_op_t *pOpStatus; + /**< structure containing the counter and the condition for the overall + * precompute operation. This is a pointer because the memory structure + * may be shared between precomputes when there are more than 1 as in + * the + * case of HMAC */ + union { + lac_sym_hash_hmac_precomp_qat_t hmacQatData; + /**< Data sent to the QAT for hmac precomputes */ + lac_sym_hash_aes_precomp_qat_t aesQatData; + /**< Data sent to the QAT for AES ECB precomputes */ + } u; + + /**< ASSUMPTION: The above structures are 8 byte aligned if the overall + * struct is 8 byte aligned, as there are two 4 byte fields before this + * union */ + Cpa32U stateSize; + /**< Size of the state to be copied into the state pointer in the + * content + * descriptor */ + Cpa8U *pState; + /**< pointer to the state in the content descriptor where the result of + * the precompute should be copied to */ +} lac_sym_hash_precomp_op_data_t; + +#endif /* LAC_SYM_HASH_PRECOMPUTES_H */ Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_key.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_key.h @@ -0,0 +1,184 @@ +/*************************************************************************** + * + * + * + ***************************************************************************/ + +/** + ***************************************************************************** + * @file lac_sym_key.h + * + * @defgroup LacSymKey Key Generation + * + * @ingroup LacSym + * + * 
@lld_start + * + * @lld_overview + * + * Key generation component is responsible for SSL, TLS & MGF operations. All + * memory required for the keygen operations is taken from the keygen cookie + * structure, which is carved up as required. + * + * For SSL the QAT accelerates the nested hash function with MD5 as the + * outer hash and SHA1 as the inner hash. + * + * Refer to sections in draft-freier-ssl-version3-02.txt: + * 6.1 Asymmetric cryptographic computations - This refers to converting + * the pre-master secret to the master secret. + * 6.2.2 Converting the master secret into keys and MAC secrets - Using + * the master secret to generate the key material. + * + * For TLS the QAT accelerates the PRF function as described in + * rfc4346 - TLS version 1.1 (this obsoletes rfc2246 - TLS version 1.0) + * 5. HMAC and the pseudorandom function - For the TLS PRF and getting + * S1 and S2 from the secret. + * 6.3. Key calculation - For how the key material is generated + * 7.4.9. Finished - How the finished message uses the TLS PRF + * 8.1. Computing the master secret + * + * + * @lld_dependencies + * \ref LacSymQatHash: for building up hash content descriptor + * \ref LacMem: for virt to phys conversions + * + * @lld_initialisation + * The response handler is registered with Symmetric. The maximum SSL label is + * allocated. A structure is allocated containing all the TLS labels that + * are supported. On shutdown the memory for these structures is freed. + * + * @lld_module_algorithms + * @lld_process_context + * + * @lld_end + * + * + *****************************************************************************/ +#ifndef LAC_SYM_KEY_H_ +#define LAC_SYM_KEY_H_ + +#include "icp_qat_fw_la.h" +#include "cpa_cy_key.h" + +/**< @ingroup LacSymKey + * Label for SSL. Size is 136 bytes for 16 iterations, which can theoretically + * generate up to 256 bytes of output data.
QAT will generate a maximum of + * 255 bytes */ + +#define LAC_SYM_KEY_TLS_MASTER_SECRET_LABEL ("master secret") +/**< @ingroup LacSymKey + * Label for TLS Master Secret Key Derivation, as defined in RFC4346 */ + +#define LAC_SYM_KEY_TLS_KEY_MATERIAL_LABEL ("key expansion") +/**< @ingroup LacSymKey + * Label for TLS Key Material Generation, as defined in RFC4346. */ + +#define LAC_SYM_KEY_TLS_CLIENT_FIN_LABEL ("client finished") +/**< @ingroup LacSymKey + * Label for TLS Client finished Message, as defined in RFC4346. */ + +#define LAC_SYM_KEY_TLS_SERVER_FIN_LABEL ("server finished") +/**< @ingroup LacSymKey + * Label for TLS Server finished Message, as defined in RFC4346. */ + +/* +******************************************************************************* +* Define Constants and Macros for SSL, TLS and MGF +******************************************************************************* +*/ + +#define LAC_SYM_KEY_NO_HASH_BLK_OFFSET_QW 0 +/**< Used to indicate there is no hash block offset in the content descriptor + */ + +/* +******************************************************************************* +* Define Constant lengths for HKDF TLS v1.3 sublabels. 
+******************************************************************************* +*/ +#define HKDF_SUB_LABEL_KEY_LENGTH ((Cpa8U)13) +#define HKDF_SUB_LABEL_IV_LENGTH ((Cpa8U)12) +#define HKDF_SUB_LABEL_RESUMPTION_LENGTH ((Cpa8U)20) +#define HKDF_SUB_LABEL_FINISHED_LENGTH ((Cpa8U)18) +#define HKDF_SUB_LABELS_ALL \ + (CPA_CY_HKDF_SUBLABEL_KEY | CPA_CY_HKDF_SUBLABEL_IV | \ + CPA_CY_HKDF_SUBLABEL_RESUMPTION | CPA_CY_HKDF_SUBLABEL_FINISHED) +#define LAC_KEY_HKDF_SUBLABELS_NUM 4 +#define LAC_KEY_HKDF_DIGESTS 0 +#define LAC_KEY_HKDF_CIPHERS_MAX (CPA_CY_HKDF_TLS_AES_128_CCM_8_SHA256 + 1) +#define LAC_KEY_HKDF_SUBLABELS_MAX (LAC_KEY_HKDF_SUBLABELS_NUM + 1) + +/** + ****************************************************************************** + * @ingroup LacSymKey + * TLS label struct + * + * @description + * This structure is used to hold the various TLS labels. Each field is + * on an 8-byte boundary provided the structure itself is 8-byte aligned. + *****************************************************************************/ +typedef struct lac_sym_key_tls_labels_s { + Cpa8U masterSecret[ICP_QAT_FW_LA_TLS_LABEL_LEN_MAX]; + /**< Master secret label */ + Cpa8U keyMaterial[ICP_QAT_FW_LA_TLS_LABEL_LEN_MAX]; + /**< Key material label */ + Cpa8U clientFinished[ICP_QAT_FW_LA_TLS_LABEL_LEN_MAX]; + /**< client finished label */ + Cpa8U serverFinished[ICP_QAT_FW_LA_TLS_LABEL_LEN_MAX]; + /**< server finished label */ +} lac_sym_key_tls_labels_t; + +/** + ****************************************************************************** + * @ingroup LacSymKey + * TLS HKDF sub label struct + * + * @description + * This structure is used to hold the various TLS HKDF sub labels. + * Each field is on an 8-byte boundary.
+ *****************************************************************************/ +typedef struct lac_sym_key_tls_hkdf_sub_labels_s { + CpaCyKeyGenHKDFExpandLabel keySublabel256; + /**< CPA_CY_HKDF_SUBLABEL_KEY */ + CpaCyKeyGenHKDFExpandLabel ivSublabel256; + /**< CPA_CY_HKDF_SUBLABEL_IV */ + CpaCyKeyGenHKDFExpandLabel resumptionSublabel256; + /**< CPA_CY_HKDF_SUBLABEL_RESUMPTION */ + CpaCyKeyGenHKDFExpandLabel finishedSublabel256; + /**< CPA_CY_HKDF_SUBLABEL_FINISHED */ + CpaCyKeyGenHKDFExpandLabel keySublabel384; + /**< CPA_CY_HKDF_SUBLABEL_KEY */ + CpaCyKeyGenHKDFExpandLabel ivSublabel384; + /**< CPA_CY_HKDF_SUBLABEL_IV */ + CpaCyKeyGenHKDFExpandLabel resumptionSublabel384; + /**< CPA_CY_HKDF_SUBLABEL_RESUMPTION */ + CpaCyKeyGenHKDFExpandLabel finishedSublabel384; + /**< CPA_CY_HKDF_SUBLABEL_FINISHED */ + CpaCyKeyGenHKDFExpandLabel keySublabelChaChaPoly; + /**< CPA_CY_HKDF_SUBLABEL_KEY */ + CpaCyKeyGenHKDFExpandLabel ivSublabelChaChaPoly; + /**< CPA_CY_HKDF_SUBLABEL_IV */ + CpaCyKeyGenHKDFExpandLabel resumptionSublabelChaChaPoly; + /**< CPA_CY_HKDF_SUBLABEL_RESUMPTION */ + CpaCyKeyGenHKDFExpandLabel finishedSublabelChaChaPoly; + /**< CPA_CY_HKDF_SUBLABEL_FINISHED */ + Cpa64U sublabelPhysAddr256; + /**< Physical address of the SHA-256 subLabels */ + Cpa64U sublabelPhysAddr384; + /**< Physical address of the SHA-384 subLabels */ + Cpa64U sublabelPhysAddrChaChaPoly; + /**< Physical address of the ChaChaPoly subLabels */ +} lac_sym_key_tls_hkdf_sub_labels_t; + +/** + ****************************************************************************** + * @ingroup LacSymKey + * This function prints the stats to standard out. 
+ * + * @param[in] instanceHandle Instance handle + * @return None + * + *****************************************************************************/ +void LacKeygen_StatsShow(CpaInstanceHandle instanceHandle); + +#endif Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_partial.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_partial.h @@ -0,0 +1,121 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_partial.h + * + * @defgroup LacSymPartial Partial Packets + * + * @ingroup LacSymCommon + * + * Partial packet handling code + * + * @lld_start + * + * Partials In Flight\n + * The API states that for partial packets the client should not submit + * the next partial request until the callback for the current partial has + * been called. We have chosen to enforce this rather than letting the user + * proceed where they would get an incorrect digest or cipher result. + * + * Maintain a SpinLock for partials in flight per session. Try to acquire this + * SpinLock. If it can't be acquired, return an error straight away to the client + * as there is already a partial in flight. There is no blocking in the data + * path for this. + * + * By preventing any other partials from coming in while a partial is in flight + * we can check and change the state of the session without having to lock + * around it (we don't want to lock and block in the data path). The state + * of the session indicates the previous packet type that a request was + * successfully completed for. The last packet type is only updated for partial + * packets. This state determines the packet types that can be accepted. + * e.g. a last partial will not be accepted unless the previous packet was a + * partial.
By only allowing one partial packet to be in flight, there is no + * need to lock around the update of the previous packet type for the session. + * + * The ECB cipher mode ciphers each block separately. No state is maintained + * between blocks. There is no need to wait for the callback for the previous + * partial in ECB mode as the result of the previous partial has no impact on + * it. The API and our implementation only allow one partial packet to be in + * flight per session, therefore a partial packet request for ECB mode must + * be fully completed (i.e. callback called) before the next partial request + * can be issued. + * + * Partial Ordering\n + * The ordering that the user submits partial packets will be checked. + * (we could have let the user proceed where they would get an incorrect + * digest/cipher result but chose against this). + * + * -# Maintain the last packet type of a partial operation for the session. If + * there have been no previous partials, we will accept only first partials + * -# The state must be set to partial before we will accept a final partial. + * i.e. a partial request must have already completed. + * + * The last packet type is updated in the callback for partial packets as this + * is the only place we can guarantee that a partial packet operation has been + * completed. When a partial completes the state can be updated from FULL to + * PARTIAL. The SpinLock for partial packets in flight for the session can be + * unlocked at this point. On a final partial request the last packet type is + * reset back to FULL. NOTE: this is not done at the same time as the check in + * the perform function, as if an error occurred we would have to roll back the + * state. + * + * For Hash mode it is possible to interleave full and a single partial + * packet stream in a session as the hash state buffer is updated for partial + * packets. It is not touched by full packets.
For cipher mode, as the client + * manages the state, they can interleave full and a single partial packets. + * For ARC4, the state is managed internally and the packet type will always + * be set to partial internally. + * + * @lld_end + * + ***************************************************************************/ + +/***************************************************************************/ + +#ifndef LAC_SYM_PARTIAL_H +#define LAC_SYM_PARTIAL_H + +#include "cpa.h" +#include "cpa_cy_sym.h" + +/***************************************************************************/ + +/** +******************************************************************************* +* @ingroup LacSymPartial +* check if partial packet request is valid for a session +* +* @description +* This function checks to see if there is a partial packet request in +* flight and then if the partial state is correct +* +* @param[in] packetType Partial packet request +* @param[in] partialState Partial state of session +* +* @retval CPA_STATUS_SUCCESS Normal Operation +* @retval CPA_STATUS_INVALID_PARAM Invalid Parameter +* +*****************************************************************************/ +CpaStatus LacSym_PartialPacketStateCheck(CpaCySymPacketType packetType, + CpaCySymPacketType partialState); + +/** +******************************************************************************* +* @ingroup LacSymPartial +* update the state of the partial packet in a session +* +* @description +* This function is called in callback operation. 
It updates the state + * of a partial packet in a session and indicates that there is no + * longer a partial packet in flight for the session + * + * @param[in] packetType Partial packet request + * @param[out] pPartialState Pointer to partial state of session + * + *****************************************************************************/ +void LacSym_PartialPacketStateUpdate(CpaCySymPacketType packetType, + CpaCySymPacketType *pPartialState); + +#endif /* LAC_SYM_PARTIAL_H */ Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat.h @@ -0,0 +1,209 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_sym_qat.h + * + * @defgroup LacSymQat Symmetric QAT + * + * @ingroup LacSym + * + * Interfaces for populating the QAT structures for a symmetric operation + * + * @lld_start + * + * @lld_overview + * This file documents the interfaces for populating the QAT structures + * that are common for all symmetric operations. + * + * @lld_dependencies + * - \ref LacSymQatHash "Hash QAT Comms" Sym Qat commons for Hash + * - \ref LacSymQat_Cipher "Cipher QAT Comms" Sym Qat commons for Cipher + * - OSAL: logging + * - \ref LacMem "Memory" - Inline memory functions + * + * @lld_initialisation + * This component is initialised during the LAC initialisation sequence. It + * is called by the Symmetric Initialisation function. + * + * @lld_module_algorithms + * + * @lld_process_context + * Refer to \ref LacHash "Hash" and \ref LacCipher "Cipher" for sequence + * diagrams to see their interactions with this code.
+ * + * + * @lld_end + * + *****************************************************************************/ + +/*****************************************************************************/ + +#ifndef LAC_SYM_QAT_H +#define LAC_SYM_QAT_H + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" +#include "icp_accel_devices.h" +#include "icp_qat_fw_la.h" +#include "icp_qat_hw.h" +#include "lac_session.h" +#include "sal_qat_cmn_msg.h" +#include "lac_common.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ + +#define LAC_SYM_DEFAULT_QAT_PTR_TYPE QAT_COMN_PTR_TYPE_SGL +#define LAC_SYM_DP_QAT_PTR_TYPE QAT_COMN_PTR_TYPE_FLAT +#define LAC_SYM_KEY_QAT_PTR_TYPE QAT_COMN_PTR_TYPE_FLAT +/**< @ingroup LacSymQat + * LAC SYM Source & Destination buffer type (FLAT/SGL) */ + +#define LAC_QAT_SYM_REQ_SZ_LW 32 +#define SYM_TX_MSG_SIZE (LAC_QAT_SYM_REQ_SZ_LW * LAC_LONG_WORD_IN_BYTES) +#define NRBG_TX_MSG_SIZE (LAC_QAT_SYM_REQ_SZ_LW * LAC_LONG_WORD_IN_BYTES) + +#define LAC_QAT_SYM_RESP_SZ_LW 8 +#define SYM_RX_MSG_SIZE (LAC_QAT_SYM_RESP_SZ_LW * LAC_LONG_WORD_IN_BYTES) +#define NRBG_RX_MSG_SIZE (LAC_QAT_SYM_RESP_SZ_LW * LAC_LONG_WORD_IN_BYTES) + +/** + ******************************************************************************* + * @ingroup LacSymQat + * Symmetric crypto response handler + * + * @description + * This function handles the symmetric crypto response + * + * @param[in] trans_handle transport handle (if ICP_QAT_DBG set) + * @param[in] instanceHandle void* pRespMsg + * + * + *****************************************************************************/ +void LacSymQat_SymRespHandler(void *pRespMsg); + +/** + 
******************************************************************************* + * @ingroup LacSymQat + * Initialise the Symmetric QAT code + * + * @description + * This function initialises the symmetric QAT code + * + * @param[in] device Pointer to the acceleration device + * structure + * @param[in] instanceHandle Instance handle + * @param[in] numSymRequests Number of concurrent requests a pair + * (tx and rx) need to support + * + * @return CPA_STATUS_SUCCESS Operation successful + * @return CPA_STATUS_FAIL Initialisation failed + * + *****************************************************************************/ +CpaStatus LacSymQat_Init(CpaInstanceHandle instanceHandle); + +/** + ******************************************************************************* + * @ingroup LacSymQat + * Register a response handler function for a symmetric command ID + * + * @description + * This function registers a response handler function for a symmetric + * operation. + * + * Note: This operation should only be performed once by the init function + * of a component. There is no corresponding deregister function, but + * registering a NULL function pointer will have the same effect. There + * MUST not be any requests in flight when calling this function. + * + * @param[in] lacCmdId Command Id of operation + * @param[in] pCbHandler callback handler function + * + * @return None + * + *****************************************************************************/ +void LacSymQat_RespHandlerRegister(icp_qat_fw_la_cmd_id_t lacCmdId, + sal_qat_resp_handler_func_t pCbHandler); + +/** + ****************************************************************************** + * @ingroup LacSymQat + * get the QAT packet type + * + * @description + * This function returns the QAT packet type for a LAC packet type. The + * LAC packet type does not indicate a first partial. Therefore, for a
therefore for a + * partial request, the previous packet type needs to be looked at to + * figure out if the current partial request is a first partial. + * + * + * @param[in] packetType LAC Packet type + * @param[in] packetState LAC Previous Packet state + * @param[out] pQatPacketType Packet type using the QAT macros + * + * @return none + * + *****************************************************************************/ +void LacSymQat_packetTypeGet(CpaCySymPacketType packetType, + CpaCySymPacketType packetState, + Cpa32U *pQatPacketType); + +/** + ****************************************************************************** + * @ingroup LacSymQat + * Populate the command flags based on the packet type + * + * @description + * This function populates the following flags in the Symmetric Crypto + * service_specif_flags field of the common header of the request: + * - LA_PARTIAL + * - UPDATE_STATE + * - RET_AUTH_RES + * - CMP_AUTH_RES + * based on looking at the input params listed below. 
+ * + * @param[in] qatPacketType Packet type + * @param[in] cmdId Command Id + * @param[in] cipherAlgorithm Cipher Algorithm + * @param[out] pLaCommandFlags Command Flags + * + * @return none + * + *****************************************************************************/ +void LacSymQat_LaPacketCommandFlagSet(Cpa32U qatPacketType, + icp_qat_fw_la_cmd_id_t laCmdId, + CpaCySymCipherAlgorithm cipherAlgorithm, + Cpa16U *pLaCommandFlags, + Cpa32U ivLenInBytes); + +/** + ****************************************************************************** + * @ingroup LacSymQat + * + * + * @description + * defaults the common request service specific flags + * + * @param[in] laCmdFlags Common request service specific flags + * @param[in] symOp Type of operation performed e.g hash or cipher + * + * @return none + * + *****************************************************************************/ + +void LacSymQat_LaSetDefaultFlags(icp_qat_fw_serv_specif_flags *laCmdFlags, + CpaCySymOp symOp); + +#endif /* LAC_SYM_QAT_H */ Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_cipher.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_cipher.h @@ -0,0 +1,291 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_sym_qat_cipher.h + * + * @defgroup LacSymQat_Cipher Cipher QAT + * + * @ingroup LacSymQat + * + * external interfaces for populating QAT structures for cipher operations. 
+ * + *****************************************************************************/ + +/*****************************************************************************/ + +#ifndef LAC_SYM_QAT_CIPHER_H +#define LAC_SYM_QAT_CIPHER_H + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa_cy_sym.h" +#include "icp_qat_fw_la.h" +#include "lac_session.h" +#include "lac_sal_types_crypto.h" + +/* + ************************************************************************** + * @ingroup LacSymQat_Cipher + * + * @description + * Defines for building the cipher request params cache + * + ************************************************************************** */ + +#define LAC_SYM_QAT_CIPHER_NEXT_ID_BIT_OFFSET 24 +#define LAC_SYM_QAT_CIPHER_CURR_ID_BIT_OFFSET 16 +#define LAC_SYM_QAT_CIPHER_STATE_SIZE_BIT_OFFSET 8 +#define LAC_SYM_QAT_CIPHER_OFFSET_IN_DRAM_GCM_SPC 9 +#define LAC_SYM_QAT_CIPHER_OFFSET_IN_DRAM_CHACHA_SPC 2 +#define LAC_SYM_QAT_CIPHER_STATE_SIZE_SPC 48 +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * Retrieve the cipher block size in bytes for a given algorithm + * + * @description + * This function returns a hard-coded block size for the specific cipher + * algorithm + * + * @param[in] cipherAlgorithm Cipher algorithm for the current session + * + * @retval The block size, in bytes, for the given cipher algorithm + * + *****************************************************************************/ +Cpa8U +LacSymQat_CipherBlockSizeBytesGet(CpaCySymCipherAlgorithm cipherAlgorithm); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * Retrieve the cipher IV/state size in bytes for a given algorithm + * + * @description + * This function returns a hard-coded 
IV/state size for the specific cipher + * algorithm + * + * @param[in] cipherAlgorithm Cipher algorithm for the current session + * + * @retval The IV/state size, in bytes, for the given cipher algorithm + * + *****************************************************************************/ +Cpa32U LacSymQat_CipherIvSizeBytesGet(CpaCySymCipherAlgorithm cipherAlgorithm); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * Populate the cipher request params structure + * + * @description + * This function is passed a pointer to the 128B request block. + * (This memory must be allocated prior to calling this function). It + * populates: + * - the cipher fields of the req_params block in the request. No + * need to zero this first, all fields will be populated. + * - the corresponding CIPH_IV_FLD flag in the serv_specif_flags field + * of the common header. + * To do this it uses the parameters described below and the following + *fields from the request block which must be populated prior to calling this + *function: + * - cd_ctrl.cipher_state_sz + * - UPDATE_STATE flag in comn_hdr.serv_specif_flags + * + * + * @param[in] pReq Pointer to request block. 
+ * + * @param[in] cipherOffsetInBytes Offset to cipher data in user data buffer + * + * @param[in] cipherLenInBytes Length of cipher data in buffer + * + * @param[in] ivBufferPhysAddr Physical address of aligned IV/state + * buffer + * @param[in] pIvBufferVirt Virtual address of aligned IV/state + * buffer + * @retval CPA_STATUS_SUCCESS Operation successful + * + *****************************************************************************/ +CpaStatus LacSymQat_CipherRequestParamsPopulate(icp_qat_fw_la_bulk_req_t *pReq, + Cpa32U cipherOffsetInBytes, + Cpa32U cipherLenInBytes, + Cpa64U ivBufferPhysAddr, + Cpa8U *pIvBufferVirt); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * Derive initial ARC4 cipher state from a base key + * + * @description + * An initial state for an ARC4 cipher session is derived from the base + * key provided by the user, using the ARC4 Key Scheduling Algorithm (KSA) + * + * @param[in] pKey The base key provided by the user + * + * @param[in] keyLenInBytes The length of the base key provided. + * The range of valid values is 1-256 bytes + * + * @param[out] pArc4CipherState The initial state is written to this buffer, + * including i and j values, and 6 bytes of padding + * so 264 bytes must be allocated for this buffer + * by the caller + * + * @retval void + * + *****************************************************************************/ +void LacSymQat_CipherArc4StateInit(const Cpa8U *pKey, + Cpa32U keyLenInBytes, + Cpa8U *pArc4CipherState); + +/** + ****************************************************************************** + * @ingroup LacSymQat_CipherXTSModeUpdateKeyLen + * Update the initial XTS key after the first partial has been received. + * + * @description + * For XTS mode using partial packets, after the first partial response + * has been received, the key length needs to be halved for subsequent + * partials. + * + * @param[in] pSessionDesc The session descriptor.
+ * + * @param[in] newKeySizeInBytes The new key size. + * + * @retval void + * + *****************************************************************************/ +void LacSymQat_CipherXTSModeUpdateKeyLen(lac_session_desc_t *pSessionDesc, + Cpa32U newKeySizeInBytes); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * LacSymQat_CipherCtrlBlockInitialize() + * + * @description + * initialise the cipher control block with all zeros + * + * @param[in] pMsg Pointer to the common request message + * + * @retval void + * + *****************************************************************************/ +void LacSymQat_CipherCtrlBlockInitialize(icp_qat_fw_la_bulk_req_t *pMsg); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * LacSymQat_CipherCtrlBlockWrite() + * + * @description + * This function populates the cipher control block of the common request + * message + * + * @param[in] pMsg Pointer to the common request message + * + * @param[in] cipherAlgorithm Cipher Algorithm to be used + * + * @param[in] targetKeyLenInBytes cipher key length in bytes of selected + * algorithm + * + * @param[out] nextSlice SliceID for next control block + * entry.
This value is known only by
+ * the calling component
+ *
+ * @param[out] cipherCfgOffsetInQuadWord Offset into the config table in QW
+ *
+ * @retval void
+ *
+ *****************************************************************************/
+void LacSymQat_CipherCtrlBlockWrite(icp_qat_la_bulk_req_ftr_t *pMsg,
+    Cpa32U cipherAlgorithm,
+    Cpa32U targetKeyLenInBytes,
+    icp_qat_fw_slice_t nextSlice,
+    Cpa8U cipherCfgOffsetInQuadWord);
+
+/**
+ ******************************************************************************
+ * @ingroup LacSymQat_Cipher
+ * LacSymQat_CipherHwBlockPopulateCfgData()
+ *
+ * @description
+ * Populate the physical HW block with config data
+ *
+ * @param[in] pSession Pointer to the session data
+ *
+ * @param[in] pCipherHwBlock Pointer to the hardware control block
+ * in the common message
+ *
+ * @param[in] pSizeInBytes
+ *
+ * @retval void
+ *
+ *****************************************************************************/
+void LacSymQat_CipherHwBlockPopulateCfgData(lac_session_desc_t *pSession,
+    const void *pCipherHwBlock,
+    Cpa32U *pSizeInBytes);
+
+/**
+ ******************************************************************************
+ * @ingroup LacSymQat_Cipher
+ * LacSymQat_CipherGetCfgData()
+ *
+ * @description
+ * Set up the config data for cipher
+ *
+ * @param[in] pSession Pointer to the session data
+ *
+ * @param[in] pAlgorithm
+ * @param[in] pMode
+ * @param[in] pDir
+ * @param[in] pKey_convert
+ *
+ * @retval void
+ *
+ *****************************************************************************/
+void LacSymQat_CipherGetCfgData(lac_session_desc_t *pSession,
+    icp_qat_hw_cipher_algo_t *pAlgorithm,
+    icp_qat_hw_cipher_mode_t *pMode,
+    icp_qat_hw_cipher_dir_t *pDir,
+    icp_qat_hw_cipher_convert_t *pKey_convert);
+
+/**
+ ******************************************************************************
+ * @ingroup LacSymQat_Cipher
+ * LacSymQat_CipherHwBlockPopulateKeySetup()
+ *
+ * @description
+ * Populate the key setup data
in the cipher hardware control block
+ * in the common request message
+ *
+ * @param[in] pCipherSetupData Pointer to cipher setup data
+ *
+ * @param[in] targetKeyLenInBytes Target key length. If key length given
+ * in cipher setup data is less than this,
+ * the key will be "rounded up" to this
+ * target length by padding it with 0's.
+ * In normal no-padding case, the target
+ * key length MUST match the key length
+ * in the cipher setup data.
+ *
+ * @param[in] pCipherHwBlock Pointer to the cipher hardware block
+ *
+ * @param[out] pCipherHwBlockSizeBytes Size in bytes of cipher setup block
+ *
+ *
+ * @retval void
+ *
+ *****************************************************************************/
+void LacSymQat_CipherHwBlockPopulateKeySetup(
+    const CpaCySymCipherSetupData *pCipherSetupData,
+    Cpa32U targetKeyLenInBytes,
+    const void *pCipherHwBlock,
+    Cpa32U *pCipherHwBlockSizeBytes);
+
+#endif /* LAC_SYM_QAT_CIPHER_H */
Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_hash.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_hash.h
@@ -0,0 +1,309 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+
+/**
+ *****************************************************************************
+ * @file lac_sym_qat_hash.h
+ *
+ * @defgroup LacSymQatHash Hash QAT
+ *
+ * @ingroup LacSymQat
+ *
+ * Interfaces for populating QAT structures for a hash operation
+ *
+ *****************************************************************************/
+
+/*****************************************************************************/
+
+#ifndef LAC_SYM_QAT_HASH_H
+#define LAC_SYM_QAT_HASH_H
+
+/*
+******************************************************************************
+* Include public/global header files
+******************************************************************************
+*/
+
+#include
"cpa.h" +#include "cpa_cy_sym.h" +#include "icp_qat_fw_la.h" +#include "icp_qat_hw.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "lac_common.h" + +/** + ****************************************************************************** + * @ingroup LacSymQatHash + * hash precomputes + * + * @description + * This structure contains infomation on the hash precomputes + * + *****************************************************************************/ +typedef struct lac_sym_qat_hash_precompute_info_s { + Cpa8U *pState1; + /**< state1 pointer */ + Cpa32U state1Size; + /**< state1 size */ + Cpa8U *pState2; + /**< state2 pointer */ + Cpa32U state2Size; + /**< state2 size */ +} lac_sym_qat_hash_precompute_info_t; + +/** + ****************************************************************************** + * @ingroup LacSymQatHash + * hash state prefix buffer info + * + * @description + * This structure contains infomation on the hash state prefix aad buffer + * + *****************************************************************************/ +typedef struct lac_sym_qat_hash_state_buffer_info_s { + Cpa64U pDataPhys; + /**< Physical pointer to the hash state prefix buffer */ + Cpa8U *pData; + /**< Virtual pointer to the hash state prefix buffer */ + Cpa8U stateStorageSzQuadWords; + /**< hash state storage size in quad words */ + Cpa8U prefixAadSzQuadWords; + /**< inner prefix/aad and outer prefix size in quad words */ +} lac_sym_qat_hash_state_buffer_info_t; + +/** + ****************************************************************************** + * @ingroup LacSymQatHash + * Init the hash specific part of the content descriptor. + * + * @description + * This function populates the hash specific fields of the control block + * and the hardware setup block for a digest session. 
This function sets
+ * the size param to hold the size of the hash setup block.
+ *
+ * In the case of hash only, the content descriptor will contain just a
+ * hash control block and hash setup block. In the case of chaining it
+ * will contain the hash control block and setup block along with the
+ * control block and setup blocks of additional services.
+ *
+ * Note: The memory for the content descriptor MUST be allocated prior to
+ * calling this function. The memory for the hash control block and hash
+ * setup block MUST be set to 0 prior to calling this function.
+ *
+ * @image html contentDescriptor.png "Content Descriptor"
+ *
+ * @param[in] pMsg Pointer to req Parameter Footer
+ *
+ * @param[in] pHashSetupData Pointer to the hash setup data as
+ * defined in the LAC API.
+ *
+ * @param[in] pHwBlockBase Pointer to the base of the hardware
+ * setup block
+ *
+ * @param[in] hashBlkOffsetInHwBlock Offset in quad-words from the base of
+ * the hardware setup block where the
+ * hash block will start. This offset
+ * is stored in the control block. It
+ * is used to figure out where to write
+ * the hash setup block.
+ *
+ * @param[in] nextSlice SliceID for next control block
+ * entry. This value is known only by
+ * the calling component
+ *
+ * @param[in] qatHashMode QAT hash mode
+ *
+ * @param[in] useSymConstantsTable Indicate if Shared-SRAM constants table
+ * is used for this session. If TRUE, the
+ * h/w setup block is NOT populated
+ *
+ * @param[in] useOptimisedContentDesc Indicate if optimised content desc
+ * is used for this session.
+ *
+ * @param[in] pPrecompute For auth mode, this is the pointer
+ * to the precompute data.
Otherwise this + * should be set to NULL + * + * @param[out] pHashBlkSizeInBytes size in bytes of hash setup block + * + * @return void + * + *****************************************************************************/ +void +LacSymQat_HashContentDescInit(icp_qat_la_bulk_req_ftr_t *pMsg, + CpaInstanceHandle instanceHandle, + const CpaCySymHashSetupData *pHashSetupData, + void *pHwBlockBase, + Cpa32U hashBlkOffsetInHwBlock, + icp_qat_fw_slice_t nextSlice, + icp_qat_hw_auth_mode_t qatHashMode, + CpaBoolean useSymConstantsTable, + CpaBoolean useOptimisedContentDesc, + lac_sym_qat_hash_precompute_info_t *pPrecompute, + Cpa32U *pHashBlkSizeInBytes); + +/** + ****************************************************************************** + * @ingroup LacSymQatHash + * Calculate the size of the hash state prefix aad buffer + * + * @description + * This function inspects the hash control block and based on the values + * in the fields, it calculates the size of the hash state prefix aad + * buffer. + * + * A partial packet processing request is possible at any stage during a + * hash session. In this case, there will always be space for the hash + * state storage field of the hash state prefix buffer. When there is + * AAD data just the inner prefix AAD data field is used. + * + * @param[in] pMsg Pointer to the Request Message + * + * @param[out] pHashStateBuf Pointer to hash state prefix buffer info + * structure. + * + * @return None + * + *****************************************************************************/ +void LacSymQat_HashStatePrefixAadBufferSizeGet( + icp_qat_la_bulk_req_ftr_t *pMsg, + lac_sym_qat_hash_state_buffer_info_t *pHashStateBuf); + +/** + ****************************************************************************** + * @ingroup LacSymQatHash + * Populate the fields of the hash state prefix buffer + * + * @description + * This function populates the inner prefix/aad fields and/or the outer + * prefix field of the hash state prefix buffer. 
+ *
+ * @param[in] pHashStateBuf Pointer to hash state prefix buffer info
+ * structure.
+ *
+ * @param[in] pMsg Pointer to the Request Message
+ *
+ * @param[in] pInnerPrefixAad Pointer to the Inner Prefix or Aad data.
+ * This is NULL if the data size is 0
+ *
+ * @param[in] innerPrefixSize Size of inner prefix/aad data in bytes
+ *
+ * @param[in] pOuterPrefix Pointer to the Outer Prefix data. This is
+ * NULL if the data size is 0.
+ *
+ * @param[in] outerPrefixSize Size of the outer prefix data in bytes
+ *
+ * @return void
+ *
+ *****************************************************************************/
+void LacSymQat_HashStatePrefixAadBufferPopulate(
+    lac_sym_qat_hash_state_buffer_info_t *pHashStateBuf,
+    icp_qat_la_bulk_req_ftr_t *pMsg,
+    Cpa8U *pInnerPrefixAad,
+    Cpa8U innerPrefixSize,
+    Cpa8U *pOuterPrefix,
+    Cpa8U outerPrefixSize);
+
+/**
+ ******************************************************************************
+ * @ingroup LacSymQatHash
+ * Populate the hash request params structure
+ *
+ * @description
+ * This function is passed a pointer to the 128B Request block.
+ * (This memory must be allocated prior to calling this function). It
+ * populates the fields of this block using the parameters as described
+ * below. It is also expected that this structure has been set to 0
+ * prior to calling this function.
+ *
+ *
+ * @param[in] pReq Pointer to 128B request block.
+ *
+ * @param[in] authOffsetInBytes Start offset of data that the digest is to
+ * be computed on.
+ *
+ * @param[in] authLenInBytes Length of data digest calculated on
+ *
+ * @param[in] pService Pointer to service data
+ *
+ * @param[in] pHashStateBuf Pointer to hash state buffer info. This
+ * structure contains the pointers and sizes.
+ * If there is no hash state prefix buffer
+ * required, this parameter can be set to NULL
+ *
+ * @param[in] qatPacketType Packet type using QAT macros.
The hash
+ * state buffer pointer and state size will be
+ * different depending on the packet type
+ *
+ * @param[in] hashResultSize Size of the final hash result in bytes.
+ *
+ * @param[in] digestVerify Indicates if verify is enabled or not
+ *
+ * @param[in] pAuthResult Virtual pointer to digest
+ *
+ * @return CPA_STATUS_SUCCESS or CPA_STATUS_FAIL
+ *
+ *****************************************************************************/
+CpaStatus LacSymQat_HashRequestParamsPopulate(
+    icp_qat_fw_la_bulk_req_t *pReq,
+    Cpa32U authOffsetInBytes,
+    Cpa32U authLenInBytes,
+    sal_service_t *pService,
+    lac_sym_qat_hash_state_buffer_info_t *pHashStateBuf,
+    Cpa32U qatPacketType,
+    Cpa32U hashResultSize,
+    CpaBoolean digestVerify,
+    Cpa8U *pAuthResult,
+    CpaCySymHashAlgorithm alg,
+    void *data);
+
+/**
+ ******************************************************************************
+ * @ingroup LacSymQatHash
+ *
+ *
+ * @description
+ * This function returns the QAT values for hash algorithm and nested fields
+ *
+ *
+ * @param[in] pInstance Pointer to service instance.
+ *
+ * @param[in] qatHashMode Value for hash mode on the fw QAT
+ * interface.
+ *
+ * @param[in] apiHashMode Value for hash mode on the QA API.
+ *
+ * @param[in] apiHashAlgorithm Value for hash algorithm on the QA API.
+ *
+ * @param[out] pQatAlgorithm Pointer to return fw QAT value for
+ * algorithm.
+ *
+ * @param[out] pQatNested Pointer to return fw QAT value for nested.
+ * + * + * @return + * none + * + *****************************************************************************/ +void LacSymQat_HashGetCfgData(CpaInstanceHandle pInstance, + icp_qat_hw_auth_mode_t qatHashMode, + CpaCySymHashMode apiHashMode, + CpaCySymHashAlgorithm apiHashAlgorithm, + icp_qat_hw_auth_algo_t *pQatAlgorithm, + CpaBoolean *pQatNested); + +void LacSymQat_HashSetupReqParamsMetaData( + icp_qat_la_bulk_req_ftr_t *pMsg, + CpaInstanceHandle instanceHandle, + const CpaCySymHashSetupData *pHashSetupData, + CpaBoolean hashStateBuffer, + icp_qat_hw_auth_mode_t qatHashMode, + CpaBoolean digestVerify); + +#endif /* LAC_SYM_QAT_HASH_H */ Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_hash_defs_lookup.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_hash_defs_lookup.h @@ -0,0 +1,139 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_sym_qat_hash_defs_lookup.h + * + * @defgroup LacSymQatHashDefsLookup Hash Defs Lookup + * + * @ingroup LacSymQatHash + * + * API to be used for the hash defs lookup table. 
+ *
+ *****************************************************************************/
+
+#ifndef LAC_SYM_QAT_HASH_DEFS_LOOKUP_P_H
+#define LAC_SYM_QAT_HASH_DEFS_LOOKUP_P_H
+
+#include "cpa.h"
+#include "cpa_cy_sym.h"
+
+/**
+******************************************************************************
+* @ingroup LacSymQatHashDefsLookup
+* Finishing Hash algorithm
+* @description
+* This define points to the last available hash algorithm
+* @NOTE: If a new algorithm is added to the api, this #define
+* MUST be updated to be the last hash algorithm in the struct
+* CpaCySymHashAlgorithm in the file cpa_cy_sym.h
+*****************************************************************************/
+#define CPA_CY_HASH_ALG_END CPA_CY_SYM_HASH_SM3
+
+/***************************************************************************/
+
+/**
+******************************************************************************
+* @ingroup LacSymQatHashDefsLookup
+* hash algorithm specific structure
+* @description
+* This structure contains constants specific to an algorithm.
+*****************************************************************************/
+typedef struct lac_sym_qat_hash_alg_info_s {
+	Cpa32U digestLength; /**< Digest length in bytes */
+	Cpa32U blockLength;  /**< Block length in bytes */
+	Cpa8U *initState;    /**< Initialiser state for hash algorithm */
+	Cpa32U stateSize;    /**< size of above state in bytes */
+} lac_sym_qat_hash_alg_info_t;
+
+/**
+******************************************************************************
+* @ingroup LacSymQatHashDefsLookup
+* hash qat specific structure
+* @description
+* This structure contains constants as defined by the QAT for an
+* algorithm.
+*****************************************************************************/
+typedef struct lac_sym_qat_hash_qat_info_s {
+	Cpa32U algoEnc;      /**< QAT Algorithm encoding */
+	Cpa32U authCounter;  /**< Counter value for Auth */
+	Cpa32U state1Length; /**< QAT state1 length in bytes */
+	Cpa32U state2Length; /**< QAT state2 length in bytes */
+} lac_sym_qat_hash_qat_info_t;
+
+/**
+******************************************************************************
+* @ingroup LacSymQatHashDefsLookup
+* hash defs structure
+* @description
+* This type contains pointers to the hash algorithm structure and
+* to the hash qat specific structure
+*****************************************************************************/
+typedef struct lac_sym_qat_hash_defs_s {
+	lac_sym_qat_hash_alg_info_t *algInfo;
+	/**< pointer to hash info structure */
+	lac_sym_qat_hash_qat_info_t *qatInfo;
+	/**< pointer to hash QAT info structure */
+} lac_sym_qat_hash_defs_t;
+
+/**
+*******************************************************************************
+* @ingroup LacSymQatHashDefsLookup
+* initialise the hash lookup table
+*
+* @description
+* This function initialises the digest lookup table.
+*
+* @note
+* This function does not have a corresponding shutdown function.
+*
+* @return CPA_STATUS_SUCCESS Operation successful
+* @return CPA_STATUS_RESOURCE Allocation of the hash lookup table failed
+*
+*****************************************************************************/
+CpaStatus LacSymQat_HashLookupInit(CpaInstanceHandle instanceHandle);
+
+/**
+*******************************************************************************
+* @ingroup LacSymQatHashDefsLookup
+* get hash algorithm specific structure from lookup table
+*
+* @description
+* This function looks up the hash lookup array for a structure
+* containing data specific to a hash algorithm. The hashAlgorithm enum
+* value MUST be in the correct range prior to calling this function.
+*
+* @param[in] hashAlgorithm Hash Algorithm
+* @param[out] ppHashAlgInfo Hash Alg Info structure
+*
+* @return None
+*
+*****************************************************************************/
+void LacSymQat_HashAlgLookupGet(CpaInstanceHandle instanceHandle,
+    CpaCySymHashAlgorithm hashAlgorithm,
+    lac_sym_qat_hash_alg_info_t **ppHashAlgInfo);
+
+/**
+*******************************************************************************
+* @ingroup LacSymQatHashDefsLookup
+* get hash definitions from lookup table.
+*
+* @description
+* This function looks up the hash lookup array for a structure
+* containing data specific to a hash algorithm. This includes both
+* algorithm specific info and qat specific info. The hashAlgorithm enum
+* value MUST be in the correct range prior to calling this function.
+*
+* @param[in] hashAlgorithm Hash Algorithm
+* @param[out] ppHashDefsInfo Hash Defs structure
+*
+* @return void
+*
+*****************************************************************************/
+void LacSymQat_HashDefsLookupGet(CpaInstanceHandle instanceHandle,
+    CpaCySymHashAlgorithm hashAlgorithm,
+    lac_sym_qat_hash_defs_t **ppHashDefsInfo);
+
+#endif /* LAC_SYM_QAT_HASH_DEFS_LOOKUP_P_H */
Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_key.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_key.h
@@ -0,0 +1,189 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+
+/**
+ *****************************************************************************
+ * @file lac_sym_qat_key.h
+ *
+ * @defgroup LacSymQatKey Key QAT
+ *
+ * @ingroup LacSymQat
+ *
+ * Interfaces for populating QAT structures for a key operation
+ *
+ *****************************************************************************/
+
+#ifndef LAC_SYM_QAT_KEY_H
+#define LAC_SYM_QAT_KEY_H
+
+#include "cpa.h"
+#include "lac_sym.h"
+#include "icp_qat_fw_la.h" + +/** +****************************************************************************** +* @ingroup LacSymQatKey +* Number of bytes generated per iteration +* @description +* This define is the number of bytes generated per iteration +*****************************************************************************/ +#define LAC_SYM_QAT_KEY_SSL_BYTES_PER_ITERATION (16) + +/** +****************************************************************************** +* @ingroup LacSymQatKey +* Shift to calculate the number of iterations +* @description +* This define is the shift to calculate the number of iterations +*****************************************************************************/ +#define LAC_SYM_QAT_KEY_SSL_ITERATIONS_SHIFT LAC_16BYTE_ALIGNMENT_SHIFT + +/** +******************************************************************************* +* @ingroup LacSymKey +* Populate the SSL request +* +* @description +* Populate the SSL request +* +* @param[out] pKeyGenReqHdr Pointer to Key Generation request Header +* @param[out] pKeyGenReqMid Pointer to LW's 14/15 of Key Gen request +* @param[in] generatedKeyLenInBytes Length of Key generated +* @param[in] labelLenInBytes Length of Label +* @param[in] secretLenInBytes Length of Secret +* @param[in] iterations Number of iterations. This is related +* to the label length. 
+* +* @return None +* +*****************************************************************************/ +void +LacSymQat_KeySslRequestPopulate(icp_qat_la_bulk_req_hdr_t *pKeyGenReqHdr, + icp_qat_fw_la_key_gen_common_t *pKeyGenReqMid, + Cpa32U generatedKeyLenInBytes, + Cpa32U labelLenInBytes, + Cpa32U secretLenInBytes, + Cpa32U iterations); + +/** +******************************************************************************* +* @ingroup LacSymKey +* Populate the TLS request +* +* @description +* Populate the TLS request +* +* @param[out] pKeyGenReq Pointer to Key Generation request +* @param[in] generatedKeyLenInBytes Length of Key generated +* @param[in] labelLenInBytes Length of Label +* @param[in] secretLenInBytes Length of Secret +* @param[in] seedLenInBytes Length of Seed +* @param[in] cmdId Command Id to differentiate TLS versions +* +* @return None +* +*****************************************************************************/ +void LacSymQat_KeyTlsRequestPopulate( + icp_qat_fw_la_key_gen_common_t *pKeyGenReqParams, + Cpa32U generatedKeyLenInBytes, + Cpa32U labelLenInBytes, + Cpa32U secretLenInBytes, + Cpa8U seedLenInBytes, + icp_qat_fw_la_cmd_id_t cmdId); + +/** +******************************************************************************* +* @ingroup LacSymKey +* Populate MGF request +* +* @description +* Populate MGF request +* +* @param[out] pKeyGenReqHdr Pointer to Key Generation request Header +* @param[out] pKeyGenReqMid Pointer to LW's 14/15 of Key Gen request +* @param[in] seedLenInBytes Length of Seed +* @param[in] maskLenInBytes Length of Mask +* @param[in] hashLenInBytes Length of hash +* +* @return None +* +*****************************************************************************/ +void +LacSymQat_KeyMgfRequestPopulate(icp_qat_la_bulk_req_hdr_t *pKeyGenReqHdr, + icp_qat_fw_la_key_gen_common_t *pKeyGenReqMid, + Cpa8U seedLenInBytes, + Cpa16U maskLenInBytes, + Cpa8U hashLenInBytes); + +/** 
+*******************************************************************************
+* @ingroup LacSymKey
+* Populate the SSL key material input
+*
+* @description
+* Populate the SSL key material input
+*
+* @param[in] pService Pointer to service
+* @param[out] pSslKeyMaterialInput Pointer to SSL key material input
+* @param[in] pSeed Pointer to Seed
+* @param[in] labelPhysAddr Physical address of the label
+* @param[in] pSecret Pointer to Secret
+*
+* @return None
+*
+*****************************************************************************/
+void LacSymQat_KeySslKeyMaterialInputPopulate(
+    sal_service_t *pService,
+    icp_qat_fw_la_ssl_key_material_input_t *pSslKeyMaterialInput,
+    void *pSeed,
+    Cpa64U labelPhysAddr,
+    void *pSecret);
+
+/**
+*******************************************************************************
+* @ingroup LacSymKey
+* Populate the TLS key material input
+*
+* @description
+* Populate the TLS key material input
+*
+* @param[in] pService Pointer to service
+* @param[out] pTlsKeyMaterialInput Pointer to TLS key material input
+* @param[in] pSeed Pointer to Seed
+* @param[in] labelPhysAddr Physical address of the label
+*
+* @return None
+*
+*****************************************************************************/
+void LacSymQat_KeyTlsKeyMaterialInputPopulate(
+    sal_service_t *pService,
+    icp_qat_fw_la_tls_key_material_input_t *pTlsKeyMaterialInput,
+    void *pSeed,
+    Cpa64U labelPhysAddr);
+
+/**
+*******************************************************************************
+* @ingroup LacSymKey
+* Populate the TLS HKDF key material input
+*
+* @description
+* Populate the TLS HKDF key material input
+*
+* @param[in] pService Pointer to service
+* @param[out] pTlsKeyMaterialInput Pointer to TLS key material input
+* @param[in] pKeyGenTlsOpData Pointer to HKDF key generation op data
+* @param[in] subLabelsPhysAddr Physical address of the sub-labels
+* @param[in] cmdId Command ID
+*
+* @return None
+*
+*****************************************************************************/ +void LacSymQat_KeyTlsHKDFKeyMaterialInputPopulate( + sal_service_t *pService, + icp_qat_fw_la_hkdf_key_material_input_t *pTlsKeyMaterialInput, + CpaCyKeyGenHKDFOpData *pKeyGenTlsOpData, + Cpa64U subLabelsPhysAddr, + icp_qat_fw_la_cmd_id_t cmdId); + +#endif /* LAC_SYM_QAT_KEY_H */ Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_queue.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_queue.h @@ -0,0 +1,51 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ****************************************************************************** + * @file lac_sym_queue.h + * + * @defgroup LacSymQueue Symmetric request queueing functions + * + * @ingroup LacSym + * + * Function prototypes for sending/queuing symmetric requests + *****************************************************************************/ + +#ifndef LAC_SYM_QUEUE_H +#define LAC_SYM_QUEUE_H + +#include "cpa.h" +#include "lac_session.h" +#include "lac_sym.h" + +/** +******************************************************************************* +* @ingroup LacSymQueue +* Send a request message to the QAT, or queue it if necessary +* +* @description +* This function will send a request message to the QAT. However, if a +* blocking condition exists on the session (e.g. partial packet in flight, +* precompute in progress), then the message will instead be pushed on to +* the request queue for the session and will be sent later to the QAT +* once the blocking condition is cleared. +* +* @param[in] instanceHandle Handle for instance of QAT +* @param[in] pRequest Pointer to request cookie +* @param[out] pSessionDesc Pointer to session descriptor +* +* +* @retval CPA_STATUS_SUCCESS Success +* @retval CPA_STATUS_FAIL Function failed. 
+* @retval CPA_STATUS_RESOURCE Problem acquiring system resource
+* @retval CPA_STATUS_RETRY Failed to send message to QAT due to queue
+* full condition
+*
+*****************************************************************************/
+CpaStatus LacSymQueue_RequestSend(const CpaInstanceHandle instanceHandle,
+    lac_sym_bulk_cookie_t *pRequest,
+    lac_session_desc_t *pSessionDesc);
+
+#endif /* LAC_SYM_QUEUE_H */
Index: sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_stats.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_stats.h
@@ -0,0 +1,191 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+
+/**
+ ***************************************************************************
+ * @file lac_sym_stats.h
+ *
+ * @defgroup LacSymCommon Symmetric Common
+ *
+ * @ingroup LacSym
+ *
+ * Symmetric Common consists of common statistics, buffer and partial packet
+ * functionality.
+ *
+ ***************************************************************************/
+
+/**
+ ***************************************************************************
+ * @defgroup LacSymStats Statistics
+ *
+ * @ingroup LacSymCommon
+ *
+ * Definitions and prototypes for LAC symmetric statistics.
+ *
+ * @lld_start
+ * In the LAC API the stats fields are defined as Cpa32U but
+ * QatUtilsAtomic is the type that the atomic API supports. Therefore we
+ * need to define a structure internally with the same fields as the API
+ * stats structure, but each field must be of type QatUtilsAtomic.
+ *
+ * - Incrementing Statistics:\n
+ * Atomically increment the statistic on the internal stats structure.
+ *
+ * - Providing a copy of the stats back to the user:\n
+ * Use atomicGet to read the atomic variable for each stat field in the
+ * local internal stat structure.
These values are saved in a structure
+ * (as defined by the LAC API) that the client will provide a pointer
+ * to as a parameter.
+ *
+ * - Stats Show:\n
+ * Use atomicGet to read the atomic variables for each field in the local
+ * internal stat structure and print to the screen
+ *
+ * - Stats Array:\n
+ * A macro is used to get the offset of the stat in the structure. This
+ * offset is passed to a function which uses it to increment the stat
+ * at that offset.
+ *
+ * @lld_end
+ *
+ ***************************************************************************/
+
+/***************************************************************************/
+
+#ifndef LAC_SYM_STATS_H
+#define LAC_SYM_STATS_H
+
+/*
+******************************************************************************
+* Include public/global header files
+******************************************************************************
+*/
+
+#include "cpa.h"
+#include "cpa_cy_sym.h"
+#include "cpa_cy_common.h"
+
+/*
+*******************************************************************************
+* Include private header files
+*******************************************************************************
+*/
+
+/**
+*******************************************************************************
+* @ingroup LacSymStats
+* increment a symmetric statistic
+*
+* @description
+* Increment the statistics
+*
+* @param statistic IN The field in the symmetric statistics structure to be
+* incremented
+* @param instanceHandle IN engine Id Number
+*
+* @retval None
+*
+*****************************************************************************/
+#define LAC_SYM_STAT_INC(statistic, instanceHandle) \
+	LacSym_StatsInc(offsetof(CpaCySymStats64, statistic), instanceHandle)
+
+/**
+*******************************************************************************
+* @ingroup LacSymStats
+* initialises the symmetric stats
+*
+* @description
+* This function allocates and initialises the stats array to 0
+*
+* @param
instanceHandle Instance Handle
+*
+* @retval CPA_STATUS_SUCCESS initialisation successful
+* @retval CPA_STATUS_RESOURCE array allocation failed
+*
+*****************************************************************************/
+CpaStatus LacSym_StatsInit(CpaInstanceHandle instanceHandle);
+
+/**
+*******************************************************************************
+* @ingroup LacSymStats
+* Frees the symmetric stats
+*
+* @description
+* This function frees the stats array
+*
+* @param instanceHandle Instance Handle
+*
+* @retval None
+*
+*****************************************************************************/
+void LacSym_StatsFree(CpaInstanceHandle instanceHandle);
+
+/**
+*******************************************************************************
+* @ingroup LacSymStats
+* Increment a stat
+*
+* @description
+* This function increments a stat for a specific engine.
+*
+* @param offset IN offset of stat field in structure
+* @param instanceHandle IN qat Handle
+*
+* @retval None
+*
+*****************************************************************************/
+void LacSym_StatsInc(Cpa32U offset, CpaInstanceHandle instanceHandle);
+
+/**
+*******************************************************************************
+* @ingroup LacSymStats
+* Copy the contents of the statistics structure for an engine
+*
+* @description
+* This function copies the 32bit symmetric statistics structure for
+* a specific engine into an address supplied as a parameter.
+* +* @param instanceHandle IN engine Id Number +* @param pSymStats OUT stats structure to copy the stats into +* +* @retval None +* +*****************************************************************************/ +void LacSym_Stats32CopyGet(CpaInstanceHandle instanceHandle, + struct _CpaCySymStats *const pSymStats); + +/** +******************************************************************************* +* @ingroup LacSymStats +* Copy the contents of the statistics structure for an engine +* +* @description +* This function copies the 64bit symmetric statistics structure for +* a specific engine into an address supplied as a parameter. +* +* @param instanceHandle IN engine Id Number +* @param pSymStats OUT stats structure to copy the stats into +* +* @retval None +* +*****************************************************************************/ +void LacSym_Stats64CopyGet(CpaInstanceHandle instanceHandle, + CpaCySymStats64 *const pSymStats); + +/** +******************************************************************************* +* @ingroup LacSymStats +* print the symmetric stats to standard output +* +* @description +* The symmetric statistics are printed to standard output. 
+* +* @retval None +* +* @see LacSym_StatsCopyGet() +* +*****************************************************************************/ +void LacSym_StatsShow(CpaInstanceHandle instanceHandle); + +#endif /* LAC_SYM_STATS_H */ Index: sys/dev/qat/qat_api/common/crypto/sym/key/lac_sym_key.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/key/lac_sym_key.c @@ -0,0 +1,3021 @@ +/*************************************************************************** + * + * + * + ***************************************************************************/ + +/** + ***************************************************************************** + * @file lac_sym_key.c + * + * @ingroup LacSymKey + * + * This file contains the implementation of all keygen functionality + * + *****************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" +#include "cpa_cy_key.h" +#include "cpa_cy_im.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +#include "icp_adf_init.h" +#include "icp_adf_transport.h" + +#include "qat_utils.h" + +#include "lac_log.h" +#include "lac_hooks.h" +#include "lac_sym.h" +#include "lac_sym_qat_hash_defs_lookup.h" +#include "lac_sym_qat.h" +#include "lac_sal.h" +#include "lac_sym_key.h" +#include "lac_sal_types_crypto.h" +#include "sal_service_state.h" +#include "lac_sym_qat_key.h" +#include "lac_sym_hash_defs.h" +#include "sal_statistics.h" + +/* Number of statistics */ +#define LAC_KEY_NUM_STATS (sizeof(CpaCyKeyGenStats64) / sizeof(Cpa64U)) + +#define 
LAC_KEY_STAT_INC(statistic, instanceHandle) \ + do { \ + sal_crypto_service_t *pService = NULL; \ + pService = (sal_crypto_service_t *)instanceHandle; \ + if (CPA_TRUE == \ + pService->generic_service_info.stats \ + ->bKeyGenStatsEnabled) { \ + qatUtilsAtomicInc( \ + &pService \ + ->pLacKeyStats[offsetof(CpaCyKeyGenStats64, \ + statistic) / \ + sizeof(Cpa64U)]); \ + } \ + } while (0) +/**< Macro to increment a Key stat (derives offset into array of atomics) */ + +#define LAC_KEY_STATS32_GET(keyStats, instanceHandle) \ + do { \ + int i; \ + sal_crypto_service_t *pService = \ + (sal_crypto_service_t *)instanceHandle; \ + for (i = 0; i < LAC_KEY_NUM_STATS; i++) { \ + ((Cpa32U *)&(keyStats))[i] = \ + (Cpa32U)qatUtilsAtomicGet( \ + &pService->pLacKeyStats[i]); \ + } \ + } while (0) +/**< Macro to get all 32bit Key stats (from internal array of atomics) */ + +#define LAC_KEY_STATS64_GET(keyStats, instanceHandle) \ + do { \ + int i; \ + sal_crypto_service_t *pService = \ + (sal_crypto_service_t *)instanceHandle; \ + for (i = 0; i < LAC_KEY_NUM_STATS; i++) { \ + ((Cpa64U *)&(keyStats))[i] = \ + qatUtilsAtomicGet(&pService->pLacKeyStats[i]); \ + } \ + } while (0) +/**< Macro to get all 64bit Key stats (from internal array of atomics) */ + +#define IS_HKDF_UNSUPPORTED(cmdId, hkdfSupported) \ + ((ICP_QAT_FW_LA_CMD_HKDF_EXTRACT <= cmdId && \ + ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND_LABEL >= cmdId) && \ + !hkdfSupported) /**< macro to check whether the HKDF algorithm can be \ + supported on the device */ + +/* Sublabel for HKDF TLS Key Generation, as defined in RFC8446. 
*/ +const static Cpa8U key256[HKDF_SUB_LABEL_KEY_LENGTH] = { 0, 16, 9, 't', + 'l', 's', '1', '3', + ' ', 'k', 'e', 'y', + 0 }; +const static Cpa8U key384[HKDF_SUB_LABEL_KEY_LENGTH] = { 0, 32, 9, 't', + 'l', 's', '1', '3', + ' ', 'k', 'e', 'y', + 0 }; +const static Cpa8U keyChaChaPoly[HKDF_SUB_LABEL_KEY_LENGTH] = { 0, 32, 9, + 't', 'l', 's', + '1', '3', ' ', + 'k', 'e', 'y', + 0 }; +/* Sublabel for HKDF TLS IV key Generation, as defined in RFC8446. */ +const static Cpa8U iv256[HKDF_SUB_LABEL_IV_LENGTH] = { 0, 12, 8, 't', + 'l', 's', '1', '3', + ' ', 'i', 'v', 0 }; +const static Cpa8U iv384[HKDF_SUB_LABEL_IV_LENGTH] = { 0, 12, 8, 't', + 'l', 's', '1', '3', + ' ', 'i', 'v', 0 }; +/* Sublabel for HKDF TLS RESUMPTION key Generation, as defined in RFC8446. */ +const static Cpa8U resumption256[HKDF_SUB_LABEL_RESUMPTION_LENGTH] = + { 0, 32, 16, 't', 'l', 's', '1', '3', ' ', 'r', + 'e', 's', 'u', 'm', 'p', 't', 'i', 'o', 'n', 0 }; +const static Cpa8U resumption384[HKDF_SUB_LABEL_RESUMPTION_LENGTH] = + { 0, 48, 16, 't', 'l', 's', '1', '3', ' ', 'r', + 'e', 's', 'u', 'm', 'p', 't', 'i', 'o', 'n', 0 }; +/* Sublabel for HKDF TLS FINISHED key Generation, as defined in RFC8446. 
*/ +const static Cpa8U finished256[HKDF_SUB_LABEL_FINISHED_LENGTH] = + { 0, 32, 14, 't', 'l', 's', '1', '3', ' ', + 'f', 'i', 'n', 'i', 's', 'h', 'e', 'd', 0 }; +const static Cpa8U finished384[HKDF_SUB_LABEL_FINISHED_LENGTH] = + { 0, 48, 14, 't', 'l', 's', '1', '3', ' ', + 'f', 'i', 'n', 'i', 's', 'h', 'e', 'd', 0 }; + +/** + ****************************************************************************** + * @ingroup LacSymKey + * SSL/TLS stat type + * + * @description + * This enum determines which stat should be incremented + *****************************************************************************/ +typedef enum { + LAC_KEY_REQUESTS = 0, + /**< Key requests sent */ + LAC_KEY_REQUEST_ERRORS, + /**< Key requests errors */ + LAC_KEY_COMPLETED, + /**< Key requests which received responses */ + LAC_KEY_COMPLETED_ERRORS + /**< Key requests which received responses with errors */ +} lac_key_stat_type_t; + +/*** Local functions prototypes ***/ +static void +LacSymKey_MgfHandleResponse(icp_qat_fw_la_cmd_id_t lacCmdId, + void *pOpaqueData, + icp_qat_fw_comn_flags cmnRespFlags); + +static CpaStatus +LacSymKey_MgfSync(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const void *pKeyGenMgfOpData, + CpaFlatBuffer *pGeneratedMaskBuffer, + CpaBoolean bIsExtRequest); + +static void +LacSymKey_SslTlsHandleResponse(icp_qat_fw_la_cmd_id_t lacCmdId, + void *pOpaqueData, + icp_qat_fw_comn_flags cmnRespFlags); + +static CpaStatus +LacSymKey_SslTlsSync(CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + icp_qat_fw_la_cmd_id_t lacCmdId, + void *pKeyGenSslTlsOpData, + Cpa8U hashAlgorithm, + CpaFlatBuffer *pKeyGenOutpuData); + +/*** Implementation ***/ + +/** + ****************************************************************************** + * @ingroup LacSymKey + * Get the instance handle. Support single handle. + * @param[in] instanceHandle_in user supplied handle. 
+ * @retval CpaInstanceHandle the instance handle + */ +static CpaInstanceHandle +LacKey_GetHandle(CpaInstanceHandle instanceHandle_in) +{ + CpaInstanceHandle instanceHandle = NULL; + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + return instanceHandle; +} + +/** +******************************************************************************* +* @ingroup LacSymKey +* Perform SSL/TLS key gen operation +* +* @description +* Perform SSL/TLS key gen operation +* +* @param[in] instanceHandle QAT device handle. +* @param[in] pKeyGenCb Pointer to callback function to be invoked +* when the operation is complete. +* @param[in] pCallbackTag Opaque User Data for this specific call. +* @param[in] lacCmdId Lac command ID (identify SSL & TLS ops) +* @param[in] pKeyGenSslTlsOpData Structure containing all the data needed to +* perform the SSL/TLS key generation +* operation. +* @param[in] hashAlgorithm Specifies the hash algorithm to use. +* According to RFC5246, this should be +* "SHA-256 or a stronger standard hash +* function." +* @param[out] pKeyGenOutputData pointer to where output result should be +* written +* +* @retval CPA_STATUS_SUCCESS Function executed successfully. +* @retval CPA_STATUS_FAIL Function failed. +* @retval CPA_STATUS_RETRY Function should be retried. +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. +* @retval CPA_STATUS_RESOURCE Error related to system resources. 
+* +*****************************************************************************/ +static CpaStatus +LacSymKey_KeyGenSslTls_GenCommon(CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + icp_qat_fw_la_cmd_id_t lacCmdId, + void *pKeyGenSslTlsOpData, + Cpa8U hashAlgorithm, + CpaFlatBuffer *pKeyGenOutputData); + +/** + ****************************************************************************** + * @ingroup LacSymKey + * Increment stat for TLS or SSL operation + * + * @description + * This is a generic function to update the stats for either a TLS or SSL + * operation. + * + * @param[in] lacCmdId Indicate SSL or TLS operations + * @param[in] statType Statistics Type + * @param[in] instanceHandle Instance Handle + * + * @return None + * + *****************************************************************************/ +static void +LacKey_StatsInc(icp_qat_fw_la_cmd_id_t lacCmdId, + lac_key_stat_type_t statType, + CpaInstanceHandle instanceHandle) +{ + if (ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE == lacCmdId) { + switch (statType) { + case LAC_KEY_REQUESTS: + LAC_KEY_STAT_INC(numSslKeyGenRequests, instanceHandle); + break; + case LAC_KEY_REQUEST_ERRORS: + LAC_KEY_STAT_INC(numSslKeyGenRequestErrors, + instanceHandle); + break; + case LAC_KEY_COMPLETED: + LAC_KEY_STAT_INC(numSslKeyGenCompleted, instanceHandle); + break; + case LAC_KEY_COMPLETED_ERRORS: + LAC_KEY_STAT_INC(numSslKeyGenCompletedErrors, + instanceHandle); + break; + default: + QAT_UTILS_LOG("Invalid statistics type\n"); + break; + } + } else /* TLS v1.0/1.1 and 1.2 */ + { + switch (statType) { + case LAC_KEY_REQUESTS: + LAC_KEY_STAT_INC(numTlsKeyGenRequests, instanceHandle); + break; + case LAC_KEY_REQUEST_ERRORS: + LAC_KEY_STAT_INC(numTlsKeyGenRequestErrors, + instanceHandle); + break; + case LAC_KEY_COMPLETED: + LAC_KEY_STAT_INC(numTlsKeyGenCompleted, instanceHandle); + break; + case LAC_KEY_COMPLETED_ERRORS: + LAC_KEY_STAT_INC(numTlsKeyGenCompletedErrors, + 
instanceHandle); + break; + default: + QAT_UTILS_LOG("Invalid statistics type\n"); + break; + } + } +} + +void +LacKeygen_StatsShow(CpaInstanceHandle instanceHandle) +{ + CpaCyKeyGenStats64 keyStats = { 0 }; + + LAC_KEY_STATS64_GET(keyStats, instanceHandle); + + QAT_UTILS_LOG(SEPARATOR BORDER + " Key Stats: " BORDER + "\n" SEPARATOR); + + QAT_UTILS_LOG(BORDER " SSL Key Requests: %16llu " BORDER + "\n" BORDER + " SSL Key Request Errors: %16llu " BORDER + "\n" BORDER + " SSL Key Completed %16llu " BORDER + "\n" BORDER + " SSL Key Complete Errors: %16llu " BORDER + "\n" SEPARATOR, + (unsigned long long)keyStats.numSslKeyGenRequests, + (unsigned long long)keyStats.numSslKeyGenRequestErrors, + (unsigned long long)keyStats.numSslKeyGenCompleted, + (unsigned long long)keyStats.numSslKeyGenCompletedErrors); + + QAT_UTILS_LOG(BORDER " TLS Key Requests: %16llu " BORDER + "\n" BORDER + " TLS Key Request Errors: %16llu " BORDER + "\n" BORDER + " TLS Key Completed %16llu " BORDER + "\n" BORDER + " TLS Key Complete Errors: %16llu " BORDER + "\n" SEPARATOR, + (unsigned long long)keyStats.numTlsKeyGenRequests, + (unsigned long long)keyStats.numTlsKeyGenRequestErrors, + (unsigned long long)keyStats.numTlsKeyGenCompleted, + (unsigned long long)keyStats.numTlsKeyGenCompletedErrors); + + QAT_UTILS_LOG(BORDER " MGF Key Requests: %16llu " BORDER + "\n" BORDER + " MGF Key Request Errors: %16llu " BORDER + "\n" BORDER + " MGF Key Completed %16llu " BORDER + "\n" BORDER + " MGF Key Complete Errors: %16llu " BORDER + "\n" SEPARATOR, + (unsigned long long)keyStats.numMgfKeyGenRequests, + (unsigned long long)keyStats.numMgfKeyGenRequestErrors, + (unsigned long long)keyStats.numMgfKeyGenCompleted, + (unsigned long long)keyStats.numMgfKeyGenCompletedErrors); +} + +/** @ingroup LacSymKey */ +CpaStatus +cpaCyKeyGenQueryStats(CpaInstanceHandle instanceHandle_in, + struct _CpaCyKeyGenStats *pSymKeyStats) +{ + CpaInstanceHandle instanceHandle = NULL; + + + if (CPA_INSTANCE_HANDLE_SINGLE == 
instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + LAC_CHECK_NULL_PARAM(pSymKeyStats); + + SAL_RUNNING_CHECK(instanceHandle); + + LAC_KEY_STATS32_GET(*pSymKeyStats, instanceHandle); + + return CPA_STATUS_SUCCESS; +} + +/** @ingroup LacSymKey */ +CpaStatus +cpaCyKeyGenQueryStats64(CpaInstanceHandle instanceHandle_in, + CpaCyKeyGenStats64 *pSymKeyStats) +{ + CpaInstanceHandle instanceHandle = NULL; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + LAC_CHECK_NULL_PARAM(pSymKeyStats); + + SAL_RUNNING_CHECK(instanceHandle); + + LAC_KEY_STATS64_GET(*pSymKeyStats, instanceHandle); + + return CPA_STATUS_SUCCESS; +} + +/** + ****************************************************************************** + * @ingroup LacSymKey + * Return the size of the digest for a specific hash algorithm. + * @description + * Return the expected digest size based on the sha algorithm submitted. + * The supported values are sha256, sha384, sha512 and sm3. + * + * @param[in] hashAlgorithm either sha256, sha384, sha512 or sm3. + * @return the expected size or 0 for an invalid hash. 
+ * + *****************************************************************************/ +static Cpa32U +getDigestSizeFromHashAlgo(CpaCySymHashAlgorithm hashAlgorithm) +{ + switch (hashAlgorithm) { + case CPA_CY_SYM_HASH_SHA256: + return LAC_HASH_SHA256_DIGEST_SIZE; + case CPA_CY_SYM_HASH_SHA384: + return LAC_HASH_SHA384_DIGEST_SIZE; + case CPA_CY_SYM_HASH_SHA512: + return LAC_HASH_SHA512_DIGEST_SIZE; + case CPA_CY_SYM_HASH_SM3: + return LAC_HASH_SM3_DIGEST_SIZE; + default: + return 0; + } +} + +/** + ****************************************************************************** + * @ingroup LacSymKey + * Return the hash algorithm for a specific cipher. + * @description + * Return the hash algorithm related to the cipher suite. + * Supported hashes are SHA256 and SHA384. + * + * @param[in] cipherSuite AES_128_GCM, AES_256_GCM, AES_128_CCM, + * and CHACHA20_POLY1305. + * @return the expected hash algorithm or 0 for an invalid cipher. + * + *****************************************************************************/ +static CpaCySymHashAlgorithm +getHashAlgorithmFromCipherSuiteHKDF(CpaCyKeyHKDFCipherSuite cipherSuite) +{ + switch (cipherSuite) { + case CPA_CY_HKDF_TLS_AES_128_GCM_SHA256: /* Fall through */ + case CPA_CY_HKDF_TLS_CHACHA20_POLY1305_SHA256: + case CPA_CY_HKDF_TLS_AES_128_CCM_SHA256: + case CPA_CY_HKDF_TLS_AES_128_CCM_8_SHA256: + return CPA_CY_SYM_HASH_SHA256; + case CPA_CY_HKDF_TLS_AES_256_GCM_SHA384: + return CPA_CY_SYM_HASH_SHA384; + default: + return 0; + } +} + +/** + ****************************************************************************** + * @ingroup LacSymKey + * Return the digest size of the cipher. + * @description + * Return the output key size of a specific cipher, for a specified sub label + * + * @param[in] cipherSuite = AES_128_GCM, AES_256_GCM, AES_128_CCM, + * and CHACHA20_POLY1305. + * subLabels = KEY, IV, RESUMPTION, and FINISHED. + * @return the expected digest size of the cipher. 
+ * + *****************************************************************************/ +static const Cpa32U cipherSuiteHKDFHashSizes + [LAC_KEY_HKDF_CIPHERS_MAX][LAC_KEY_HKDF_SUBLABELS_MAX] = { + {}, /* Not used */ + { 32, 16, 12, 32, 32 }, /* AES_128_GCM_SHA256 */ + { 48, 32, 12, 48, 48 }, /* AES_256_GCM_SHA384 */ + { 32, 32, 12, 32, 32 }, /* CHACHA20_POLY1305_SHA256 */ + { 32, 16, 12, 32, 32 }, /* AES_128_CCM_SHA256 */ + { 32, 16, 12, 32, 32 } /* AES_128_CCM_8_SHA256 */ + }; + +/** + ****************************************************************************** + * @ingroup LacSymKey + * Key Generation MGF response handler + * + * @description + * Handles Key Generation MGF response messages from the QAT. + * + * @param[in] lacCmdId Command id of the original request + * @param[in] pOpaqueData Pointer to opaque data that was in request + * @param[in] cmnRespFlags Indicates whether request succeeded + * + * @return void + * + *****************************************************************************/ +static void +LacSymKey_MgfHandleResponse(icp_qat_fw_la_cmd_id_t lacCmdId, + void *pOpaqueData, + icp_qat_fw_comn_flags cmnRespFlags) +{ + CpaCyKeyGenMgfOpData *pMgfOpData = NULL; + lac_sym_key_cookie_t *pCookie = NULL; + CpaCyGenFlatBufCbFunc pKeyGenMgfCb = NULL; + void *pCallbackTag = NULL; + CpaFlatBuffer *pGeneratedKeyBuffer = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + CpaBoolean respStatusOk = + (ICP_QAT_FW_COMN_STATUS_FLAG_OK == + ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(cmnRespFlags)) ? 
+ CPA_TRUE : + CPA_FALSE; + + pCookie = (lac_sym_key_cookie_t *)pOpaqueData; + + if (CPA_TRUE == respStatusOk) { + status = CPA_STATUS_SUCCESS; + LAC_KEY_STAT_INC(numMgfKeyGenCompleted, + pCookie->instanceHandle); + } else { + status = CPA_STATUS_FAIL; + LAC_KEY_STAT_INC(numMgfKeyGenCompletedErrors, + pCookie->instanceHandle); + } + + pKeyGenMgfCb = (CpaCyGenFlatBufCbFunc)(pCookie->pKeyGenCb); + + pMgfOpData = pCookie->pKeyGenOpData; + pCallbackTag = pCookie->pCallbackTag; + pGeneratedKeyBuffer = pCookie->pKeyGenOutputData; + + Lac_MemPoolEntryFree(pCookie); + + (*pKeyGenMgfCb)(pCallbackTag, status, pMgfOpData, pGeneratedKeyBuffer); +} + +/** + ****************************************************************************** + * @ingroup LacSymKey + * Synchronous mode of operation wrapper function + * + * @description + * Wrapper function to implement synchronous mode of operation for + * cpaCyKeyGenMgf and cpaCyKeyGenMgfExt function. + * + * @param[in] instanceHandle Instance handle + * @param[in] pKeyGenCb Internal callback function pointer + * @param[in] pCallbackTag Callback tag + * @param[in] pKeyGenMgfOpData Pointer to user provided Op Data structure + * @param[in] pGeneratedMaskBuffer Pointer to a buffer where generated mask + * will be stored + * @param[in] bIsExtRequest Indicates origin of function call; + * if CPA_TRUE then the call comes from + * cpaCyKeyGenMgfExt function, otherwise + * from cpaCyKeyGenMgf + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Function should be retried. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. 
+ * + *****************************************************************************/ +static CpaStatus +LacSymKey_MgfSync(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const void *pKeyGenMgfOpData, + CpaFlatBuffer *pGeneratedMaskBuffer, + CpaBoolean bIsExtRequest) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + lac_sync_op_data_t *pSyncCallbackData = NULL; + + status = LacSync_CreateSyncCookie(&pSyncCallbackData); + + if (CPA_STATUS_SUCCESS == status) { + if (CPA_TRUE == bIsExtRequest) { + status = cpaCyKeyGenMgfExt( + instanceHandle, + LacSync_GenFlatBufCb, + pSyncCallbackData, + (const CpaCyKeyGenMgfOpDataExt *)pKeyGenMgfOpData, + pGeneratedMaskBuffer); + } else { + status = cpaCyKeyGenMgf(instanceHandle, + LacSync_GenFlatBufCb, + pSyncCallbackData, + (const CpaCyKeyGenMgfOpData *) + pKeyGenMgfOpData, + pGeneratedMaskBuffer); + } + } else { + /* Failure allocating sync cookie */ + LAC_KEY_STAT_INC(numMgfKeyGenRequestErrors, instanceHandle); + return status; + } + + if (CPA_STATUS_SUCCESS == status) { + CpaStatus syncStatus = CPA_STATUS_SUCCESS; + + syncStatus = + LacSync_WaitForCallback(pSyncCallbackData, + LAC_SYM_SYNC_CALLBACK_TIMEOUT, + &status, + NULL); + + /* If callback doesn't come back */ + if (CPA_STATUS_SUCCESS != syncStatus) { + LAC_KEY_STAT_INC(numMgfKeyGenCompletedErrors, + instanceHandle); + LAC_LOG_ERROR("Callback timed out"); + status = syncStatus; + } + } else { + /* As the Request was not sent the Callback will never + * be called, so need to indicate that we're finished + * with cookie so it can be destroyed. + */ + LacSync_SetSyncCookieComplete(pSyncCallbackData); + } + + LacSync_DestroySyncCookie(&pSyncCallbackData); + + return status; +} + +/** + ****************************************************************************** + * @ingroup LacSymKey + * Perform MGF key gen operation + * + * @description + * This function performs MGF key gen operation. 
It is common for requests + * coming from both cpaCyKeyGenMgf and cpaCyKeyGenMgfExt QAT API + * functions. + * + * @param[in] instanceHandle Instance handle + * @param[in] pKeyGenCb Pointer to callback function to be invoked + * when the operation is complete. + * @param[in] pCallbackTag Opaque User Data for this specific call. + * @param[in] pOpData Pointer to the Op Data structure provided by + * the user in API function call. For calls + * originating from cpaCyKeyGenMgfExt it will + * point to CpaCyKeyGenMgfOpDataExt type of + * structure while for calls originating from + * cpaCyKeyGenMgf it will point to + * CpaCyKeyGenMgfOpData type of structure. + * @param[in] pKeyGenMgfOpData Pointer to the user provided + * CpaCyKeyGenMgfOpData structure. For calls + * originating from cpaCyKeyGenMgf it will + * point to the same structure as pOpData + * parameter; for calls originating from + * cpaCyKeyGenMgfExt it will point to the + * baseOpData member of the + * CpaCyKeyGenMgfOpDataExt structure passed in + * as a parameter to the API function call. + * @param[in] pGeneratedMaskBuffer Pointer to a buffer where generated mask + * will be stored + * @param[in] hashAlgorithm Indicates which hash algorithm is to be used + * to perform MGF key gen operation. For calls + * originating from cpaCyKeyGenMgf it will + * always be CPA_CY_SYM_HASH_SHA1. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Function should be retried. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. 
+ * + *****************************************************************************/ +static CpaStatus +LacSymKey_MgfCommon(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const void *pOpData, + const CpaCyKeyGenMgfOpData *pKeyGenMgfOpData, + CpaFlatBuffer *pGeneratedMaskBuffer, + CpaCySymHashAlgorithm hashAlgorithm) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + icp_qat_fw_la_bulk_req_t keyGenReq = { { 0 } }; + icp_qat_la_bulk_req_hdr_t keyGenReqHdr = { { 0 } }; + icp_qat_fw_la_key_gen_common_t keyGenReqMid = { { 0 } }; + icp_qat_la_bulk_req_ftr_t keyGenReqFtr = { { { 0 } } }; + Cpa8U *pMsgDummy = NULL; + Cpa8U *pCacheDummyHdr = NULL; + Cpa8U *pCacheDummyMid = NULL; + Cpa8U *pCacheDummyFtr = NULL; + sal_qat_content_desc_info_t contentDescInfo = { 0 }; + lac_sym_key_cookie_t *pCookie = NULL; + lac_sym_cookie_t *pSymCookie = NULL; + sal_crypto_service_t *pService = NULL; + Cpa64U inputPhysAddr = 0; + Cpa64U outputPhysAddr = 0; +/* Structure initializer is supported by C99, but it is + * not supported by some former Intel compiler. 
+ */ + CpaCySymHashSetupData hashSetupData = { 0 }; + Cpa32U hashBlkSizeInBytes = 0; + lac_sym_qat_hash_alg_info_t *pHashAlgInfo = NULL; + icp_qat_fw_serv_specif_flags laCmdFlags = 0; + icp_qat_fw_comn_flags cmnRequestFlags = + ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_PTR_TYPE_FLAT, + QAT_COMN_CD_FLD_TYPE_64BIT_ADR); + + pService = (sal_crypto_service_t *)instanceHandle; + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + + SAL_RUNNING_CHECK(instanceHandle); + LAC_CHECK_NULL_PARAM(pOpData); + LAC_CHECK_NULL_PARAM(pKeyGenMgfOpData); + LAC_CHECK_NULL_PARAM(pGeneratedMaskBuffer); + LAC_CHECK_NULL_PARAM(pGeneratedMaskBuffer->pData); + LAC_CHECK_NULL_PARAM(pKeyGenMgfOpData->seedBuffer.pData); + + /* Maximum seed length for MGF1 request */ + if (pKeyGenMgfOpData->seedBuffer.dataLenInBytes > + ICP_QAT_FW_LA_MGF_SEED_LEN_MAX) { + LAC_INVALID_PARAM_LOG("seedBuffer.dataLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Maximum mask length for MGF1 request */ + if (pKeyGenMgfOpData->maskLenInBytes > ICP_QAT_FW_LA_MGF_MASK_LEN_MAX) { + LAC_INVALID_PARAM_LOG("maskLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + + /* check for enough space in the flat buffer */ + if (pKeyGenMgfOpData->maskLenInBytes > + pGeneratedMaskBuffer->dataLenInBytes) { + LAC_INVALID_PARAM_LOG("pGeneratedMaskBuffer.dataLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Get hash alg info */ + LacSymQat_HashAlgLookupGet(instanceHandle, + hashAlgorithm, + &pHashAlgInfo); + + /* Allocate the cookie */ + pCookie = (lac_sym_key_cookie_t *)Lac_MemPoolEntryAlloc( + pService->lac_sym_cookie_pool); + if (NULL == pCookie) { + LAC_LOG_ERROR("Cannot get mem pool entry"); + status = CPA_STATUS_RESOURCE; + } else if ((void *)CPA_STATUS_RETRY == pCookie) { + pCookie = NULL; + status = CPA_STATUS_RETRY; + } else { + pSymCookie = (lac_sym_cookie_t *)pCookie; + } + + if (CPA_STATUS_SUCCESS == status) { + /* 
populate the cookie */ + pCookie->instanceHandle = instanceHandle; + pCookie->pCallbackTag = pCallbackTag; + pCookie->pKeyGenOpData = (void *)LAC_CONST_PTR_CAST(pOpData); + pCookie->pKeyGenCb = pKeyGenCb; + pCookie->pKeyGenOutputData = pGeneratedMaskBuffer; + hashSetupData.hashAlgorithm = hashAlgorithm; + hashSetupData.hashMode = CPA_CY_SYM_HASH_MODE_PLAIN; + hashSetupData.digestResultLenInBytes = + pHashAlgInfo->digestLength; + + /* Populate the CD ctrl Block (LW 27 - LW 31) + * and the CD Hash HW setup block + */ + LacSymQat_HashContentDescInit( + &(keyGenReqFtr), + instanceHandle, + &hashSetupData, + /* point to base of hw setup block */ + (Cpa8U *)pCookie->contentDesc, + LAC_SYM_KEY_NO_HASH_BLK_OFFSET_QW, + ICP_QAT_FW_SLICE_DRAM_WR, + ICP_QAT_HW_AUTH_MODE0, /* just a plain hash */ + CPA_FALSE, /* Not using sym Constants Table in Shared SRAM + */ + CPA_FALSE, /* not using the optimised Content Desc */ + NULL, + &hashBlkSizeInBytes); + + /* Populate the Req param LW 14-26 */ + LacSymQat_KeyMgfRequestPopulate( + &keyGenReqHdr, + &keyGenReqMid, + pKeyGenMgfOpData->seedBuffer.dataLenInBytes, + pKeyGenMgfOpData->maskLenInBytes, + (Cpa8U)pHashAlgInfo->digestLength); + + contentDescInfo.pData = pCookie->contentDesc; + contentDescInfo.hardwareSetupBlockPhys = + LAC_MEM_CAST_PTR_TO_UINT64( + pSymCookie->keyContentDescPhyAddr); + contentDescInfo.hwBlkSzQuadWords = + LAC_BYTES_TO_QUADWORDS(hashBlkSizeInBytes); + + /* Populate common request fields */ + inputPhysAddr = + LAC_MEM_CAST_PTR_TO_UINT64(LAC_OS_VIRT_TO_PHYS_EXTERNAL( + pService->generic_service_info, + pKeyGenMgfOpData->seedBuffer.pData)); + + if (inputPhysAddr == 0) { + LAC_LOG_ERROR( + "Unable to get the seed buffer physical address"); + status = CPA_STATUS_FAIL; + } + outputPhysAddr = LAC_MEM_CAST_PTR_TO_UINT64( + LAC_OS_VIRT_TO_PHYS_EXTERNAL(pService->generic_service_info, + pGeneratedMaskBuffer->pData)); + if (outputPhysAddr == 0) { + LAC_LOG_ERROR( + "Unable to get the physical address of the mask"); + 
status = CPA_STATUS_FAIL; + } + } + + if (CPA_STATUS_SUCCESS == status) { + /* Make up the full keyGenReq struct from its constituents */ + pMsgDummy = (Cpa8U *)&(keyGenReq); + pCacheDummyHdr = (Cpa8U *)&(keyGenReqHdr); + pCacheDummyMid = (Cpa8U *)&(keyGenReqMid); + pCacheDummyFtr = (Cpa8U *)&(keyGenReqFtr); + + memcpy(pMsgDummy, + pCacheDummyHdr, + (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_HDR_IN_LW)); + memset((pMsgDummy + + (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_HDR_IN_LW)), + 0, + (LAC_LONG_WORD_IN_BYTES * + LAC_SIZE_OF_CACHE_TO_CLEAR_IN_LW)); + memcpy(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_MID_IN_LW), + pCacheDummyMid, + (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_MID_IN_LW)); + memcpy(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_FTR_IN_LW), + pCacheDummyFtr, + (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_FTR_IN_LW)); + + SalQatMsg_ContentDescHdrWrite((icp_qat_fw_comn_req_t *)&( + keyGenReq), + &(contentDescInfo)); + + SalQatMsg_CmnHdrWrite((icp_qat_fw_comn_req_t *)&keyGenReq, + ICP_QAT_FW_COMN_REQ_CPM_FW_LA, + ICP_QAT_FW_LA_CMD_MGF1, + cmnRequestFlags, + laCmdFlags); + + /* + * MGF uses a flat buffer but we can use zero for source and + * dest length because the firmware will use the seed length, + * hash length and mask length to find source length. 
+ */ + SalQatMsg_CmnMidWrite((icp_qat_fw_la_bulk_req_t *)&(keyGenReq), + pCookie, + LAC_SYM_KEY_QAT_PTR_TYPE, + inputPhysAddr, + outputPhysAddr, + 0, + 0); + + /* Send to QAT */ + status = icp_adf_transPutMsg(pService->trans_handle_sym_tx, + (void *)&(keyGenReq), + LAC_QAT_SYM_REQ_SZ_LW); + } + if (CPA_STATUS_SUCCESS == status) { + /* Update stats */ + LAC_KEY_STAT_INC(numMgfKeyGenRequests, instanceHandle); + } else { + LAC_KEY_STAT_INC(numMgfKeyGenRequestErrors, instanceHandle); + /* clean up memory */ + if (NULL != pCookie) { + Lac_MemPoolEntryFree(pCookie); + } + } + return status; +} + +/** + * cpaCyKeyGenMgf + */ +CpaStatus +cpaCyKeyGenMgf(const CpaInstanceHandle instanceHandle_in, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const CpaCyKeyGenMgfOpData *pKeyGenMgfOpData, + CpaFlatBuffer *pGeneratedMaskBuffer) +{ + CpaInstanceHandle instanceHandle = NULL; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + /* If synchronous Operation */ + if (NULL == pKeyGenCb) { + return LacSymKey_MgfSync(instanceHandle, + pKeyGenCb, + pCallbackTag, + (const void *)pKeyGenMgfOpData, + pGeneratedMaskBuffer, + CPA_FALSE); + } + /* Asynchronous Operation */ + return LacSymKey_MgfCommon(instanceHandle, + pKeyGenCb, + pCallbackTag, + (const void *)pKeyGenMgfOpData, + pKeyGenMgfOpData, + pGeneratedMaskBuffer, + CPA_CY_SYM_HASH_SHA1); +} + +/** + * cpaCyKeyGenMgfExt + */ +CpaStatus +cpaCyKeyGenMgfExt(const CpaInstanceHandle instanceHandle_in, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const CpaCyKeyGenMgfOpDataExt *pKeyGenMgfOpDataExt, + CpaFlatBuffer *pGeneratedMaskBuffer) +{ + CpaInstanceHandle instanceHandle = NULL; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + /* If 
synchronous Operation */ + if (NULL == pKeyGenCb) { + return LacSymKey_MgfSync(instanceHandle, + pKeyGenCb, + pCallbackTag, + (const void *)pKeyGenMgfOpDataExt, + pGeneratedMaskBuffer, + CPA_TRUE); + } + + /* Param check specific for Ext function, rest of parameters validated + * in LacSymKey_MgfCommon + */ + LAC_CHECK_NULL_PARAM(pKeyGenMgfOpDataExt); + if (CPA_CY_SYM_HASH_MD5 > pKeyGenMgfOpDataExt->hashAlgorithm || + CPA_CY_SYM_HASH_SHA512 < pKeyGenMgfOpDataExt->hashAlgorithm) { + LAC_INVALID_PARAM_LOG("hashAlgorithm"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Asynchronous Operation */ + return LacSymKey_MgfCommon(instanceHandle, + pKeyGenCb, + pCallbackTag, + (const void *)pKeyGenMgfOpDataExt, + &pKeyGenMgfOpDataExt->baseOpData, + pGeneratedMaskBuffer, + pKeyGenMgfOpDataExt->hashAlgorithm); +} + +/** + ****************************************************************************** + * @ingroup LacSymKey + * Key Generation SSL & TLS response handler + * + * @description + * Handles Key Generation SSL & TLS response messages from the QAT. + * + * @param[in] lacCmdId Command id of the original request + * @param[in] pOpaqueData Pointer to opaque data that was in request + * @param[in] cmnRespFlags LA response flags + * + * @return void + * + *****************************************************************************/ +static void +LacSymKey_SslTlsHandleResponse(icp_qat_fw_la_cmd_id_t lacCmdId, + void *pOpaqueData, + icp_qat_fw_comn_flags cmnRespFlags) +{ + void *pSslTlsOpData = NULL; + CpaCyGenFlatBufCbFunc pKeyGenSslTlsCb = NULL; + lac_sym_key_cookie_t *pCookie = NULL; + void *pCallbackTag = NULL; + CpaFlatBuffer *pGeneratedKeyBuffer = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + + CpaBoolean respStatusOk = + (ICP_QAT_FW_COMN_STATUS_FLAG_OK == + ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(cmnRespFlags)) ? 
+ CPA_TRUE : + CPA_FALSE; + + pCookie = (lac_sym_key_cookie_t *)pOpaqueData; + + pSslTlsOpData = pCookie->pKeyGenOpData; + + if (CPA_TRUE == respStatusOk) { + LacKey_StatsInc(lacCmdId, + LAC_KEY_COMPLETED, + pCookie->instanceHandle); + } else { + status = CPA_STATUS_FAIL; + LacKey_StatsInc(lacCmdId, + LAC_KEY_COMPLETED_ERRORS, + pCookie->instanceHandle); + } + + pKeyGenSslTlsCb = (CpaCyGenFlatBufCbFunc)(pCookie->pKeyGenCb); + + pCallbackTag = pCookie->pCallbackTag; + pGeneratedKeyBuffer = pCookie->pKeyGenOutputData; + + Lac_MemPoolEntryFree(pCookie); + + (*pKeyGenSslTlsCb)(pCallbackTag, + status, + pSslTlsOpData, + pGeneratedKeyBuffer); +} + +/** +******************************************************************************* +* @ingroup LacSymKey +* Synchronous mode of operation function wrapper for performing SSL/TLS +* key gen operation +* +* @description +* Synchronous mode of operation function wrapper for performing SSL/TLS +* key gen operation +* +* @param[in] instanceHandle QAT device handle. +* @param[in] pKeyGenCb Pointer to callback function to be invoked +* when the operation is complete. +* @param[in] pCallbackTag Opaque User Data for this specific call. +* @param[in] lacCmdId Lac command ID (identify SSL & TLS ops) +* @param[in] pKeyGenSslTlsOpData Structure containing all the data needed to +* perform the SSL/TLS key generation +* operation. +* @param[in] hashAlgorithm Specifies the hash algorithm to use. +* According to RFC5246, this should be +* "SHA-256 or a stronger standard hash +* function." +* @param[out] pKeyGenOutputData pointer to where output result should be +* written +* +* @retval CPA_STATUS_SUCCESS Function executed successfully. +* @retval CPA_STATUS_FAIL Function failed. +* @retval CPA_STATUS_RETRY Function should be retried. +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. +* @retval CPA_STATUS_RESOURCE Error related to system resources. 
+* +*****************************************************************************/ +static CpaStatus +LacSymKey_SslTlsSync(CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + icp_qat_fw_la_cmd_id_t lacCmdId, + void *pKeyGenSslTlsOpData, + Cpa8U hashAlgorithm, + CpaFlatBuffer *pKeyGenOutputData) +{ + lac_sync_op_data_t *pSyncCallbackData = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + + status = LacSync_CreateSyncCookie(&pSyncCallbackData); + if (CPA_STATUS_SUCCESS == status) { + status = LacSymKey_KeyGenSslTls_GenCommon(instanceHandle, + pKeyGenCb, + pSyncCallbackData, + lacCmdId, + pKeyGenSslTlsOpData, + hashAlgorithm, + pKeyGenOutputData); + } else { + /* Failure allocating sync cookie */ + LacKey_StatsInc(lacCmdId, + LAC_KEY_REQUEST_ERRORS, + instanceHandle); + return status; + } + + if (CPA_STATUS_SUCCESS == status) { + CpaStatus syncStatus = CPA_STATUS_SUCCESS; + + syncStatus = + LacSync_WaitForCallback(pSyncCallbackData, + LAC_SYM_SYNC_CALLBACK_TIMEOUT, + &status, + NULL); + + /* If callback doesn't come back */ + if (CPA_STATUS_SUCCESS != syncStatus) { + LacKey_StatsInc(lacCmdId, + LAC_KEY_COMPLETED_ERRORS, + instanceHandle); + LAC_LOG_ERROR("Callback timed out"); + status = syncStatus; + } + } else { + /* As the Request was not sent the Callback will never + * be called, so need to indicate that we're finished + * with cookie so it can be destroyed. 
+ */ + LacSync_SetSyncCookieComplete(pSyncCallbackData); + } + + LacSync_DestroySyncCookie(&pSyncCallbackData); + + return status; +} + +static CpaStatus +computeHashKey(CpaFlatBuffer *secret, + CpaFlatBuffer *hash, + CpaCySymHashAlgorithm *hashAlgorithm) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + switch (*hashAlgorithm) { + case CPA_CY_SYM_HASH_MD5: + status = qatUtilsHashMD5Full(secret->pData, + hash->pData, + secret->dataLenInBytes); + break; + case CPA_CY_SYM_HASH_SHA1: + status = qatUtilsHashSHA1Full(secret->pData, + hash->pData, + secret->dataLenInBytes); + break; + case CPA_CY_SYM_HASH_SHA256: + status = qatUtilsHashSHA256Full(secret->pData, + hash->pData, + secret->dataLenInBytes); + break; + case CPA_CY_SYM_HASH_SHA384: + status = qatUtilsHashSHA384Full(secret->pData, + hash->pData, + secret->dataLenInBytes); + break; + case CPA_CY_SYM_HASH_SHA512: + status = qatUtilsHashSHA512Full(secret->pData, + hash->pData, + secret->dataLenInBytes); + break; + default: + status = CPA_STATUS_FAIL; + } + return status; +} + +static CpaStatus +LacSymKey_KeyGenSslTls_GenCommon(CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + icp_qat_fw_la_cmd_id_t lacCmdId, + void *pKeyGenSslTlsOpData, + Cpa8U hashAlgCipher, + CpaFlatBuffer *pKeyGenOutputData) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaBoolean precompute = CPA_FALSE; + icp_qat_fw_la_bulk_req_t keyGenReq = { { 0 } }; + icp_qat_la_bulk_req_hdr_t keyGenReqHdr = { { 0 } }; + icp_qat_fw_la_key_gen_common_t keyGenReqMid = { { 0 } }; + icp_qat_la_bulk_req_ftr_t keyGenReqFtr = { { { 0 } } }; + Cpa8U *pMsgDummy = NULL; + Cpa8U *pCacheDummyHdr = NULL; + Cpa8U *pCacheDummyMid = NULL; + Cpa8U *pCacheDummyFtr = NULL; + lac_sym_key_cookie_t *pCookie = NULL; + lac_sym_cookie_t *pSymCookie = NULL; + Cpa64U inputPhysAddr = 0; + Cpa64U outputPhysAddr = 0; +/* Structure initializer is supported by C99, but it is + * not supported by some older Intel compilers. 
+ */ + CpaCySymHashSetupData hashSetupData = { 0 }; + sal_qat_content_desc_info_t contentDescInfo = { 0 }; + Cpa32U hashBlkSizeInBytes = 0; + Cpa32U tlsPrefixLen = 0; + + CpaFlatBuffer inputSecret = { 0 }; + CpaFlatBuffer hashKeyOutput = { 0 }; + Cpa32U uSecretLen = 0; + CpaCySymHashNestedModeSetupData *pNestedModeSetupData = + &(hashSetupData.nestedModeSetupData); + icp_qat_fw_serv_specif_flags laCmdFlags = 0; + icp_qat_fw_comn_flags cmnRequestFlags = + ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_PTR_TYPE_FLAT, + QAT_COMN_CD_FLD_TYPE_64BIT_ADR); + + sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle; + + /* If synchronous Operation */ + if (NULL == pKeyGenCb) { + return LacSymKey_SslTlsSync(instanceHandle, + LacSync_GenFlatBufCb, + pCallbackTag, + lacCmdId, + pKeyGenSslTlsOpData, + hashAlgCipher, + pKeyGenOutputData); + } + /* Allocate the cookie */ + pCookie = (lac_sym_key_cookie_t *)Lac_MemPoolEntryAlloc( + pService->lac_sym_cookie_pool); + if (NULL == pCookie) { + LAC_LOG_ERROR("Cannot get mem pool entry"); + status = CPA_STATUS_RESOURCE; + } else if ((void *)CPA_STATUS_RETRY == pCookie) { + pCookie = NULL; + status = CPA_STATUS_RETRY; + } else { + pSymCookie = (lac_sym_cookie_t *)pCookie; + } + + if (CPA_STATUS_SUCCESS == status) { + icp_qat_hw_auth_mode_t qatHashMode = 0; + + if (ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE == lacCmdId) { + qatHashMode = ICP_QAT_HW_AUTH_MODE0; + } else /* TLS v1.1, v1.2, v1.3 */ + { + qatHashMode = ICP_QAT_HW_AUTH_MODE2; + } + + pCookie->instanceHandle = pService; + pCookie->pCallbackTag = pCallbackTag; + pCookie->pKeyGenCb = pKeyGenCb; + pCookie->pKeyGenOpData = pKeyGenSslTlsOpData; + pCookie->pKeyGenOutputData = pKeyGenOutputData; + hashSetupData.hashMode = CPA_CY_SYM_HASH_MODE_NESTED; + + /* SSL3 */ + if (ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE == lacCmdId) { + hashSetupData.hashAlgorithm = CPA_CY_SYM_HASH_SHA1; + hashSetupData.digestResultLenInBytes = + LAC_HASH_MD5_DIGEST_SIZE; + pNestedModeSetupData->outerHashAlgorithm = + 
CPA_CY_SYM_HASH_MD5; + + pNestedModeSetupData->pInnerPrefixData = NULL; + pNestedModeSetupData->innerPrefixLenInBytes = 0; + pNestedModeSetupData->pOuterPrefixData = NULL; + pNestedModeSetupData->outerPrefixLenInBytes = 0; + } + /* TLS v1.1 */ + else if (ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE == lacCmdId) { + CpaCyKeyGenTlsOpData *pKeyGenTlsOpData = + (CpaCyKeyGenTlsOpData *)pKeyGenSslTlsOpData; + + hashSetupData.hashAlgorithm = CPA_CY_SYM_HASH_SHA1; + hashSetupData.digestResultLenInBytes = + LAC_HASH_MD5_DIGEST_SIZE; + pNestedModeSetupData->outerHashAlgorithm = + CPA_CY_SYM_HASH_MD5; + + uSecretLen = pKeyGenTlsOpData->secret.dataLenInBytes; + + /* We want to handle pre_master_secret > 128 bytes + * therefore we + * only verify if the current operation is Master Secret + * Derive. + * The other operations remain unchanged. + */ + if ((uSecretLen > + ICP_QAT_FW_LA_TLS_V1_1_SECRET_LEN_MAX) && + (CPA_CY_KEY_TLS_OP_MASTER_SECRET_DERIVE == + pKeyGenTlsOpData->tlsOp || + CPA_CY_KEY_TLS_OP_USER_DEFINED == + pKeyGenTlsOpData->tlsOp)) { + CpaCySymHashAlgorithm hashAlgorithm = + (CpaCySymHashAlgorithm)hashAlgCipher; + /* secret = [s1 | s2 ] + * s1 = outer prefix, s2 = inner prefix + * length of s1 and s2 = ceil(secret_length / 2) + * (secret length + 1)/2 will always give the + * ceil as + * division by 2 + * (>>1) will give the smallest integral value + * not less than + * arg + */ + tlsPrefixLen = + (pKeyGenTlsOpData->secret.dataLenInBytes + + 1) >> + 1; + inputSecret.dataLenInBytes = tlsPrefixLen; + inputSecret.pData = + pKeyGenTlsOpData->secret.pData; + + /* Since the pre_master_secret is > 128, we + * split the input + * pre_master_secret in 2 halves and compute the + * MD5 of the + * first half and the SHA1 on the second half. + */ + hashAlgorithm = CPA_CY_SYM_HASH_MD5; + + /* Initialize pointer where MD5 key will go. 
*/ + hashKeyOutput.pData = + &pCookie->hashKeyBuffer[0]; + hashKeyOutput.dataLenInBytes = + LAC_HASH_MD5_DIGEST_SIZE; + computeHashKey(&inputSecret, + &hashKeyOutput, + &hashAlgorithm); + + pNestedModeSetupData->pOuterPrefixData = + &pCookie->hashKeyBuffer[0]; + pNestedModeSetupData->outerPrefixLenInBytes = + LAC_HASH_MD5_DIGEST_SIZE; + + /* Point to the second half of the + * pre_master_secret */ + inputSecret.pData = + pKeyGenTlsOpData->secret.pData + + (pKeyGenTlsOpData->secret.dataLenInBytes - + tlsPrefixLen); + + /* Compute SHA1 on the second half of the + * pre_master_secret + */ + hashAlgorithm = CPA_CY_SYM_HASH_SHA1; + /* Initialize pointer where SHA1 key will go. */ + hashKeyOutput.pData = + &pCookie->hashKeyBuffer + [LAC_HASH_MD5_DIGEST_SIZE]; + hashKeyOutput.dataLenInBytes = + LAC_HASH_SHA1_DIGEST_SIZE; + computeHashKey(&inputSecret, + &hashKeyOutput, + &hashAlgorithm); + + pNestedModeSetupData->pInnerPrefixData = + &pCookie->hashKeyBuffer + [LAC_HASH_MD5_DIGEST_SIZE]; + pNestedModeSetupData->innerPrefixLenInBytes = + LAC_HASH_SHA1_DIGEST_SIZE; + } else { + /* secret = [s1 | s2 ] + * s1 = outer prefix, s2 = inner prefix + * length of s1 and s2 = ceil(secret_length / 2) + * (secret length + 1)/2 will always give the + * ceil as + * division by 2 + * (>>1) will give the smallest integral value + * not less than + * arg + */ + tlsPrefixLen = + (pKeyGenTlsOpData->secret.dataLenInBytes + + 1) >> + 1; + /* last byte of s1 will be first byte of s2 if + * Length is odd + */ + pNestedModeSetupData->pInnerPrefixData = + pKeyGenTlsOpData->secret.pData + + (pKeyGenTlsOpData->secret.dataLenInBytes - + tlsPrefixLen); + + pNestedModeSetupData->pOuterPrefixData = + pKeyGenTlsOpData->secret.pData; + + pNestedModeSetupData->innerPrefixLenInBytes = + pNestedModeSetupData + ->outerPrefixLenInBytes = tlsPrefixLen; + } + } + /* TLS v1.2 */ + else if (ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE == lacCmdId) { + CpaCyKeyGenTlsOpData *pKeyGenTlsOpData = + (CpaCyKeyGenTlsOpData 
*)pKeyGenSslTlsOpData; + CpaCySymHashAlgorithm hashAlgorithm = + (CpaCySymHashAlgorithm)hashAlgCipher; + + uSecretLen = pKeyGenTlsOpData->secret.dataLenInBytes; + + hashSetupData.hashAlgorithm = + (CpaCySymHashAlgorithm)hashAlgorithm; + hashSetupData.digestResultLenInBytes = + (Cpa32U)getDigestSizeFromHashAlgo(hashAlgorithm); + pNestedModeSetupData->outerHashAlgorithm = + (CpaCySymHashAlgorithm)hashAlgorithm; + if (CPA_CY_KEY_TLS_OP_MASTER_SECRET_DERIVE == + pKeyGenTlsOpData->tlsOp || + CPA_CY_KEY_TLS_OP_USER_DEFINED == + pKeyGenTlsOpData->tlsOp) { + switch (hashAlgorithm) { + case CPA_CY_SYM_HASH_SM3: + precompute = CPA_FALSE; + break; + case CPA_CY_SYM_HASH_SHA256: + if (uSecretLen > + ICP_QAT_FW_LA_TLS_V1_2_SECRET_LEN_MAX) { + precompute = CPA_TRUE; + } + break; + case CPA_CY_SYM_HASH_SHA384: + case CPA_CY_SYM_HASH_SHA512: + if (uSecretLen > + ICP_QAT_FW_LA_TLS_SECRET_LEN_MAX) { + precompute = CPA_TRUE; + } + break; + default: + break; + } + } + if (CPA_TRUE == precompute) { + /* Case when secret > algorithm block size + * RFC 4868: For SHA-256 Block size is 512 bits, + * for SHA-384 + * and SHA-512 Block size is 1024 bits + * Initialize pointer + * where SHAxxx key will go. 
+ */ + hashKeyOutput.pData = + &pCookie->hashKeyBuffer[0]; + hashKeyOutput.dataLenInBytes = + hashSetupData.digestResultLenInBytes; + computeHashKey(&pKeyGenTlsOpData->secret, + &hashKeyOutput, + &hashSetupData.hashAlgorithm); + + /* Outer prefix = secret , inner prefix = secret + * secret < 64 bytes + */ + pNestedModeSetupData->pInnerPrefixData = + hashKeyOutput.pData; + pNestedModeSetupData->pOuterPrefixData = + hashKeyOutput.pData; + pNestedModeSetupData->innerPrefixLenInBytes = + hashKeyOutput.dataLenInBytes; + pNestedModeSetupData->outerPrefixLenInBytes = + hashKeyOutput.dataLenInBytes; + } else { + /* Outer prefix = secret , inner prefix = secret + * secret <= 64 bytes + */ + pNestedModeSetupData->pInnerPrefixData = + pKeyGenTlsOpData->secret.pData; + + pNestedModeSetupData->pOuterPrefixData = + pKeyGenTlsOpData->secret.pData; + + pNestedModeSetupData->innerPrefixLenInBytes = + pKeyGenTlsOpData->secret.dataLenInBytes; + pNestedModeSetupData->outerPrefixLenInBytes = + pKeyGenTlsOpData->secret.dataLenInBytes; + } + } + /* TLS v1.3 */ + else if ((ICP_QAT_FW_LA_CMD_HKDF_EXTRACT <= lacCmdId) && + (ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND_LABEL >= + lacCmdId)) { + CpaCyKeyGenHKDFOpData *pKeyGenTlsOpData = + (CpaCyKeyGenHKDFOpData *)pKeyGenSslTlsOpData; + CpaCySymHashAlgorithm hashAlgorithm = + getHashAlgorithmFromCipherSuiteHKDF(hashAlgCipher); + + /* Set HASH data */ + hashSetupData.hashAlgorithm = hashAlgorithm; + /* Calculate digest length from the HASH type */ + hashSetupData.digestResultLenInBytes = + cipherSuiteHKDFHashSizes[hashAlgCipher] + [LAC_KEY_HKDF_DIGESTS]; + /* Outer Hash type is the same as inner hash type */ + pNestedModeSetupData->outerHashAlgorithm = + hashAlgorithm; + + /* EXPAND (PRK): + * Outer prefix = secret, inner prefix = secret + * EXTRACT (SEED/SALT): + * Outer prefix = seed, inner prefix = seed + * Secret <= 64 Bytes + * We do not pre compute as secret can't be larger than + * 64 bytes + */ + + if ((ICP_QAT_FW_LA_CMD_HKDF_EXPAND == 
lacCmdId) || + (ICP_QAT_FW_LA_CMD_HKDF_EXPAND_LABEL == lacCmdId)) { + pNestedModeSetupData->pInnerPrefixData = + pKeyGenTlsOpData->secret; + pNestedModeSetupData->pOuterPrefixData = + pKeyGenTlsOpData->secret; + pNestedModeSetupData->innerPrefixLenInBytes = + pKeyGenTlsOpData->secretLen; + pNestedModeSetupData->outerPrefixLenInBytes = + pKeyGenTlsOpData->secretLen; + } else { + pNestedModeSetupData->pInnerPrefixData = + pKeyGenTlsOpData->seed; + pNestedModeSetupData->pOuterPrefixData = + pKeyGenTlsOpData->seed; + pNestedModeSetupData->innerPrefixLenInBytes = + pKeyGenTlsOpData->seedLen; + pNestedModeSetupData->outerPrefixLenInBytes = + pKeyGenTlsOpData->seedLen; + } + } + + /* Set the footer Data. + * Note that following function doesn't look at inner/outer + * prefix pointers in nested digest ctx + */ + LacSymQat_HashContentDescInit( + &keyGenReqFtr, + instanceHandle, + &hashSetupData, + pCookie + ->contentDesc, /* Pointer to base of hw setup block */ + LAC_SYM_KEY_NO_HASH_BLK_OFFSET_QW, + ICP_QAT_FW_SLICE_DRAM_WR, + qatHashMode, + CPA_FALSE, /* Not using sym Constants Table in SRAM */ + CPA_FALSE, /* Not using the optimised content Desc */ + NULL, /* Precompute data */ + &hashBlkSizeInBytes); + + /* SSL3 */ + if (ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE == lacCmdId) { + CpaCyKeyGenSslOpData *pKeyGenSslOpData = + (CpaCyKeyGenSslOpData *)pKeyGenSslTlsOpData; + Cpa8U *pLabel = NULL; + Cpa32U labelLen = 0; + Cpa8U iterations = 0; + Cpa64U labelPhysAddr = 0; + + /* Iterations = ceiling of output required / output per + * iteration Ceiling of a / b = (a + (b-1)) / b + */ + iterations = + (pKeyGenSslOpData->generatedKeyLenInBytes + + (LAC_SYM_QAT_KEY_SSL_BYTES_PER_ITERATION - 1)) >> + LAC_SYM_QAT_KEY_SSL_ITERATIONS_SHIFT; + + if (CPA_CY_KEY_SSL_OP_USER_DEFINED == + pKeyGenSslOpData->sslOp) { + pLabel = pKeyGenSslOpData->userLabel.pData; + labelLen = + pKeyGenSslOpData->userLabel.dataLenInBytes; + labelPhysAddr = LAC_OS_VIRT_TO_PHYS_EXTERNAL( + pService->generic_service_info, 
pLabel); + + if (labelPhysAddr == 0) { + LAC_LOG_ERROR( + "Unable to get the physical address of the" + " label"); + status = CPA_STATUS_FAIL; + } + } else { + pLabel = pService->pSslLabel; + + /* Calculate label length. + * eg. 3 iterations is ABBCCC so length is 6 + */ + labelLen = + ((iterations * iterations) + iterations) >> + 1; + labelPhysAddr = + LAC_OS_VIRT_TO_PHYS_INTERNAL(pLabel); + } + + LacSymQat_KeySslRequestPopulate( + &keyGenReqHdr, + &keyGenReqMid, + pKeyGenSslOpData->generatedKeyLenInBytes, + labelLen, + pKeyGenSslOpData->secret.dataLenInBytes, + iterations); + + LacSymQat_KeySslKeyMaterialInputPopulate( + &(pService->generic_service_info), + &(pCookie->u.sslKeyInput), + pKeyGenSslOpData->seed.pData, + labelPhysAddr, + pKeyGenSslOpData->secret.pData); + + inputPhysAddr = LAC_MEM_CAST_PTR_TO_UINT64( + pSymCookie->keySslKeyInputPhyAddr); + } + /* TLS v1.1, v1.2 */ + else if (ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE == lacCmdId || + ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE == lacCmdId) { + CpaCyKeyGenTlsOpData *pKeyGenTlsOpData = + (CpaCyKeyGenTlsOpData *)pKeyGenSslTlsOpData; + lac_sym_qat_hash_state_buffer_info_t + hashStateBufferInfo = { 0 }; + CpaBoolean hashStateBuffer = CPA_FALSE; + icp_qat_fw_auth_cd_ctrl_hdr_t *pHashControlBlock = + (icp_qat_fw_auth_cd_ctrl_hdr_t *)&( + keyGenReqFtr.cd_ctrl); + icp_qat_la_auth_req_params_t *pHashReqParams = NULL; + Cpa8U *pLabel = NULL; + Cpa32U labelLen = 0; + Cpa64U labelPhysAddr = 0; + hashStateBufferInfo.pData = pCookie->hashStateBuffer; + hashStateBufferInfo.pDataPhys = + LAC_MEM_CAST_PTR_TO_UINT64( + pSymCookie->keyHashStateBufferPhyAddr); + hashStateBufferInfo.stateStorageSzQuadWords = 0; + + LacSymQat_HashSetupReqParamsMetaData(&(keyGenReqFtr), + instanceHandle, + &(hashSetupData), + hashStateBuffer, + qatHashMode, + CPA_FALSE); + + pHashReqParams = (icp_qat_la_auth_req_params_t *)&( + keyGenReqFtr.serv_specif_rqpars); + + hashStateBufferInfo.prefixAadSzQuadWords = + LAC_BYTES_TO_QUADWORDS( + 
pHashReqParams->u2.inner_prefix_sz + + pHashControlBlock->outer_prefix_sz); + + /* Copy prefix data into hash state buffer */ + pMsgDummy = (Cpa8U *)&(keyGenReq); + pCacheDummyHdr = (Cpa8U *)&(keyGenReqHdr); + pCacheDummyMid = (Cpa8U *)&(keyGenReqMid); + pCacheDummyFtr = (Cpa8U *)&(keyGenReqFtr); + memcpy(pMsgDummy, + pCacheDummyHdr, + (LAC_LONG_WORD_IN_BYTES * + LAC_SIZE_OF_CACHE_HDR_IN_LW)); + memcpy(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_MID_IN_LW), + pCacheDummyMid, + (LAC_LONG_WORD_IN_BYTES * + LAC_SIZE_OF_CACHE_MID_IN_LW)); + memcpy(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_FTR_IN_LW), + pCacheDummyFtr, + (LAC_LONG_WORD_IN_BYTES * + LAC_SIZE_OF_CACHE_FTR_IN_LW)); + + LacSymQat_HashStatePrefixAadBufferPopulate( + &hashStateBufferInfo, + &keyGenReqFtr, + pNestedModeSetupData->pInnerPrefixData, + pNestedModeSetupData->innerPrefixLenInBytes, + pNestedModeSetupData->pOuterPrefixData, + pNestedModeSetupData->outerPrefixLenInBytes); + + /* Firmware only looks at hash state buffer pointer and + * the + * hash state buffer size so all other fields are set to + * 0 + */ + LacSymQat_HashRequestParamsPopulate( + &(keyGenReq), + 0, /* Auth offset */ + 0, /* Auth length */ + &(pService->generic_service_info), + &hashStateBufferInfo, /* Hash state prefix buffer */ + ICP_QAT_FW_LA_PARTIAL_NONE, + 0, /* Hash result size */ + CPA_FALSE, + NULL, + CPA_CY_SYM_HASH_NONE, /* Hash algorithm */ + NULL); /* HKDF only */ + + /* Set up the labels and their length */ + if (CPA_CY_KEY_TLS_OP_USER_DEFINED == + pKeyGenTlsOpData->tlsOp) { + pLabel = pKeyGenTlsOpData->userLabel.pData; + labelLen = + pKeyGenTlsOpData->userLabel.dataLenInBytes; + labelPhysAddr = LAC_OS_VIRT_TO_PHYS_EXTERNAL( + pService->generic_service_info, pLabel); + + if (labelPhysAddr == 0) { + LAC_LOG_ERROR( + "Unable to get the physical address of the" + " label"); + status = CPA_STATUS_FAIL; + } + } else if (CPA_CY_KEY_TLS_OP_MASTER_SECRET_DERIVE == + pKeyGenTlsOpData->tlsOp) { + 
pLabel = pService->pTlsLabel->masterSecret; + labelLen = + sizeof( + LAC_SYM_KEY_TLS_MASTER_SECRET_LABEL) - + 1; + labelPhysAddr = + LAC_OS_VIRT_TO_PHYS_INTERNAL(pLabel); + } else if (CPA_CY_KEY_TLS_OP_KEY_MATERIAL_DERIVE == + pKeyGenTlsOpData->tlsOp) { + pLabel = pService->pTlsLabel->keyMaterial; + labelLen = + sizeof(LAC_SYM_KEY_TLS_KEY_MATERIAL_LABEL) - + 1; + labelPhysAddr = + LAC_OS_VIRT_TO_PHYS_INTERNAL(pLabel); + } else if (CPA_CY_KEY_TLS_OP_CLIENT_FINISHED_DERIVE == + pKeyGenTlsOpData->tlsOp) { + pLabel = pService->pTlsLabel->clientFinished; + labelLen = + sizeof(LAC_SYM_KEY_TLS_CLIENT_FIN_LABEL) - + 1; + labelPhysAddr = + LAC_OS_VIRT_TO_PHYS_INTERNAL(pLabel); + } else { + pLabel = pService->pTlsLabel->serverFinished; + labelLen = + sizeof(LAC_SYM_KEY_TLS_SERVER_FIN_LABEL) - + 1; + labelPhysAddr = + LAC_OS_VIRT_TO_PHYS_INTERNAL(pLabel); + } + LacSymQat_KeyTlsRequestPopulate( + &keyGenReqMid, + pKeyGenTlsOpData->generatedKeyLenInBytes, + labelLen, + pKeyGenTlsOpData->secret.dataLenInBytes, + pKeyGenTlsOpData->seed.dataLenInBytes, + lacCmdId); + + LacSymQat_KeyTlsKeyMaterialInputPopulate( + &(pService->generic_service_info), + &(pCookie->u.tlsKeyInput), + pKeyGenTlsOpData->seed.pData, + labelPhysAddr); + + inputPhysAddr = LAC_MEM_CAST_PTR_TO_UINT64( + pSymCookie->keyTlsKeyInputPhyAddr); + } + /* TLS v1.3 */ + else if (ICP_QAT_FW_LA_CMD_HKDF_EXTRACT <= lacCmdId && + ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND >= + lacCmdId) { + CpaCyKeyGenHKDFOpData *pKeyGenTlsOpData = + (CpaCyKeyGenHKDFOpData *)pKeyGenSslTlsOpData; + lac_sym_qat_hash_state_buffer_info_t + hashStateBufferInfo = { 0 }; + CpaBoolean hashStateBuffer = CPA_FALSE; + icp_qat_fw_auth_cd_ctrl_hdr_t *pHashControlBlock = + (icp_qat_fw_auth_cd_ctrl_hdr_t *)&( + keyGenReqFtr.cd_ctrl); + icp_qat_la_auth_req_params_t *pHashReqParams = NULL; + hashStateBufferInfo.pData = pCookie->hashStateBuffer; + hashStateBufferInfo.pDataPhys = + LAC_MEM_CAST_PTR_TO_UINT64( + pSymCookie->keyHashStateBufferPhyAddr); + 
hashStateBufferInfo.stateStorageSzQuadWords = 0; + + LacSymQat_HashSetupReqParamsMetaData(&(keyGenReqFtr), + instanceHandle, + &(hashSetupData), + hashStateBuffer, + qatHashMode, + CPA_FALSE); + + pHashReqParams = (icp_qat_la_auth_req_params_t *)&( + keyGenReqFtr.serv_specif_rqpars); + + hashStateBufferInfo.prefixAadSzQuadWords = + LAC_BYTES_TO_QUADWORDS( + pHashReqParams->u2.inner_prefix_sz + + pHashControlBlock->outer_prefix_sz); + + /* Copy prefix data into hash state buffer */ + pMsgDummy = (Cpa8U *)&(keyGenReq); + pCacheDummyHdr = (Cpa8U *)&(keyGenReqHdr); + pCacheDummyMid = (Cpa8U *)&(keyGenReqMid); + pCacheDummyFtr = (Cpa8U *)&(keyGenReqFtr); + memcpy(pMsgDummy, + pCacheDummyHdr, + (LAC_LONG_WORD_IN_BYTES * + LAC_SIZE_OF_CACHE_HDR_IN_LW)); + memcpy(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_MID_IN_LW), + pCacheDummyMid, + (LAC_LONG_WORD_IN_BYTES * + LAC_SIZE_OF_CACHE_MID_IN_LW)); + memcpy(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_FTR_IN_LW), + pCacheDummyFtr, + (LAC_LONG_WORD_IN_BYTES * + LAC_SIZE_OF_CACHE_FTR_IN_LW)); + + LacSymQat_HashStatePrefixAadBufferPopulate( + &hashStateBufferInfo, + &keyGenReqFtr, + pNestedModeSetupData->pInnerPrefixData, + pNestedModeSetupData->innerPrefixLenInBytes, + pNestedModeSetupData->pOuterPrefixData, + pNestedModeSetupData->outerPrefixLenInBytes); + + /* Firmware only looks at hash state buffer pointer and + * the + * hash state buffer size so all other fields are set to + * 0 + */ + LacSymQat_HashRequestParamsPopulate( + &(keyGenReq), + 0, /* Auth offset */ + 0, /* Auth length */ + &(pService->generic_service_info), + &hashStateBufferInfo, /* Hash state prefix buffer */ + ICP_QAT_FW_LA_PARTIAL_NONE, + 0, /* Hash result size */ + CPA_FALSE, + NULL, + CPA_CY_SYM_HASH_NONE, /* Hash algorithm */ + pKeyGenTlsOpData->secret); /* IKM or PRK */ + + LacSymQat_KeyTlsRequestPopulate( + &keyGenReqMid, + cipherSuiteHKDFHashSizes[hashAlgCipher] + [LAC_KEY_HKDF_DIGESTS], + /* For EXTRACT, EXPAND, FW 
expects info to be passed + as label */ + pKeyGenTlsOpData->infoLen, + pKeyGenTlsOpData->secretLen, + pKeyGenTlsOpData->seedLen, + lacCmdId); + + LacSymQat_KeyTlsHKDFKeyMaterialInputPopulate( + &(pService->generic_service_info), + &(pCookie->u.tlsHKDFKeyInput), + pKeyGenTlsOpData, + 0, /* No subLabels used */ + lacCmdId); /* Pass op being performed */ + + inputPhysAddr = LAC_MEM_CAST_PTR_TO_UINT64( + pSymCookie->keyTlsKeyInputPhyAddr); + } + /* TLS v1.3 LABEL */ + else if (ICP_QAT_FW_LA_CMD_HKDF_EXPAND_LABEL == lacCmdId || + ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND_LABEL == + lacCmdId) { + CpaCyKeyGenHKDFOpData *pKeyGenTlsOpData = + (CpaCyKeyGenHKDFOpData *)pKeyGenSslTlsOpData; + Cpa64U subLabelsPhysAddr = 0; + lac_sym_qat_hash_state_buffer_info_t + hashStateBufferInfo = { 0 }; + CpaBoolean hashStateBuffer = CPA_FALSE; + icp_qat_fw_auth_cd_ctrl_hdr_t *pHashControlBlock = + (icp_qat_fw_auth_cd_ctrl_hdr_t *)&( + keyGenReqFtr.cd_ctrl); + icp_qat_la_auth_req_params_t *pHashReqParams = NULL; + hashStateBufferInfo.pData = pCookie->hashStateBuffer; + hashStateBufferInfo.pDataPhys = + LAC_MEM_CAST_PTR_TO_UINT64( + pSymCookie->keyHashStateBufferPhyAddr); + hashStateBufferInfo.stateStorageSzQuadWords = 0; + + LacSymQat_HashSetupReqParamsMetaData(&(keyGenReqFtr), + instanceHandle, + &(hashSetupData), + hashStateBuffer, + qatHashMode, + CPA_FALSE); + + pHashReqParams = (icp_qat_la_auth_req_params_t *)&( + keyGenReqFtr.serv_specif_rqpars); + + hashStateBufferInfo.prefixAadSzQuadWords = + LAC_BYTES_TO_QUADWORDS( + pHashReqParams->u2.inner_prefix_sz + + pHashControlBlock->outer_prefix_sz); + + /* Copy prefix data into hash state buffer */ + pMsgDummy = (Cpa8U *)&(keyGenReq); + pCacheDummyHdr = (Cpa8U *)&(keyGenReqHdr); + pCacheDummyMid = (Cpa8U *)&(keyGenReqMid); + pCacheDummyFtr = (Cpa8U *)&(keyGenReqFtr); + memcpy(pMsgDummy, + pCacheDummyHdr, + (LAC_LONG_WORD_IN_BYTES * + LAC_SIZE_OF_CACHE_HDR_IN_LW)); + memcpy(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + 
LAC_START_OF_CACHE_MID_IN_LW), + pCacheDummyMid, + (LAC_LONG_WORD_IN_BYTES * + LAC_SIZE_OF_CACHE_MID_IN_LW)); + memcpy(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_FTR_IN_LW), + pCacheDummyFtr, + (LAC_LONG_WORD_IN_BYTES * + LAC_SIZE_OF_CACHE_FTR_IN_LW)); + + LacSymQat_HashStatePrefixAadBufferPopulate( + &hashStateBufferInfo, + &keyGenReqFtr, + pNestedModeSetupData->pInnerPrefixData, + pNestedModeSetupData->innerPrefixLenInBytes, + pNestedModeSetupData->pOuterPrefixData, + pNestedModeSetupData->outerPrefixLenInBytes); + + /* Firmware only looks at hash state buffer pointer and + * the + * hash state buffer size so all other fields are set to + * 0 + */ + LacSymQat_HashRequestParamsPopulate( + &(keyGenReq), + 0, /* Auth offset */ + 0, /* Auth length */ + &(pService->generic_service_info), + &hashStateBufferInfo, /* Hash state prefix buffer */ + ICP_QAT_FW_LA_PARTIAL_NONE, + 0, /* Hash result size */ + CPA_FALSE, + NULL, + CPA_CY_SYM_HASH_NONE, /* Hash algorithm */ + pKeyGenTlsOpData->secret); /* IKM or PRK */ + + LacSymQat_KeyTlsRequestPopulate( + &keyGenReqMid, + cipherSuiteHKDFHashSizes[hashAlgCipher] + [LAC_KEY_HKDF_DIGESTS], + pKeyGenTlsOpData->numLabels, /* Number of Labels */ + pKeyGenTlsOpData->secretLen, + pKeyGenTlsOpData->seedLen, + lacCmdId); + + /* Get physical address of subLabels */ + switch (hashAlgCipher) { + case CPA_CY_HKDF_TLS_AES_128_GCM_SHA256: /* Fall Through + */ + case CPA_CY_HKDF_TLS_AES_128_CCM_SHA256: + case CPA_CY_HKDF_TLS_AES_128_CCM_8_SHA256: + subLabelsPhysAddr = pService->pTlsHKDFSubLabel + ->sublabelPhysAddr256; + break; + case CPA_CY_HKDF_TLS_CHACHA20_POLY1305_SHA256: + subLabelsPhysAddr = + pService->pTlsHKDFSubLabel + ->sublabelPhysAddrChaChaPoly; + break; + case CPA_CY_HKDF_TLS_AES_256_GCM_SHA384: + subLabelsPhysAddr = pService->pTlsHKDFSubLabel + ->sublabelPhysAddr384; + break; + default: + break; + } + + LacSymQat_KeyTlsHKDFKeyMaterialInputPopulate( + &(pService->generic_service_info), + 
&(pCookie->u.tlsHKDFKeyInput), + pKeyGenTlsOpData, + subLabelsPhysAddr, + lacCmdId); /* Pass op being performed */ + + inputPhysAddr = LAC_MEM_CAST_PTR_TO_UINT64( + pSymCookie->keyTlsKeyInputPhyAddr); + } + + outputPhysAddr = LAC_MEM_CAST_PTR_TO_UINT64( + LAC_OS_VIRT_TO_PHYS_EXTERNAL(pService->generic_service_info, + pKeyGenOutputData->pData)); + + if (outputPhysAddr == 0) { + LAC_LOG_ERROR( + "Unable to get the physical address of the" + " output buffer"); + status = CPA_STATUS_FAIL; + } + } + if (CPA_STATUS_SUCCESS == status) { + Cpa8U lw26[4]; + char *tmp = NULL; + unsigned char a; + int n = 0; + /* Make up the full keyGenReq struct from its constituents + * before calling the SalQatMsg functions below. + * Note: The full cache struct has been reduced to a + * header, mid and footer for memory size reduction + */ + pMsgDummy = (Cpa8U *)&(keyGenReq); + pCacheDummyHdr = (Cpa8U *)&(keyGenReqHdr); + pCacheDummyMid = (Cpa8U *)&(keyGenReqMid); + pCacheDummyFtr = (Cpa8U *)&(keyGenReqFtr); + + memcpy(pMsgDummy, + pCacheDummyHdr, + (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_HDR_IN_LW)); + memcpy(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_MID_IN_LW), + pCacheDummyMid, + (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_MID_IN_LW)); + memcpy(&lw26, + pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_FTR_IN_LW), + LAC_LONG_WORD_IN_BYTES); + memcpy(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_FTR_IN_LW), + pCacheDummyFtr, + (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_FTR_IN_LW)); + tmp = (char *)(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_FTR_IN_LW)); + + /* Copy LW26, or'd with what's already there, into the Msg, for + * TLS */ + for (n = 0; n < LAC_LONG_WORD_IN_BYTES; n++) { + a = (unsigned char)*(tmp + n); + lw26[n] = lw26[n] | a; + } + memcpy(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_FTR_IN_LW), + &lw26, + LAC_LONG_WORD_IN_BYTES); + + contentDescInfo.pData = pCookie->contentDesc; + 
contentDescInfo.hardwareSetupBlockPhys = + LAC_MEM_CAST_PTR_TO_UINT64( + pSymCookie->keyContentDescPhyAddr); + contentDescInfo.hwBlkSzQuadWords = + LAC_BYTES_TO_QUADWORDS(hashBlkSizeInBytes); + + /* Populate common request fields */ + SalQatMsg_ContentDescHdrWrite((icp_qat_fw_comn_req_t *)&( + keyGenReq), + &(contentDescInfo)); + + SalQatMsg_CmnHdrWrite((icp_qat_fw_comn_req_t *)&keyGenReq, + ICP_QAT_FW_COMN_REQ_CPM_FW_LA, + lacCmdId, + cmnRequestFlags, + laCmdFlags); + + SalQatMsg_CmnMidWrite((icp_qat_fw_la_bulk_req_t *)&(keyGenReq), + pCookie, + LAC_SYM_KEY_QAT_PTR_TYPE, + inputPhysAddr, + outputPhysAddr, + 0, + 0); + + /* Send to QAT */ + status = icp_adf_transPutMsg(pService->trans_handle_sym_tx, + (void *)&(keyGenReq), + LAC_QAT_SYM_REQ_SZ_LW); + } + if (CPA_STATUS_SUCCESS == status) { + /* Update stats */ + LacKey_StatsInc(lacCmdId, + LAC_KEY_REQUESTS, + pCookie->instanceHandle); + } else { + /* Clean up cookie memory */ + if (NULL != pCookie) { + LacKey_StatsInc(lacCmdId, + LAC_KEY_REQUEST_ERRORS, + pCookie->instanceHandle); + Lac_MemPoolEntryFree(pCookie); + } + } + return status; +} + +/** + * @ingroup LacSymKey + * Parameters check for TLS v1.0/1.1, v1.2, v1.3 and SSL3 + * @description + * Check user parameters against the firmware/spec requirements. + * + * @param[in] pKeyGenOpData Pointer to a structure containing all + * the data needed to perform the key + * generation operation. + * @param[in] hashAlgCipher Specifies the hash algorithm, + * or cipher we are using. + * According to RFC5246, this should be + * "SHA-256 or a stronger standard hash + * function." + * @param[in] pGeneratedKeyBuffer User output buffers. + * @param[in] cmdId Keygen operation to perform. 
+ */ +static CpaStatus +LacSymKey_CheckParamSslTls(const void *pKeyGenOpData, + Cpa8U hashAlgCipher, + const CpaFlatBuffer *pGeneratedKeyBuffer, + icp_qat_fw_la_cmd_id_t cmdId) +{ + /* Api max value */ + Cpa32U maxSecretLen = 0; + Cpa32U maxSeedLen = 0; + Cpa32U maxOutputLen = 0; + Cpa32U maxInfoLen = 0; + Cpa32U maxLabelLen = 0; + + /* User info */ + Cpa32U uSecretLen = 0; + Cpa32U uSeedLen = 0; + Cpa32U uOutputLen = 0; + + LAC_CHECK_NULL_PARAM(pKeyGenOpData); + LAC_CHECK_NULL_PARAM(pGeneratedKeyBuffer); + LAC_CHECK_NULL_PARAM(pGeneratedKeyBuffer->pData); + + if (ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE == cmdId) { + CpaCyKeyGenSslOpData *opData = + (CpaCyKeyGenSslOpData *)pKeyGenOpData; + + /* User info */ + uSecretLen = opData->secret.dataLenInBytes; + uSeedLen = opData->seed.dataLenInBytes; + uOutputLen = opData->generatedKeyLenInBytes; + + /* Api max value */ + maxSecretLen = ICP_QAT_FW_LA_SSL_SECRET_LEN_MAX; + maxSeedLen = ICP_QAT_FW_LA_SSL_SEED_LEN_MAX; + maxOutputLen = ICP_QAT_FW_LA_SSL_OUTPUT_LEN_MAX; + + /* Check user buffers */ + LAC_CHECK_NULL_PARAM(opData->secret.pData); + LAC_CHECK_NULL_PARAM(opData->seed.pData); + + /* Check operation */ + if ((Cpa32U)opData->sslOp > CPA_CY_KEY_SSL_OP_USER_DEFINED) { + LAC_INVALID_PARAM_LOG("opData->sslOp"); + return CPA_STATUS_INVALID_PARAM; + } + if ((Cpa32U)opData->sslOp == CPA_CY_KEY_SSL_OP_USER_DEFINED) { + LAC_CHECK_NULL_PARAM(opData->userLabel.pData); + /* Maximum label length for SSL Key Gen request */ + if (opData->userLabel.dataLenInBytes > + ICP_QAT_FW_LA_SSL_LABEL_LEN_MAX) { + LAC_INVALID_PARAM_LOG( + "userLabel.dataLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* Only seed length for SSL3 Key Gen request */ + if (maxSeedLen != uSeedLen) { + LAC_INVALID_PARAM_LOG("seed.dataLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Maximum output length for SSL3 Key Gen request */ + if (uOutputLen > maxOutputLen) { + LAC_INVALID_PARAM_LOG("generatedKeyLenInBytes"); + return 
CPA_STATUS_INVALID_PARAM; + } + } + /* TLS v1.1 or TLS v1.2 */ + else if (ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE == cmdId || + ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE == cmdId) { + CpaCyKeyGenTlsOpData *opData = + (CpaCyKeyGenTlsOpData *)pKeyGenOpData; + + /* User info */ + uSecretLen = opData->secret.dataLenInBytes; + uSeedLen = opData->seed.dataLenInBytes; + uOutputLen = opData->generatedKeyLenInBytes; + + if (ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE == cmdId) { + /* Api max value */ + /* ICP_QAT_FW_LA_TLS_V1_1_SECRET_LEN_MAX needs to be + * multiplied + * by 4 in order to verify the 512 conditions. We did + * not change + * ICP_QAT_FW_LA_TLS_V1_1_SECRET_LEN_MAX as it + * represents + * the max value that the firmware can handle. + */ + maxSecretLen = + ICP_QAT_FW_LA_TLS_V1_1_SECRET_LEN_MAX * 4; + } else { + /* Api max value */ + /* ICP_QAT_FW_LA_TLS_V1_2_SECRET_LEN_MAX needs to be + * multiplied + * by 8 in order to verify the 512 conditions. We did + * not change + * ICP_QAT_FW_LA_TLS_V1_2_SECRET_LEN_MAX as it + * represents + * the max value that the firmware can handle. 
+ */ + maxSecretLen = + ICP_QAT_FW_LA_TLS_V1_2_SECRET_LEN_MAX * 8; + + /* Check Hash algorithm */ + if (0 == getDigestSizeFromHashAlgo(hashAlgCipher)) { + LAC_INVALID_PARAM_LOG("hashAlgorithm"); + return CPA_STATUS_INVALID_PARAM; + } + } + maxSeedLen = ICP_QAT_FW_LA_TLS_SEED_LEN_MAX; + maxOutputLen = ICP_QAT_FW_LA_TLS_OUTPUT_LEN_MAX; + /* Check user buffers */ + LAC_CHECK_NULL_PARAM(opData->secret.pData); + LAC_CHECK_NULL_PARAM(opData->seed.pData); + + /* Check operation */ + if ((Cpa32U)opData->tlsOp > CPA_CY_KEY_TLS_OP_USER_DEFINED) { + LAC_INVALID_PARAM_LOG("opData->tlsOp"); + return CPA_STATUS_INVALID_PARAM; + } else if ((Cpa32U)opData->tlsOp == + CPA_CY_KEY_TLS_OP_USER_DEFINED) { + LAC_CHECK_NULL_PARAM(opData->userLabel.pData); + /* Maximum label length for TLS Key Gen request */ + if (opData->userLabel.dataLenInBytes > + ICP_QAT_FW_LA_TLS_LABEL_LEN_MAX) { + LAC_INVALID_PARAM_LOG( + "userLabel.dataLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* Maximum/only seed length for TLS Key Gen request */ + if (((Cpa32U)opData->tlsOp != + CPA_CY_KEY_TLS_OP_MASTER_SECRET_DERIVE) && + ((Cpa32U)opData->tlsOp != + CPA_CY_KEY_TLS_OP_KEY_MATERIAL_DERIVE)) { + if (uSeedLen > maxSeedLen) { + LAC_INVALID_PARAM_LOG("seed.dataLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } else { + if (maxSeedLen != uSeedLen) { + LAC_INVALID_PARAM_LOG("seed.dataLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* Maximum output length for TLS Key Gen request */ + if (uOutputLen > maxOutputLen) { + LAC_INVALID_PARAM_LOG("generatedKeyLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } + /* TLS v1.3 */ + else if (cmdId >= ICP_QAT_FW_LA_CMD_HKDF_EXTRACT && + cmdId <= ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND_LABEL) { + CpaCyKeyGenHKDFOpData *HKDF_Data = + (CpaCyKeyGenHKDFOpData *)pKeyGenOpData; + CpaCyKeyHKDFCipherSuite cipherSuite = hashAlgCipher; + CpaCySymHashAlgorithm hashAlgorithm = + getHashAlgorithmFromCipherSuiteHKDF(cipherSuite); + maxSeedLen = + 
cipherSuiteHKDFHashSizes[cipherSuite][LAC_KEY_HKDF_DIGESTS]; + maxSecretLen = CPA_CY_HKDF_KEY_MAX_SECRET_SZ; + maxInfoLen = CPA_CY_HKDF_KEY_MAX_INFO_SZ; + maxLabelLen = CPA_CY_HKDF_KEY_MAX_LABEL_SZ; + + uSecretLen = HKDF_Data->secretLen; + + /* Check using supported hash function */ + if (0 == + (uOutputLen = getDigestSizeFromHashAlgo(hashAlgorithm))) { + LAC_INVALID_PARAM_LOG("Hash function not supported"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Number of labels does not exceed the MAX */ + if (HKDF_Data->numLabels > CPA_CY_HKDF_KEY_MAX_LABEL_COUNT) { + LAC_INVALID_PARAM_LOG( + "CpaCyKeyGenHKDFOpData.numLabels"); + return CPA_STATUS_INVALID_PARAM; + } + + switch (cmdId) { + case ICP_QAT_FW_LA_CMD_HKDF_EXTRACT: + if (maxSeedLen < HKDF_Data->seedLen) { + LAC_INVALID_PARAM_LOG( + "CpaCyKeyGenHKDFOpData.seedLen"); + return CPA_STATUS_INVALID_PARAM; + } + break; + case ICP_QAT_FW_LA_CMD_HKDF_EXPAND: + maxSecretLen = + cipherSuiteHKDFHashSizes[cipherSuite] + [LAC_KEY_HKDF_DIGESTS]; + + if (maxInfoLen < HKDF_Data->infoLen) { + LAC_INVALID_PARAM_LOG( + "CpaCyKeyGenHKDFOpData.infoLen"); + return CPA_STATUS_INVALID_PARAM; + } + break; + case ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND: + uOutputLen *= 2; + if (maxSeedLen < HKDF_Data->seedLen) { + LAC_INVALID_PARAM_LOG( + "CpaCyKeyGenHKDFOpData.seedLen"); + return CPA_STATUS_INVALID_PARAM; + } + if (maxInfoLen < HKDF_Data->infoLen) { + LAC_INVALID_PARAM_LOG( + "CpaCyKeyGenHKDFOpData.infoLen"); + return CPA_STATUS_INVALID_PARAM; + } + break; + case ICP_QAT_FW_LA_CMD_HKDF_EXPAND_LABEL: /* Fall through */ + case ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND_LABEL: { + Cpa8U subl_mask = 0, subl_number = 1; + Cpa8U i = 0; + + if (maxSeedLen < HKDF_Data->seedLen) { + LAC_INVALID_PARAM_LOG( + "CpaCyKeyGenHKDFOpData.seedLen"); + return CPA_STATUS_INVALID_PARAM; + } + + /* If EXPAND set uOutputLen to zero */ + if (ICP_QAT_FW_LA_CMD_HKDF_EXPAND_LABEL == cmdId) { + uOutputLen = 0; + maxSecretLen = cipherSuiteHKDFHashSizes + 
[cipherSuite][LAC_KEY_HKDF_DIGESTS]; + } + + for (i = 0; i < HKDF_Data->numLabels; i++) { + /* Check that the labelLen does not overflow */ + if (maxLabelLen < + HKDF_Data->label[i].labelLen) { + LAC_INVALID_PARAM_LOG1( + "CpaCyKeyGenHKDFOpData.label[%d].labelLen", + i); + return CPA_STATUS_INVALID_PARAM; + } + + if (HKDF_Data->label[i].sublabelFlag & + ~HKDF_SUB_LABELS_ALL) { + LAC_INVALID_PARAM_LOG1( + "CpaCyKeyGenHKDFOpData.label[%d]." + "subLabelFlag", + i); + return CPA_STATUS_INVALID_PARAM; + } + + /* Calculate the appended subLabel output + * lengths and + * check that the output buffer that the user + * has + * supplied is the correct length. + */ + uOutputLen += cipherSuiteHKDFHashSizes + [cipherSuite][LAC_KEY_HKDF_DIGESTS]; + /* Get mask of subLabel */ + subl_mask = HKDF_Data->label[i].sublabelFlag; + + for (subl_number = 1; + subl_number <= LAC_KEY_HKDF_SUBLABELS_NUM; + subl_number++) { + /* Add the used subLabel key lengths */ + if (subl_mask & 1) { + uOutputLen += + cipherSuiteHKDFHashSizes + [cipherSuite] + [subl_number]; + } + subl_mask >>= 1; + } + } + } break; + default: + break; + } + } else { + LAC_INVALID_PARAM_LOG("TLS/SSL operation"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Maximum secret length for TLS/SSL Key Gen request */ + if (uSecretLen > maxSecretLen) { + LAC_INVALID_PARAM_LOG("HKFD.secretLen/secret.dataLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Check for enough space in the flat buffer */ + if (uOutputLen > pGeneratedKeyBuffer->dataLenInBytes) { + LAC_INVALID_PARAM_LOG("pGeneratedKeyBuffer->dataLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + return CPA_STATUS_SUCCESS; +} + +/** + * + */ +/** + * @ingroup LacSymKey + * Common Keygen Code for TLS v1.0/1.1, v1.2 and SSL3. + * @description + * Check user parameters and perform the required operation. + * + * @param[in] instanceHandle_in Instance handle. + * @param[in] pKeyGenCb Pointer to callback function to be + * invoked when the operation is complete. 
+ * If this is set to a NULL value the + * function will operate synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific + * call. Will be returned unchanged in the + * callback. + * @param[in] pKeyGenOpData Pointer to a structure containing all + * the data needed to perform the key + * generation operation. + * @param[in] hashAlgorithm Specifies the hash algorithm to use. + * According to RFC5246, this should be + * "SHA-256 or a stronger standard hash + * function." + * @param[out] pGeneratedKeyBuffer User output buffer. + * @param[in] cmdId Keygen operation to perform. + */ +static CpaStatus +LacSymKey_KeyGenSslTls(const CpaInstanceHandle instanceHandle_in, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const void *pKeyGenOpData, + Cpa8U hashAlgorithm, + CpaFlatBuffer *pGeneratedKeyBuffer, + icp_qat_fw_la_cmd_id_t cmdId) +{ + CpaStatus status = CPA_STATUS_FAIL; + CpaInstanceHandle instanceHandle = LacKey_GetHandle(instanceHandle_in); + CpaCyCapabilitiesInfo cyCapInfo; + + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + + SAL_RUNNING_CHECK(instanceHandle); + SalCtrl_CyQueryCapabilities(instanceHandle, &cyCapInfo); + + if (IS_HKDF_UNSUPPORTED(cmdId, cyCapInfo.hkdfSupported)) { + LAC_LOG_ERROR("The device does not support HKDF"); + return CPA_STATUS_UNSUPPORTED; + } + + status = LacSymKey_CheckParamSslTls(pKeyGenOpData, + hashAlgorithm, + pGeneratedKeyBuffer, + cmdId); + if (CPA_STATUS_SUCCESS != status) + return status; + return LacSymKey_KeyGenSslTls_GenCommon(instanceHandle, + pKeyGenCb, + pCallbackTag, + cmdId, + LAC_CONST_PTR_CAST( + pKeyGenOpData), + hashAlgorithm, + pGeneratedKeyBuffer); +} + +/** + * @ingroup LacSymKey + * SSL Key Generation Function. + * @description + * This function is used for SSL key generation. 
It implements the key + * generation function defined in section 6.2.2 of the SSL 3.0 + * specification as described in + * http://www.mozilla.org/projects/security/pki/nss/ssl/draft302.txt. + * + * The input seed is taken as a flat buffer and the generated key is + * returned to caller in a flat destination data buffer. + * + * @param[in] instanceHandle_in Instance handle. + * @param[in] pKeyGenCb Pointer to callback function to be + * invoked when the operation is complete. + * If this is set to a NULL value the + * function will operate synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific + * call. Will be returned unchanged in the + * callback. + * @param[in] pKeyGenSslOpData Pointer to a structure containing all + * the data needed to perform the SSL key + * generation operation. The client code + * allocates the memory for this + * structure. This component takes + * ownership of the memory until it is + * returned in the callback. + * @param[out] pGeneratedKeyBuffer Caller MUST allocate a sufficient + * buffer to hold the key generation + * output. The data pointer SHOULD be + * aligned on an 8-byte boundary. The + * length field passed in represents the + * size of the buffer in bytes. The value + * that is returned is the size of the + * result key in bytes. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. 
+ */ +CpaStatus +cpaCyKeyGenSsl(const CpaInstanceHandle instanceHandle_in, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const CpaCyKeyGenSslOpData *pKeyGenSslOpData, + CpaFlatBuffer *pGeneratedKeyBuffer) +{ + CpaInstanceHandle instanceHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + return LacSymKey_KeyGenSslTls(instanceHandle, + pKeyGenCb, + pCallbackTag, + LAC_CONST_PTR_CAST(pKeyGenSslOpData), + CPA_CY_SYM_HASH_NONE, /* Hash algorithm */ + pGeneratedKeyBuffer, + ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE); +} + +/** + * @ingroup LacSymKey + * TLS Key Generation Function. + * @description + * This function is used for TLS key generation. It implements the + * TLS PRF (Pseudo Random Function) as defined by RFC2246 (TLS v1.0) + * and RFC4346 (TLS v1.1). + * + * The input seed is taken as a flat buffer and the generated key is + * returned to caller in a flat destination data buffer. + * + * @param[in] instanceHandle_in Instance handle. + * @param[in] pKeyGenCb Pointer to callback function to be + * invoked when the operation is complete. + * If this is set to a NULL value the + * function will operate synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific + * call. Will be returned unchanged in the + * callback. + * @param[in] pKeyGenTlsOpData Pointer to a structure containing all + * the data needed to perform the TLS key + * generation operation. The client code + * allocates the memory for this + * structure. This component takes + * ownership of the memory until it is + * returned in the callback. + * @param[out] pGeneratedKeyBuffer Caller MUST allocate a sufficient + * buffer to hold the key generation + * output. The data pointer SHOULD be + * aligned on an 8-byte boundary. The + * length field passed in represents the + * size of the buffer in bytes. 
The value + * that is returned is the size of the + * result key in bytes. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * + */ +CpaStatus +cpaCyKeyGenTls(const CpaInstanceHandle instanceHandle_in, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const CpaCyKeyGenTlsOpData *pKeyGenTlsOpData, + CpaFlatBuffer *pGeneratedKeyBuffer) +{ + CpaInstanceHandle instanceHandle = NULL; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + return LacSymKey_KeyGenSslTls(instanceHandle, + pKeyGenCb, + pCallbackTag, + LAC_CONST_PTR_CAST(pKeyGenTlsOpData), + CPA_CY_SYM_HASH_NONE, /* Hash algorithm */ + pGeneratedKeyBuffer, + ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE); +} + +/** + * @ingroup LacSymKey + * @description + * This function is used for TLS key generation. It implements the + * TLS PRF (Pseudo Random Function) as defined by RFC5246 (TLS v1.2). + * + * The input seed is taken as a flat buffer and the generated key is + * returned to caller in a flat destination data buffer. + * + * @param[in] instanceHandle_in Instance handle. + * @param[in] pKeyGenCb Pointer to callback function to be + * invoked when the operation is complete. + * If this is set to a NULL value the + * function will operate synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific + * call. Will be returned unchanged in the + * callback. + * @param[in] pKeyGenTlsOpData Pointer to a structure containing all + * the data needed to perform the TLS key + * generation operation. 
The client code + * allocates the memory for this + * structure. This component takes + * ownership of the memory until it is + * returned in the callback. + * @param[in] hashAlgorithm Specifies the hash algorithm to use. + * According to RFC5246, this should be + * "SHA-256 or a stronger standard hash + * function." + * @param[out] pGeneratedKeyBuffer Caller MUST allocate a sufficient + * buffer to hold the key generation + * output. The data pointer SHOULD be + * aligned on an 8-byte boundary. The + * length field passed in represents the + * size of the buffer in bytes. The value + * that is returned is the size of the + * result key in bytes. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + */ +CpaStatus +cpaCyKeyGenTls2(const CpaInstanceHandle instanceHandle_in, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const CpaCyKeyGenTlsOpData *pKeyGenTlsOpData, + CpaCySymHashAlgorithm hashAlgorithm, + CpaFlatBuffer *pGeneratedKeyBuffer) +{ + CpaInstanceHandle instanceHandle = NULL; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + return LacSymKey_KeyGenSslTls(instanceHandle, + pKeyGenCb, + pCallbackTag, + LAC_CONST_PTR_CAST(pKeyGenTlsOpData), + hashAlgorithm, + pGeneratedKeyBuffer, + ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE); +} + +/** + * @ingroup LacSymKey + * @description + * This function is used for TLS1.3 HKDF key generation. It implements + * the "extract-then-expand" paradigm as defined by RFC 5869. 
+ * + * The input seed/secret/info is taken as a flat buffer and the generated + * key(s)/labels are returned to caller in a flat data buffer. + * + * @param[in] instanceHandle_in Instance handle. + * @param[in] pKeyGenCb Pointer to callback function to be + * invoked when the operation is complete. + * If this is set to a NULL value the + * function will operate synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific + * call. Will be returned unchanged in the + * callback. + * @param[in] pKeyGenTlsOpData Pointer to a structure containing + * the data needed to perform the HKDF key + * generation operation. + * The client code allocates the memory + * for this structure as contiguous + * pinned memory. + * This component takes ownership of the + * memory until it is returned in the + * callback. + * @param[in] hashAlgorithm Specifies the hash algorithm to use. + * According to RFC5246, this should be + * "SHA-256 or a stronger standard hash + * function." + * @param[out] pGeneratedKeyBuffer Caller MUST allocate a sufficient + * buffer to hold the key generation + * output. The data pointer SHOULD be + * aligned on an 8-byte boundary. The + * length field passed in represents the + * size of the buffer in bytes. The value + * that is returned is the size of the + * result key in bytes. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. 
+ */ +CpaStatus +cpaCyKeyGenTls3(const CpaInstanceHandle instanceHandle_in, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const CpaCyKeyGenHKDFOpData *pKeyGenTlsOpData, + CpaCyKeyHKDFCipherSuite cipherSuite, + CpaFlatBuffer *pGeneratedKeyBuffer) +{ + + LAC_CHECK_NULL_PARAM(pKeyGenTlsOpData); + switch (pKeyGenTlsOpData->hkdfKeyOp) { + case CPA_CY_HKDF_KEY_EXTRACT: /* Fall through */ + case CPA_CY_HKDF_KEY_EXPAND: + case CPA_CY_HKDF_KEY_EXTRACT_EXPAND: + case CPA_CY_HKDF_KEY_EXPAND_LABEL: + case CPA_CY_HKDF_KEY_EXTRACT_EXPAND_LABEL: + break; + default: + LAC_INVALID_PARAM_LOG("HKDF operation not supported"); + return CPA_STATUS_INVALID_PARAM; + } + + + return LacSymKey_KeyGenSslTls(instanceHandle_in, + pKeyGenCb, + pCallbackTag, + LAC_CONST_PTR_CAST(pKeyGenTlsOpData), + cipherSuite, + pGeneratedKeyBuffer, + (icp_qat_fw_la_cmd_id_t) + pKeyGenTlsOpData->hkdfKeyOp); +} + +/* + * LacSymKey_Init + */ +CpaStatus +LacSymKey_Init(CpaInstanceHandle instanceHandle_in) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaInstanceHandle instanceHandle = LacKey_GetHandle(instanceHandle_in); + sal_crypto_service_t *pService = NULL; + + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + + pService = (sal_crypto_service_t *)instanceHandle; + + pService->pLacKeyStats = + LAC_OS_MALLOC(LAC_KEY_NUM_STATS * sizeof(QatUtilsAtomic)); + + if (NULL != pService->pLacKeyStats) { + LAC_OS_BZERO((void *)pService->pLacKeyStats, + LAC_KEY_NUM_STATS * sizeof(QatUtilsAtomic)); + + status = LAC_OS_CAMALLOC(&pService->pSslLabel, + ICP_QAT_FW_LA_SSL_LABEL_LEN_MAX, + LAC_8BYTE_ALIGNMENT, + pService->nodeAffinity); + } else { + status = CPA_STATUS_RESOURCE; + } + + if (CPA_STATUS_SUCCESS == status) { + Cpa32U i = 0; + Cpa32U offset = 0; + + /* Initialise SSL label ABBCCC..... 
*/ + for (i = 0; i < ICP_QAT_FW_LA_SSL_ITERATES_LEN_MAX; i++) { + memset(pService->pSslLabel + offset, 'A' + i, i + 1); + offset += (i + 1); + } + + /* Allocate memory for TLS labels */ + status = LAC_OS_CAMALLOC(&pService->pTlsLabel, + sizeof(lac_sym_key_tls_labels_t), + LAC_8BYTE_ALIGNMENT, + pService->nodeAffinity); + } + + if (CPA_STATUS_SUCCESS == status) { + /* Allocate memory for HKDF sub_labels */ + status = + LAC_OS_CAMALLOC(&pService->pTlsHKDFSubLabel, + sizeof(lac_sym_key_tls_hkdf_sub_labels_t), + LAC_8BYTE_ALIGNMENT, + pService->nodeAffinity); + } + + if (CPA_STATUS_SUCCESS == status) { + LAC_OS_BZERO(pService->pTlsLabel, + sizeof(lac_sym_key_tls_labels_t)); + + /* Copy the TLS v1.2 labels into the dynamically allocated + * structure */ + memcpy(pService->pTlsLabel->masterSecret, + LAC_SYM_KEY_TLS_MASTER_SECRET_LABEL, + sizeof(LAC_SYM_KEY_TLS_MASTER_SECRET_LABEL) - 1); + + memcpy(pService->pTlsLabel->keyMaterial, + LAC_SYM_KEY_TLS_KEY_MATERIAL_LABEL, + sizeof(LAC_SYM_KEY_TLS_KEY_MATERIAL_LABEL) - 1); + + memcpy(pService->pTlsLabel->clientFinished, + LAC_SYM_KEY_TLS_CLIENT_FIN_LABEL, + sizeof(LAC_SYM_KEY_TLS_CLIENT_FIN_LABEL) - 1); + + memcpy(pService->pTlsLabel->serverFinished, + LAC_SYM_KEY_TLS_SERVER_FIN_LABEL, + sizeof(LAC_SYM_KEY_TLS_SERVER_FIN_LABEL) - 1); + + LAC_OS_BZERO(pService->pTlsHKDFSubLabel, + sizeof(lac_sym_key_tls_hkdf_sub_labels_t)); + + /* Copy the TLS v1.3 subLabels into the dynamically allocated + * struct */ + /* KEY SHA-256 */ + memcpy(&pService->pTlsHKDFSubLabel->keySublabel256, + &key256, + HKDF_SUB_LABEL_KEY_LENGTH); + pService->pTlsHKDFSubLabel->keySublabel256.labelLen = + HKDF_SUB_LABEL_KEY_LENGTH; + pService->pTlsHKDFSubLabel->keySublabel256.sublabelFlag = 1 + << QAT_FW_HKDF_INNER_SUBLABEL_16_BYTE_OKM_BITPOS; + /* KEY SHA-384 */ + memcpy(&pService->pTlsHKDFSubLabel->keySublabel384, + &key384, + HKDF_SUB_LABEL_KEY_LENGTH); + pService->pTlsHKDFSubLabel->keySublabel384.labelLen = + HKDF_SUB_LABEL_KEY_LENGTH; + 
pService->pTlsHKDFSubLabel->keySublabel384.sublabelFlag = 1 + << QAT_FW_HKDF_INNER_SUBLABEL_32_BYTE_OKM_BITPOS; + /* KEY CHACHAPOLY */ + memcpy(&pService->pTlsHKDFSubLabel->keySublabelChaChaPoly, + &keyChaChaPoly, + HKDF_SUB_LABEL_KEY_LENGTH); + pService->pTlsHKDFSubLabel->keySublabelChaChaPoly.labelLen = + HKDF_SUB_LABEL_KEY_LENGTH; + pService->pTlsHKDFSubLabel->keySublabelChaChaPoly.sublabelFlag = + 1 << QAT_FW_HKDF_INNER_SUBLABEL_32_BYTE_OKM_BITPOS; + /* IV SHA-256 */ + memcpy(&pService->pTlsHKDFSubLabel->ivSublabel256, + &iv256, + HKDF_SUB_LABEL_IV_LENGTH); + pService->pTlsHKDFSubLabel->ivSublabel256.labelLen = + HKDF_SUB_LABEL_IV_LENGTH; + pService->pTlsHKDFSubLabel->ivSublabel256.sublabelFlag = 1 + << QAT_FW_HKDF_INNER_SUBLABEL_12_BYTE_OKM_BITPOS; + /* IV SHA-384 */ + memcpy(&pService->pTlsHKDFSubLabel->ivSublabel384, + &iv384, + HKDF_SUB_LABEL_IV_LENGTH); + pService->pTlsHKDFSubLabel->ivSublabel384.labelLen = + HKDF_SUB_LABEL_IV_LENGTH; + pService->pTlsHKDFSubLabel->ivSublabel384.sublabelFlag = 1 + << QAT_FW_HKDF_INNER_SUBLABEL_12_BYTE_OKM_BITPOS; + /* IV CHACHAPOLY */ + memcpy(&pService->pTlsHKDFSubLabel->ivSublabelChaChaPoly, + &iv256, + HKDF_SUB_LABEL_IV_LENGTH); + pService->pTlsHKDFSubLabel->ivSublabelChaChaPoly.labelLen = + HKDF_SUB_LABEL_IV_LENGTH; + pService->pTlsHKDFSubLabel->ivSublabelChaChaPoly.sublabelFlag = + 1 << QAT_FW_HKDF_INNER_SUBLABEL_12_BYTE_OKM_BITPOS; + /* RESUMPTION SHA-256 */ + memcpy(&pService->pTlsHKDFSubLabel->resumptionSublabel256, + &resumption256, + HKDF_SUB_LABEL_RESUMPTION_LENGTH); + pService->pTlsHKDFSubLabel->resumptionSublabel256.labelLen = + HKDF_SUB_LABEL_RESUMPTION_LENGTH; + /* RESUMPTION SHA-384 */ + memcpy(&pService->pTlsHKDFSubLabel->resumptionSublabel384, + &resumption384, + HKDF_SUB_LABEL_RESUMPTION_LENGTH); + pService->pTlsHKDFSubLabel->resumptionSublabel384.labelLen = + HKDF_SUB_LABEL_RESUMPTION_LENGTH; + /* RESUMPTION CHACHAPOLY */ + memcpy( + &pService->pTlsHKDFSubLabel->resumptionSublabelChaChaPoly, + 
&resumption256, + HKDF_SUB_LABEL_RESUMPTION_LENGTH); + pService->pTlsHKDFSubLabel->resumptionSublabelChaChaPoly + .labelLen = HKDF_SUB_LABEL_RESUMPTION_LENGTH; + /* FINISHED SHA-256 */ + memcpy(&pService->pTlsHKDFSubLabel->finishedSublabel256, + &finished256, + HKDF_SUB_LABEL_FINISHED_LENGTH); + pService->pTlsHKDFSubLabel->finishedSublabel256.labelLen = + HKDF_SUB_LABEL_FINISHED_LENGTH; + /* FINISHED SHA-384 */ + memcpy(&pService->pTlsHKDFSubLabel->finishedSublabel384, + &finished384, + HKDF_SUB_LABEL_FINISHED_LENGTH); + pService->pTlsHKDFSubLabel->finishedSublabel384.labelLen = + HKDF_SUB_LABEL_FINISHED_LENGTH; + /* FINISHED CHACHAPOLY */ + memcpy(&pService->pTlsHKDFSubLabel->finishedSublabelChaChaPoly, + &finished256, + HKDF_SUB_LABEL_FINISHED_LENGTH); + pService->pTlsHKDFSubLabel->finishedSublabelChaChaPoly + .labelLen = HKDF_SUB_LABEL_FINISHED_LENGTH; + + /* Set physical address of sublabels */ + pService->pTlsHKDFSubLabel->sublabelPhysAddr256 = + LAC_OS_VIRT_TO_PHYS_INTERNAL( + &pService->pTlsHKDFSubLabel->keySublabel256); + pService->pTlsHKDFSubLabel->sublabelPhysAddr384 = + LAC_OS_VIRT_TO_PHYS_INTERNAL( + &pService->pTlsHKDFSubLabel->keySublabel384); + pService->pTlsHKDFSubLabel->sublabelPhysAddrChaChaPoly = + LAC_OS_VIRT_TO_PHYS_INTERNAL( + &pService->pTlsHKDFSubLabel->keySublabelChaChaPoly); + + /* Register request handlers */ + LacSymQat_RespHandlerRegister(ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE, + LacSymKey_SslTlsHandleResponse); + + LacSymQat_RespHandlerRegister( + ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE, + LacSymKey_SslTlsHandleResponse); + + LacSymQat_RespHandlerRegister( + ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE, + LacSymKey_SslTlsHandleResponse); + + LacSymQat_RespHandlerRegister(ICP_QAT_FW_LA_CMD_HKDF_EXTRACT, + LacSymKey_SslTlsHandleResponse); + + LacSymQat_RespHandlerRegister(ICP_QAT_FW_LA_CMD_HKDF_EXPAND, + LacSymKey_SslTlsHandleResponse); + + LacSymQat_RespHandlerRegister( + ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND, + LacSymKey_SslTlsHandleResponse); + 
+ LacSymQat_RespHandlerRegister( + ICP_QAT_FW_LA_CMD_HKDF_EXPAND_LABEL, + LacSymKey_SslTlsHandleResponse); + + LacSymQat_RespHandlerRegister( + ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND_LABEL, + LacSymKey_SslTlsHandleResponse); + + LacSymQat_RespHandlerRegister(ICP_QAT_FW_LA_CMD_MGF1, + LacSymKey_MgfHandleResponse); + } + + if (CPA_STATUS_SUCCESS != status) { + LAC_OS_FREE(pService->pLacKeyStats); + LAC_OS_CAFREE(pService->pSslLabel); + LAC_OS_CAFREE(pService->pTlsLabel); + LAC_OS_CAFREE(pService->pTlsHKDFSubLabel); + } + + return status; +} + +/* + * LacSymKey_Shutdown + */ +CpaStatus +LacSymKey_Shutdown(CpaInstanceHandle instanceHandle_in) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaInstanceHandle instanceHandle = LacKey_GetHandle(instanceHandle_in); + sal_crypto_service_t *pService = NULL; + + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + + pService = (sal_crypto_service_t *)instanceHandle; + + if (NULL != pService->pLacKeyStats) { + LAC_OS_FREE(pService->pLacKeyStats); + } + + LAC_OS_CAFREE(pService->pSslLabel); + LAC_OS_CAFREE(pService->pTlsLabel); + LAC_OS_CAFREE(pService->pTlsHKDFSubLabel); + + return status; +} Index: sys/dev/qat/qat_api/common/crypto/sym/lac_sym_alg_chain.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/lac_sym_alg_chain.c @@ -0,0 +1,1860 @@ +/*************************************************************************** + * + * + * + ***************************************************************************/ + +/** + *************************************************************************** + * @file lac_sym_alg_chain.c Algorithm Chaining Perform + * + * @ingroup LacAlgChain + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ + 
+#include "cpa.h"
+#include "cpa_cy_sym.h"
+
+#include "icp_accel_devices.h"
+#include "icp_adf_init.h"
+#include "icp_adf_transport.h"
+#include "icp_adf_debug.h"
+
+/*
+*******************************************************************************
+* Include private header files
+*******************************************************************************
+*/
+
+#include "lac_mem.h"
+#include "lac_log.h"
+#include "lac_sym.h"
+#include "lac_list.h"
+#include "icp_qat_fw_la.h"
+#include "lac_sal_types_crypto.h"
+#include "lac_sal.h"
+#include "lac_sal_ctrl.h"
+#include "lac_sym_alg_chain.h"
+#include "lac_sym_cipher.h"
+#include "lac_sym_cipher_defs.h"
+#include "lac_sym_hash.h"
+#include "lac_sym_hash_defs.h"
+#include "lac_sym_qat_cipher.h"
+#include "lac_sym_qat_hash.h"
+#include "lac_sym_stats.h"
+#include "lac_sym_queue.h"
+#include "lac_sym_cb.h"
+#include "sal_string_parse.h"
+#include "lac_sym_auth_enc.h"
+#include "lac_sym_qat.h"
+
+/**
+ * @ingroup LacAlgChain
+ * Function which checks for support of partial packets for symmetric
+ * crypto operations
+ *
+ * @param[in] capabilitiesMask  Device capabilities mask
+ * @param[in/out] pSessionDesc  Pointer to session descriptor
+ *
+ */
+static void
+LacSymCheck_IsPartialSupported(Cpa32U capabilitiesMask,
+			       lac_session_desc_t *pSessionDesc)
+{
+	CpaBoolean isHashPartialSupported = CPA_FALSE;
+	CpaBoolean isCipherPartialSupported = CPA_FALSE;
+	CpaBoolean isPartialSupported = CPA_FALSE;
+
+	switch (pSessionDesc->cipherAlgorithm) {
+	/* Following ciphers don't support partial */
+	case CPA_CY_SYM_CIPHER_KASUMI_F8:
+	case CPA_CY_SYM_CIPHER_AES_F8:
+	case CPA_CY_SYM_CIPHER_SNOW3G_UEA2:
+	case CPA_CY_SYM_CIPHER_CHACHA:
+	case CPA_CY_SYM_CIPHER_ZUC_EEA3:
+		break;
+	/* All others support partial */
+	default:
+		isCipherPartialSupported = CPA_TRUE;
+	}
+	switch (pSessionDesc->hashAlgorithm) {
+	/* Following hashes don't support partial */
+	case CPA_CY_SYM_HASH_KASUMI_F9:
+	case CPA_CY_SYM_HASH_SNOW3G_UIA2:
+	case CPA_CY_SYM_HASH_POLY:
+	case CPA_CY_SYM_HASH_ZUC_EIA3:
+	case CPA_CY_SYM_HASH_SHAKE_128:
+	case CPA_CY_SYM_HASH_SHAKE_256:
+		break;
+	/* Following hashes may support partial based on device capabilities */
+	case CPA_CY_SYM_HASH_SHA3_256:
+		if (ICP_ACCEL_CAPABILITIES_SHA3_EXT & capabilitiesMask) {
+			isHashPartialSupported = CPA_TRUE;
+		}
+		break;
+	/* All others support partial */
+	default:
+		isHashPartialSupported = CPA_TRUE;
+	}
+	switch (pSessionDesc->symOperation) {
+	case CPA_CY_SYM_OP_CIPHER:
+		isPartialSupported = isCipherPartialSupported;
+		break;
+	case CPA_CY_SYM_OP_HASH:
+		isPartialSupported = isHashPartialSupported;
+		break;
+	case CPA_CY_SYM_OP_ALGORITHM_CHAINING:
+		if (isCipherPartialSupported && isHashPartialSupported) {
+			isPartialSupported = CPA_TRUE;
+		}
+		break;
+	case CPA_CY_SYM_OP_NONE:
+		break;
+	}
+	pSessionDesc->isPartialSupported = isPartialSupported;
+}
+
+/**
+ * @ingroup LacAlgChain
+ * This callback function will be invoked whenever a hash precompute
+ * operation completes. It will dequeue and send any QAT requests
+ * which were queued up while the precompute was in progress.
+ *
+ * @param[in] callbackTag  Opaque value provided by user. This will
+ *                         be a pointer to the session descriptor.
+ *
+ * @retval
+ *     None
+ *
+ */
+static void
+LacSymAlgChain_HashPrecomputeDoneCb(void *callbackTag)
+{
+	LacSymCb_PendingReqsDequeue((lac_session_desc_t *)callbackTag);
+}
+
+/**
+ * @ingroup LacAlgChain
+ * Walk the buffer list and find the address for the given offset within
+ * a buffer.
+ *
+ * @param[in] pBufferList   Buffer List
+ * @param[in] packetOffset  Offset in the buffer list for which address
+ *                          is to be found.
+ * @param[out] ppDataPtr    This is where the sought pointer will be put
+ *
+ * @retval CPA_STATUS_SUCCESS Address with a given offset is found in the list
+ * @retval CPA_STATUS_FAIL Address with a given offset not found in the list.
+ *
+ */
+static CpaStatus
+LacSymAlgChain_PtrFromOffsetGet(const CpaBufferList *pBufferList,
+				const Cpa32U packetOffset,
+				Cpa8U **ppDataPtr)
+{
+	Cpa32U currentOffset = 0;
+	Cpa32U i = 0;
+
+	for (i = 0; i < pBufferList->numBuffers; i++) {
+		Cpa8U *pCurrData = pBufferList->pBuffers[i].pData;
+		Cpa32U currDataSize = pBufferList->pBuffers[i].dataLenInBytes;
+
+		/* If the offset is within the address space of the current
+		 * buffer */
+		if ((packetOffset >= currentOffset) &&
+		    (packetOffset < (currentOffset + currDataSize))) {
+			/* increment by offset of the address in the current
+			 * buffer */
+			*ppDataPtr = pCurrData + (packetOffset - currentOffset);
+			return CPA_STATUS_SUCCESS;
+		}
+
+		/* Increment by the size of the buffer */
+		currentOffset += currDataSize;
+	}
+
+	return CPA_STATUS_FAIL;
+}
+
+static void
+LacAlgChain_CipherCDBuild(const CpaCySymCipherSetupData *pCipherData,
+			  lac_session_desc_t *pSessionDesc,
+			  icp_qat_fw_slice_t nextSlice,
+			  Cpa8U cipherOffsetInConstantsTable,
+			  icp_qat_fw_comn_flags *pCmnRequestFlags,
+			  icp_qat_fw_serv_specif_flags *pLaCmdFlags,
+			  Cpa8U *pHwBlockBaseInDRAM,
+			  Cpa32U *pHwBlockOffsetInDRAM)
+{
+	Cpa8U *pCipherKeyField = NULL;
+	Cpa8U cipherOffsetInReqQW = 0;
+	Cpa32U sizeInBytes = 0;
+
+	/* Construct the ContentDescriptor in DRAM */
+	cipherOffsetInReqQW = (*pHwBlockOffsetInDRAM / LAC_QUAD_WORD_IN_BYTES);
+	ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_SET(
+	    *pLaCmdFlags, ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP);
+
+	/* construct cipherConfig in CD in DRAM */
+	LacSymQat_CipherHwBlockPopulateCfgData(pSessionDesc,
+ pHwBlockBaseInDRAM + + *pHwBlockOffsetInDRAM, + &sizeInBytes); + + *pHwBlockOffsetInDRAM += sizeInBytes; + + /* Cipher key will be in CD in DRAM. + * The Request contains a ptr to the CD. + * This ptr will be copied into the request later once the CD is + * fully constructed, but the flag is set here. */ + pCipherKeyField = pHwBlockBaseInDRAM + *pHwBlockOffsetInDRAM; + ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(*pCmnRequestFlags, + QAT_COMN_CD_FLD_TYPE_64BIT_ADR); + + LacSymQat_CipherHwBlockPopulateKeySetup( + pCipherData, + pCipherData->cipherKeyLenInBytes, + pCipherKeyField, + &sizeInBytes); + /* update offset */ + *pHwBlockOffsetInDRAM += sizeInBytes; + + LacSymQat_CipherCtrlBlockWrite(&(pSessionDesc->reqCacheFtr), + pSessionDesc->cipherAlgorithm, + pSessionDesc->cipherKeyLenInBytes, + nextSlice, + cipherOffsetInReqQW); + if (LAC_CIPHER_IS_GCM(pSessionDesc->cipherAlgorithm) || + LAC_CIPHER_IS_CHACHA(pSessionDesc->cipherAlgorithm)) { + LacSymQat_CipherCtrlBlockWrite( + &(pSessionDesc->reqSpcCacheFtr), + pSessionDesc->cipherAlgorithm, + pSessionDesc->cipherKeyLenInBytes, + ICP_QAT_FW_SLICE_DRAM_WR, + cipherOffsetInReqQW); + } +} + +static void +LacAlgChain_HashCDBuild( + const CpaCySymHashSetupData *pHashData, + CpaInstanceHandle instanceHandle, + lac_session_desc_t *pSessionDesc, + icp_qat_fw_slice_t nextSlice, + Cpa8U hashOffsetInConstantsTable, + icp_qat_fw_comn_flags *pCmnRequestFlags, + icp_qat_fw_serv_specif_flags *pLaCmdFlags, + lac_sym_qat_hash_precompute_info_t *pPrecomputeData, + lac_sym_qat_hash_precompute_info_t *pPrecomputeDataOptimisedCd, + Cpa8U *pHwBlockBaseInDRAM, + Cpa32U *pHwBlockOffsetInDRAM, + Cpa8U *pOptimisedHwBlockBaseInDRAM, + Cpa32U *pOptimisedHwBlockOffsetInDRAM) +{ + Cpa32U sizeInBytes = 0; + Cpa32U hwBlockOffsetInQuadWords = + *pHwBlockOffsetInDRAM / LAC_QUAD_WORD_IN_BYTES; + + /* build: + * - the hash part of the ContentDescriptor in DRAM */ + /* - the hash part of the CD control block in the Request template */ + 
LacSymQat_HashContentDescInit(&(pSessionDesc->reqCacheFtr),
+				      instanceHandle,
+				      pHashData,
+				      pHwBlockBaseInDRAM,
+				      hwBlockOffsetInQuadWords,
+				      nextSlice,
+				      pSessionDesc->qatHashMode,
+				      CPA_FALSE,
+				      CPA_FALSE,
+				      pPrecomputeData,
+				      &sizeInBytes);
+
+	/* Using DRAM CD so update offset */
+	*pHwBlockOffsetInDRAM += sizeInBytes;
+
+	sizeInBytes = 0;
+}
+
+CpaStatus
+LacAlgChain_SessionAADUpdate(lac_session_desc_t *pSessionDesc,
+			     Cpa32U newAADLength)
+{
+	icp_qat_la_bulk_req_ftr_t *req_ftr;
+	icp_qat_la_auth_req_params_t *req_params;
+
+	if (!pSessionDesc)
+		return CPA_STATUS_FAIL;
+
+	/* Only dereference the session descriptor after it has been
+	 * validated */
+	req_ftr = &pSessionDesc->reqCacheFtr;
+	req_params = &req_ftr->serv_specif_rqpars;
+
+	pSessionDesc->aadLenInBytes = newAADLength;
+	req_params->u2.aad_sz =
+	    LAC_ALIGN_POW2_ROUNDUP(newAADLength, LAC_HASH_AES_GCM_BLOCK_SIZE);
+
+	if (CPA_TRUE == pSessionDesc->isSinglePass) {
+		Cpa8U *pHwBlockBaseInDRAM = NULL;
+		Cpa32U hwBlockOffsetInDRAM = 0;
+		Cpa32U pSizeInBytes = 0;
+		CpaCySymCipherAlgorithm cipher = pSessionDesc->cipherAlgorithm;
+
+		pHwBlockBaseInDRAM =
+		    (Cpa8U *)pSessionDesc->contentDescInfo.pData;
+		if (pSessionDesc->cipherDirection ==
+		    CPA_CY_SYM_CIPHER_DIRECTION_DECRYPT) {
+			if (LAC_CIPHER_IS_GCM(cipher)) {
+				hwBlockOffsetInDRAM = LAC_QUADWORDS_TO_BYTES(
+				    LAC_SYM_QAT_CIPHER_OFFSET_IN_DRAM_GCM_SPC);
+			} else {
+				hwBlockOffsetInDRAM = LAC_QUADWORDS_TO_BYTES(
+				    LAC_SYM_QAT_CIPHER_OFFSET_IN_DRAM_CHACHA_SPC);
+			}
+		}
+		LacSymQat_CipherHwBlockPopulateCfgData(pSessionDesc,
+						       pHwBlockBaseInDRAM +
+							   hwBlockOffsetInDRAM,
+						       &pSizeInBytes);
+	}
+
+	return CPA_STATUS_SUCCESS;
+}
+
+CpaStatus
+LacAlgChain_SessionCipherKeyUpdate(lac_session_desc_t *pSessionDesc,
+				   Cpa8U *pCipherKey)
+{
+	CpaStatus status = CPA_STATUS_SUCCESS;
+
+	if (pSessionDesc == NULL || pCipherKey == NULL)
+		return CPA_STATUS_FAIL;
+
+	if (LAC_CIPHER_IS_ARC4(pSessionDesc->cipherAlgorithm)) {
+		LacSymQat_CipherArc4StateInit(
+		    pCipherKey,
+		    pSessionDesc->cipherKeyLenInBytes,
+		    pSessionDesc->cipherARC4InitialState);
+	} else {
+
CpaCySymCipherSetupData cipherSetupData = { 0 }; + Cpa32U sizeInBytes; + Cpa8U *pCipherKeyField; + sal_qat_content_desc_info_t *pCdInfo = + &(pSessionDesc->contentDescInfo); + + cipherSetupData.cipherAlgorithm = pSessionDesc->cipherAlgorithm; + cipherSetupData.cipherKeyLenInBytes = + pSessionDesc->cipherKeyLenInBytes; + cipherSetupData.pCipherKey = pCipherKey; + + switch (pSessionDesc->symOperation) { + case CPA_CY_SYM_OP_CIPHER: { + pCipherKeyField = (Cpa8U *)pCdInfo->pData + + sizeof(icp_qat_hw_cipher_config_t); + + LacSymQat_CipherHwBlockPopulateKeySetup( + &(cipherSetupData), + cipherSetupData.cipherKeyLenInBytes, + pCipherKeyField, + &sizeInBytes); + + if (pSessionDesc->useSymConstantsTable) { + pCipherKeyField = (Cpa8U *)&( + pSessionDesc->shramReqCacheHdr.cd_pars.s1 + .serv_specif_fields); + + LacSymQat_CipherHwBlockPopulateKeySetup( + &(cipherSetupData), + cipherSetupData.cipherKeyLenInBytes, + pCipherKeyField, + &sizeInBytes); + } + } break; + + case CPA_CY_SYM_OP_ALGORITHM_CHAINING: { + icp_qat_fw_cipher_auth_cd_ctrl_hdr_t *cd_ctrl = + (icp_qat_fw_cipher_auth_cd_ctrl_hdr_t + *)&pSessionDesc->reqCacheFtr.cd_ctrl; + + pCipherKeyField = (Cpa8U *)pCdInfo->pData + + cd_ctrl->cipher_cfg_offset * + LAC_QUAD_WORD_IN_BYTES + + sizeof(icp_qat_hw_cipher_config_t); + + LacSymQat_CipherHwBlockPopulateKeySetup( + &(cipherSetupData), + cipherSetupData.cipherKeyLenInBytes, + pCipherKeyField, + &sizeInBytes); + } break; + + default: + LAC_LOG_ERROR("Invalid sym operation\n"); + status = CPA_STATUS_INVALID_PARAM; + break; + } + } + return status; +} + +CpaStatus +LacAlgChain_SessionAuthKeyUpdate(lac_session_desc_t *pSessionDesc, + Cpa8U *pAuthKey) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + Cpa8U *pHwBlockBaseInDRAM = NULL; + Cpa8U *pOutHashSetup = NULL; + Cpa8U *pInnerState1 = NULL; + Cpa8U *pInnerState2 = NULL; + CpaCySymSessionSetupData sessionSetup = { 0 }; + + if (pSessionDesc == NULL || pAuthKey == NULL) + return CPA_STATUS_FAIL; + + 
icp_qat_fw_cipher_auth_cd_ctrl_hdr_t *cd_ctrl = + (icp_qat_fw_cipher_auth_cd_ctrl_hdr_t *)&pSessionDesc->reqCacheFtr + .cd_ctrl; + + pHwBlockBaseInDRAM = (Cpa8U *)pSessionDesc->contentDescInfo.pData; + + sessionSetup.hashSetupData.hashAlgorithm = pSessionDesc->hashAlgorithm; + sessionSetup.hashSetupData.hashMode = pSessionDesc->hashMode; + sessionSetup.hashSetupData.authModeSetupData.authKey = pAuthKey; + sessionSetup.hashSetupData.authModeSetupData.authKeyLenInBytes = + pSessionDesc->authKeyLenInBytes; + sessionSetup.hashSetupData.authModeSetupData.aadLenInBytes = + pSessionDesc->aadLenInBytes; + sessionSetup.hashSetupData.digestResultLenInBytes = + pSessionDesc->hashResultSize; + + sessionSetup.cipherSetupData.cipherAlgorithm = + pSessionDesc->cipherAlgorithm; + sessionSetup.cipherSetupData.cipherKeyLenInBytes = + pSessionDesc->cipherKeyLenInBytes; + + /* Calculate hash states offsets */ + pInnerState1 = pHwBlockBaseInDRAM + + cd_ctrl->hash_cfg_offset * LAC_QUAD_WORD_IN_BYTES + + sizeof(icp_qat_hw_auth_setup_t); + + pInnerState2 = pInnerState1 + cd_ctrl->inner_state1_sz; + + pOutHashSetup = pInnerState2 + cd_ctrl->inner_state2_sz; + + /* Calculate offset of cipher key */ + if (pSessionDesc->laCmdId == ICP_QAT_FW_LA_CMD_CIPHER_HASH) { + sessionSetup.cipherSetupData.pCipherKey = + (Cpa8U *)pHwBlockBaseInDRAM + + sizeof(icp_qat_hw_cipher_config_t); + } else if (pSessionDesc->laCmdId == ICP_QAT_FW_LA_CMD_HASH_CIPHER) { + sessionSetup.cipherSetupData.pCipherKey = + pOutHashSetup + sizeof(icp_qat_hw_cipher_config_t); + } else if (CPA_TRUE == pSessionDesc->isSinglePass) { + CpaCySymCipherAlgorithm cipher = pSessionDesc->cipherAlgorithm; + Cpa32U hwBlockOffsetInDRAM = 0; + + if (pSessionDesc->cipherDirection == + CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT) { + sessionSetup.cipherSetupData.pCipherKey = + (Cpa8U *)pHwBlockBaseInDRAM + + sizeof(icp_qat_hw_cipher_config_t); + } else { + if (LAC_CIPHER_IS_GCM(cipher)) + hwBlockOffsetInDRAM = LAC_QUADWORDS_TO_BYTES( + 
LAC_SYM_QAT_CIPHER_OFFSET_IN_DRAM_GCM_SPC); + else + hwBlockOffsetInDRAM = LAC_QUADWORDS_TO_BYTES( + LAC_SYM_QAT_CIPHER_OFFSET_IN_DRAM_CHACHA_SPC); + sessionSetup.cipherSetupData.pCipherKey = + (Cpa8U *)pHwBlockBaseInDRAM + hwBlockOffsetInDRAM + + sizeof(icp_qat_hw_cipher_config_t); + } + } + + if (!sessionSetup.cipherSetupData.pCipherKey) + return CPA_STATUS_FAIL; + + if (CPA_CY_SYM_HASH_SHA3_256 == pSessionDesc->hashAlgorithm) { + if (CPA_FALSE == pSessionDesc->isAuthEncryptOp) { + lac_sym_qat_hash_state_buffer_info_t + *pHashStateBufferInfo = + &(pSessionDesc->hashStateBufferInfo); + + sal_crypto_service_t *pService = + (sal_crypto_service_t *)pSessionDesc->pInstance; + + status = LacHash_StatePrefixAadBufferInit( + &(pService->generic_service_info), + &(sessionSetup.hashSetupData), + &(pSessionDesc->reqCacheFtr), + pSessionDesc->qatHashMode, + pSessionDesc->hashStatePrefixBuffer, + pHashStateBufferInfo); + /* SHRAM Constants Table not used for Auth-Enc */ + } + } else if (CPA_CY_SYM_HASH_SNOW3G_UIA2 == pSessionDesc->hashAlgorithm) { + Cpa8U *authKey = + (Cpa8U *)pOutHashSetup + sizeof(icp_qat_hw_cipher_config_t); + memcpy(authKey, pAuthKey, pSessionDesc->authKeyLenInBytes); + } else if (CPA_CY_SYM_HASH_ZUC_EIA3 == pSessionDesc->hashAlgorithm || + CPA_CY_SYM_HASH_AES_CBC_MAC == pSessionDesc->hashAlgorithm) { + memcpy(pInnerState2, pAuthKey, pSessionDesc->authKeyLenInBytes); + } else if (CPA_CY_SYM_HASH_AES_CMAC == pSessionDesc->hashAlgorithm || + CPA_CY_SYM_HASH_KASUMI_F9 == pSessionDesc->hashAlgorithm || + IS_HASH_MODE_1(pSessionDesc->qatHashMode)) { + if (CPA_CY_SYM_HASH_AES_CMAC == pSessionDesc->hashAlgorithm) { + memset(pInnerState2, 0, cd_ctrl->inner_state2_sz); + } + + /* Block messages until precompute is completed */ + pSessionDesc->nonBlockingOpsInProgress = CPA_FALSE; + + status = LacHash_PrecomputeDataCreate( + pSessionDesc->pInstance, + (CpaCySymSessionSetupData *)&(sessionSetup), + LacSymAlgChain_HashPrecomputeDoneCb, + pSessionDesc, + 
pSessionDesc->hashStatePrefixBuffer, + pInnerState1, + pInnerState2); + } + + return status; +} + +/** @ingroup LacAlgChain */ +CpaStatus +LacAlgChain_SessionInit(const CpaInstanceHandle instanceHandle, + const CpaCySymSessionSetupData *pSessionSetupData, + lac_session_desc_t *pSessionDesc) +{ + CpaStatus stat, status = CPA_STATUS_SUCCESS; + sal_qat_content_desc_info_t *pCdInfo = NULL; + sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle; + Cpa32U capabilitiesMask = + pService->generic_service_info.capabilitiesMask; + Cpa8U *pHwBlockBaseInDRAM = NULL; + Cpa8U *pOptimisedHwBlockBaseInDRAM = NULL; + Cpa32U hwBlockOffsetInDRAM = 0; + Cpa32U optimisedHwBlockOffsetInDRAM = 0; + Cpa8U cipherOffsetInConstantsTable = 0; + Cpa8U hashOffsetInConstantsTable = 0; + icp_qat_fw_comn_req_t *pMsg = NULL; + const CpaCySymCipherSetupData *pCipherData; + const CpaCySymHashSetupData *pHashData; + Cpa16U proto = ICP_QAT_FW_LA_NO_PROTO; /* no CCM/GCM/Snow3G */ + CpaCySymAlgChainOrder chainOrder = 0; + lac_sym_qat_hash_precompute_info_t precomputeData = { 0 }; + lac_sym_qat_hash_precompute_info_t precomputeDataOptimisedCd = { 0 }; + + pCipherData = &(pSessionSetupData->cipherSetupData); + pHashData = &(pSessionSetupData->hashSetupData); + + /*------------------------------------------------------------------------- + * Populate session data + *-----------------------------------------------------------------------*/ + + /* Initialise Request Queue */ + stat = LAC_SPINLOCK_INIT(&pSessionDesc->requestQueueLock); + if (CPA_STATUS_SUCCESS != stat) { + LAC_LOG_ERROR("Spinlock init failed for sessionLock"); + return CPA_STATUS_RESOURCE; + } + + pSessionDesc->pRequestQueueHead = NULL; + pSessionDesc->pRequestQueueTail = NULL; + pSessionDesc->nonBlockingOpsInProgress = CPA_TRUE; + pSessionDesc->pInstance = instanceHandle; + pSessionDesc->digestIsAppended = pSessionSetupData->digestIsAppended; + pSessionDesc->digestVerify = pSessionSetupData->verifyDigest; + + /* Reset the 
pending callback counter */ + qatUtilsAtomicSet(0, &pSessionDesc->u.pendingCbCount); + qatUtilsAtomicSet(0, &pSessionDesc->u.pendingDpCbCount); + + /* Partial state must be set to full, to indicate that next packet + * expected on the session is a full packet or the start of a + * partial packet. */ + pSessionDesc->partialState = CPA_CY_SYM_PACKET_TYPE_FULL; + + pSessionDesc->symOperation = pSessionSetupData->symOperation; + switch (pSessionDesc->symOperation) { + case CPA_CY_SYM_OP_CIPHER: + pSessionDesc->laCmdId = ICP_QAT_FW_LA_CMD_CIPHER; + pSessionDesc->isCipher = TRUE; + pSessionDesc->isAuth = FALSE; + pSessionDesc->isAuthEncryptOp = CPA_FALSE; + + if (CPA_CY_SYM_CIPHER_SNOW3G_UEA2 == + pSessionSetupData->cipherSetupData.cipherAlgorithm) { + proto = ICP_QAT_FW_LA_SNOW_3G_PROTO; + } else if (CPA_CY_SYM_CIPHER_ZUC_EEA3 == + pSessionSetupData->cipherSetupData.cipherAlgorithm) { + proto = ICP_QAT_FW_LA_ZUC_3G_PROTO; + } + break; + case CPA_CY_SYM_OP_HASH: + pSessionDesc->laCmdId = ICP_QAT_FW_LA_CMD_AUTH; + pSessionDesc->isCipher = FALSE; + pSessionDesc->isAuth = TRUE; + pSessionDesc->isAuthEncryptOp = CPA_FALSE; + + if (CPA_CY_SYM_HASH_SNOW3G_UIA2 == + pSessionSetupData->hashSetupData.hashAlgorithm) { + proto = ICP_QAT_FW_LA_SNOW_3G_PROTO; + } else if (CPA_CY_SYM_HASH_ZUC_EIA3 == + pSessionSetupData->hashSetupData.hashAlgorithm) { + proto = ICP_QAT_FW_LA_ZUC_3G_PROTO; + } + + break; + case CPA_CY_SYM_OP_ALGORITHM_CHAINING: + pSessionDesc->isCipher = TRUE; + pSessionDesc->isAuth = TRUE; + + { + /* set up some useful shortcuts */ + CpaCySymCipherAlgorithm cipherAlgorithm = + pSessionSetupData->cipherSetupData.cipherAlgorithm; + CpaCySymCipherDirection cipherDir = + pSessionSetupData->cipherSetupData.cipherDirection; + + if (LAC_CIPHER_IS_CCM(cipherAlgorithm)) { + pSessionDesc->isAuthEncryptOp = CPA_TRUE; + pSessionDesc->digestIsAppended = CPA_TRUE; + proto = ICP_QAT_FW_LA_CCM_PROTO; + + /* Derive chainOrder from direction for + * isAuthEncryptOp + * cases */ + /* 
For CCM & GCM modes: force digest verify flag + _TRUE + for decrypt and _FALSE for encrypt. For all + other cases + use user defined value */ + + if (CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT == + cipherDir) { + chainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER; + pSessionDesc->digestVerify = CPA_FALSE; + } else { + chainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH; + pSessionDesc->digestVerify = CPA_TRUE; + } + } else if (LAC_CIPHER_IS_GCM(cipherAlgorithm)) { + pSessionDesc->isAuthEncryptOp = CPA_TRUE; + proto = ICP_QAT_FW_LA_GCM_PROTO; + + /* Derive chainOrder from direction for + * isAuthEncryptOp + * cases */ + /* For CCM & GCM modes: force digest verify flag + _TRUE + for decrypt and _FALSE for encrypt. For all + other cases + use user defined value */ + + if (CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT == + cipherDir) { + chainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH; + pSessionDesc->digestVerify = CPA_FALSE; + } else { + chainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER; + pSessionDesc->digestVerify = CPA_TRUE; + } + } else if (LAC_CIPHER_IS_CHACHA(cipherAlgorithm)) { + pSessionDesc->isAuthEncryptOp = CPA_TRUE; + proto = ICP_QAT_FW_LA_SINGLE_PASS_PROTO; + + if (CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT == + cipherDir) { + chainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH; + } else { + chainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER; + } + } else { + pSessionDesc->isAuthEncryptOp = CPA_FALSE; + /* Use the chainOrder passed in */ + chainOrder = pSessionSetupData->algChainOrder; + if ((chainOrder != + CPA_CY_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER) && + (chainOrder != + CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH)) { + LAC_INVALID_PARAM_LOG("algChainOrder"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_CY_SYM_HASH_SNOW3G_UIA2 == + pSessionSetupData->hashSetupData + .hashAlgorithm) { + proto = ICP_QAT_FW_LA_SNOW_3G_PROTO; + } else if (CPA_CY_SYM_HASH_ZUC_EIA3 == + pSessionSetupData->hashSetupData + .hashAlgorithm) { + 
proto = ICP_QAT_FW_LA_ZUC_3G_PROTO; + } + } + + if (CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH == + chainOrder) { + pSessionDesc->laCmdId = + ICP_QAT_FW_LA_CMD_CIPHER_HASH; + } else if ( + CPA_CY_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER == + chainOrder) { + pSessionDesc->laCmdId = + ICP_QAT_FW_LA_CMD_HASH_CIPHER; + } + } + break; + default: + break; + } + + if (pSessionDesc->isCipher) { +/* Populate cipher specific session data */ + + status = LacCipher_SessionSetupDataCheck(pCipherData); + + if (CPA_STATUS_SUCCESS == status) { + pSessionDesc->cipherAlgorithm = + pCipherData->cipherAlgorithm; + pSessionDesc->cipherKeyLenInBytes = + pCipherData->cipherKeyLenInBytes; + pSessionDesc->cipherDirection = + pCipherData->cipherDirection; + + /* ARC4 base key isn't added to the content descriptor, + * because + * we don't need to pass it directly to the QAT engine. + * Instead + * an initial cipher state & key matrix is derived from + * the + * base key and provided to the QAT through the state + * pointer + * in the request params. We'll store this initial state + * in + * the session descriptor. 
*/ + + if (LAC_CIPHER_IS_ARC4(pSessionDesc->cipherAlgorithm)) { + LacSymQat_CipherArc4StateInit( + pCipherData->pCipherKey, + pSessionDesc->cipherKeyLenInBytes, + pSessionDesc->cipherARC4InitialState); + + pSessionDesc->cipherARC4InitialStatePhysAddr = + LAC_OS_VIRT_TO_PHYS_EXTERNAL( + pService->generic_service_info, + pSessionDesc->cipherARC4InitialState); + + if (0 == + pSessionDesc + ->cipherARC4InitialStatePhysAddr) { + LAC_LOG_ERROR( + "Unable to get the physical address of " + "the initial state for ARC4\n"); + status = CPA_STATUS_FAIL; + } + } + } + } + + if ((CPA_STATUS_SUCCESS == status) && pSessionDesc->isAuth) { + /* Populate auth-specific session data */ + const CpaCySymHashSetupData *pHashData = + &pSessionSetupData->hashSetupData; + + status = LacHash_HashContextCheck(instanceHandle, pHashData); + if (CPA_STATUS_SUCCESS == status) { + pSessionDesc->hashResultSize = + pHashData->digestResultLenInBytes; + pSessionDesc->hashMode = pHashData->hashMode; + pSessionDesc->hashAlgorithm = pHashData->hashAlgorithm; + + /* Save the authentication key length for further update + */ + if (CPA_CY_SYM_HASH_MODE_AUTH == pHashData->hashMode) { + pSessionDesc->authKeyLenInBytes = + pHashData->authModeSetupData + .authKeyLenInBytes; + } + if (CPA_TRUE == pSessionDesc->isAuthEncryptOp || + (pHashData->hashAlgorithm == + CPA_CY_SYM_HASH_SNOW3G_UIA2 || + pHashData->hashAlgorithm == + CPA_CY_SYM_HASH_ZUC_EIA3)) { + pSessionDesc->aadLenInBytes = + pHashData->authModeSetupData.aadLenInBytes; + } + + /* Set the QAT hash mode */ + if ((pHashData->hashMode == + CPA_CY_SYM_HASH_MODE_NESTED) || + (pHashData->hashMode == + CPA_CY_SYM_HASH_MODE_PLAIN) || + (pHashData->hashMode == CPA_CY_SYM_HASH_MODE_AUTH && + pHashData->hashAlgorithm == + CPA_CY_SYM_HASH_AES_CBC_MAC)) { + pSessionDesc->qatHashMode = + ICP_QAT_HW_AUTH_MODE0; + } else /* CPA_CY_SYM_HASH_MODE_AUTH + && anything except CPA_CY_SYM_HASH_AES_CBC_MAC + */ + { + if (IS_HMAC_ALG(pHashData->hashAlgorithm)) { + /* SHA3 and SM3 
HMAC do not support
+				 * precompute, force MODE2
+				 * for AUTH */
+				if ((CPA_CY_SYM_HASH_SHA3_224 ==
+				     pHashData->hashAlgorithm) ||
+				    (CPA_CY_SYM_HASH_SHA3_256 ==
+				     pHashData->hashAlgorithm) ||
+				    (CPA_CY_SYM_HASH_SHA3_384 ==
+				     pHashData->hashAlgorithm) ||
+				    (CPA_CY_SYM_HASH_SHA3_512 ==
+				     pHashData->hashAlgorithm) ||
+				    (CPA_CY_SYM_HASH_SM3 ==
+				     pHashData->hashAlgorithm)) {
+					pSessionDesc->qatHashMode =
+					    ICP_QAT_HW_AUTH_MODE2;
+				} else {
+					pSessionDesc->qatHashMode =
+					    ICP_QAT_HW_AUTH_MODE1;
+				}
+			} else if (CPA_CY_SYM_HASH_ZUC_EIA3 ==
+				   pHashData->hashAlgorithm) {
+				pSessionDesc->qatHashMode =
+				    ICP_QAT_HW_AUTH_MODE0;
+			} else {
+				pSessionDesc->qatHashMode =
+				    ICP_QAT_HW_AUTH_MODE1;
+			}
+			}
+		}
+	}
+
+	/*-------------------------------------------------------------------------
+	 * build the message templates
+	 * create two content descriptors in the case we can support using SHRAM
+	 * constants and an optimised content descriptor. we have to do this in
+	 * case of partials.
+	 * 64 byte content descriptor is used in the SHRAM case for
+	 * AES-128-HMAC-SHA1
+	 *-----------------------------------------------------------------------*/
+	if (CPA_STATUS_SUCCESS == status) {
+
+		LacSymCheck_IsPartialSupported(capabilitiesMask, pSessionDesc);
+
+		/* setup some convenience pointers */
+		pCdInfo = &(pSessionDesc->contentDescInfo);
+		pHwBlockBaseInDRAM = (Cpa8U *)pCdInfo->pData;
+		hwBlockOffsetInDRAM = 0;
+
+		/*
+		 * Build the header flags with the default settings for this
+		 * session.
+ */ + if (pSessionDesc->isDPSession == CPA_TRUE) { + pSessionDesc->cmnRequestFlags = + ICP_QAT_FW_COMN_FLAGS_BUILD( + QAT_COMN_CD_FLD_TYPE_64BIT_ADR, + LAC_SYM_DP_QAT_PTR_TYPE); + } else { + pSessionDesc->cmnRequestFlags = + ICP_QAT_FW_COMN_FLAGS_BUILD( + QAT_COMN_CD_FLD_TYPE_64BIT_ADR, + LAC_SYM_DEFAULT_QAT_PTR_TYPE); + } + + LacSymQat_LaSetDefaultFlags(&pSessionDesc->laCmdFlags, + pSessionDesc->symOperation); + + switch (pSessionDesc->symOperation) { + case CPA_CY_SYM_OP_CIPHER: { + LacAlgChain_CipherCDBuild( + pCipherData, + pSessionDesc, + ICP_QAT_FW_SLICE_DRAM_WR, + cipherOffsetInConstantsTable, + &pSessionDesc->cmnRequestFlags, + &pSessionDesc->laCmdFlags, + pHwBlockBaseInDRAM, + &hwBlockOffsetInDRAM); + } break; + case CPA_CY_SYM_OP_HASH: + LacAlgChain_HashCDBuild(pHashData, + instanceHandle, + pSessionDesc, + ICP_QAT_FW_SLICE_DRAM_WR, + hashOffsetInConstantsTable, + &pSessionDesc->cmnRequestFlags, + &pSessionDesc->laCmdFlags, + &precomputeData, + &precomputeDataOptimisedCd, + pHwBlockBaseInDRAM, + &hwBlockOffsetInDRAM, + NULL, + NULL); + break; + case CPA_CY_SYM_OP_ALGORITHM_CHAINING: + /* For CCM/GCM, CPM firmware currently expects the + * cipher and + * hash h/w setup blocks to be arranged according to the + * chain + * order (Except for GCM/CCM, order doesn't actually + * matter as + * long as the config offsets are set correctly in CD + * control + * blocks + */ + if (CPA_CY_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER == + chainOrder) { + LacAlgChain_HashCDBuild( + pHashData, + instanceHandle, + pSessionDesc, + ICP_QAT_FW_SLICE_CIPHER, + hashOffsetInConstantsTable, + &pSessionDesc->cmnRequestFlags, + &pSessionDesc->laCmdFlags, + &precomputeData, + &precomputeDataOptimisedCd, + pHwBlockBaseInDRAM, + &hwBlockOffsetInDRAM, + pOptimisedHwBlockBaseInDRAM, + &optimisedHwBlockOffsetInDRAM); + + LacAlgChain_CipherCDBuild( + pCipherData, + pSessionDesc, + ICP_QAT_FW_SLICE_DRAM_WR, + cipherOffsetInConstantsTable, + &pSessionDesc->cmnRequestFlags, + 
&pSessionDesc->laCmdFlags, + pHwBlockBaseInDRAM, + &hwBlockOffsetInDRAM); + if (LAC_CIPHER_IS_SPC( + pCipherData->cipherAlgorithm, + pHashData->hashAlgorithm, + capabilitiesMask)) { + pCdInfo->hwBlkSzQuadWords = + (LAC_BYTES_TO_QUADWORDS( + hwBlockOffsetInDRAM)); + pMsg = (icp_qat_fw_comn_req_t *)&( + pSessionDesc->reqSpcCacheHdr); + SalQatMsg_ContentDescHdrWrite( + (icp_qat_fw_comn_req_t *)pMsg, + pCdInfo); + } + } else { + LacAlgChain_CipherCDBuild( + pCipherData, + pSessionDesc, + ICP_QAT_FW_SLICE_AUTH, + cipherOffsetInConstantsTable, + &pSessionDesc->cmnRequestFlags, + &pSessionDesc->laCmdFlags, + pHwBlockBaseInDRAM, + &hwBlockOffsetInDRAM); + + if (LAC_CIPHER_IS_SPC( + pCipherData->cipherAlgorithm, + pHashData->hashAlgorithm, + capabilitiesMask)) { + pCdInfo->hwBlkSzQuadWords = + LAC_BYTES_TO_QUADWORDS( + hwBlockOffsetInDRAM); + pMsg = (icp_qat_fw_comn_req_t *)&( + pSessionDesc->reqSpcCacheHdr); + SalQatMsg_ContentDescHdrWrite( + (icp_qat_fw_comn_req_t *)pMsg, + pCdInfo); + } + LacAlgChain_HashCDBuild( + pHashData, + instanceHandle, + pSessionDesc, + ICP_QAT_FW_SLICE_DRAM_WR, + hashOffsetInConstantsTable, + &pSessionDesc->cmnRequestFlags, + &pSessionDesc->laCmdFlags, + &precomputeData, + &precomputeDataOptimisedCd, + pHwBlockBaseInDRAM, + &hwBlockOffsetInDRAM, + pOptimisedHwBlockBaseInDRAM, + &optimisedHwBlockOffsetInDRAM); + } + break; + default: + LAC_LOG_ERROR("Invalid sym operation\n"); + status = CPA_STATUS_INVALID_PARAM; + } + } + + if ((CPA_STATUS_SUCCESS == status) && pSessionDesc->isAuth) { + lac_sym_qat_hash_state_buffer_info_t *pHashStateBufferInfo = + &(pSessionDesc->hashStateBufferInfo); + CpaBoolean hashStateBuffer = CPA_TRUE; + + /* set up fields in both the cd_ctrl and reqParams which + * describe + * the ReqParams block */ + LacSymQat_HashSetupReqParamsMetaData( + &(pSessionDesc->reqCacheFtr), + instanceHandle, + pHashData, + hashStateBuffer, + pSessionDesc->qatHashMode, + pSessionDesc->digestVerify); + + /* populate the hash state prefix 
buffer info structure + * (part of user allocated session memory & the + * buffer itself. For CCM/GCM the buffer is stored in the + * cookie and is not initialised here) */ + if (CPA_FALSE == pSessionDesc->isAuthEncryptOp) { + LAC_CHECK_64_BYTE_ALIGNMENT( + &(pSessionDesc->hashStatePrefixBuffer[0])); + status = LacHash_StatePrefixAadBufferInit( + &(pService->generic_service_info), + pHashData, + &(pSessionDesc->reqCacheFtr), + pSessionDesc->qatHashMode, + pSessionDesc->hashStatePrefixBuffer, + pHashStateBufferInfo); + /* SHRAM Constants Table not used for Auth-Enc */ + } + + if (CPA_STATUS_SUCCESS == status) { + if (IS_HASH_MODE_1(pSessionDesc->qatHashMode) || + CPA_CY_SYM_HASH_ZUC_EIA3 == + pHashData->hashAlgorithm) { + LAC_CHECK_64_BYTE_ALIGNMENT( + &(pSessionDesc->hashStatePrefixBuffer[0])); + + /* Block messages until precompute is completed + */ + pSessionDesc->nonBlockingOpsInProgress = + CPA_FALSE; + status = LacHash_PrecomputeDataCreate( + instanceHandle, + (CpaCySymSessionSetupData *) + pSessionSetupData, + LacSymAlgChain_HashPrecomputeDoneCb, + pSessionDesc, + pSessionDesc->hashStatePrefixBuffer, + precomputeData.pState1, + precomputeData.pState2); + } else if (pHashData->hashAlgorithm == + CPA_CY_SYM_HASH_AES_CBC_MAC) { + LAC_OS_BZERO(precomputeData.pState2, + precomputeData.state2Size); + memcpy(precomputeData.pState2, + pHashData->authModeSetupData.authKey, + pHashData->authModeSetupData + .authKeyLenInBytes); + } + } + if (CPA_STATUS_SUCCESS == status) { + + if (pSessionDesc->digestVerify) { + + ICP_QAT_FW_LA_CMP_AUTH_SET( + pSessionDesc->laCmdFlags, + ICP_QAT_FW_LA_CMP_AUTH_RES); + ICP_QAT_FW_LA_RET_AUTH_SET( + pSessionDesc->laCmdFlags, + ICP_QAT_FW_LA_NO_RET_AUTH_RES); + } else { + + ICP_QAT_FW_LA_RET_AUTH_SET( + pSessionDesc->laCmdFlags, + ICP_QAT_FW_LA_RET_AUTH_RES); + ICP_QAT_FW_LA_CMP_AUTH_SET( + pSessionDesc->laCmdFlags, + ICP_QAT_FW_LA_NO_CMP_AUTH_RES); + } + } + } + + if (CPA_STATUS_SUCCESS == status) { + + pCdInfo->hwBlkSzQuadWords = + 
LAC_BYTES_TO_QUADWORDS(hwBlockOffsetInDRAM); + pMsg = (icp_qat_fw_comn_req_t *)&(pSessionDesc->reqCacheHdr); + + /* Configure the ContentDescriptor field + * in the request if not done already */ + SalQatMsg_ContentDescHdrWrite((icp_qat_fw_comn_req_t *)pMsg, + pCdInfo); + + if (CPA_CY_SYM_CIPHER_ZUC_EEA3 == + pSessionSetupData->cipherSetupData.cipherAlgorithm || + pHashData->hashAlgorithm == CPA_CY_SYM_HASH_ZUC_EIA3) { + /* New bit position (12) for ZUC. The FW provides a + * specific macro + * to use to set the ZUC proto flag. With the new FW I/F + * this needs + * to be set for both Cipher and Auth */ + ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET( + pSessionDesc->laCmdFlags, proto); + } else { + /* Configure the common header */ + ICP_QAT_FW_LA_PROTO_SET(pSessionDesc->laCmdFlags, + proto); + } + + /* set Append flag, if digest is appended */ + if (pSessionDesc->digestIsAppended) { + ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET( + pSessionDesc->laCmdFlags, + ICP_QAT_FW_LA_DIGEST_IN_BUFFER); + } else { + ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET( + pSessionDesc->laCmdFlags, + ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER); + } + + SalQatMsg_CmnHdrWrite((icp_qat_fw_comn_req_t *)pMsg, + ICP_QAT_FW_COMN_REQ_CPM_FW_LA, + pSessionDesc->laCmdId, + pSessionDesc->cmnRequestFlags, + pSessionDesc->laCmdFlags); + } + + return status; +} + +/** @ingroup LacAlgChain */ +CpaStatus +LacAlgChain_Perform(const CpaInstanceHandle instanceHandle, + lac_session_desc_t *pSessionDesc, + void *pCallbackTag, + const CpaCySymOpData *pOpData, + const CpaBufferList *pSrcBuffer, + CpaBufferList *pDstBuffer, + CpaBoolean *pVerifyResult) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle; + Cpa32U capabilitiesMask = + pService->generic_service_info.capabilitiesMask; + lac_sym_bulk_cookie_t *pCookie = NULL; + lac_sym_cookie_t *pSymCookie = NULL; + icp_qat_fw_la_bulk_req_t *pMsg = NULL; + Cpa8U *pMsgDummy = NULL; + Cpa8U *pCacheDummyHdr = NULL; + Cpa8U *pCacheDummyFtr 
= NULL; + Cpa32U qatPacketType = 0; + CpaBufferList *pBufferList = NULL; + Cpa8U *pDigestResult = NULL; + Cpa64U srcAddrPhys = 0; + Cpa64U dstAddrPhys = 0; + icp_qat_fw_la_cmd_id_t laCmdId; + sal_qat_content_desc_info_t *pCdInfo = NULL; + Cpa8U *pHwBlockBaseInDRAM = NULL; + Cpa32U hwBlockOffsetInDRAM = 0; + Cpa32U sizeInBytes = 0; + icp_qat_fw_cipher_cd_ctrl_hdr_t *pSpcCdCtrlHdr = NULL; + CpaCySymCipherAlgorithm cipher; + CpaCySymHashAlgorithm hash; + Cpa8U paddingLen = 0; + Cpa8U blockLen = 0; + Cpa64U srcPktSize = 0; + + /* Set the command id */ + laCmdId = pSessionDesc->laCmdId; + + cipher = pSessionDesc->cipherAlgorithm; + hash = pSessionDesc->hashAlgorithm; + + /* Convert Alg Chain Request to Cipher Request for CCP and + * AES_GCM single pass */ + if (!pSessionDesc->isSinglePass && + LAC_CIPHER_IS_SPC(cipher, hash, capabilitiesMask) && + (LAC_CIPHER_SPC_IV_SIZE == pOpData->ivLenInBytes)) { + pSessionDesc->laCmdId = ICP_QAT_FW_LA_CMD_CIPHER; + laCmdId = pSessionDesc->laCmdId; + pSessionDesc->symOperation = CPA_CY_SYM_OP_CIPHER; + pSessionDesc->isSinglePass = CPA_TRUE; + pSessionDesc->isCipher = CPA_TRUE; + pSessionDesc->isAuthEncryptOp = CPA_FALSE; + pSessionDesc->isAuth = CPA_FALSE; + if (CPA_CY_SYM_HASH_AES_GMAC == pSessionDesc->hashAlgorithm) { + pSessionDesc->aadLenInBytes = + pOpData->messageLenToHashInBytes; + if (ICP_QAT_FW_SPC_AAD_SZ_MAX < + pSessionDesc->aadLenInBytes) { + LAC_INVALID_PARAM_LOG( + "aadLenInBytes for AES_GMAC"); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* New bit position (13) for SINGLE PASS. 
+ * The FW provides a specific macro to use to set the proto flag + */ + ICP_QAT_FW_LA_SINGLE_PASS_PROTO_FLAG_SET( + pSessionDesc->laCmdFlags, ICP_QAT_FW_LA_SINGLE_PASS_PROTO); + ICP_QAT_FW_LA_PROTO_SET(pSessionDesc->laCmdFlags, 0); + + pCdInfo = &(pSessionDesc->contentDescInfo); + pHwBlockBaseInDRAM = (Cpa8U *)pCdInfo->pData; + if (CPA_CY_SYM_CIPHER_DIRECTION_DECRYPT == + pSessionDesc->cipherDirection) { + if (LAC_CIPHER_IS_GCM(cipher)) + hwBlockOffsetInDRAM = LAC_QUADWORDS_TO_BYTES( + LAC_SYM_QAT_CIPHER_OFFSET_IN_DRAM_GCM_SPC); + else + hwBlockOffsetInDRAM = LAC_QUADWORDS_TO_BYTES( + LAC_SYM_QAT_CIPHER_OFFSET_IN_DRAM_CHACHA_SPC); + } + /* construct cipherConfig in CD in DRAM */ + LacSymQat_CipherHwBlockPopulateCfgData(pSessionDesc, + pHwBlockBaseInDRAM + + hwBlockOffsetInDRAM, + &sizeInBytes); + SalQatMsg_CmnHdrWrite((icp_qat_fw_comn_req_t *)&( + pSessionDesc->reqSpcCacheHdr), + ICP_QAT_FW_COMN_REQ_CPM_FW_LA, + pSessionDesc->laCmdId, + pSessionDesc->cmnRequestFlags, + pSessionDesc->laCmdFlags); + } else if (CPA_CY_SYM_HASH_AES_GMAC == pSessionDesc->hashAlgorithm) { + pSessionDesc->aadLenInBytes = pOpData->messageLenToHashInBytes; + } + + if (LAC_CIPHER_IS_CHACHA(cipher) && + (LAC_CIPHER_SPC_IV_SIZE != pOpData->ivLenInBytes)) { + LAC_INVALID_PARAM_LOG("IV for CHACHA"); + return CPA_STATUS_INVALID_PARAM; + } else if (CPA_CY_SYM_HASH_AES_GMAC == pSessionDesc->hashAlgorithm) { + if (pOpData->messageLenToHashInBytes == 0 || + pOpData->pAdditionalAuthData != NULL) { + LAC_INVALID_PARAM_LOG( + "For AES_GMAC, AAD Length " + "(messageLenToHashInBytes) must " + "be non zero and pAdditionalAuthData " + "must be NULL"); + status = CPA_STATUS_INVALID_PARAM; + } + } + + if (CPA_TRUE == pSessionDesc->isAuthEncryptOp) { + if (CPA_CY_SYM_HASH_AES_CCM == pSessionDesc->hashAlgorithm) { + status = LacSymAlgChain_CheckCCMData( + pOpData->pAdditionalAuthData, + pOpData->pIv, + pOpData->messageLenToCipherInBytes, + pOpData->ivLenInBytes); + if (CPA_STATUS_SUCCESS == status) { + 
LacSymAlgChain_PrepareCCMData( + pSessionDesc, + pOpData->pAdditionalAuthData, + pOpData->pIv, + pOpData->messageLenToCipherInBytes, + pOpData->ivLenInBytes); + } + } else if (CPA_CY_SYM_HASH_AES_GCM == + pSessionDesc->hashAlgorithm) { + if (pSessionDesc->aadLenInBytes != 0 && + pOpData->pAdditionalAuthData == NULL) { + LAC_INVALID_PARAM_LOG("pAdditionalAuthData"); + status = CPA_STATUS_INVALID_PARAM; + } + if (CPA_STATUS_SUCCESS == status) { + LacSymAlgChain_PrepareGCMData( + pSessionDesc, pOpData->pAdditionalAuthData); + } + } + } + + /* allocate cookie (used by callback function) */ + if (CPA_STATUS_SUCCESS == status) { + pSymCookie = (lac_sym_cookie_t *)Lac_MemPoolEntryAlloc( + pService->lac_sym_cookie_pool); + if (pSymCookie == NULL) { + LAC_LOG_ERROR("Cannot allocate cookie - NULL"); + status = CPA_STATUS_RESOURCE; + } else if ((void *)CPA_STATUS_RETRY == pSymCookie) { + pSymCookie = NULL; + status = CPA_STATUS_RETRY; + } else { + pCookie = &(pSymCookie->u.bulkCookie); + } + } + + if (CPA_STATUS_SUCCESS == status) { + /* write the buffer descriptors */ + if (IS_ZERO_LENGTH_BUFFER_SUPPORTED(cipher, hash)) { + status = + LacBuffDesc_BufferListDescWriteAndAllowZeroBuffer( + (CpaBufferList *)pSrcBuffer, + &srcAddrPhys, + CPA_FALSE, + &(pService->generic_service_info)); + } else { + status = LacBuffDesc_BufferListDescWrite( + (CpaBufferList *)pSrcBuffer, + &srcAddrPhys, + CPA_FALSE, + &(pService->generic_service_info)); + } + if (CPA_STATUS_SUCCESS != status) { + LAC_LOG_ERROR("Unable to write src buffer descriptors"); + } + + /* For out of place operations */ + if ((pSrcBuffer != pDstBuffer) && + (CPA_STATUS_SUCCESS == status)) { + if (IS_ZERO_LENGTH_BUFFER_SUPPORTED(cipher, hash)) { + status = + LacBuffDesc_BufferListDescWriteAndAllowZeroBuffer( + pDstBuffer, + &dstAddrPhys, + CPA_FALSE, + &(pService->generic_service_info)); + } else { + status = LacBuffDesc_BufferListDescWrite( + pDstBuffer, + &dstAddrPhys, + CPA_FALSE, + &(pService->generic_service_info)); + } 
+ if (CPA_STATUS_SUCCESS != status) { + LAC_LOG_ERROR( + "Unable to write dest buffer descriptors"); + } + } + } + if (CPA_STATUS_SUCCESS == status) { + /* populate the cookie */ + pCookie->pCallbackTag = pCallbackTag; + pCookie->sessionCtx = pOpData->sessionCtx; + pCookie->pOpData = (const CpaCySymOpData *)pOpData; + pCookie->pDstBuffer = pDstBuffer; + pCookie->updateSessionIvOnSend = CPA_FALSE; + pCookie->updateUserIvOnRecieve = CPA_FALSE; + pCookie->updateKeySizeOnRecieve = CPA_FALSE; + pCookie->pNext = NULL; + pCookie->instanceHandle = pService; + + /* get the qat packet type for LAC packet type */ + LacSymQat_packetTypeGet(pOpData->packetType, + pSessionDesc->partialState, + &qatPacketType); + /* + * For XTS mode, the key size must be updated after + * the first partial has been sent. Set a flag here so the + * response knows to do this. + */ + if ((laCmdId != ICP_QAT_FW_LA_CMD_AUTH) && + (CPA_CY_SYM_PACKET_TYPE_PARTIAL == pOpData->packetType) && + (LAC_CIPHER_IS_XTS_MODE(pSessionDesc->cipherAlgorithm)) && + (qatPacketType == ICP_QAT_FW_LA_PARTIAL_START)) { + pCookie->updateKeySizeOnRecieve = CPA_TRUE; + } + + /* + * Now create the Request. + * Start by populating it from the cache in the session + * descriptor. + */ + pMsg = &(pCookie->qatMsg); + pMsgDummy = (Cpa8U *)pMsg; + + if (pSessionDesc->isSinglePass) { + pCacheDummyHdr = + (Cpa8U *)&(pSessionDesc->reqSpcCacheHdr); + pCacheDummyFtr = + (Cpa8U *)&(pSessionDesc->reqSpcCacheFtr); + } else { + /* Normally, we want to use the SHRAM Constants Table if + * possible + * for best performance (less DRAM accesses incurred by + * CPM). But + * we can't use it for partial-packet hash operations. + * This is why + * we build 2 versions of the message template at + * sessionInit, + * one for SHRAM Constants Table usage and the other + * (default) for + * Content Descriptor h/w setup data in DRAM. 
And we + * chose between + * them here on a per-request basis, when we know the + * packetType + */ + if ((!pSessionDesc->useSymConstantsTable) || + (pSessionDesc->isAuth && + (CPA_CY_SYM_PACKET_TYPE_FULL != + pOpData->packetType))) { + pCacheDummyHdr = + (Cpa8U *)&(pSessionDesc->reqCacheHdr); + pCacheDummyFtr = + (Cpa8U *)&(pSessionDesc->reqCacheFtr); + } else { + pCacheDummyHdr = + (Cpa8U *)&(pSessionDesc->shramReqCacheHdr); + pCacheDummyFtr = + (Cpa8U *)&(pSessionDesc->shramReqCacheFtr); + } + } + memcpy(pMsgDummy, + pCacheDummyHdr, + (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_HDR_IN_LW)); + memset((pMsgDummy + + (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_HDR_IN_LW)), + 0, + (LAC_LONG_WORD_IN_BYTES * + LAC_SIZE_OF_CACHE_TO_CLEAR_IN_LW)); + memcpy(pMsgDummy + (LAC_LONG_WORD_IN_BYTES * + LAC_START_OF_CACHE_FTR_IN_LW), + pCacheDummyFtr, + (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_FTR_IN_LW)); + /* + * Populate the comn_mid section + */ + SalQatMsg_CmnMidWrite(pMsg, + pCookie, + LAC_SYM_DEFAULT_QAT_PTR_TYPE, + srcAddrPhys, + dstAddrPhys, + 0, + 0); + + /* + * Populate the serv_specif_flags field of the Request header + * Some of the flags are set up here. + * Others are set up later when the RequestParams are set up. 
+ */ + + LacSymQat_LaPacketCommandFlagSet( + qatPacketType, + laCmdId, + pSessionDesc->cipherAlgorithm, + &pMsg->comn_hdr.serv_specif_flags, + pOpData->ivLenInBytes); + + if (pSessionDesc->isSinglePass) { + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + pMsg->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS); + + if (CPA_CY_SYM_PACKET_TYPE_PARTIAL == + pOpData->packetType) { + ICP_QAT_FW_LA_RET_AUTH_SET( + pMsg->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_NO_RET_AUTH_RES); + + ICP_QAT_FW_LA_CMP_AUTH_SET( + pMsg->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_NO_CMP_AUTH_RES); + } + } + + LacBuffDesc_BufferListTotalSizeGet(pSrcBuffer, &srcPktSize); + + /* + * Populate the CipherRequestParams section of the Request + */ + if (laCmdId != ICP_QAT_FW_LA_CMD_AUTH) { + + Cpa8U *pIvBuffer = NULL; + + status = LacCipher_PerformParamCheck( + pSessionDesc->cipherAlgorithm, pOpData, srcPktSize); + if (CPA_STATUS_SUCCESS != status) { + /* free the cookie */ + Lac_MemPoolEntryFree(pCookie); + return status; + } + + if (CPA_STATUS_SUCCESS == status) { + /* align cipher IV */ + status = LacCipher_PerformIvCheck( + &(pService->generic_service_info), + pCookie, + qatPacketType, + &pIvBuffer); + } + if (pSessionDesc->isSinglePass && + ((ICP_QAT_FW_LA_PARTIAL_MID == qatPacketType) || + (ICP_QAT_FW_LA_PARTIAL_END == qatPacketType))) { + /* For SPC stateful cipher state size for mid + * and + * end partial packet is 48 bytes + */ + pSpcCdCtrlHdr = + (icp_qat_fw_cipher_cd_ctrl_hdr_t *)&( + pMsg->cd_ctrl); + pSpcCdCtrlHdr->cipher_state_sz = + LAC_BYTES_TO_QUADWORDS( + LAC_SYM_QAT_CIPHER_STATE_SIZE_SPC); + } + /*populate the cipher request parameters */ + if (CPA_STATUS_SUCCESS == status) { + Cpa64U ivBufferPhysAddr = 0; + + if (pIvBuffer != NULL) { + /* User OpData memory being used for IV + * buffer */ + /* get the physical address */ + ivBufferPhysAddr = + LAC_OS_VIRT_TO_PHYS_EXTERNAL( + pService->generic_service_info, + pIvBuffer); + if (0 == ivBufferPhysAddr) { + LAC_LOG_ERROR( 
+ "Unable to get the physical address " + "of the IV\n"); + status = CPA_STATUS_FAIL; + } + } + + if (status == CPA_STATUS_SUCCESS) { + status = + LacSymQat_CipherRequestParamsPopulate( + pMsg, + pOpData + ->cryptoStartSrcOffsetInBytes, + pOpData + ->messageLenToCipherInBytes, + ivBufferPhysAddr, + pIvBuffer); + } + } + + if (CPA_STATUS_SUCCESS == status && + pSessionDesc->isSinglePass) { + Cpa64U aadBufferPhysAddr = 0; + + /* For CHACHA and AES-GCM there is an AAD buffer + * if + * aadLenInBytes is nonzero In case of AES-GMAC, + * AAD buffer + * passed in the src buffer. + */ + if (0 != pSessionDesc->aadLenInBytes && + CPA_CY_SYM_HASH_AES_GMAC != + pSessionDesc->hashAlgorithm) { + LAC_CHECK_NULL_PARAM( + pOpData->pAdditionalAuthData); + blockLen = + LacSymQat_CipherBlockSizeBytesGet( + pSessionDesc->cipherAlgorithm); + if ((pSessionDesc->aadLenInBytes % + blockLen) != 0) { + paddingLen = blockLen - + (pSessionDesc + ->aadLenInBytes % + blockLen); + memset( + &pOpData->pAdditionalAuthData + [pSessionDesc + ->aadLenInBytes], + 0, + paddingLen); + } + + /* User OpData memory being used for aad + * buffer */ + /* get the physical address */ + aadBufferPhysAddr = + LAC_OS_VIRT_TO_PHYS_EXTERNAL( + pService->generic_service_info, + pOpData->pAdditionalAuthData); + if (0 == aadBufferPhysAddr) { + LAC_LOG_ERROR( + "Unable to get the physical address " + "of the aad\n"); + status = CPA_STATUS_FAIL; + } + } + + if (CPA_STATUS_SUCCESS == status) { + icp_qat_fw_la_cipher_req_params_t *pCipherReqParams = + (icp_qat_fw_la_cipher_req_params_t + *)((Cpa8U *)&( + pMsg->serv_specif_rqpars) + + ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET); + pCipherReqParams->spc_aad_addr = + aadBufferPhysAddr; + pCipherReqParams->spc_aad_sz = + pSessionDesc->aadLenInBytes; + + if (CPA_TRUE != + pSessionDesc->digestIsAppended) { + Cpa64U digestBufferPhysAddr = 0; + /* User OpData memory being used + * for digest buffer */ + /* get the physical address */ + digestBufferPhysAddr = + 
LAC_OS_VIRT_TO_PHYS_EXTERNAL( + pService + ->generic_service_info, + pOpData->pDigestResult); + if (0 != digestBufferPhysAddr) { + pCipherReqParams + ->spc_auth_res_addr = + digestBufferPhysAddr; + pCipherReqParams + ->spc_auth_res_sz = + pSessionDesc + ->hashResultSize; + } else { + LAC_LOG_ERROR( + "Unable to get the physical address " + "of the digest\n"); + status = + CPA_STATUS_FAIL; + } + } + } + } + } + + /* + * Set up HashRequestParams part of Request + */ + if ((status == CPA_STATUS_SUCCESS) && + (laCmdId != ICP_QAT_FW_LA_CMD_CIPHER)) { + Cpa32U authOffsetInBytes = + pOpData->hashStartSrcOffsetInBytes; + Cpa32U authLenInBytes = + pOpData->messageLenToHashInBytes; + + status = LacHash_PerformParamCheck(instanceHandle, + pSessionDesc, + pOpData, + srcPktSize, + pVerifyResult); + if (CPA_STATUS_SUCCESS != status) { + /* free the cookie */ + Lac_MemPoolEntryFree(pCookie); + return status; + } + if (CPA_STATUS_SUCCESS == status) { + /* Info structure for CCM/GCM */ + lac_sym_qat_hash_state_buffer_info_t + hashStateBufferInfo = { 0 }; + lac_sym_qat_hash_state_buffer_info_t + *pHashStateBufferInfo = + &(pSessionDesc->hashStateBufferInfo); + + if (CPA_TRUE == pSessionDesc->isAuthEncryptOp) { + icp_qat_fw_la_auth_req_params_t *pHashReqParams = + (icp_qat_fw_la_auth_req_params_t + *)((Cpa8U *)&( + pMsg->serv_specif_rqpars) + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + + hashStateBufferInfo.pData = + pOpData->pAdditionalAuthData; + if (pOpData->pAdditionalAuthData == + NULL) { + hashStateBufferInfo.pDataPhys = + 0; + } else { + hashStateBufferInfo + .pDataPhys = LAC_MEM_CAST_PTR_TO_UINT64( + LAC_OS_VIRT_TO_PHYS_EXTERNAL( + pService + ->generic_service_info, + pOpData + ->pAdditionalAuthData)); + } + + hashStateBufferInfo + .stateStorageSzQuadWords = 0; + hashStateBufferInfo + .prefixAadSzQuadWords = + LAC_BYTES_TO_QUADWORDS( + pHashReqParams->u2.aad_sz); + + /* Overwrite hash state buffer info + * structure pointer + * with the one created for CCM/GCM */ + 
pHashStateBufferInfo =
+				    &hashStateBufferInfo;
+
+				/* Aad buffer could be null in the GCM
+				 * case */
+				if (0 ==
+					hashStateBufferInfo.pDataPhys &&
+				    CPA_CY_SYM_HASH_AES_GCM !=
+					pSessionDesc->hashAlgorithm &&
+				    CPA_CY_SYM_HASH_AES_GMAC !=
+					pSessionDesc->hashAlgorithm) {
+					LAC_LOG_ERROR(
+					    "Unable to get the physical address "
+					    "of the AAD\n");
+					status = CPA_STATUS_FAIL;
+				}
+
+				/* for CCM/GCM the hash and cipher data
+				 * regions are equal */
+				authOffsetInBytes =
+				    pOpData->cryptoStartSrcOffsetInBytes;
+
+				/* For authenticated encryption,
+				 * authentication length is determined by
+				 * messageLenToCipherInBytes for AES-GCM and
+				 * AES-CCM, and by messageLenToHashInBytes
+				 * for AES-GMAC. You don't see the latter
+				 * here, as that is the initial value of
+				 * authLenInBytes. */
+				if (pSessionDesc->hashAlgorithm !=
+				    CPA_CY_SYM_HASH_AES_GMAC)
+					authLenInBytes =
+					    pOpData->messageLenToCipherInBytes;
+			} else if (CPA_CY_SYM_HASH_SNOW3G_UIA2 ==
+				       pSessionDesc->hashAlgorithm ||
+				   CPA_CY_SYM_HASH_ZUC_EIA3 ==
+				       pSessionDesc->hashAlgorithm) {
+				hashStateBufferInfo.pData =
+				    pOpData->pAdditionalAuthData;
+				hashStateBufferInfo.pDataPhys =
+				    LAC_OS_VIRT_TO_PHYS_EXTERNAL(
+					pService->generic_service_info,
+					hashStateBufferInfo.pData);
+				hashStateBufferInfo
+				    .stateStorageSzQuadWords = 0;
+				hashStateBufferInfo
+				    .prefixAadSzQuadWords =
+				    LAC_BYTES_TO_QUADWORDS(
+					pSessionDesc->aadLenInBytes);
+
+				pHashStateBufferInfo =
+				    &hashStateBufferInfo;
+
+				if (0 ==
+				    hashStateBufferInfo.pDataPhys) {
+					LAC_LOG_ERROR(
+					    "Unable to get the physical address "
+					    "of the AAD\n");
+					status = CPA_STATUS_FAIL;
+				}
+			}
+			if (CPA_CY_SYM_HASH_AES_CCM ==
+			    pSessionDesc->hashAlgorithm) {
+				if (CPA_CY_SYM_CIPHER_DIRECTION_DECRYPT ==
+				    pSessionDesc->cipherDirection) {
+					/* On a decrypt path pSrcBuffer
+					 * is used as this is
+					 * where encrypted digest is
+					 * located.
Firmware + * uses encrypted digest for + * compare/verification*/ + pBufferList = + (CpaBufferList *)pSrcBuffer; + } else { + /* On an encrypt path pDstBuffer + * is used as this is + * where encrypted digest will + * be written */ + pBufferList = + (CpaBufferList *)pDstBuffer; + } + status = LacSymAlgChain_PtrFromOffsetGet( + pBufferList, + pOpData->cryptoStartSrcOffsetInBytes + + pOpData + ->messageLenToCipherInBytes, + &pDigestResult); + if (CPA_STATUS_SUCCESS != status) { + LAC_LOG_ERROR( + "Cannot set digest pointer within the" + " buffer list - offset out of bounds"); + } + } else { + pDigestResult = pOpData->pDigestResult; + } + + if (CPA_CY_SYM_OP_ALGORITHM_CHAINING == + pSessionDesc->symOperation) { + /* In alg chaining mode, packets are not + * seen as partials + * for hash operations. Override to + * NONE. + */ + qatPacketType = + ICP_QAT_FW_LA_PARTIAL_NONE; + } + if (CPA_TRUE == + pSessionDesc->digestIsAppended) { + /*Check if the destination buffer can + * handle the digest + * if digestIsAppend is true*/ + if (srcPktSize < + (authOffsetInBytes + + authLenInBytes + + pSessionDesc->hashResultSize)) { + status = + CPA_STATUS_INVALID_PARAM; + } + } + if (CPA_STATUS_SUCCESS == status) { + /* populate the hash request parameters + */ + status = + LacSymQat_HashRequestParamsPopulate( + pMsg, + authOffsetInBytes, + authLenInBytes, + &(pService + ->generic_service_info), + pHashStateBufferInfo, + qatPacketType, + pSessionDesc->hashResultSize, + pSessionDesc->digestVerify, + pSessionDesc->digestIsAppended ? + NULL : + pDigestResult, + pSessionDesc->hashAlgorithm, + NULL); + } + } + } + } + + /* + * send the message to the QAT + */ + if (CPA_STATUS_SUCCESS == status) { + qatUtilsAtomicInc(&(pSessionDesc->u.pendingCbCount)); + + status = LacSymQueue_RequestSend(instanceHandle, + pCookie, + pSessionDesc); + + if (CPA_STATUS_SUCCESS != status) { + /* Decrease pending callback counter on send fail. 
*/ + qatUtilsAtomicDec(&(pSessionDesc->u.pendingCbCount)); + } + } + /* Case that will catch all error status's for this function */ + if (CPA_STATUS_SUCCESS != status) { + /* free the cookie */ + if (NULL != pSymCookie) { + Lac_MemPoolEntryFree(pSymCookie); + } + } + return status; +} Index: sys/dev/qat/qat_api/common/crypto/sym/lac_sym_api.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/lac_sym_api.c @@ -0,0 +1,1130 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_api.c Implementation of the symmetric API + * + * @ingroup LacSym + * + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" +#include "cpa_cy_im.h" + +#include "icp_adf_init.h" +#include "icp_adf_transport.h" +#include "icp_adf_transport_dp.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +#include "icp_qat_fw_la.h" + +/* + ****************************************************************************** + * Include private header files + ****************************************************************************** + */ +#include "lac_common.h" +#include "lac_log.h" +#include "lac_mem.h" +#include "lac_mem_pools.h" +#include "lac_list.h" +#include "lac_sym.h" +#include "lac_sym_qat.h" +#include "lac_sal.h" +#include "lac_sal_ctrl.h" +#include "lac_session.h" +#include "lac_sym_cipher.h" +#include "lac_sym_hash.h" +#include "lac_sym_alg_chain.h" +#include "lac_sym_stats.h" +#include "lac_sym_partial.h" +#include "lac_sym_qat_hash_defs_lookup.h" +#include "lac_sym_cb.h" 
+#include "lac_buffer_desc.h" +#include "lac_sync.h" +#include "lac_hooks.h" +#include "lac_sal_types_crypto.h" +#include "sal_service_state.h" + +#define IS_EXT_ALG_CHAIN_UNSUPPORTED( \ + cipherAlgorithm, hashAlgorithm, extAlgchainSupported) \ + ((((CPA_CY_SYM_CIPHER_ZUC_EEA3 == cipherAlgorithm || \ + CPA_CY_SYM_CIPHER_SNOW3G_UEA2 == cipherAlgorithm) && \ + CPA_CY_SYM_HASH_AES_CMAC == hashAlgorithm) || \ + ((CPA_CY_SYM_CIPHER_NULL == cipherAlgorithm || \ + CPA_CY_SYM_CIPHER_AES_CTR == cipherAlgorithm || \ + CPA_CY_SYM_CIPHER_ZUC_EEA3 == cipherAlgorithm) && \ + CPA_CY_SYM_HASH_SNOW3G_UIA2 == hashAlgorithm) || \ + ((CPA_CY_SYM_CIPHER_NULL == cipherAlgorithm || \ + CPA_CY_SYM_CIPHER_AES_CTR == cipherAlgorithm || \ + CPA_CY_SYM_CIPHER_SNOW3G_UEA2 == cipherAlgorithm) && \ + CPA_CY_SYM_HASH_ZUC_EIA3 == hashAlgorithm)) && \ + !extAlgchainSupported) + +/*** Local functions definitions ***/ +static CpaStatus +LacSymPerform_BufferParamCheck(const CpaBufferList *const pSrcBuffer, + const CpaBufferList *const pDstBuffer, + const lac_session_desc_t *const pSessionDesc, + const CpaCySymOpData *const pOpData); + +void getCtxSize(const CpaCySymSessionSetupData *pSessionSetupData, + Cpa32U *pSessionCtxSizeInBytes); + +/** + ***************************************************************************** + * @ingroup LacSym + * Generic bufferList callback function. + * @description + * This function is used when the API is called in synchronous mode. + * It's assumed the callbackTag holds a lac_sync_op_data_t type + * and when the callback is received, this callback shall set the + * status and opResult element of that cookie structure and + * kick the sid. + * This function may be used directly as a callback function. 
+ *
+ * @param[in] callbackTag    Callback Tag
+ * @param[in] status         Status of callback
+ * @param[in] operationType  Operation Type
+ * @param[in] pOpData        Pointer to the Op Data
+ * @param[out] pDstBuffer    Pointer to destination buffer list
+ * @param[out] opResult      Boolean to indicate the result of the operation
+ *
+ * @return void
+ *
+ *****************************************************************************/
+void
+LacSync_GenBufListVerifyCb(void *pCallbackTag,
+			   CpaStatus status,
+			   CpaCySymOp operationType,
+			   void *pOpData,
+			   CpaBufferList *pDstBuffer,
+			   CpaBoolean opResult)
+{
+	LacSync_GenVerifyWakeupSyncCaller(pCallbackTag, status, opResult);
+}
+
+/*
+*******************************************************************************
+* Define static function definitions
+*******************************************************************************
+*/
+/**
+ * @ingroup LacSym
+ * Function which performs parameter checks on session setup data
+ *
+ * @param[in] instanceHandle     Instance Handle
+ * @param[in] pSessionSetupData  Pointer to session setup data
+ *
+ * @retval CPA_STATUS_SUCCESS        The operation succeeded
+ * @retval CPA_STATUS_INVALID_PARAM  An invalid parameter value was found
+ */
+static CpaStatus
+LacSymSession_ParamCheck(const CpaInstanceHandle instanceHandle,
+			 const CpaCySymSessionSetupData *pSessionSetupData)
+{
+	/* initialize convenient pointers to cipher and hash contexts */
+	const CpaCySymCipherSetupData *const pCipherSetupData =
+	    (const CpaCySymCipherSetupData *)&pSessionSetupData
+		->cipherSetupData;
+	const CpaCySymHashSetupData *const pHashSetupData =
+	    &pSessionSetupData->hashSetupData;
+
+	CpaCySymCapabilitiesInfo capInfo;
+	CpaCyCapabilitiesInfo cyCapInfo;
+	cpaCySymQueryCapabilities(instanceHandle, &capInfo);
+	SalCtrl_CyQueryCapabilities(instanceHandle, &cyCapInfo);
+
+	/* Ensure cipher algorithm is correct and supported */
+	if ((CPA_CY_SYM_OP_ALGORITHM_CHAINING ==
+	     pSessionSetupData->symOperation) ||
+	    (CPA_CY_SYM_OP_CIPHER
== pSessionSetupData->symOperation)) { + /* Protect against value of cipher outside the bitmap + * and check if cipher algorithm is correct + */ + if ((pCipherSetupData->cipherAlgorithm >= + CPA_CY_SYM_CIPHER_CAP_BITMAP_SIZE) || + (!CPA_BITMAP_BIT_TEST(capInfo.ciphers, + pCipherSetupData->cipherAlgorithm))) { + LAC_INVALID_PARAM_LOG("cipherAlgorithm"); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* Ensure hash algorithm is correct and supported */ + if ((CPA_CY_SYM_OP_ALGORITHM_CHAINING == + pSessionSetupData->symOperation) || + (CPA_CY_SYM_OP_HASH == pSessionSetupData->symOperation)) { + /* Ensure SHAKE algorithms are not supported */ + if ((CPA_CY_SYM_HASH_SHAKE_128 == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_SHAKE_256 == + pHashSetupData->hashAlgorithm)) { + LAC_INVALID_PARAM_LOG( + "Hash algorithms SHAKE-128 and SHAKE-256 " + "are not supported."); + return CPA_STATUS_UNSUPPORTED; + } + + /* Protect against value of hash outside the bitmap + * and check if hash algorithm is correct + */ + if ((pHashSetupData->hashAlgorithm >= + CPA_CY_SYM_HASH_CAP_BITMAP_SIZE) || + (!CPA_BITMAP_BIT_TEST(capInfo.hashes, + pHashSetupData->hashAlgorithm))) { + LAC_INVALID_PARAM_LOG("hashAlgorithm"); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* ensure CCM, GCM, Kasumi, Snow3G and ZUC cipher and hash algorithms + * are + * selected together for Algorithm Chaining */ + if (CPA_CY_SYM_OP_ALGORITHM_CHAINING == + pSessionSetupData->symOperation) { + /* ensure both hash and cipher algorithms are POLY and CHACHA */ + if (((CPA_CY_SYM_CIPHER_CHACHA == + pCipherSetupData->cipherAlgorithm) && + (CPA_CY_SYM_HASH_POLY != pHashSetupData->hashAlgorithm)) || + ((CPA_CY_SYM_HASH_POLY == pHashSetupData->hashAlgorithm) && + (CPA_CY_SYM_CIPHER_CHACHA != + pCipherSetupData->cipherAlgorithm))) { + LAC_INVALID_PARAM_LOG( + "Invalid combination of Cipher/Hash " + "Algorithms for CHACHA/POLY"); + return CPA_STATUS_INVALID_PARAM; + } + + /* ensure both hash and cipher algorithms are 
CCM */ + if (((CPA_CY_SYM_CIPHER_AES_CCM == + pCipherSetupData->cipherAlgorithm) && + (CPA_CY_SYM_HASH_AES_CCM != + pHashSetupData->hashAlgorithm)) || + ((CPA_CY_SYM_HASH_AES_CCM == + pHashSetupData->hashAlgorithm) && + (CPA_CY_SYM_CIPHER_AES_CCM != + pCipherSetupData->cipherAlgorithm))) { + LAC_INVALID_PARAM_LOG( + "Invalid combination of Cipher/Hash Algorithms for CCM"); + return CPA_STATUS_INVALID_PARAM; + } + + /* ensure both hash and cipher algorithms are GCM/GMAC */ + if ((CPA_CY_SYM_CIPHER_AES_GCM == + pCipherSetupData->cipherAlgorithm && + (CPA_CY_SYM_HASH_AES_GCM != + pHashSetupData->hashAlgorithm && + CPA_CY_SYM_HASH_AES_GMAC != + pHashSetupData->hashAlgorithm)) || + ((CPA_CY_SYM_HASH_AES_GCM == + pHashSetupData->hashAlgorithm || + CPA_CY_SYM_HASH_AES_GMAC == + pHashSetupData->hashAlgorithm) && + CPA_CY_SYM_CIPHER_AES_GCM != + pCipherSetupData->cipherAlgorithm)) { + LAC_INVALID_PARAM_LOG( + "Invalid combination of Cipher/Hash Algorithms for GCM"); + return CPA_STATUS_INVALID_PARAM; + } + + /* ensure both hash and cipher algorithms are Kasumi */ + if (((CPA_CY_SYM_CIPHER_KASUMI_F8 == + pCipherSetupData->cipherAlgorithm) && + (CPA_CY_SYM_HASH_KASUMI_F9 != + pHashSetupData->hashAlgorithm)) || + ((CPA_CY_SYM_HASH_KASUMI_F9 == + pHashSetupData->hashAlgorithm) && + (CPA_CY_SYM_CIPHER_KASUMI_F8 != + pCipherSetupData->cipherAlgorithm))) { + LAC_INVALID_PARAM_LOG( + "Invalid combination of Cipher/Hash Algorithms for Kasumi"); + return CPA_STATUS_INVALID_PARAM; + } + + if (IS_EXT_ALG_CHAIN_UNSUPPORTED( + pCipherSetupData->cipherAlgorithm, + pHashSetupData->hashAlgorithm, + cyCapInfo.extAlgchainSupported)) { + LAC_UNSUPPORTED_PARAM_LOG( + "ExtAlgChain feature not supported"); + return CPA_STATUS_UNSUPPORTED; + } + + /* ensure both hash and cipher algorithms are Snow3G */ + if (((CPA_CY_SYM_CIPHER_SNOW3G_UEA2 == + pCipherSetupData->cipherAlgorithm) && + (CPA_CY_SYM_HASH_SNOW3G_UIA2 != + pHashSetupData->hashAlgorithm)) || + ((CPA_CY_SYM_HASH_SNOW3G_UIA2 == + 
pHashSetupData->hashAlgorithm) && + (CPA_CY_SYM_CIPHER_SNOW3G_UEA2 != + pCipherSetupData->cipherAlgorithm))) { + LAC_INVALID_PARAM_LOG( + "Invalid combination of Cipher/Hash Algorithms for Snow3G"); + return CPA_STATUS_INVALID_PARAM; + } + + /* ensure both hash and cipher algorithms are ZUC */ + if (((CPA_CY_SYM_CIPHER_ZUC_EEA3 == + pCipherSetupData->cipherAlgorithm) && + (CPA_CY_SYM_HASH_ZUC_EIA3 != + pHashSetupData->hashAlgorithm)) || + ((CPA_CY_SYM_HASH_ZUC_EIA3 == + pHashSetupData->hashAlgorithm) && + (CPA_CY_SYM_CIPHER_ZUC_EEA3 != + pCipherSetupData->cipherAlgorithm))) { + LAC_INVALID_PARAM_LOG( + "Invalid combination of Cipher/Hash Algorithms for ZUC"); + return CPA_STATUS_INVALID_PARAM; + } + } + /* not Algorithm Chaining so prevent CCM/GCM being selected */ + else if (CPA_CY_SYM_OP_CIPHER == pSessionSetupData->symOperation) { + /* ensure cipher algorithm is not CCM or GCM */ + if ((CPA_CY_SYM_CIPHER_AES_CCM == + pCipherSetupData->cipherAlgorithm) || + (CPA_CY_SYM_CIPHER_AES_GCM == + pCipherSetupData->cipherAlgorithm) || + (CPA_CY_SYM_CIPHER_CHACHA == + pCipherSetupData->cipherAlgorithm)) { + LAC_INVALID_PARAM_LOG( + "Invalid Cipher Algorithm for non-Algorithm " + "Chaining operation"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (CPA_CY_SYM_OP_HASH == pSessionSetupData->symOperation) { + /* ensure hash algorithm is not CCM or GCM/GMAC */ + if ((CPA_CY_SYM_HASH_AES_CCM == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_AES_GCM == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_AES_GMAC == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_POLY == pHashSetupData->hashAlgorithm)) { + LAC_INVALID_PARAM_LOG( + "Invalid Hash Algorithm for non-Algorithm Chaining operation"); + return CPA_STATUS_INVALID_PARAM; + } + } + /* Unsupported operation. 
Return error */
+	else {
+		LAC_INVALID_PARAM_LOG("symOperation");
+		return CPA_STATUS_INVALID_PARAM;
+	}
+
+	/* ensure that cipher direction param is
+	 * valid for cipher and algchain ops */
+	if (CPA_CY_SYM_OP_HASH != pSessionSetupData->symOperation) {
+		if ((pCipherSetupData->cipherDirection !=
+		     CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT) &&
+		    (pCipherSetupData->cipherDirection !=
+		     CPA_CY_SYM_CIPHER_DIRECTION_DECRYPT)) {
+			LAC_INVALID_PARAM_LOG("Invalid Cipher Direction");
+			return CPA_STATUS_INVALID_PARAM;
+		}
+	}
+
+	return CPA_STATUS_SUCCESS;
+}
+
+
+/**
+ * @ingroup LacSym
+ * Function which performs parameter checks on data buffers for symmetric
+ * crypto operations
+ *
+ * @param[in] pSrcBuffer    Pointer to source buffer list
+ * @param[in] pDstBuffer    Pointer to destination buffer list
+ * @param[in] pSessionDesc  Pointer to session descriptor
+ * @param[in] pOpData       Pointer to CryptoSymOpData.
+ *
+ * @retval CPA_STATUS_SUCCESS        The operation succeeded
+ * @retval CPA_STATUS_INVALID_PARAM  An invalid parameter value was found
+ */
+
+static CpaStatus
+LacSymPerform_BufferParamCheck(const CpaBufferList *const pSrcBuffer,
+			       const CpaBufferList *const pDstBuffer,
+			       const lac_session_desc_t *const pSessionDesc,
+			       const CpaCySymOpData *const pOpData)
+{
+	Cpa64U srcBufferLen = 0, dstBufferLen = 0;
+	CpaStatus status = CPA_STATUS_SUCCESS;
+
+	/* verify packet type is in correct range */
+	switch (pOpData->packetType) {
+	case CPA_CY_SYM_PACKET_TYPE_FULL:
+	case CPA_CY_SYM_PACKET_TYPE_PARTIAL:
+	case CPA_CY_SYM_PACKET_TYPE_LAST_PARTIAL:
+		break;
+	default: {
+		LAC_INVALID_PARAM_LOG("packetType");
+		return CPA_STATUS_INVALID_PARAM;
+	}
+	}
+
+	if (!((CPA_CY_SYM_OP_CIPHER != pSessionDesc->symOperation &&
+	       CPA_CY_SYM_HASH_MODE_PLAIN == pSessionDesc->hashMode) &&
+	      (0 == pOpData->messageLenToHashInBytes))) {
+		if (IS_ZERO_LENGTH_BUFFER_SUPPORTED(
+			pSessionDesc->cipherAlgorithm,
+			pSessionDesc->hashAlgorithm)) {
+			status = LacBuffDesc_BufferListVerifyNull(
+			    pSrcBuffer,
&srcBufferLen, LAC_NO_ALIGNMENT_SHIFT);
+		} else {
+			status = LacBuffDesc_BufferListVerify(
+			    pSrcBuffer, &srcBufferLen, LAC_NO_ALIGNMENT_SHIFT);
+		}
+		if (CPA_STATUS_SUCCESS != status) {
+			LAC_INVALID_PARAM_LOG("Source buffer invalid");
+			return CPA_STATUS_INVALID_PARAM;
+		}
+	} else {
+		/* check MetaData !NULL */
+		if (NULL == pSrcBuffer->pPrivateMetaData) {
+			LAC_INVALID_PARAM_LOG(
+			    "Source buffer MetaData cannot be NULL");
+			return CPA_STATUS_INVALID_PARAM;
+		}
+	}
+
+	/* out of place checks */
+	if (pSrcBuffer != pDstBuffer) {
+		/* exception for this check is zero length hash requests to
+		 * allow for srcBufferLen = dstBufferLen = 0 */
+		if (!((CPA_CY_SYM_OP_CIPHER != pSessionDesc->symOperation &&
+		       CPA_CY_SYM_HASH_MODE_PLAIN == pSessionDesc->hashMode) &&
+		      (0 == pOpData->messageLenToHashInBytes))) {
+			/* Verify buffer(s) for dest packet & return packet
+			 * length */
+			if (IS_ZERO_LENGTH_BUFFER_SUPPORTED(
+				pSessionDesc->cipherAlgorithm,
+				pSessionDesc->hashAlgorithm)) {
+				status = LacBuffDesc_BufferListVerifyNull(
+				    pDstBuffer,
+				    &dstBufferLen,
+				    LAC_NO_ALIGNMENT_SHIFT);
+			} else {
+				status = LacBuffDesc_BufferListVerify(
+				    pDstBuffer,
+				    &dstBufferLen,
+				    LAC_NO_ALIGNMENT_SHIFT);
+			}
+			if (CPA_STATUS_SUCCESS != status) {
+				LAC_INVALID_PARAM_LOG(
+				    "Destination buffer invalid");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		} else {
+			/* check MetaData !NULL */
+			if (NULL == pDstBuffer->pPrivateMetaData) {
+				LAC_INVALID_PARAM_LOG(
+				    "Dest buffer MetaData cannot be NULL");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		}
+		/* Check that src Buffer and dst Buffer Lengths are equal */
+		if (srcBufferLen != dstBufferLen) {
+			LAC_INVALID_PARAM_LOG(
+			    "Source and Dest buffer lengths need to be equal ");
+			return CPA_STATUS_INVALID_PARAM;
+		}
+	}
+
+	/* check for partial packet support for the session operation */
+	if (CPA_CY_SYM_PACKET_TYPE_FULL != pOpData->packetType) {
+		if (CPA_FALSE == pSessionDesc->isPartialSupported) {
+			/* return out here to simplify cleanup */
+
LAC_INVALID_PARAM_LOG( + "Partial packets not supported for operation"); + return CPA_STATUS_INVALID_PARAM; + } else { + /* This function checks to see if the partial packet + * sequence + * is correct */ + if (CPA_STATUS_SUCCESS != + LacSym_PartialPacketStateCheck( + pOpData->packetType, + pSessionDesc->partialState)) { + LAC_INVALID_PARAM_LOG("Partial packet Type"); + return CPA_STATUS_INVALID_PARAM; + } + } + } + return CPA_STATUS_SUCCESS; +} + +/** @ingroup LacSym */ +CpaStatus +cpaCySymInitSession(const CpaInstanceHandle instanceHandle_in, + const CpaCySymCbFunc pSymCb, + const CpaCySymSessionSetupData *pSessionSetupData, + CpaCySymSessionCtx pSessionCtx) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaInstanceHandle instanceHandle = NULL; + sal_service_t *pService = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + + pService = (sal_service_t *)instanceHandle; + + /* check crypto service is running otherwise return an error */ + SAL_RUNNING_CHECK(pService); + + status = LacSym_InitSession(instanceHandle, + pSymCb, + pSessionSetupData, + CPA_FALSE, /* isDPSession */ + pSessionCtx); + + if (CPA_STATUS_SUCCESS == status) { + /* Increment the stats for a session registered successfully */ + LAC_SYM_STAT_INC(numSessionsInitialized, instanceHandle); + } else /* if there was an error */ + { + LAC_SYM_STAT_INC(numSessionErrors, instanceHandle); + } + + return status; +} + +CpaStatus +cpaCySymSessionInUse(CpaCySymSessionCtx pSessionCtx, CpaBoolean *pSessionInUse) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + lac_session_desc_t *pSessionDesc = NULL; + + LAC_CHECK_NULL_PARAM(pSessionInUse); + LAC_CHECK_INSTANCE_HANDLE(pSessionCtx); + + *pSessionInUse = CPA_FALSE; + + pSessionDesc = 
LAC_SYM_SESSION_DESC_FROM_CTX_GET(pSessionCtx); + + /* If there are pending requests */ + if (pSessionDesc->isDPSession) { + if (qatUtilsAtomicGet(&(pSessionDesc->u.pendingDpCbCount))) + *pSessionInUse = CPA_TRUE; + } else { + if (qatUtilsAtomicGet(&(pSessionDesc->u.pendingCbCount))) + *pSessionInUse = CPA_TRUE; + } + + return status; +} + +CpaStatus +LacSym_InitSession(const CpaInstanceHandle instanceHandle, + const CpaCySymCbFunc pSymCb, + const CpaCySymSessionSetupData *pSessionSetupData, + const CpaBoolean isDPSession, + CpaCySymSessionCtx pSessionCtx) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + lac_session_desc_t *pSessionDesc = NULL; + Cpa32U sessionCtxSizeInBytes = 0; + CpaPhysicalAddr physAddress = 0; + CpaPhysicalAddr physAddressAligned = 0; + sal_service_t *pService = NULL; + const CpaCySymCipherSetupData *pCipherSetupData = NULL; + const CpaCySymHashSetupData *pHashSetupData = NULL; + +/* Instance param checking done by calling function */ + + LAC_CHECK_NULL_PARAM(pSessionSetupData); + LAC_CHECK_NULL_PARAM(pSessionCtx); + status = LacSymSession_ParamCheck(instanceHandle, pSessionSetupData); + LAC_CHECK_STATUS(status); + + /* set the session priority for QAT AL*/ + if ((CPA_CY_PRIORITY_HIGH == pSessionSetupData->sessionPriority) || + (CPA_CY_PRIORITY_NORMAL == pSessionSetupData->sessionPriority)) { + // do nothing - clean up this code. 
use RANGE macro
+ } else {
+ LAC_INVALID_PARAM_LOG("sessionPriority");
+ return CPA_STATUS_INVALID_PARAM;
+ }
+
+
+ pCipherSetupData = &pSessionSetupData->cipherSetupData;
+ pHashSetupData = &pSessionSetupData->hashSetupData;
+
+ pService = (sal_service_t *)instanceHandle;
+
+ /* Re-align the session structure to 64 byte alignment */
+ physAddress =
+ LAC_OS_VIRT_TO_PHYS_EXTERNAL((*pService),
+ (Cpa8U *)pSessionCtx + sizeof(void *));
+
+ if (0 == physAddress) {
+ LAC_LOG_ERROR(
+ "Unable to get the physical address of the session");
+ return CPA_STATUS_FAIL;
+ }
+
+ physAddressAligned =
+ LAC_ALIGN_POW2_ROUNDUP(physAddress, LAC_64BYTE_ALIGNMENT);
+
+ pSessionDesc = (lac_session_desc_t *)
+ /* Move the session pointer by the physical offset
+ between aligned and unaligned memory */
+ ((Cpa8U *)pSessionCtx + sizeof(void *) +
+ (physAddressAligned - physAddress));
+
+ /* save the aligned pointer in the first bytes (size of unsigned long)
+ * of the session memory */
+ *((LAC_ARCH_UINT *)pSessionCtx) = (LAC_ARCH_UINT)pSessionDesc;
+
+ /* start off with a clean session */
+ /* Choose Session Context size */
+ getCtxSize(pSessionSetupData, &sessionCtxSizeInBytes);
+ switch (sessionCtxSizeInBytes) {
+ case LAC_SYM_SESSION_D1_SIZE:
+ memset(pSessionDesc, 0, sizeof(lac_session_desc_d1_t));
+ break;
+ case LAC_SYM_SESSION_D2_SIZE:
+ memset(pSessionDesc, 0, sizeof(lac_session_desc_d2_t));
+ break;
+ default:
+ memset(pSessionDesc, 0, sizeof(lac_session_desc_t));
+ break;
+ }
+
+ /* Setup content descriptor info structure
+ * assumption that content descriptor is the first field
+ * in the session descriptor */
+ pSessionDesc->contentDescInfo.pData = (Cpa8U *)pSessionDesc;
+ pSessionDesc->contentDescInfo.hardwareSetupBlockPhys =
+ physAddressAligned;
+
+ pSessionDesc->contentDescOptimisedInfo.pData =
+ ((Cpa8U *)pSessionDesc + LAC_SYM_QAT_CONTENT_DESC_MAX_SIZE);
+ pSessionDesc->contentDescOptimisedInfo.hardwareSetupBlockPhys =
+ (physAddressAligned +
LAC_SYM_QAT_CONTENT_DESC_MAX_SIZE);
+
+ /* Set the Common Session Information */
+ pSessionDesc->symOperation = pSessionSetupData->symOperation;
+
+ if (CPA_FALSE == isDPSession) {
+ /* For asynchronous - use the user supplied callback
+ * for synchronous - use the internal synchronous callback */
+ pSessionDesc->pSymCb = ((void *)NULL != (void *)pSymCb) ?
+ pSymCb :
+ LacSync_GenBufListVerifyCb;
+ }
+
+ pSessionDesc->isDPSession = isDPSession;
+ if ((CPA_CY_SYM_HASH_AES_GCM == pHashSetupData->hashAlgorithm) ||
+ (CPA_CY_SYM_HASH_AES_GMAC == pHashSetupData->hashAlgorithm) ||
+ (CPA_CY_SYM_HASH_AES_CCM == pHashSetupData->hashAlgorithm) ||
+ (CPA_CY_SYM_CIPHER_CHACHA == pCipherSetupData->cipherAlgorithm) ||
+ (CPA_CY_SYM_CIPHER_ARC4 == pCipherSetupData->cipherAlgorithm)) {
+ pSessionDesc->writeRingMsgFunc = LacDp_WriteRingMsgFull;
+ } else {
+ pSessionDesc->writeRingMsgFunc = LacDp_WriteRingMsgOpt;
+ }
+
+ if (CPA_STATUS_SUCCESS == status) {
+ /* Session set up via API call (not internal one) */
+ /* Services such as DRBG call the crypto api as part of their
+ * service,
+ * hence the need for the flag; it is needed to distinguish
+ * between
+ * an internal and external session.
+ */ + pSessionDesc->internalSession = CPA_FALSE; + + status = LacAlgChain_SessionInit(instanceHandle, + pSessionSetupData, + pSessionDesc); + } + return status; +} + +/** @ingroup LacSym */ +CpaStatus +cpaCySymRemoveSession(const CpaInstanceHandle instanceHandle_in, + CpaCySymSessionCtx pSessionCtx) +{ + lac_session_desc_t *pSessionDesc = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + CpaInstanceHandle instanceHandle = NULL; + Cpa64U numPendingRequests = 0; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + LAC_CHECK_NULL_PARAM(pSessionCtx); + + /* check crypto service is running otherwise return an error */ + SAL_RUNNING_CHECK(instanceHandle); + pSessionDesc = LAC_SYM_SESSION_DESC_FROM_CTX_GET(pSessionCtx); + + LAC_CHECK_NULL_PARAM(pSessionDesc); + + if (CPA_TRUE == pSessionDesc->isDPSession) { + /* + * Based on one instance, we can initialize multiple sessions. + * For example, we can initialize the session "X" and session + * "Y" with + * the same instance "A". If there is no operation pending for + * session + * "X", we can remove the session "X". + * + * Now we only check the @pSessionDesc->pendingDpCbCount, if it + * becomes + * zero, we can remove the session. + * + * Why? + * (1) We increase it in the cpaCySymDpEnqueueOp/ + * cpaCySymDpEnqueueOpBatch. + * (2) We decrease it in the LacSymCb_ProcessCallback. + * + * If the @pSessionDesc->pendingDpCbCount becomes zero, it means + * there is no operation pending for the session "X" anymore, so + * we can + * remove this session. Maybe there is still some requests left + * in the + * instance's ring (icp_adf_queueDataToSend() returns true), but + * the + * request does not belong to "X", it belongs to session "Y". 
+ */ + numPendingRequests = + qatUtilsAtomicGet(&(pSessionDesc->u.pendingDpCbCount)); + } else { + numPendingRequests = + qatUtilsAtomicGet(&(pSessionDesc->u.pendingCbCount)); + } + + /* If there are pending requests */ + if (0 != numPendingRequests) { + QAT_UTILS_LOG("There are %llu requests pending\n", + (unsigned long long)numPendingRequests); + status = CPA_STATUS_RETRY; + if (CPA_TRUE == pSessionDesc->isDPSession) { + /* Need to update tail if messages queue on tx hi ring + for + data plane api */ + icp_comms_trans_handle trans_handle = + ((sal_crypto_service_t *)instanceHandle) + ->trans_handle_sym_tx; + + if (CPA_TRUE == icp_adf_queueDataToSend(trans_handle)) { + /* process the remaining messages in the ring */ + QAT_UTILS_LOG("Submitting enqueued requests\n"); + /* + * SalQatMsg_updateQueueTail + */ + SalQatMsg_updateQueueTail(trans_handle); + return status; + } + } + } + if (CPA_STATUS_SUCCESS == status) { + if (CPA_STATUS_SUCCESS != + LAC_SPINLOCK_DESTROY(&pSessionDesc->requestQueueLock)) { + LAC_LOG_ERROR("Failed to destroy request queue lock"); + } + if (CPA_FALSE == pSessionDesc->isDPSession) { + LAC_SYM_STAT_INC(numSessionsRemoved, instanceHandle); + } + } else if (CPA_FALSE == pSessionDesc->isDPSession) { + LAC_SYM_STAT_INC(numSessionErrors, instanceHandle); + } + return status; +} + +/** @ingroup LacSym */ +static CpaStatus +LacSym_Perform(const CpaInstanceHandle instanceHandle, + void *callbackTag, + const CpaCySymOpData *pOpData, + const CpaBufferList *pSrcBuffer, + CpaBufferList *pDstBuffer, + CpaBoolean *pVerifyResult, + CpaBoolean isAsyncMode) +{ + lac_session_desc_t *pSessionDesc = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + /* check crypto service is running otherwise return an error */ + SAL_RUNNING_CHECK(instanceHandle); + LAC_CHECK_NULL_PARAM(pOpData); + 
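The removal path above refuses to tear a session down while its pending-callback counter is nonzero: the count is raised on enqueue, dropped in the completion callback, and `cpaCySymRemoveSession()` returns `CPA_STATUS_RETRY` until it reaches zero. A minimal standalone sketch of that gating pattern using C11 atomics (all names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical stand-ins for CPA_STATUS_SUCCESS / CPA_STATUS_RETRY */
enum { STATUS_SUCCESS = 0, STATUS_RETRY = -1 };

struct session {
	atomic_uint_fast64_t pending_cb_count; /* like pendingDpCbCount */
};

/* Enqueue side: account for the request before submitting it */
static void session_enqueue(struct session *s)
{
	atomic_fetch_add(&s->pending_cb_count, 1);
}

/* Completion side: drop the count once the callback has run */
static void session_complete(struct session *s)
{
	atomic_fetch_sub(&s->pending_cb_count, 1);
}

/* Removal is refused while any callback is still outstanding */
static int session_remove(struct session *s)
{
	if (atomic_load(&s->pending_cb_count) != 0)
		return STATUS_RETRY;
	return STATUS_SUCCESS;
}
```

As in the driver, a caller that sees the retry status simply tries the removal again later, after outstanding completions have drained.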
LAC_CHECK_NULL_PARAM(pOpData->sessionCtx); + LAC_CHECK_NULL_PARAM(pSrcBuffer); + LAC_CHECK_NULL_PARAM(pDstBuffer); + + pSessionDesc = LAC_SYM_SESSION_DESC_FROM_CTX_GET(pOpData->sessionCtx); + LAC_CHECK_NULL_PARAM(pSessionDesc); + + /*check whether Payload size is zero for CHACHA-POLY*/ + if ((CPA_CY_SYM_CIPHER_CHACHA == pSessionDesc->cipherAlgorithm) && + (CPA_CY_SYM_HASH_POLY == pSessionDesc->hashAlgorithm) && + (CPA_CY_SYM_OP_ALGORITHM_CHAINING == pSessionDesc->symOperation)) { + if (!pOpData->messageLenToCipherInBytes) { + LAC_INVALID_PARAM_LOG( + "Invalid messageLenToCipherInBytes for CHACHA-POLY"); + return CPA_STATUS_INVALID_PARAM; + } + } + + + /* If synchronous Operation - Callback function stored in the session + * descriptor so a flag is set in the perform to indicate that + * the perform is being re-called for the synchronous operation */ + if ((LacSync_GenBufListVerifyCb == pSessionDesc->pSymCb) && + isAsyncMode == CPA_TRUE) { + CpaBoolean opResult = CPA_FALSE; + lac_sync_op_data_t *pSyncCallbackData = NULL; + + status = LacSync_CreateSyncCookie(&pSyncCallbackData); + + if (CPA_STATUS_SUCCESS == status) { + status = LacSym_Perform(instanceHandle, + pSyncCallbackData, + pOpData, + pSrcBuffer, + pDstBuffer, + pVerifyResult, + CPA_FALSE); + } else { + /* Failure allocating sync cookie */ + LAC_SYM_STAT_INC(numSymOpRequestErrors, instanceHandle); + return status; + } + + if (CPA_STATUS_SUCCESS == status) { + CpaStatus syncStatus = CPA_STATUS_SUCCESS; + syncStatus = LacSync_WaitForCallback( + pSyncCallbackData, + LAC_SYM_SYNC_CALLBACK_TIMEOUT, + &status, + &opResult); + /* If callback doesn't come back */ + if (CPA_STATUS_SUCCESS != syncStatus) { + LAC_SYM_STAT_INC(numSymOpCompletedErrors, + instanceHandle); + LAC_LOG_ERROR("Callback timed out"); + status = syncStatus; + } + } else { + /* As the Request was not sent the Callback will never + * be called, so need to indicate that we're finished + * with cookie so it can be destroyed. 
*/
+ LacSync_SetSyncCookieComplete(pSyncCallbackData);
+ }
+
+ if (CPA_STATUS_SUCCESS == status) {
+ if (NULL != pVerifyResult) {
+ *pVerifyResult = opResult;
+ }
+ }
+
+ LacSync_DestroySyncCookie(&pSyncCallbackData);
+ return status;
+ }
+
+ status =
+ LacSymPerform_BufferParamCheck((const CpaBufferList *)pSrcBuffer,
+ pDstBuffer,
+ pSessionDesc,
+ pOpData);
+ LAC_CHECK_STATUS(status);
+
+ if ((!pSessionDesc->digestIsAppended) &&
+ (CPA_CY_SYM_OP_ALGORITHM_CHAINING == pSessionDesc->symOperation)) {
+ /* Check that pDigestResult is not NULL */
+ LAC_CHECK_NULL_PARAM(pOpData->pDigestResult);
+ }
+
+ status = LacAlgChain_Perform(instanceHandle,
+ pSessionDesc,
+ callbackTag,
+ pOpData,
+ pSrcBuffer,
+ pDstBuffer,
+ pVerifyResult);
+
+ if (CPA_STATUS_SUCCESS == status) {
+ /* check for partial packet support for the session operation */
+ if (CPA_CY_SYM_PACKET_TYPE_FULL != pOpData->packetType) {
+ LacSym_PartialPacketStateUpdate(
+ pOpData->packetType, &pSessionDesc->partialState);
+ }
+ /* increment #requests stat */
+ LAC_SYM_STAT_INC(numSymOpRequests, instanceHandle);
+ }
+ /* Retry also results in the errors stat being incremented */
+ else {
+ /* increment #errors stat */
+ LAC_SYM_STAT_INC(numSymOpRequestErrors, instanceHandle);
+ }
+ return status;
+}
+
+/** @ingroup LacSym */
+CpaStatus
+cpaCySymPerformOp(const CpaInstanceHandle instanceHandle_in,
+ void *callbackTag,
+ const CpaCySymOpData *pOpData,
+ const CpaBufferList *pSrcBuffer,
+ CpaBufferList *pDstBuffer,
+ CpaBoolean *pVerifyResult)
+{
+ CpaInstanceHandle instanceHandle = NULL;
+
+ if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) {
+ instanceHandle =
+ Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM);
+ } else {
+ instanceHandle = instanceHandle_in;
+ }
+
+ return LacSym_Perform(instanceHandle,
+ callbackTag,
+ pOpData,
+ pSrcBuffer,
+ pDstBuffer,
+ pVerifyResult,
+ CPA_TRUE);
+}
+
+/** @ingroup LacSym */
+CpaStatus
+cpaCySymQueryStats(const CpaInstanceHandle instanceHandle_in,
+ struct
_CpaCySymStats *pSymStats) +{ + + CpaInstanceHandle instanceHandle = NULL; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + LAC_CHECK_NULL_PARAM(pSymStats); + + /* check if crypto service is running + * otherwise return an error */ + SAL_RUNNING_CHECK(instanceHandle); + + /* copy the fields from the internal structure into the api defined + * structure */ + LacSym_Stats32CopyGet(instanceHandle, pSymStats); + return CPA_STATUS_SUCCESS; +} + +/** @ingroup LacSym */ +CpaStatus +cpaCySymQueryStats64(const CpaInstanceHandle instanceHandle_in, + CpaCySymStats64 *pSymStats) +{ + + CpaInstanceHandle instanceHandle = NULL; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + LAC_CHECK_NULL_PARAM(pSymStats); + + /* check if crypto service is running + * otherwise return an error */ + SAL_RUNNING_CHECK(instanceHandle); + + /* copy the fields from the internal structure into the api defined + * structure */ + LacSym_Stats64CopyGet(instanceHandle, pSymStats); + + return CPA_STATUS_SUCCESS; +} + +/** @ingroup LacSym */ +CpaStatus +cpaCySymSessionCtxGetSize(const CpaInstanceHandle instanceHandle_in, + const CpaCySymSessionSetupData *pSessionSetupData, + Cpa32U *pSessionCtxSizeInBytes) +{ + CpaInstanceHandle instanceHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + 
LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + LAC_CHECK_NULL_PARAM(pSessionSetupData); + LAC_CHECK_NULL_PARAM(pSessionCtxSizeInBytes); + + /* check crypto service is running otherwise return an error */ + SAL_RUNNING_CHECK(instanceHandle); + *pSessionCtxSizeInBytes = LAC_SYM_SESSION_SIZE; + + return CPA_STATUS_SUCCESS; +} + +/** @ingroup LacSym */ +CpaStatus +cpaCySymSessionCtxGetDynamicSize( + const CpaInstanceHandle instanceHandle_in, + const CpaCySymSessionSetupData *pSessionSetupData, + Cpa32U *pSessionCtxSizeInBytes) +{ + CpaInstanceHandle instanceHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + LAC_CHECK_NULL_PARAM(pSessionSetupData); + LAC_CHECK_NULL_PARAM(pSessionCtxSizeInBytes); + + /* check crypto service is running otherwise return an error */ + SAL_RUNNING_CHECK(instanceHandle); + /* Choose Session Context size */ + getCtxSize(pSessionSetupData, pSessionCtxSizeInBytes); + + + return CPA_STATUS_SUCCESS; +} + +void +getCtxSize(const CpaCySymSessionSetupData *pSessionSetupData, + Cpa32U *pSessionCtxSizeInBytes) +{ + /* using lac_session_desc_d1_t */ + if ((pSessionSetupData->cipherSetupData.cipherAlgorithm != + CPA_CY_SYM_CIPHER_ARC4) && + (pSessionSetupData->cipherSetupData.cipherAlgorithm != + CPA_CY_SYM_CIPHER_SNOW3G_UEA2) && + (pSessionSetupData->hashSetupData.hashAlgorithm != + CPA_CY_SYM_HASH_SNOW3G_UIA2) && + (pSessionSetupData->cipherSetupData.cipherAlgorithm != + CPA_CY_SYM_CIPHER_AES_CCM) && + (pSessionSetupData->cipherSetupData.cipherAlgorithm != + CPA_CY_SYM_CIPHER_AES_GCM) && + (pSessionSetupData->hashSetupData.hashMode != + 
CPA_CY_SYM_HASH_MODE_AUTH) && + (pSessionSetupData->hashSetupData.hashMode != + CPA_CY_SYM_HASH_MODE_NESTED) && + (pSessionSetupData->partialsNotRequired == CPA_TRUE)) { + *pSessionCtxSizeInBytes = LAC_SYM_SESSION_D1_SIZE; + } + /* using lac_session_desc_d2_t */ + else if (((pSessionSetupData->cipherSetupData.cipherAlgorithm == + CPA_CY_SYM_CIPHER_AES_CCM) || + (pSessionSetupData->cipherSetupData.cipherAlgorithm == + CPA_CY_SYM_CIPHER_AES_GCM)) && + (pSessionSetupData->partialsNotRequired == CPA_TRUE)) { + *pSessionCtxSizeInBytes = LAC_SYM_SESSION_D2_SIZE; + } + /* using lac_session_desc_t */ + else { + *pSessionCtxSizeInBytes = LAC_SYM_SESSION_SIZE; + } +} + +/** + ****************************************************************************** + * @ingroup LacSym + *****************************************************************************/ +CpaStatus +cpaCyBufferListGetMetaSize(const CpaInstanceHandle instanceHandle_in, + Cpa32U numBuffers, + Cpa32U *pSizeInBytes) +{ + + CpaInstanceHandle instanceHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + instanceHandle = instanceHandle_in; + } + LAC_CHECK_INSTANCE_HANDLE(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + LAC_CHECK_NULL_PARAM(pSizeInBytes); + + /* In the case of zero buffers we still need to allocate one + * descriptor to pass to the firmware */ + if (0 == numBuffers) { + numBuffers = 1; + } + + /* Note: icp_buffer_list_desc_t is 8 bytes in size and + * icp_flat_buffer_desc_t is 16 bytes in size. 
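The metadata sizing rule noted above (an 8-byte list descriptor, one 16-byte flat-buffer descriptor per buffer, plus alignment slack, with zero buffers promoted to one descriptor for the firmware) can be sanity-checked with a tiny standalone calculation. The constants and function name below are illustrative stand-ins, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Sizes quoted in the driver comment: icp_buffer_list_desc_t is 8 bytes,
 * icp_flat_buffer_desc_t is 16 bytes.  The alignment slack value is an
 * assumption for the sketch. */
#define LIST_DESC_SIZE 8u
#define FLAT_DESC_SIZE 16u
#define DESC_ALIGNMENT 64u

static uint32_t buffer_list_meta_size(uint32_t num_buffers)
{
	/* Even a zero-buffer list needs one descriptor for the firmware */
	if (num_buffers == 0)
		num_buffers = 1;
	return LIST_DESC_SIZE + FLAT_DESC_SIZE * num_buffers + DESC_ALIGNMENT;
}
```

Because the list descriptor size is a multiple of the flat descriptor's alignment requirement, aligning the list descriptor also aligns every flat descriptor that follows it, which is the point the driver comment is making.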
Therefore if + * icp_buffer_list_desc_t is aligned + * so will each icp_flat_buffer_desc_t structure */ + + *pSizeInBytes = sizeof(icp_buffer_list_desc_t) + + (sizeof(icp_flat_buffer_desc_t) * numBuffers) + + ICP_DESCRIPTOR_ALIGNMENT_BYTES; + + + return CPA_STATUS_SUCCESS; +} Index: sys/dev/qat/qat_api/common/crypto/sym/lac_sym_auth_enc.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/lac_sym_auth_enc.c @@ -0,0 +1,197 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_auth_enc.c + * + * @ingroup LacAuthEnc + * + * @description + * Authenticated encryption specific functionality. + * For CCM related code NIST SP 800-38C is followed. + * For GCM related code NIST SP 800-38D is followed. + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" + +#include "icp_accel_devices.h" +#include "icp_adf_init.h" +#include "icp_adf_transport.h" +#include "icp_adf_debug.h" +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "lac_log.h" +#include "lac_common.h" +#include "lac_session.h" +#include "lac_sym_auth_enc.h" + +/* These defines describe position of the flag fields + * in B0 block for CCM algorithm*/ +#define LAC_ALG_CHAIN_CCM_B0_FLAGS_ADATA_SHIFT 6 +#define LAC_ALG_CHAIN_CCM_B0_FLAGS_T_SHIFT 3 + +/* This macro builds flags field to be put in B0 block for CCM algorithm */ +#define 
LAC_ALG_CHAIN_CCM_BUILD_B0_FLAGS(Adata, t, q) \ + ((((Adata) > 0 ? 1 : 0) << LAC_ALG_CHAIN_CCM_B0_FLAGS_ADATA_SHIFT) | \ + ((((t)-2) >> 1) << LAC_ALG_CHAIN_CCM_B0_FLAGS_T_SHIFT) | ((q)-1)) + +/** + * @ingroup LacAuthEnc + */ +CpaStatus +LacSymAlgChain_CheckCCMData(Cpa8U *pAdditionalAuthData, + Cpa8U *pIv, + Cpa32U messageLenToCipherInBytes, + Cpa32U ivLenInBytes) +{ + Cpa8U q = 0; + + LAC_CHECK_NULL_PARAM(pIv); + LAC_CHECK_NULL_PARAM(pAdditionalAuthData); + + /* check if n is within permitted range */ + if (ivLenInBytes < LAC_ALG_CHAIN_CCM_N_LEN_IN_BYTES_MIN || + ivLenInBytes > LAC_ALG_CHAIN_CCM_N_LEN_IN_BYTES_MAX) { + LAC_INVALID_PARAM_LOG2("ivLenInBytes for CCM algorithm " + "must be between %d and %d inclusive", + LAC_ALG_CHAIN_CCM_N_LEN_IN_BYTES_MIN, + LAC_ALG_CHAIN_CCM_N_LEN_IN_BYTES_MAX); + return CPA_STATUS_INVALID_PARAM; + } + + q = LAC_ALG_CHAIN_CCM_NQ_CONST - ivLenInBytes; + + /* Check if q is big enough to hold actual length of message to cipher + * if q = 8 -> maxlen = 2^64 always good as + * messageLenToCipherInBytes is 32 bits + * if q = 7 -> maxlen = 2^56 always good + * if q = 6 -> maxlen = 2^48 always good + * if q = 5 -> maxlen = 2^40 always good + * if q = 4 -> maxlen = 2^32 always good. 
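The bound walked through in the comment above (with an n-byte nonce, q = 15 - n bytes remain to encode the message length, so the message must be shorter than 2^(8q); any q >= 4 trivially covers a 32-bit length) can be sketched as a standalone check. The constant 15 mirrors LAC_ALG_CHAIN_CCM_NQ_CONST; the function name is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the CCM length check (NIST SP 800-38C): returns nonzero when
 * msg_len fits in the q = 15 - iv_len bytes available for the length field.
 * Assumes iv_len has already been range-checked (7..13). */
static int ccm_len_fits(uint32_t msg_len, uint32_t iv_len)
{
	uint32_t q = 15u - iv_len;

	/* q >= 4 gives a maximum of at least 2^32, so any 32-bit length fits */
	if (q >= 4u)
		return 1;
	return msg_len < (1u << (q * 8u));
}
```

This matches the driver's guard, which only performs the shift comparison when q is smaller than `sizeof(Cpa32U)`, avoiding an undefined 32-bit shift for larger q.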
+ */ + if ((messageLenToCipherInBytes >= (1 << (q * LAC_NUM_BITS_IN_BYTE))) && + (q < sizeof(Cpa32U))) { + LAC_INVALID_PARAM_LOG( + "messageLenToCipherInBytes too long for the given" + " ivLenInBytes for CCM algorithm\n"); + return CPA_STATUS_INVALID_PARAM; + } + + return CPA_STATUS_SUCCESS; +} + + +/** + * @ingroup LacAuthEnc + */ +void +LacSymAlgChain_PrepareCCMData(lac_session_desc_t *pSessionDesc, + Cpa8U *pAdditionalAuthData, + Cpa8U *pIv, + Cpa32U messageLenToCipherInBytes, + Cpa32U ivLenInBytes) +{ + Cpa8U n = + ivLenInBytes; /* assumes ivLenInBytes has been param checked */ + Cpa8U q = LAC_ALG_CHAIN_CCM_NQ_CONST - n; + Cpa8U lenOfEncodedLen = 0; + Cpa16U lenAEncoded = 0; + Cpa32U bitStrQ = 0; + + /* populate Ctr0 block - stored in pIv */ + pIv[0] = (q - 1); + /* bytes 1 to n are already set with nonce by the user */ + /* set last q bytes with 0 */ + memset(pIv + n + 1, 0, q); + + /* Encode the length of associated data 'a'. As the API limits the + * length + * of an array pointed by pAdditionalAuthData to be 240 bytes max, the + * maximum length of 'a' might be 240 - 16 - 2 = 222. Hence the encoding + * below is simplified. 
*/ + if (pSessionDesc->aadLenInBytes > 0) { + lenOfEncodedLen = sizeof(Cpa16U); + lenAEncoded = QAT_UTILS_HOST_TO_NW_16( + (Cpa16U)pSessionDesc->aadLenInBytes); + } + + /* populate B0 block */ + /* first, set the flags field */ + pAdditionalAuthData[0] = + LAC_ALG_CHAIN_CCM_BUILD_B0_FLAGS(lenOfEncodedLen, + pSessionDesc->hashResultSize, + q); + /* bytes 1 to n are already set with nonce by the user*/ + /* put Q in bytes 16-q...15 */ + bitStrQ = QAT_UTILS_HOST_TO_NW_32(messageLenToCipherInBytes); + + if (q > sizeof(bitStrQ)) { + memset(pAdditionalAuthData + n + 1, 0, q); + memcpy(pAdditionalAuthData + n + 1 + (q - sizeof(bitStrQ)), + (Cpa8U *)&bitStrQ, + sizeof(bitStrQ)); + } else { + memcpy(pAdditionalAuthData + n + 1, + ((Cpa8U *)&bitStrQ) + (sizeof(bitStrQ) - q), + q); + } + + /* populate B1-Bn blocks */ + if (lenAEncoded > 0) { + *(Cpa16U + *)(&pAdditionalAuthData[1 + LAC_ALG_CHAIN_CCM_NQ_CONST]) = + lenAEncoded; + /* Next bytes are already set by the user with + * the associated data 'a' */ + + /* Check if padding is required */ + if (((pSessionDesc->aadLenInBytes + lenOfEncodedLen) % + LAC_HASH_AES_CCM_BLOCK_SIZE) != 0) { + Cpa8U paddingLen = 0; + Cpa8U paddingIndex = 0; + + paddingLen = LAC_HASH_AES_CCM_BLOCK_SIZE - + ((pSessionDesc->aadLenInBytes + lenOfEncodedLen) % + LAC_HASH_AES_CCM_BLOCK_SIZE); + + paddingIndex = 1 + LAC_ALG_CHAIN_CCM_NQ_CONST; + paddingIndex += + lenOfEncodedLen + pSessionDesc->aadLenInBytes; + + memset(&pAdditionalAuthData[paddingIndex], + 0, + paddingLen); + } + } +} + +/** + * @ingroup LacAuthEnc + */ +void +LacSymAlgChain_PrepareGCMData(lac_session_desc_t *pSessionDesc, + Cpa8U *pAdditionalAuthData) +{ + Cpa8U paddingLen = 0; + + if ((pSessionDesc->aadLenInBytes % LAC_HASH_AES_GCM_BLOCK_SIZE) != 0) { + paddingLen = LAC_HASH_AES_GCM_BLOCK_SIZE - + (pSessionDesc->aadLenInBytes % LAC_HASH_AES_GCM_BLOCK_SIZE); + + memset(&pAdditionalAuthData[pSessionDesc->aadLenInBytes], + 0, + paddingLen); + } +} Index: 
sys/dev/qat/qat_api/common/crypto/sym/lac_sym_cb.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/lac_sym_cb.c @@ -0,0 +1,545 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_cb.c Callback handler functions for symmetric components + * + * @ingroup LacSym + * + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" + +#include "icp_accel_devices.h" +#include "icp_adf_init.h" +#include "icp_qat_fw_la.h" +#include "icp_adf_transport.h" +#include "icp_adf_debug.h" + +#include "lac_sym.h" +#include "lac_sym_cipher.h" +#include "lac_common.h" +#include "lac_list.h" +#include "lac_sal_types_crypto.h" +#include "lac_sal.h" +#include "lac_sal_ctrl.h" +#include "lac_session.h" +#include "lac_sym_stats.h" +#include "lac_log.h" +#include "lac_sym_cb.h" +#include "lac_sym_hash.h" +#include "lac_sym_qat_cipher.h" +#include "lac_sym_qat.h" + +#define DEQUEUE_MSGPUT_MAX_RETRIES 10000 + +/* +******************************************************************************* +* Define static function definitions +******************************************************************************* +*/ + +/** + ***************************************************************************** + * @ingroup LacSymCb + * Function to clean computed data. + * + * @description + * This function cleans GCM or CCM data in the case of a failure. 
+ * + * @param[in] pSessionDesc pointer to the session descriptor + * @param[out] pBufferList pointer to the bufferlist to clean + * @param[in] pOpData pointer to operation data + * @param[in] isCCM is it a CCM operation boolean + * + * @return None + *****************************************************************************/ +static void +LacSymCb_CleanUserData(const lac_session_desc_t *pSessionDesc, + CpaBufferList *pBufferList, + const CpaCySymOpData *pOpData, + CpaBoolean isCCM) +{ + Cpa8U authTagLen = 0; + + /* Retrieve authTagLen */ + authTagLen = pSessionDesc->hashResultSize; + + /* Cleaning */ + if (isCCM) { + /* for CCM the digest is inside the buffer list */ + LacBuffDesc_BufferListZeroFromOffset( + pBufferList, + pOpData->cryptoStartSrcOffsetInBytes, + pOpData->messageLenToCipherInBytes + authTagLen); + } else { + /* clean buffer list */ + LacBuffDesc_BufferListZeroFromOffset( + pBufferList, + pOpData->cryptoStartSrcOffsetInBytes, + pOpData->messageLenToCipherInBytes); + } + if ((CPA_TRUE != pSessionDesc->digestIsAppended) && + (NULL != pOpData->pDigestResult)) { + /* clean digest */ + memset(pOpData->pDigestResult, 0, authTagLen); + } +} + +/** + ***************************************************************************** + * @ingroup LacSymCb + * Definition of callback function for processing symmetric responses + * + * @description + * This callback is invoked to process symmetric response messages from + * the QAT. It will extract some details from the message and invoke + * the user's callback to complete a symmetric operation. + * + * @param[in] pCookie Pointer to cookie associated with this request + * @param[in] qatRespStatusOkFlag Boolean indicating ok/fail status from QAT + * @param[in] status Status variable indicating an error occurred + * in sending the message (e.g. 
when dequeueing) + * @param[in] pSessionDesc Session descriptor + * + * @return None + *****************************************************************************/ +static void +LacSymCb_ProcessCallbackInternal(lac_sym_bulk_cookie_t *pCookie, + CpaBoolean qatRespStatusOkFlag, + CpaStatus status, + lac_session_desc_t *pSessionDesc) +{ + CpaCySymCbFunc pSymCb = NULL; + void *pCallbackTag = NULL; + CpaCySymOpData *pOpData = NULL; + CpaBufferList *pDstBuffer = NULL; + CpaCySymOp operationType = CPA_CY_SYM_OP_NONE; + CpaStatus dequeueStatus = CPA_STATUS_SUCCESS; + + CpaInstanceHandle instanceHandle = CPA_INSTANCE_HANDLE_SINGLE; + /* NOTE: cookie pointer validated in previous function */ + instanceHandle = pCookie->instanceHandle; + + pOpData = (CpaCySymOpData *)LAC_CONST_PTR_CAST(pCookie->pOpData); + operationType = pSessionDesc->symOperation; + + /* Set the destination pointer to the one supplied in the cookie. */ + pDstBuffer = pCookie->pDstBuffer; + + /* For a digest verify operation - for full packet and final partial + * only, perform a comparison with the digest generated and with the one + * supplied in the packet. */ + + if (((pSessionDesc->isSinglePass && + (CPA_CY_SYM_CIPHER_AES_GCM == pSessionDesc->cipherAlgorithm)) || + (CPA_CY_SYM_OP_CIPHER != operationType)) && + (CPA_TRUE == pSessionDesc->digestVerify) && + ((CPA_CY_SYM_PACKET_TYPE_FULL == pOpData->packetType) || + (CPA_CY_SYM_PACKET_TYPE_LAST_PARTIAL == pOpData->packetType))) { + if (CPA_FALSE == qatRespStatusOkFlag) { + LAC_SYM_STAT_INC(numSymOpVerifyFailures, + instanceHandle); + + /* The comparison has failed at this point (status is + * fail), + * need to clean any sensitive calculated data up to + * this point. + * The data calculated is no longer useful to the end + * result and + * does not need to be returned to the user so setting + * buffers to + * zero. 
+ */
+ if (pSessionDesc->cipherAlgorithm ==
+ CPA_CY_SYM_CIPHER_AES_CCM) {
+ LacSymCb_CleanUserData(pSessionDesc,
+ pDstBuffer,
+ pOpData,
+ CPA_TRUE);
+ } else if (pSessionDesc->cipherAlgorithm ==
+ CPA_CY_SYM_CIPHER_AES_GCM) {
+ LacSymCb_CleanUserData(pSessionDesc,
+ pDstBuffer,
+ pOpData,
+ CPA_FALSE);
+ }
+ }
+ } else {
+ /* Most commands have no point of failure and always return
+ * success. This is the default response from the QAT.
+ * If status is already set to an error value, don't overwrite
+ * it
+ */
+ if ((CPA_STATUS_SUCCESS == status) &&
+ (CPA_TRUE != qatRespStatusOkFlag)) {
+ LAC_LOG_ERROR("Response status value not as expected");
+ status = CPA_STATUS_FAIL;
+ }
+ }
+
+ pSymCb = pSessionDesc->pSymCb;
+ pCallbackTag = pCookie->pCallbackTag;
+
+ /* State returned to the client for intermediate partial packets
+ * for hash only and cipher only partial packets. Cipher update
+ * allows next partial through */
+ if (CPA_CY_SYM_PACKET_TYPE_PARTIAL == pOpData->packetType) {
+ if ((CPA_CY_SYM_OP_CIPHER == operationType) ||
+ (CPA_CY_SYM_OP_ALGORITHM_CHAINING == operationType)) {
+ if (CPA_TRUE == pCookie->updateUserIvOnRecieve) {
+ /* Update the user's IV buffer
+ * Very important to do this BEFORE dequeuing
+ * subsequent partial requests, as the state
+ * buffer
+ * may get overwritten
+ */
+ memcpy(pCookie->pOpData->pIv,
+ pSessionDesc->cipherPartialOpState,
+ pCookie->pOpData->ivLenInBytes);
+ }
+ if (CPA_TRUE == pCookie->updateKeySizeOnRecieve &&
+ LAC_CIPHER_IS_XTS_MODE(
+ pSessionDesc->cipherAlgorithm)) {
+ LacSymQat_CipherXTSModeUpdateKeyLen(
+ pSessionDesc,
+ pSessionDesc->cipherKeyLenInBytes / 2);
+ }
+ }
+ } else if (CPA_CY_SYM_PACKET_TYPE_LAST_PARTIAL == pOpData->packetType) {
+ if ((CPA_CY_SYM_OP_CIPHER == operationType) ||
+ (CPA_CY_SYM_OP_ALGORITHM_CHAINING == operationType)) {
+ if (CPA_TRUE == LAC_CIPHER_IS_XTS_MODE(
+ pSessionDesc->cipherAlgorithm)) {
+ /*
+ * For XTS mode, we replace the updated key with
+ * the original key - for
subsequent partial + * requests + * + */ + LacSymQat_CipherXTSModeUpdateKeyLen( + pSessionDesc, + pSessionDesc->cipherKeyLenInBytes); + } + } + } + + if ((CPA_CY_SYM_PACKET_TYPE_FULL != pOpData->packetType) && + (qatRespStatusOkFlag != CPA_FALSE)) { + /* There may be requests blocked pending the completion of this + * operation + */ + + dequeueStatus = LacSymCb_PendingReqsDequeue(pSessionDesc); + if (CPA_STATUS_SUCCESS != dequeueStatus) { + LAC_SYM_STAT_INC(numSymOpCompletedErrors, + instanceHandle); + qatRespStatusOkFlag = CPA_FALSE; + if (CPA_STATUS_SUCCESS == status) { + status = dequeueStatus; + } + } + } + + if (CPA_STATUS_SUCCESS == status) { + /* update stats */ + if (pSessionDesc->internalSession == CPA_FALSE) { + LAC_SYM_STAT_INC(numSymOpCompleted, instanceHandle); + if (CPA_STATUS_SUCCESS != status) { + LAC_SYM_STAT_INC(numSymOpCompletedErrors, + instanceHandle); + } + } + } + + qatUtilsAtomicDec(&(pSessionDesc->u.pendingCbCount)); + + /* deallocate the memory for the internal callback cookie */ + Lac_MemPoolEntryFree(pCookie); + + /* user callback function is the last thing to be called */ + pSymCb(pCallbackTag, + status, + operationType, + pOpData, + pDstBuffer, + qatRespStatusOkFlag); +} + +/** + ****************************************************************************** + * @ingroup LacSymCb + * Definition of callback function for processing symmetric Data Plane + * responses + * + * @description + * This callback checks the status, decrements the number of operations + * pending and calls the user callback + * + * @param[in/out] pResponse pointer to the response structure + * @param[in] qatRespStatusOkFlag status + * @param[in] pSessionDesc pointer to the session descriptor + * + * @return None + ******************************************************************************/ +static void +LacSymCb_ProcessDpCallback(CpaCySymDpOpData *pResponse, + CpaBoolean qatRespStatusOkFlag, + lac_session_desc_t *pSessionDesc) +{ + CpaStatus status = 
CPA_STATUS_SUCCESS;
+
+	/* For CCM and GCM, if qatRespStatusOkFlag is false, the data has to be
+	 * cleaned as stated in RFC 3610; in DP mode, it is the user's
+	 * responsibility to do so */
+
+	if (CPA_FALSE == pSessionDesc->isSinglePass) {
+		if ((CPA_CY_SYM_OP_CIPHER == pSessionDesc->symOperation) ||
+		    (CPA_FALSE == pSessionDesc->digestVerify)) {
+			/* If not doing digest compare and qatRespStatusOkFlag
+			 * != CPA_TRUE then there is something very wrong */
+			if (CPA_FALSE == qatRespStatusOkFlag) {
+				LAC_LOG_ERROR(
+				    "Response status value not as expected");
+				status = CPA_STATUS_FAIL;
+			}
+		}
+	}
+
+	((sal_crypto_service_t *)pResponse->instanceHandle)
+	    ->pSymDpCb(pResponse, status, qatRespStatusOkFlag);
+	/*
+	 * Decrement the number of pending callbacks.
+	 *
+	 * If the @pendingDpCbCount becomes zero, we may remove the session;
+	 * see cpaCySymRemoveSession() for more information.
+	 *
+	 * But there is a field in the @pResponse that stores the session,
+	 * the "sessionCtx". In other words, the above @->pSymDpCb() callback
+	 * may use the session again. If we decreased the @pendingDpCbCount
+	 * before the @->pSymDpCb(), there would be a _risk_ that the
+	 * @->pSymDpCb() references a deleted session.
+	 *
+	 * So, to avoid that risk, we decrement the @pendingDpCbCount after
+	 * the @->pSymDpCb() callback.
+	 */
+	qatUtilsAtomicDec(&pSessionDesc->u.pendingDpCbCount);
+}
+
+/**
+ ******************************************************************************
+ * @ingroup LacSymCb
+ * Definition of callback function for processing symmetric responses
+ *
+ * @description
+ * This callback, which is registered with the common symmetric response
+ * message handler, is invoked to process symmetric response messages from
+ * the QAT. It will extract the response status from the cmnRespFlags set
+ * by the QAT, and then will pass it to @ref
+ * LacSymCb_ProcessCallbackInternal to complete the response processing.
+ * + * @param[in] lacCmdId ID of the symmetric QAT command of the request + * message + * @param[in] pOpaqueData pointer to opaque data in the request message + * @param[in] cmnRespFlags Flags set by QAT to indicate response status + * + * @return None + ******************************************************************************/ +static void +LacSymCb_ProcessCallback(icp_qat_fw_la_cmd_id_t lacCmdId, + void *pOpaqueData, + icp_qat_fw_comn_flags cmnRespFlags) +{ + CpaCySymDpOpData *pDpOpData = (CpaCySymDpOpData *)pOpaqueData; + lac_session_desc_t *pSessionDesc = + LAC_SYM_SESSION_DESC_FROM_CTX_GET(pDpOpData->sessionCtx); + CpaBoolean qatRespStatusOkFlag = + (CpaBoolean)(ICP_QAT_FW_COMN_STATUS_FLAG_OK == + ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(cmnRespFlags)); + + if (CPA_TRUE == pSessionDesc->isDPSession) { + /* DP session */ + LacSymCb_ProcessDpCallback(pDpOpData, + qatRespStatusOkFlag, + pSessionDesc); + } else { + /* Trad session */ + LacSymCb_ProcessCallbackInternal((lac_sym_bulk_cookie_t *) + pOpaqueData, + qatRespStatusOkFlag, + CPA_STATUS_SUCCESS, + pSessionDesc); + } +} + +/* +******************************************************************************* +* Define public/global function definitions +******************************************************************************* +*/ + +/** + * @ingroup LacSymCb + * + * @return CpaStatus + * value returned will be the result of icp_adf_transPutMsg + */ +CpaStatus +LacSymCb_PendingReqsDequeue(lac_session_desc_t *pSessionDesc) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_crypto_service_t *pService = NULL; + Cpa32U retries = 0; + + pService = (sal_crypto_service_t *)pSessionDesc->pInstance; + + /* Need to protect access to queue head and tail pointers, which may + * be accessed by multiple contexts simultaneously for enqueue and + * dequeue operations + */ + if (CPA_STATUS_SUCCESS != + LAC_SPINLOCK(&pSessionDesc->requestQueueLock)) { + LAC_LOG_ERROR("Failed to lock request queue"); + return 
CPA_STATUS_RESOURCE;
+	}
+
+	/* Clear the blocking flag in the session descriptor */
+	pSessionDesc->nonBlockingOpsInProgress = CPA_TRUE;
+
+	while ((NULL != pSessionDesc->pRequestQueueHead) &&
+	       (CPA_TRUE == pSessionDesc->nonBlockingOpsInProgress)) {
+
+		/* If we send a partial packet request, clear the
+		 * nonBlockingOpsInProgress flag for the session to indicate
+		 * that subsequent requests must be queued up until this
+		 * request completes
+		 */
+		if (CPA_CY_SYM_PACKET_TYPE_FULL !=
+		    pSessionDesc->pRequestQueueHead->pOpData->packetType) {
+			pSessionDesc->nonBlockingOpsInProgress = CPA_FALSE;
+		}
+
+		/* At this point, we're clear to send the request. For cipher
+		 * requests, we need to check if the session IV needs to be
+		 * updated. This can only be done when no other partials are
+		 * in flight for this session, to ensure the
+		 * cipherPartialOpState buffer in the session descriptor is
+		 * not currently in use
+		 */
+		if (CPA_TRUE ==
+		    pSessionDesc->pRequestQueueHead->updateSessionIvOnSend) {
+			if (LAC_CIPHER_IS_ARC4(pSessionDesc->cipherAlgorithm)) {
+				memcpy(pSessionDesc->cipherPartialOpState,
+				       pSessionDesc->cipherARC4InitialState,
+				       LAC_CIPHER_ARC4_STATE_LEN_BYTES);
+			} else {
+				memcpy(pSessionDesc->cipherPartialOpState,
+				       pSessionDesc->pRequestQueueHead->pOpData
+					   ->pIv,
+				       pSessionDesc->pRequestQueueHead->pOpData
+					   ->ivLenInBytes);
+			}
+		}
+
+		/*
+		 * Now we'll attempt to send the message directly to QAT. We'll
+		 * keep looping until it succeeds (or at least a very high
+		 * number of retries), as the failure only happens when the
+		 * ring is full, and this is only a temporary situation. After
+		 * a few retries, space will become available, allowing the
+		 * putMsg to succeed.
+ */ + retries = 0; + do { + /* Send to QAT */ + status = icp_adf_transPutMsg( + pService->trans_handle_sym_tx, + (void *)&(pSessionDesc->pRequestQueueHead->qatMsg), + LAC_QAT_SYM_REQ_SZ_LW); + + retries++; + /* + * Yield to allow other threads that may be on this + * session to poll + * and make some space on the ring + */ + if (CPA_STATUS_SUCCESS != status) { + qatUtilsYield(); + } + } while ((CPA_STATUS_SUCCESS != status) && + (retries < DEQUEUE_MSGPUT_MAX_RETRIES)); + + if ((CPA_STATUS_SUCCESS != status) || + (retries >= DEQUEUE_MSGPUT_MAX_RETRIES)) { + LAC_LOG_ERROR( + "Failed to SalQatMsg_transPutMsg, maximum retries exceeded."); + goto cleanup; + } + + pSessionDesc->pRequestQueueHead = + pSessionDesc->pRequestQueueHead->pNext; + } + + /* If we've drained the queue, ensure the tail pointer is set to NULL */ + if (NULL == pSessionDesc->pRequestQueueHead) { + pSessionDesc->pRequestQueueTail = NULL; + } + +cleanup: + if (CPA_STATUS_SUCCESS != + LAC_SPINUNLOCK(&pSessionDesc->requestQueueLock)) { + LAC_LOG_ERROR("Failed to unlock request queue"); + } + return status; +} + +/** + * @ingroup LacSymCb + */ +void +LacSymCb_CallbacksRegister() +{ + /*** HASH ***/ + LacSymQat_RespHandlerRegister(ICP_QAT_FW_LA_CMD_AUTH, + LacSymCb_ProcessCallback); + + /*** ALGORITHM-CHAINING CIPHER_HASH***/ + LacSymQat_RespHandlerRegister(ICP_QAT_FW_LA_CMD_CIPHER_HASH, + LacSymCb_ProcessCallback); + + /*** ALGORITHM-CHAINING HASH_CIPHER***/ + LacSymQat_RespHandlerRegister(ICP_QAT_FW_LA_CMD_HASH_CIPHER, + LacSymCb_ProcessCallback); + + /*** CIPHER ***/ + LacSymQat_RespHandlerRegister(ICP_QAT_FW_LA_CMD_CIPHER, + LacSymCb_ProcessCallback); + + /* Call compile time param check function to ensure it is included + in the build by the compiler - this compile time check + ensures callbacks run as expected */ + LacSym_CompileTimeAssertions(); +} Index: sys/dev/qat/qat_api/common/crypto/sym/lac_sym_cipher.c =================================================================== --- /dev/null +++ 
sys/dev/qat/qat_api/common/crypto/sym/lac_sym_cipher.c @@ -0,0 +1,416 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_cipher.c Cipher + * + * @ingroup LacCipher + * + * @description Functions specific to cipher + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" +#include "cpa_cy_sym.h" + +#include "icp_adf_init.h" +#include "icp_adf_transport.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" + +#include "icp_qat_fw_la.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "lac_sym_cipher.h" +#include "lac_session.h" +#include "lac_mem.h" +#include "lac_common.h" +#include "lac_list.h" +#include "lac_sym.h" +#include "lac_sym_key.h" +#include "lac_sym_qat_hash_defs_lookup.h" +#include "lac_sal_types_crypto.h" +#include "lac_sal.h" +#include "lac_sal_ctrl.h" +#include "lac_sym_cipher_defs.h" +#include "lac_sym_cipher.h" +#include "lac_sym_stats.h" +#include "lac_sym.h" +#include "lac_sym_qat_cipher.h" +#include "lac_log.h" +#include "lac_buffer_desc.h" + +/* +******************************************************************************* +* Static Variables +******************************************************************************* +*/ + +CpaStatus +LacCipher_PerformIvCheck(sal_service_t *pService, + lac_sym_bulk_cookie_t *pCbCookie, + Cpa32U qatPacketType, + Cpa8U **ppIvBuffer) +{ + const CpaCySymOpData *pOpData = pCbCookie->pOpData; + lac_session_desc_t *pSessionDesc = + 
LAC_SYM_SESSION_DESC_FROM_CTX_GET(pOpData->sessionCtx); + CpaCySymCipherAlgorithm algorithm = pSessionDesc->cipherAlgorithm; + + /* Perform IV check. */ + if (LAC_CIPHER_IS_CTR_MODE(algorithm) || + LAC_CIPHER_IS_CBC_MODE(algorithm) || + LAC_CIPHER_IS_AES_F8(algorithm) || + LAC_CIPHER_IS_XTS_MODE(algorithm)) { + unsigned ivLenInBytes = + LacSymQat_CipherIvSizeBytesGet(algorithm); + LAC_CHECK_NULL_PARAM(pOpData->pIv); + if (pOpData->ivLenInBytes != ivLenInBytes) { + if (!(/* GCM with 12 byte IV is OK */ + (LAC_CIPHER_IS_GCM(algorithm) && + pOpData->ivLenInBytes == + LAC_CIPHER_IV_SIZE_GCM_12) || + /* IV len for CCM has been checked before */ + LAC_CIPHER_IS_CCM(algorithm))) { + LAC_INVALID_PARAM_LOG("invalid cipher IV size"); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* Always copy the user's IV into another cipher state buffer if + * the request is part of a partial packet sequence + * (ensures that pipelined partial requests use same + * buffer) + */ + if (ICP_QAT_FW_LA_PARTIAL_NONE == qatPacketType) { + /* Set the value of the ppIvBuffer to that supplied + * by the user. + * NOTE: There is no guarantee that this address is + * aligned on + * an 8 or 64 Byte address. */ + *ppIvBuffer = pOpData->pIv; + } else { + /* For partial packets, we use a per-session buffer to + * maintain + * the IV. This allows us to easily pass the updated IV + * forward + * to the next partial in the sequence. This makes + * internal + * buffering of partials easier to implement. + */ + *ppIvBuffer = pSessionDesc->cipherPartialOpState; + + /* Ensure that the user's IV buffer gets updated between + * partial + * requests so that they may also see the residue from + * the + * previous partial. Not needed for final partials + * though. 
+ */ + if ((ICP_QAT_FW_LA_PARTIAL_START == qatPacketType) || + (ICP_QAT_FW_LA_PARTIAL_MID == qatPacketType)) { + pCbCookie->updateUserIvOnRecieve = CPA_TRUE; + + if (ICP_QAT_FW_LA_PARTIAL_START == + qatPacketType) { + /* if the previous partial state was + * full, then this is + * the first partial in the sequence so + * we need to copy + * in the user's IV. But, we have to be + * very careful + * here not to overwrite the + * cipherPartialOpState just + * yet in case there's a previous + * partial sequence in + * flight, so we defer the copy for now. + * This will be + * completed in the + * LacSymQueue_RequestSend() function. + */ + pCbCookie->updateSessionIvOnSend = + CPA_TRUE; + } + /* For subsequent partials in a sequence, we'll + * re-use the + * IV that was written back by the QAT, using + * internal + * request queueing if necessary to ensure that + * the next + * partial request isn't issued to the QAT until + * the + * previous one completes + */ + } + } + } else if (LAC_CIPHER_IS_KASUMI(algorithm)) { + LAC_CHECK_NULL_PARAM(pOpData->pIv); + + if (LAC_CIPHER_IS_KASUMI(algorithm) && + (pOpData->ivLenInBytes != LAC_CIPHER_KASUMI_F8_IV_LENGTH)) { + LAC_INVALID_PARAM_LOG("invalid cipher IV size"); + return CPA_STATUS_INVALID_PARAM; + } + + + *ppIvBuffer = pOpData->pIv; + } else if (LAC_CIPHER_IS_SNOW3G_UEA2(algorithm)) { + LAC_CHECK_NULL_PARAM(pOpData->pIv); + if (LAC_CIPHER_IS_SNOW3G_UEA2(algorithm) && + (pOpData->ivLenInBytes != ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ)) { + LAC_INVALID_PARAM_LOG("invalid cipher IV size"); + return CPA_STATUS_INVALID_PARAM; + } + *ppIvBuffer = pOpData->pIv; + } else if (LAC_CIPHER_IS_ARC4(algorithm)) { + if (ICP_QAT_FW_LA_PARTIAL_NONE == qatPacketType) { + /* For full packets, the initial ARC4 state is stored in + * the + * session descriptor. Use it directly. 
+ */ + *ppIvBuffer = pSessionDesc->cipherARC4InitialState; + } else { + /* For partial packets, we maintain the running ARC4 + * state in + * dedicated buffer in the session descriptor + */ + *ppIvBuffer = pSessionDesc->cipherPartialOpState; + + if (ICP_QAT_FW_LA_PARTIAL_START == qatPacketType) { + /* if the previous partial state was full, then + * this is the + * first partial in the sequence so we need to + * (re-)initialise + * the contents of the state buffer using the + * initial state + * that is stored in the session descriptor. + * But, we have to be + * very careful here not to overwrite the + * cipherPartialOpState + * just yet in case there's a previous partial + * sequence in + * flight, so we defer the copy for now. This + * will be completed + * in the LacSymQueue_RequestSend() function + * when clear to send. + */ + pCbCookie->updateSessionIvOnSend = CPA_TRUE; + } + } + } else if (LAC_CIPHER_IS_ZUC_EEA3(algorithm)) { + LAC_CHECK_NULL_PARAM(pOpData->pIv); + if (pOpData->ivLenInBytes != ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ) { + LAC_INVALID_PARAM_LOG("invalid cipher IV size"); + return CPA_STATUS_INVALID_PARAM; + } + *ppIvBuffer = pOpData->pIv; + } else { + *ppIvBuffer = NULL; + } + + return CPA_STATUS_SUCCESS; +} + + +CpaStatus +LacCipher_SessionSetupDataCheck(const CpaCySymCipherSetupData *pCipherSetupData) +{ + /* No key required for NULL algorithm */ + if (!LAC_CIPHER_IS_NULL(pCipherSetupData->cipherAlgorithm)) { + LAC_CHECK_NULL_PARAM(pCipherSetupData->pCipherKey); + + /* Check that algorithm and keys passed in are correct size */ + if (LAC_CIPHER_IS_ARC4(pCipherSetupData->cipherAlgorithm)) { + if (pCipherSetupData->cipherKeyLenInBytes > + ICP_QAT_HW_ARC4_KEY_SZ) { + LAC_INVALID_PARAM_LOG( + "Invalid ARC4 cipher key length"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (LAC_CIPHER_IS_CCM( + pCipherSetupData->cipherAlgorithm)) { + if (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_AES_128_KEY_SZ) { + LAC_INVALID_PARAM_LOG( + "Invalid AES CCM 
cipher key length"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (LAC_CIPHER_IS_XTS_MODE( + pCipherSetupData->cipherAlgorithm)) { + if ((pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_AES_128_XTS_KEY_SZ) && + (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_AES_256_XTS_KEY_SZ)) { + LAC_INVALID_PARAM_LOG( + "Invalid AES XTS cipher key length"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (LAC_CIPHER_IS_AES( + pCipherSetupData->cipherAlgorithm)) { + if ((pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_AES_128_KEY_SZ) && + (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_AES_192_KEY_SZ) && + (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_AES_256_KEY_SZ)) { + LAC_INVALID_PARAM_LOG( + "Invalid AES cipher key length"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (LAC_CIPHER_IS_AES_F8( + pCipherSetupData->cipherAlgorithm)) { + if ((pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_AES_128_F8_KEY_SZ) && + (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_AES_192_F8_KEY_SZ) && + (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_AES_256_F8_KEY_SZ)) { + LAC_INVALID_PARAM_LOG( + "Invalid AES cipher key length"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (LAC_CIPHER_IS_DES( + pCipherSetupData->cipherAlgorithm)) { + if (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_DES_KEY_SZ) { + LAC_INVALID_PARAM_LOG( + "Invalid DES cipher key length"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (LAC_CIPHER_IS_TRIPLE_DES( + pCipherSetupData->cipherAlgorithm)) { + if (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_3DES_KEY_SZ) { + LAC_INVALID_PARAM_LOG( + "Invalid Triple-DES cipher key length"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (LAC_CIPHER_IS_KASUMI( + pCipherSetupData->cipherAlgorithm)) { + /* QAT-FW only supports 128 bits Cipher Key size for + * Kasumi F8 + * Ref: 3GPP TS 55.216 V6.2.0 */ + if (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_KASUMI_KEY_SZ) { + 
LAC_INVALID_PARAM_LOG( + "Invalid Kasumi cipher key length"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (LAC_CIPHER_IS_SNOW3G_UEA2( + pCipherSetupData->cipherAlgorithm)) { + /* QAT-FW only supports 256 bits Cipher Key size for + * Snow_3G */ + if (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ) { + LAC_INVALID_PARAM_LOG( + "Invalid Snow_3G cipher key length"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (LAC_CIPHER_IS_ZUC_EEA3( + pCipherSetupData->cipherAlgorithm)) { + /* ZUC EEA3 */ + if (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ) { + LAC_INVALID_PARAM_LOG( + "Invalid ZUC cipher key length"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (LAC_CIPHER_IS_CHACHA( + pCipherSetupData->cipherAlgorithm)) { + if (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_CHACHAPOLY_KEY_SZ) { + LAC_INVALID_PARAM_LOG( + "Invalid CHACHAPOLY cipher key length"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (LAC_CIPHER_IS_SM4( + pCipherSetupData->cipherAlgorithm)) { + if (pCipherSetupData->cipherKeyLenInBytes != + ICP_QAT_HW_SM4_KEY_SZ) { + LAC_INVALID_PARAM_LOG( + "Invalid SM4 cipher key length"); + return CPA_STATUS_INVALID_PARAM; + } + } else { + LAC_INVALID_PARAM_LOG("Invalid cipher algorithm"); + return CPA_STATUS_INVALID_PARAM; + } + } + return CPA_STATUS_SUCCESS; +} + +CpaStatus +LacCipher_PerformParamCheck(CpaCySymCipherAlgorithm algorithm, + const CpaCySymOpData *pOpData, + const Cpa64U packetLen) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + /* The following check will cover the dstBuffer as well, since + * the dstBuffer cannot be smaller than the srcBuffer (checked in + * LacSymPerform_BufferParamCheck() called from LacSym_Perform()) + */ + if ((pOpData->messageLenToCipherInBytes + + pOpData->cryptoStartSrcOffsetInBytes) > packetLen) { + LAC_INVALID_PARAM_LOG("cipher len + offset greater than " + "srcBuffer packet len"); + status = CPA_STATUS_INVALID_PARAM; + } + + if 
(CPA_STATUS_SUCCESS == status) { + /* + * XTS Mode allow for ciphers which are not multiples of + * the block size. + */ + /* Perform algorithm-specific checks */ + if (LAC_CIPHER_IS_XTS_MODE(algorithm) && + ((pOpData->packetType == CPA_CY_SYM_PACKET_TYPE_FULL) || + (pOpData->packetType == + CPA_CY_SYM_PACKET_TYPE_LAST_PARTIAL))) { + /* + * If this is the last of a partial request + */ + if (pOpData->messageLenToCipherInBytes < + ICP_QAT_HW_AES_BLK_SZ) { + LAC_INVALID_PARAM_LOG( + "data size must be greater than block " + "size for last XTS partial or XTS " + "full packet"); + status = CPA_STATUS_INVALID_PARAM; + } + } else if (!(LAC_CIPHER_IS_ARC4(algorithm) || + LAC_CIPHER_IS_CTR_MODE(algorithm) || + LAC_CIPHER_IS_F8_MODE(algorithm) || + LAC_CIPHER_IS_SNOW3G_UEA2(algorithm) || + LAC_CIPHER_IS_XTS_MODE(algorithm) || + LAC_CIPHER_IS_CHACHA(algorithm) || + LAC_CIPHER_IS_ZUC_EEA3(algorithm))) { + /* Mask & check below is based on assumption that block + * size is + * a power of 2. If data size is not a multiple of the + * block size, + * the "remainder" bits selected by the mask be non-zero + */ + if (pOpData->messageLenToCipherInBytes & + (LacSymQat_CipherBlockSizeBytesGet(algorithm) - + 1)) { + LAC_INVALID_PARAM_LOG( + "data size must be block size multiple"); + status = CPA_STATUS_INVALID_PARAM; + } + } + } + + return status; +} + Index: sys/dev/qat/qat_api/common/crypto/sym/lac_sym_compile_check.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/lac_sym_compile_check.c @@ -0,0 +1,45 @@ +/*************************************************************************** + * + * + * + ***************************************************************************/ + +/** + *************************************************************************** + * @file lac_sym_compile_check.c + * + * @ingroup LacSym + * + * This file checks at compile time that some assumptions about the layout + * of key 
structures are as expected. + * + * + ***************************************************************************/ + +#include "cpa.h" + +#include "lac_common.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +#include "lac_sym.h" +#include "cpa_cy_sym_dp.h" + +#define COMPILE_TIME_ASSERT(pred) \ + switch (0) { \ + case 0: \ + case pred:; \ + } + +void +LacSym_CompileTimeAssertions(void) +{ + /* ************************************************************* + * Check sessionCtx is at the same location in bulk cookie and + * CpaCySymDpOpData. + * This is required for the callbacks to work as expected - + * see LacSymCb_ProcessCallback + * ************************************************************* */ + + COMPILE_TIME_ASSERT(offsetof(lac_sym_bulk_cookie_t, sessionCtx) == + offsetof(CpaCySymDpOpData, sessionCtx)); +} Index: sys/dev/qat/qat_api/common/crypto/sym/lac_sym_dp.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/lac_sym_dp.c @@ -0,0 +1,1080 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_dp.c + * Implementation of the symmetric data plane API + * + * @ingroup cpaCySymDp + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" +#include "cpa_cy_sym_dp.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ + +#include "icp_accel_devices.h" +#include "icp_adf_init.h" +#include 
"icp_adf_transport.h" +#include "icp_adf_transport_dp.h" +#include "icp_adf_debug.h" +#include "icp_sal_poll.h" + +#include "qat_utils.h" + +#include "lac_mem.h" +#include "lac_log.h" +#include "lac_sym.h" +#include "lac_sym_qat_cipher.h" +#include "lac_list.h" +#include "lac_sal_types_crypto.h" +#include "sal_service_state.h" +#include "lac_sym_auth_enc.h" + +typedef void (*write_ringMsgFunc_t)(CpaCySymDpOpData *pRequest, + icp_qat_fw_la_bulk_req_t *pCurrentQatMsg); + +/** + ***************************************************************************** + * @ingroup cpaCySymDp + * Check that the operation data is valid + * + * @description + * Check that all the parameters defined in the operation data are valid + * + * @param[in] pRequest Pointer to an operation data for crypto + * data plane API + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +LacDp_EnqueueParamCheck(const CpaCySymDpOpData *pRequest) +{ + lac_session_desc_t *pSessionDesc = NULL; + CpaCySymCipherAlgorithm cipher = 0; + CpaCySymHashAlgorithm hash = 0; + Cpa32U capabilitiesMask = 0; + + LAC_CHECK_NULL_PARAM(pRequest); + LAC_CHECK_NULL_PARAM(pRequest->instanceHandle); + LAC_CHECK_NULL_PARAM(pRequest->sessionCtx); + + /* Ensure this is a crypto instance */ + SAL_CHECK_INSTANCE_TYPE(pRequest->instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + + pSessionDesc = LAC_SYM_SESSION_DESC_FROM_CTX_GET(pRequest->sessionCtx); + if (NULL == pSessionDesc) { + do { + qatUtilsSleep(500); + pSessionDesc = LAC_SYM_SESSION_DESC_FROM_CTX_GET( + pRequest->sessionCtx); + } while (NULL == pSessionDesc); + } + if (NULL == pSessionDesc) { + LAC_INVALID_PARAM_LOG("Session context not as expected"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_FALSE == pSessionDesc->isDPSession) { + LAC_INVALID_PARAM_LOG( + 
"Session not initialised for data plane API"); + return CPA_STATUS_INVALID_PARAM; + } + + /*check whether Payload size is zero for CHACHA-POLY */ + if ((CPA_CY_SYM_CIPHER_CHACHA == pSessionDesc->cipherAlgorithm) && + (CPA_CY_SYM_HASH_POLY == pSessionDesc->hashAlgorithm) && + (CPA_CY_SYM_OP_ALGORITHM_CHAINING == pSessionDesc->symOperation)) { + if (!pRequest->messageLenToCipherInBytes) { + LAC_INVALID_PARAM_LOG( + "Invalid messageLenToCipherInBytes for CHACHA-POLY"); + return CPA_STATUS_INVALID_PARAM; + } + } + + if (0 == pRequest->srcBuffer) { + LAC_INVALID_PARAM_LOG("Invalid srcBuffer"); + return CPA_STATUS_INVALID_PARAM; + } + if (0 == pRequest->dstBuffer) { + LAC_INVALID_PARAM_LOG("Invalid destBuffer"); + return CPA_STATUS_INVALID_PARAM; + } + if (0 == pRequest->thisPhys) { + LAC_INVALID_PARAM_LOG("Invalid thisPhys"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Check that src buffer Len = dst buffer Len + Note this also checks that they are of the same type */ + if (pRequest->srcBufferLen != pRequest->dstBufferLen) { + LAC_INVALID_PARAM_LOG( + "Source and Destination buffer lengths need to be equal"); + return CPA_STATUS_INVALID_PARAM; + } + + /* digestVerify and digestIsAppended on Hash-Only operation not + * supported */ + if (pSessionDesc->digestIsAppended && pSessionDesc->digestVerify && + (CPA_CY_SYM_OP_HASH == pSessionDesc->symOperation)) { + LAC_INVALID_PARAM_LOG( + "digestVerify and digestIsAppended set " + "on Hash-Only operation is not supported"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Cipher specific tests */ + if (CPA_CY_SYM_OP_HASH != pSessionDesc->symOperation) { + /* Perform IV check */ + if ((LAC_CIPHER_IS_CTR_MODE(pSessionDesc->cipherAlgorithm) || + LAC_CIPHER_IS_CBC_MODE(pSessionDesc->cipherAlgorithm) || + LAC_CIPHER_IS_AES_F8(pSessionDesc->cipherAlgorithm)) && + (!(LAC_CIPHER_IS_CCM(pSessionDesc->cipherAlgorithm)))) { + Cpa32U ivLenInBytes = LacSymQat_CipherIvSizeBytesGet( + pSessionDesc->cipherAlgorithm); + if 
(pRequest->ivLenInBytes != ivLenInBytes) {
+				if (!(/* GCM with 12 byte IV is OK */
+				      (LAC_CIPHER_IS_GCM(
+					   pSessionDesc->cipherAlgorithm) &&
+				       pRequest->ivLenInBytes ==
+					   LAC_CIPHER_IV_SIZE_GCM_12))) {
+					LAC_INVALID_PARAM_LOG(
+					    "invalid cipher IV size");
+					return CPA_STATUS_INVALID_PARAM;
+				}
+			}
+			if (0 == pRequest->iv) {
+				LAC_INVALID_PARAM_LOG("invalid iv of 0");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+
+			/* pRequest->pIv is only used for CCM so is not
+			 * checked here */
+		} else if (LAC_CIPHER_IS_KASUMI(
+			       pSessionDesc->cipherAlgorithm)) {
+			if (LAC_CIPHER_KASUMI_F8_IV_LENGTH !=
+			    pRequest->ivLenInBytes) {
+				LAC_INVALID_PARAM_LOG("invalid cipher IV size");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+			if (0 == pRequest->iv) {
+				LAC_INVALID_PARAM_LOG("invalid iv of 0");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		} else if (LAC_CIPHER_IS_SNOW3G_UEA2(
+			       pSessionDesc->cipherAlgorithm)) {
+			if (ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ !=
+			    pRequest->ivLenInBytes) {
+				LAC_INVALID_PARAM_LOG("invalid cipher IV size");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+			if (0 == pRequest->iv) {
+				LAC_INVALID_PARAM_LOG("invalid iv of 0");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		} else if (LAC_CIPHER_IS_ZUC_EEA3(
+			       pSessionDesc->cipherAlgorithm)) {
+			if (ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ !=
+			    pRequest->ivLenInBytes) {
+				LAC_INVALID_PARAM_LOG("invalid cipher IV size");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+			if (0 == pRequest->iv) {
+				LAC_INVALID_PARAM_LOG("invalid iv of 0");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		} else if (LAC_CIPHER_IS_CCM(pSessionDesc->cipherAlgorithm)) {
+			if (CPA_STATUS_SUCCESS !=
+			    LacSymAlgChain_CheckCCMData(
+				pRequest->pAdditionalAuthData,
+				pRequest->pIv,
+				pRequest->messageLenToCipherInBytes,
+				pRequest->ivLenInBytes)) {
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		}
+
+		/* Perform algorithm-specific checks */
+		if (!(LAC_CIPHER_IS_ARC4(pSessionDesc->cipherAlgorithm) ||
+		      LAC_CIPHER_IS_CTR_MODE(pSessionDesc->cipherAlgorithm) ||
+		      LAC_CIPHER_IS_F8_MODE(pSessionDesc->cipherAlgorithm) ||
+		      LAC_CIPHER_IS_SNOW3G_UEA2(
+			  pSessionDesc->cipherAlgorithm) ||
+		      LAC_CIPHER_IS_ZUC_EEA3(pSessionDesc->cipherAlgorithm))) {
+			/* The mask & check below is based on the assumption
+			 * that the block size is a power of 2. If the data
+			 * size is not a multiple of the block size, the
+			 * "remainder" bits selected by the mask will be
+			 * non-zero.
+			 */
+			if (pRequest->messageLenToCipherInBytes &
+			    (LacSymQat_CipherBlockSizeBytesGet(
+				 pSessionDesc->cipherAlgorithm) -
+			     1)) {
+				LAC_INVALID_PARAM_LOG(
+				    "Data size must be block size multiple");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		}
+
+		cipher = pSessionDesc->cipherAlgorithm;
+		hash = pSessionDesc->hashAlgorithm;
+		capabilitiesMask =
+		    ((sal_crypto_service_t *)pRequest->instanceHandle)
+			->generic_service_info.capabilitiesMask;
+		if (LAC_CIPHER_IS_SPC(cipher, hash, capabilitiesMask) &&
+		    (LAC_CIPHER_SPC_IV_SIZE == pRequest->ivLenInBytes)) {
+			/* For CHACHA and AES_GCM single pass there is an AAD
+			 * buffer if aadLenInBytes is nonzero. AES_GMAC AAD is
+			 * stored in the source buffer, therefore there is no
+			 * separate AAD buffer.
+			 */
+			if ((0 != pSessionDesc->aadLenInBytes) &&
+			    (CPA_CY_SYM_HASH_AES_GMAC !=
+			     pSessionDesc->hashAlgorithm)) {
+				LAC_CHECK_NULL_PARAM(
+				    pRequest->pAdditionalAuthData);
+			}
+
+			/* Ensure AAD length for AES_GMAC spc */
+			if ((CPA_CY_SYM_HASH_AES_GMAC == hash) &&
+			    (ICP_QAT_FW_SPC_AAD_SZ_MAX <
+			     pRequest->messageLenToHashInBytes)) {
+				LAC_INVALID_PARAM_LOG(
+				    "aadLenInBytes for AES_GMAC");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		}
+	}
+
+	/* Hash specific tests */
+	if (CPA_CY_SYM_OP_CIPHER != pSessionDesc->symOperation) {
+		/* For CCM, SNOW3G and ZUC there is always an AAD buffer.
+		 * For GCM there is an AAD buffer if aadLenInBytes is
+		 * nonzero. */
+		if ((CPA_CY_SYM_HASH_AES_CCM == pSessionDesc->hashAlgorithm) ||
+		    (CPA_CY_SYM_HASH_AES_GCM == pSessionDesc->hashAlgorithm &&
+		     (0 != pSessionDesc->aadLenInBytes))) {
+			LAC_CHECK_NULL_PARAM(pRequest->pAdditionalAuthData);
+			if (0 == pRequest->additionalAuthData) {
+				LAC_INVALID_PARAM_LOG(
+				    "Invalid additionalAuthData");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		} else if (CPA_CY_SYM_HASH_SNOW3G_UIA2 ==
+			       pSessionDesc->hashAlgorithm ||
+			   CPA_CY_SYM_HASH_ZUC_EIA3 ==
+			       pSessionDesc->hashAlgorithm) {
+			if (0 == pRequest->additionalAuthData) {
+				LAC_INVALID_PARAM_LOG(
+				    "Invalid additionalAuthData");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		}
+
+		if ((CPA_CY_SYM_HASH_AES_CCM != pSessionDesc->hashAlgorithm) &&
+		    (!pSessionDesc->digestIsAppended) &&
+		    (0 == pRequest->digestResult)) {
+			LAC_INVALID_PARAM_LOG("Invalid digestResult");
+			return CPA_STATUS_INVALID_PARAM;
+		}
+
+		if (CPA_CY_SYM_HASH_AES_CCM == pSessionDesc->hashAlgorithm) {
+			if ((pRequest->cryptoStartSrcOffsetInBytes +
+			     pRequest->messageLenToCipherInBytes +
+			     pSessionDesc->hashResultSize) >
+			    pRequest->dstBufferLen) {
+				LAC_INVALID_PARAM_LOG(
+				    "CCM - Not enough room for"
+				    " digest in destination buffer");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		} else if (CPA_TRUE == pSessionDesc->digestIsAppended) {
+			if (CPA_CY_SYM_HASH_AES_GMAC ==
+			    pSessionDesc->hashAlgorithm) {
+				if ((pRequest->hashStartSrcOffsetInBytes +
+				     pRequest->messageLenToHashInBytes +
+				     pSessionDesc->hashResultSize) >
+				    pRequest->dstBufferLen) {
+					LAC_INVALID_PARAM_LOG(
+					    "Append Digest - Not enough room for"
+					    " digest in destination buffer for "
+					    "AES GMAC algorithm");
+					return CPA_STATUS_INVALID_PARAM;
+				}
+			}
+			if (CPA_CY_SYM_HASH_AES_GCM ==
+			    pSessionDesc->hashAlgorithm) {
+				if ((pRequest->cryptoStartSrcOffsetInBytes +
+				     pRequest->messageLenToCipherInBytes +
+				     pSessionDesc->hashResultSize) >
+				    pRequest->dstBufferLen) {
+					LAC_INVALID_PARAM_LOG(
+					    "Append Digest - Not enough room "
+					    "for digest in destination buffer"
+					    " for GCM algorithm");
+					return CPA_STATUS_INVALID_PARAM;
+				}
+			}
+
+			if ((pRequest->hashStartSrcOffsetInBytes +
+			     pRequest->messageLenToHashInBytes +
+			     pSessionDesc->hashResultSize) >
+			    pRequest->dstBufferLen) {
+				LAC_INVALID_PARAM_LOG(
+				    "Append Digest - Not enough room for"
+				    " digest in destination buffer");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		}
+		if (CPA_CY_SYM_HASH_AES_GMAC == pSessionDesc->hashAlgorithm) {
+			if (pRequest->messageLenToHashInBytes == 0 ||
+			    pRequest->pAdditionalAuthData != NULL) {
+				LAC_INVALID_PARAM_LOG(
+				    "For AES_GMAC, AAD Length "
+				    "(messageLenToHashInBytes) must be "
+				    "non zero and pAdditionalAuthData "
+				    "must be NULL");
+				return CPA_STATUS_INVALID_PARAM;
+			}
+		}
+	}
+
+	if (CPA_DP_BUFLIST != pRequest->srcBufferLen) {
+		if ((CPA_CY_SYM_OP_HASH != pSessionDesc->symOperation) &&
+		    ((pRequest->messageLenToCipherInBytes +
+		      pRequest->cryptoStartSrcOffsetInBytes) >
+		     pRequest->srcBufferLen)) {
+			LAC_INVALID_PARAM_LOG(
+			    "cipher len + offset greater than "
+			    "srcBufferLen");
+			return CPA_STATUS_INVALID_PARAM;
+		} else if ((CPA_CY_SYM_OP_CIPHER !=
+			    pSessionDesc->symOperation) &&
+			   (CPA_CY_SYM_HASH_AES_CCM !=
+			    pSessionDesc->hashAlgorithm) &&
+			   (CPA_CY_SYM_HASH_AES_GCM !=
+			    pSessionDesc->hashAlgorithm) &&
+			   (CPA_CY_SYM_HASH_AES_GMAC !=
+			    pSessionDesc->hashAlgorithm) &&
+			   ((pRequest->messageLenToHashInBytes +
+			     pRequest->hashStartSrcOffsetInBytes) >
+			    pRequest->srcBufferLen)) {
+			LAC_INVALID_PARAM_LOG(
+			    "hash len + offset greater than srcBufferLen");
+			return CPA_STATUS_INVALID_PARAM;
+		}
+	} else {
+		LAC_CHECK_8_BYTE_ALIGNMENT(pRequest->srcBuffer);
+		LAC_CHECK_8_BYTE_ALIGNMENT(pRequest->dstBuffer);
+	}
+
+	LAC_CHECK_8_BYTE_ALIGNMENT(pRequest->thisPhys);
+
+	return CPA_STATUS_SUCCESS;
+}
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCySymDp
+ *      Write the message on the ring and write the request params.
+ *      This is the optimized version, which should not be used for the
+ *      CCM, GCM and RC4 algorithms.
+ *
+ * @description
+ *      Write the message on the ring and write the request params.
+ *
+ * @param[in/out] pRequest          Pointer to operation data for crypto
+ *                                  data plane API
+ * @param[in/out] pCurrentQatMsg    Pointer to ring memory where msg will
+ *                                  be written
+ *
+ * @retval none
+ *
+ *****************************************************************************/
+
+void
+LacDp_WriteRingMsgOpt(CpaCySymDpOpData *pRequest,
+		      icp_qat_fw_la_bulk_req_t *pCurrentQatMsg)
+{
+	lac_session_desc_t *pSessionDesc =
+	    LAC_SYM_SESSION_DESC_FROM_CTX_GET(pRequest->sessionCtx);
+	Cpa8U *pMsgDummy = NULL;
+	Cpa8U *pCacheDummyHdr = NULL;
+	Cpa8U *pCacheDummyFtr = NULL;
+
+	pMsgDummy = (Cpa8U *)pCurrentQatMsg;
+	/* Write Request */
+	/*
+	 * Fill in the header and footer bytes of the ET ring message - cached
+	 * from the session descriptor.
+	 */
+	pCacheDummyHdr = (Cpa8U *)&(pSessionDesc->reqCacheHdr);
+	pCacheDummyFtr = (Cpa8U *)&(pSessionDesc->reqCacheFtr);
+
+	memcpy(pMsgDummy,
+	       pCacheDummyHdr,
+	       (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_HDR_IN_LW));
+	memset((pMsgDummy +
+		(LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_HDR_IN_LW)),
+	       0,
+	       (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_TO_CLEAR_IN_LW));
+	memcpy(pMsgDummy +
+		   (LAC_LONG_WORD_IN_BYTES * LAC_START_OF_CACHE_FTR_IN_LW),
+	       pCacheDummyFtr,
+	       (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_FTR_IN_LW));
+
+	SalQatMsg_CmnMidWrite(pCurrentQatMsg,
+			      pRequest,
+			      (CPA_DP_BUFLIST == pRequest->srcBufferLen ?
+				   QAT_COMN_PTR_TYPE_SGL :
+				   QAT_COMN_PTR_TYPE_FLAT),
+			      pRequest->srcBuffer,
+			      pRequest->dstBuffer,
+			      pRequest->srcBufferLen,
+			      pRequest->dstBufferLen);
+
+	/* Write Request Params */
+	if (pSessionDesc->isCipher) {
+
+		LacSymQat_CipherRequestParamsPopulate(
+		    pCurrentQatMsg,
+		    pRequest->cryptoStartSrcOffsetInBytes,
+		    pRequest->messageLenToCipherInBytes,
+		    pRequest->iv,
+		    pRequest->pIv);
+	}
+
+	if (pSessionDesc->isAuth) {
+		lac_sym_qat_hash_state_buffer_info_t *pHashStateBufferInfo =
+		    &(pSessionDesc->hashStateBufferInfo);
+		icp_qat_fw_la_auth_req_params_t *pAuthReqPars =
+		    (icp_qat_fw_la_auth_req_params_t
+			 *)((Cpa8U *)&(pCurrentQatMsg->serv_specif_rqpars) +
+			    ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+		if ((CPA_CY_SYM_HASH_SNOW3G_UIA2 !=
+			 pSessionDesc->hashAlgorithm &&
+		     CPA_CY_SYM_HASH_AES_CCM != pSessionDesc->hashAlgorithm &&
+		     CPA_CY_SYM_HASH_AES_GCM != pSessionDesc->hashAlgorithm &&
+		     CPA_CY_SYM_HASH_AES_GMAC != pSessionDesc->hashAlgorithm &&
+		     CPA_CY_SYM_HASH_ZUC_EIA3 != pSessionDesc->hashAlgorithm) &&
+		    (pHashStateBufferInfo->prefixAadSzQuadWords > 0)) {
+			/* prefixAadSzQuadWords > 0 when there is prefix data,
+			 * i.e. the nested hash or HMAC no-precompute cases.
+			 * Note: partials are not supported on the DP API, so
+			 * we do not need dynamic hash state in this case. */
+			pRequest->additionalAuthData =
+			    pHashStateBufferInfo->pDataPhys +
+			    LAC_QUADWORDS_TO_BYTES(
+				pHashStateBufferInfo->stateStorageSzQuadWords);
+		}
+
+		/* The first 24 bytes in icp_qat_fw_la_auth_req_params_t can be
+		 * copied directly from the op request data because they share a
+		 * corresponding layout. The remaining 4 bytes are taken
+		 * from the session message template and use values
+		 * preconfigured at sessionInit (updated per request for some
+		 * specific cases below).
+		 */
+		memcpy(pAuthReqPars,
+		       (Cpa32U *)&(pRequest->hashStartSrcOffsetInBytes),
+		       ((unsigned long)&(pAuthReqPars->u2.inner_prefix_sz) -
+			(unsigned long)pAuthReqPars));
+
+		if (CPA_TRUE == pSessionDesc->isAuthEncryptOp) {
+			pAuthReqPars->hash_state_sz =
+			    LAC_BYTES_TO_QUADWORDS(pAuthReqPars->u2.aad_sz);
+		} else if (CPA_CY_SYM_HASH_SNOW3G_UIA2 ==
+			       pSessionDesc->hashAlgorithm ||
+			   CPA_CY_SYM_HASH_ZUC_EIA3 ==
+			       pSessionDesc->hashAlgorithm) {
+			pAuthReqPars->hash_state_sz =
+			    LAC_BYTES_TO_QUADWORDS(pSessionDesc->aadLenInBytes);
+		}
+	}
+
+}
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCySymDp
+ *      Write the message on the ring and write the request params.
+ *
+ * @description
+ *      Write the message on the ring and write the request params.
+ *
+ * @param[in/out] pRequest          Pointer to operation data for crypto
+ *                                  data plane API
+ * @param[in/out] pCurrentQatMsg    Pointer to ring memory where msg will
+ *                                  be written
+ *
+ * @retval none
+ *
+ *****************************************************************************/
+
+void
+LacDp_WriteRingMsgFull(CpaCySymDpOpData *pRequest,
+		       icp_qat_fw_la_bulk_req_t *pCurrentQatMsg)
+{
+	lac_session_desc_t *pSessionDesc =
+	    LAC_SYM_SESSION_DESC_FROM_CTX_GET(pRequest->sessionCtx);
+	Cpa8U *pMsgDummy = NULL;
+	Cpa8U *pCacheDummyHdr = NULL;
+	Cpa8U *pCacheDummyFtr = NULL;
+	sal_qat_content_desc_info_t *pCdInfo = NULL;
+	Cpa8U *pHwBlockBaseInDRAM = NULL;
+	Cpa32U hwBlockOffsetInDRAM = 0;
+	Cpa32U sizeInBytes = 0;
+	CpaCySymCipherAlgorithm cipher = pSessionDesc->cipherAlgorithm;
+	CpaCySymHashAlgorithm hash = pSessionDesc->hashAlgorithm;
+	Cpa32U capabilitiesMask =
+	    ((sal_crypto_service_t *)pRequest->instanceHandle)
+		->generic_service_info.capabilitiesMask;
+
+	Cpa8U paddingLen = 0;
+	Cpa8U blockLen = 0;
+
+	pMsgDummy = (Cpa8U *)pCurrentQatMsg;
+	/* Write Request */
+	/*
+	 * Fill in the header and footer bytes of the ET ring message - cached
+	 * from the session descriptor.
+	 */
+
+	if (!pSessionDesc->isSinglePass &&
+	    LAC_CIPHER_IS_SPC(cipher, hash, capabilitiesMask) &&
+	    (LAC_CIPHER_SPC_IV_SIZE == pRequest->ivLenInBytes)) {
+		pSessionDesc->isSinglePass = CPA_TRUE;
+		pSessionDesc->isCipher = CPA_TRUE;
+		pSessionDesc->isAuthEncryptOp = CPA_FALSE;
+		pSessionDesc->isAuth = CPA_FALSE;
+		pSessionDesc->symOperation = CPA_CY_SYM_OP_CIPHER;
+		pSessionDesc->laCmdId = ICP_QAT_FW_LA_CMD_CIPHER;
+		if (CPA_CY_SYM_HASH_AES_GMAC == pSessionDesc->hashAlgorithm) {
+			pSessionDesc->aadLenInBytes =
+			    pRequest->messageLenToHashInBytes;
+		}
+		/* New bit position (13) for SINGLE PASS.
+		 * The FW provides a specific macro to use to set the proto flag
+		 */
+		ICP_QAT_FW_LA_SINGLE_PASS_PROTO_FLAG_SET(
+		    pSessionDesc->laCmdFlags, ICP_QAT_FW_LA_SINGLE_PASS_PROTO);
+		ICP_QAT_FW_LA_PROTO_SET(pSessionDesc->laCmdFlags, 0);
+
+		pCdInfo = &(pSessionDesc->contentDescInfo);
+		pHwBlockBaseInDRAM = (Cpa8U *)pCdInfo->pData;
+		if (CPA_CY_SYM_CIPHER_DIRECTION_DECRYPT ==
+		    pSessionDesc->cipherDirection) {
+			if (LAC_CIPHER_IS_GCM(cipher))
+				hwBlockOffsetInDRAM = LAC_QUADWORDS_TO_BYTES(
+				    LAC_SYM_QAT_CIPHER_OFFSET_IN_DRAM_GCM_SPC);
+			else
+				hwBlockOffsetInDRAM = LAC_QUADWORDS_TO_BYTES(
+				    LAC_SYM_QAT_CIPHER_OFFSET_IN_DRAM_CHACHA_SPC);
+		}
+		/* construct cipherConfig in CD in DRAM */
+		LacSymQat_CipherHwBlockPopulateCfgData(pSessionDesc,
+						       pHwBlockBaseInDRAM +
+							   hwBlockOffsetInDRAM,
+						       &sizeInBytes);
+		SalQatMsg_CmnHdrWrite((icp_qat_fw_comn_req_t *)&(
+					  pSessionDesc->reqSpcCacheHdr),
+				      ICP_QAT_FW_COMN_REQ_CPM_FW_LA,
+				      pSessionDesc->laCmdId,
+				      pSessionDesc->cmnRequestFlags,
+				      pSessionDesc->laCmdFlags);
+	} else if (CPA_CY_SYM_HASH_AES_GMAC == pSessionDesc->hashAlgorithm) {
+		pSessionDesc->aadLenInBytes = pRequest->messageLenToHashInBytes;
+	}
+	if (pSessionDesc->isSinglePass) {
+		pCacheDummyHdr = (Cpa8U *)&(pSessionDesc->reqSpcCacheHdr);
+		pCacheDummyFtr = (Cpa8U *)&(pSessionDesc->reqSpcCacheFtr);
+	} else {
+		if (!pSessionDesc->useSymConstantsTable) {
+			pCacheDummyHdr = (Cpa8U *)&(pSessionDesc->reqCacheHdr);
+			pCacheDummyFtr = (Cpa8U *)&(pSessionDesc->reqCacheFtr);
+		} else {
+			pCacheDummyHdr =
+			    (Cpa8U *)&(pSessionDesc->shramReqCacheHdr);
+			pCacheDummyFtr =
+			    (Cpa8U *)&(pSessionDesc->shramReqCacheFtr);
+		}
+	}
+	memcpy(pMsgDummy,
+	       pCacheDummyHdr,
+	       (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_HDR_IN_LW));
+	memset((pMsgDummy +
+		(LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_HDR_IN_LW)),
+	       0,
+	       (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_TO_CLEAR_IN_LW));
+	memcpy(pMsgDummy +
+		   (LAC_LONG_WORD_IN_BYTES * LAC_START_OF_CACHE_FTR_IN_LW),
+	       pCacheDummyFtr,
+	       (LAC_LONG_WORD_IN_BYTES * LAC_SIZE_OF_CACHE_FTR_IN_LW));
+
+	SalQatMsg_CmnMidWrite(pCurrentQatMsg,
+			      pRequest,
+			      (CPA_DP_BUFLIST == pRequest->srcBufferLen ?
+				   QAT_COMN_PTR_TYPE_SGL :
+				   QAT_COMN_PTR_TYPE_FLAT),
+			      pRequest->srcBuffer,
+			      pRequest->dstBuffer,
+			      pRequest->srcBufferLen,
+			      pRequest->dstBufferLen);
+
+	if (CPA_CY_SYM_HASH_AES_CCM == pSessionDesc->hashAlgorithm &&
+	    pSessionDesc->isAuth == CPA_TRUE) {
+		/* prepare IV and AAD for CCM */
+		LacSymAlgChain_PrepareCCMData(
+		    pSessionDesc,
+		    pRequest->pAdditionalAuthData,
+		    pRequest->pIv,
+		    pRequest->messageLenToCipherInBytes,
+		    pRequest->ivLenInBytes);
+
+		/* According to the API, for CCM and GCM,
+		 * messageLenToHashInBytes and hashStartSrcOffsetInBytes are
+		 * not initialized by the user and must be set by the driver.
+		 */
+		pRequest->hashStartSrcOffsetInBytes =
+		    pRequest->cryptoStartSrcOffsetInBytes;
+		pRequest->messageLenToHashInBytes =
+		    pRequest->messageLenToCipherInBytes;
+	} else if (!pSessionDesc->isSinglePass &&
+		   (CPA_CY_SYM_HASH_AES_GCM == pSessionDesc->hashAlgorithm ||
+		    CPA_CY_SYM_HASH_AES_GMAC == pSessionDesc->hashAlgorithm)) {
+		/* GCM case */
+		if (CPA_CY_SYM_HASH_AES_GMAC != pSessionDesc->hashAlgorithm) {
+			/* According to the API, for CCM and GCM,
+			 * messageLenToHashInBytes and hashStartSrcOffsetInBytes
+			 * are not initialized by the user and must be set
+			 * by the driver.
+			 */
+			pRequest->hashStartSrcOffsetInBytes =
+			    pRequest->cryptoStartSrcOffsetInBytes;
+			pRequest->messageLenToHashInBytes =
+			    pRequest->messageLenToCipherInBytes;
+
+			LacSymAlgChain_PrepareGCMData(
+			    pSessionDesc, pRequest->pAdditionalAuthData);
+		}
+
+		if (LAC_CIPHER_IV_SIZE_GCM_12 == pRequest->ivLenInBytes) {
+			ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(
+			    pCurrentQatMsg->comn_hdr.serv_specif_flags,
+			    ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+		}
+	}
+
+	/* Write Request Params */
+	if (pSessionDesc->isCipher) {
+		if (CPA_CY_SYM_CIPHER_ARC4 == pSessionDesc->cipherAlgorithm) {
+			/* ARC4 does not have an IV but the field is used to
+			 * store the initial state */
+			pRequest->iv =
+			    pSessionDesc->cipherARC4InitialStatePhysAddr;
+		}
+
+		LacSymQat_CipherRequestParamsPopulate(
+		    pCurrentQatMsg,
+		    pRequest->cryptoStartSrcOffsetInBytes,
+		    pRequest->messageLenToCipherInBytes,
+		    pRequest->iv,
+		    pRequest->pIv);
+		if (pSessionDesc->isSinglePass) {
+			icp_qat_fw_la_cipher_req_params_t *pCipherReqParams =
+			    (icp_qat_fw_la_cipher_req_params_t
+				 *)((Cpa8U *)&(
+					pCurrentQatMsg->serv_specif_rqpars) +
+				    ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET);
+
+			pCipherReqParams->spc_aad_addr =
+			    (uint64_t)pRequest->additionalAuthData;
+			pCipherReqParams->spc_aad_sz =
+			    pSessionDesc->aadLenInBytes;
+
+			pCipherReqParams->spc_auth_res_addr =
+			    (uint64_t)pRequest->digestResult;
+			pCipherReqParams->spc_auth_res_sz =
+			    pSessionDesc->hashResultSize;
+
+			/* For CHACHA and AES_GCM single pass the AAD buffer
+			 * needs alignment if aadLenInBytes is nonzero.
+			 * In the case of AES-GMAC, the AAD buffer is passed in
+			 * the src buffer.
+			 */
+			if (0 != pSessionDesc->aadLenInBytes &&
+			    CPA_CY_SYM_HASH_AES_GMAC !=
+				pSessionDesc->hashAlgorithm) {
+				blockLen = LacSymQat_CipherBlockSizeBytesGet(
+				    pSessionDesc->cipherAlgorithm);
+				if ((pSessionDesc->aadLenInBytes % blockLen) !=
+				    0) {
+					paddingLen = blockLen -
+					    (pSessionDesc->aadLenInBytes %
+					     blockLen);
+					memset(
+					    &pRequest->pAdditionalAuthData
+						 [pSessionDesc->aadLenInBytes],
+					    0,
+					    paddingLen);
+				}
+			}
+		}
+	}
+
+	if (pSessionDesc->isAuth) {
+		lac_sym_qat_hash_state_buffer_info_t *pHashStateBufferInfo =
+		    &(pSessionDesc->hashStateBufferInfo);
+		icp_qat_fw_la_auth_req_params_t *pAuthReqPars =
+		    (icp_qat_fw_la_auth_req_params_t
+			 *)((Cpa8U *)&(pCurrentQatMsg->serv_specif_rqpars) +
+			    ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+		if ((CPA_CY_SYM_HASH_SNOW3G_UIA2 !=
+			 pSessionDesc->hashAlgorithm &&
+		     CPA_CY_SYM_HASH_AES_CCM != pSessionDesc->hashAlgorithm &&
+		     CPA_CY_SYM_HASH_AES_GCM != pSessionDesc->hashAlgorithm &&
+		     CPA_CY_SYM_HASH_AES_GMAC != pSessionDesc->hashAlgorithm &&
+		     CPA_CY_SYM_HASH_ZUC_EIA3 != pSessionDesc->hashAlgorithm) &&
+		    (pHashStateBufferInfo->prefixAadSzQuadWords > 0)) {
+			/* prefixAadSzQuadWords > 0 when there is prefix data,
+			 * i.e. the nested hash or HMAC no-precompute cases.
+			 * Note: partials are not supported on the DP API, so
+			 * we do not need dynamic hash state in this case. */
+			pRequest->additionalAuthData =
+			    pHashStateBufferInfo->pDataPhys +
+			    LAC_QUADWORDS_TO_BYTES(
+				pHashStateBufferInfo->stateStorageSzQuadWords);
+		}
+
+		/* The first 24 bytes in icp_qat_fw_la_auth_req_params_t can be
+		 * copied directly from the op request data because they share a
+		 * corresponding layout. The remaining 4 bytes are taken
+		 * from the session message template and use values
+		 * preconfigured at sessionInit (updated per request for some
+		 * specific cases below).
+		 */
+
+		/* We force a specific compiler optimisation here. The length
+		 * to be copied turns out to be always 16, and by coding a
+		 * memcpy with a literal value the compiler will compile inline
+		 * code (in fact, only two vector instructions) to effect the
+		 * copy. This gives us a huge performance increase.
+		 */
+		unsigned long cplen =
+		    (unsigned long)&(pAuthReqPars->u2.inner_prefix_sz) -
+		    (unsigned long)pAuthReqPars;
+		if (cplen == 16)
+			memcpy(pAuthReqPars,
+			       (Cpa32U *)&(pRequest->hashStartSrcOffsetInBytes),
+			       16);
+		else
+			memcpy(pAuthReqPars,
+			       (Cpa32U *)&(pRequest->hashStartSrcOffsetInBytes),
+			       cplen);
+
+		if (CPA_TRUE == pSessionDesc->isAuthEncryptOp) {
+			pAuthReqPars->hash_state_sz =
+			    LAC_BYTES_TO_QUADWORDS(pAuthReqPars->u2.aad_sz);
+		} else if (CPA_CY_SYM_HASH_SNOW3G_UIA2 ==
+			       pSessionDesc->hashAlgorithm ||
+			   CPA_CY_SYM_HASH_ZUC_EIA3 ==
+			       pSessionDesc->hashAlgorithm) {
+			pAuthReqPars->hash_state_sz =
+			    LAC_BYTES_TO_QUADWORDS(pSessionDesc->aadLenInBytes);
+		}
+	}
+
+}
+
+CpaStatus
+cpaCySymDpSessionCtxGetSize(const CpaInstanceHandle instanceHandle,
+			    const CpaCySymSessionSetupData *pSessionSetupData,
+			    Cpa32U *pSessionCtxSizeInBytes)
+{
+	CpaStatus status = CPA_STATUS_SUCCESS;
+
+	/* CPA_INSTANCE_HANDLE_SINGLE is not supported on DP apis */
+	LAC_CHECK_INSTANCE_HANDLE(instanceHandle);
+	/* All other param checks are common with trad api */
+	/* Check for valid pointers */
+	LAC_CHECK_NULL_PARAM(pSessionCtxSizeInBytes);
+	status = cpaCySymSessionCtxGetSize(instanceHandle,
+					   pSessionSetupData,
+					   pSessionCtxSizeInBytes);
+
+	return status;
+}
+
+CpaStatus
+cpaCySymDpSessionCtxGetDynamicSize(
+    const CpaInstanceHandle instanceHandle,
+    const CpaCySymSessionSetupData *pSessionSetupData,
+    Cpa32U *pSessionCtxSizeInBytes)
+{
+	CpaStatus status = CPA_STATUS_SUCCESS;
+
+	/* CPA_INSTANCE_HANDLE_SINGLE is not supported on DP apis */
+	LAC_CHECK_INSTANCE_HANDLE(instanceHandle);
+	/* All other param checks are common with trad api */
+	/* Check for valid pointers */
+	LAC_CHECK_NULL_PARAM(pSessionCtxSizeInBytes);
+	status = cpaCySymSessionCtxGetDynamicSize(instanceHandle,
+						  pSessionSetupData,
+						  pSessionCtxSizeInBytes);
+
+	return status;
+}
+
+/** @ingroup cpaCySymDp */
+CpaStatus
+cpaCySymDpInitSession(CpaInstanceHandle instanceHandle,
+		      const CpaCySymSessionSetupData *pSessionSetupData,
+		      CpaCySymDpSessionCtx sessionCtx)
+{
+	CpaStatus status = CPA_STATUS_FAIL;
+	sal_service_t *pService = NULL;
+
+	LAC_CHECK_INSTANCE_HANDLE(instanceHandle);
+	SAL_CHECK_INSTANCE_TYPE(instanceHandle,
+				(SAL_SERVICE_TYPE_CRYPTO |
+				 SAL_SERVICE_TYPE_CRYPTO_SYM));
+	LAC_CHECK_NULL_PARAM(pSessionSetupData);
+	pService = (sal_service_t *)instanceHandle;
+
+	/* Check crypto service is running otherwise return an error */
+	SAL_RUNNING_CHECK(pService);
+
+	status = LacSym_InitSession(instanceHandle,
+				    NULL, /* Callback */
+				    pSessionSetupData,
+				    CPA_TRUE, /* isDPSession */
+				    sessionCtx);
+	return status;
+}
+
+CpaStatus
+cpaCySymDpRemoveSession(const CpaInstanceHandle instanceHandle,
+			CpaCySymDpSessionCtx sessionCtx)
+{
+	/* CPA_INSTANCE_HANDLE_SINGLE is not supported on DP apis */
+	LAC_CHECK_INSTANCE_HANDLE(instanceHandle);
+	/* All other param checks are common with trad api */
+
+	return cpaCySymRemoveSession(instanceHandle, sessionCtx);
+}
+
+CpaStatus
+cpaCySymDpRegCbFunc(const CpaInstanceHandle instanceHandle,
+		    const CpaCySymDpCbFunc pSymDpCb)
+{
+	sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle;
+
+	LAC_CHECK_INSTANCE_HANDLE(instanceHandle);
+	SAL_CHECK_INSTANCE_TYPE(instanceHandle,
+				(SAL_SERVICE_TYPE_CRYPTO |
+				 SAL_SERVICE_TYPE_CRYPTO_SYM));
+	LAC_CHECK_NULL_PARAM(pSymDpCb);
+	SAL_RUNNING_CHECK(instanceHandle);
+	pService->pSymDpCb = pSymDpCb;
+
+	return CPA_STATUS_SUCCESS;
+}
+
+CpaStatus
+cpaCySymDpEnqueueOp(CpaCySymDpOpData *pRequest, const CpaBoolean performOpNow)
+{
+	icp_qat_fw_la_bulk_req_t *pCurrentQatMsg = NULL;
+	icp_comms_trans_handle trans_handle = NULL;
+	lac_session_desc_t *pSessionDesc = NULL;
+	write_ringMsgFunc_t callFunc;
+
+	CpaStatus status = CPA_STATUS_SUCCESS;
+
+	LAC_CHECK_NULL_PARAM(pRequest);
+	status = LacDp_EnqueueParamCheck(pRequest);
+	if (CPA_STATUS_SUCCESS != status) {
+		return status;
+	}
+
+	trans_handle = ((sal_crypto_service_t *)pRequest->instanceHandle)
+			   ->trans_handle_sym_tx;
+
+	pSessionDesc = LAC_SYM_SESSION_DESC_FROM_CTX_GET(pRequest->sessionCtx);
+	icp_adf_getSingleQueueAddr(trans_handle, (void **)&pCurrentQatMsg);
+	if (NULL == pCurrentQatMsg) {
+		/*
+		 * No space is available on the queue.
+		 */
+		return CPA_STATUS_RETRY;
+	}
+
+	callFunc = (write_ringMsgFunc_t)pSessionDesc->writeRingMsgFunc;
+
+	LAC_CHECK_NULL_PARAM(callFunc);
+
+	callFunc(pRequest, pCurrentQatMsg);
+
+	qatUtilsAtomicInc(&(pSessionDesc->u.pendingDpCbCount));
+
+	if (CPA_TRUE == performOpNow) {
+		SalQatMsg_updateQueueTail(trans_handle);
+	}
+
+	return CPA_STATUS_SUCCESS;
+}
+
+CpaStatus
+cpaCySymDpPerformOpNow(const CpaInstanceHandle instanceHandle)
+{
+	icp_comms_trans_handle trans_handle = NULL;
+
+	LAC_CHECK_INSTANCE_HANDLE(instanceHandle);
+	SAL_CHECK_INSTANCE_TYPE(instanceHandle,
+				(SAL_SERVICE_TYPE_CRYPTO |
+				 SAL_SERVICE_TYPE_CRYPTO_SYM));
+
+	/* Check if SAL is initialised otherwise return an error */
+	SAL_RUNNING_CHECK(instanceHandle);
+
+	trans_handle =
+	    ((sal_crypto_service_t *)instanceHandle)->trans_handle_sym_tx;
+
+	if (CPA_TRUE == icp_adf_queueDataToSend(trans_handle)) {
+		SalQatMsg_updateQueueTail(trans_handle);
+	}
+
+	return CPA_STATUS_SUCCESS;
+}
+
+CpaStatus
+cpaCySymDpEnqueueOpBatch(const Cpa32U numberRequests,
+			 CpaCySymDpOpData *pRequests[],
+			 const CpaBoolean performOpNow)
+{
+	icp_qat_fw_la_bulk_req_t *pCurrentQatMsg = NULL;
+	icp_comms_trans_handle trans_handle = NULL;
+	lac_session_desc_t *pSessionDesc = NULL;
+	write_ringMsgFunc_t callFunc;
+	Cpa32U i = 0;
+
+	CpaStatus status = CPA_STATUS_SUCCESS;
+	sal_crypto_service_t *pService = NULL;
+
+	LAC_CHECK_NULL_PARAM(pRequests);
+	LAC_CHECK_NULL_PARAM(pRequests[0]);
+	LAC_CHECK_NULL_PARAM(pRequests[0]->instanceHandle);
+
+	pService = (sal_crypto_service_t *)(pRequests[0]->instanceHandle);
+
+	if ((0 == numberRequests) ||
+	    (numberRequests > pService->maxNumSymReqBatch)) {
+		LAC_INVALID_PARAM_LOG1(
+		    "The number of requests needs to be between 1 "
+		    "and %d",
+		    pService->maxNumSymReqBatch);
+		return CPA_STATUS_INVALID_PARAM;
+	}
+
+	for (i = 0; i < numberRequests; i++) {
+		status = LacDp_EnqueueParamCheck(pRequests[i]);
+		if (CPA_STATUS_SUCCESS != status) {
+			return status;
+		}
+
+		/* Check that all instance handles are the same */
+		if (pRequests[i]->instanceHandle !=
+		    pRequests[0]->instanceHandle) {
+			LAC_INVALID_PARAM_LOG(
+			    "All instance handles should be the same "
+			    "in the requests");
+			return CPA_STATUS_INVALID_PARAM;
+		}
+	}
+
+	trans_handle = ((sal_crypto_service_t *)pRequests[0]->instanceHandle)
+			   ->trans_handle_sym_tx;
+	pSessionDesc =
+	    LAC_SYM_SESSION_DESC_FROM_CTX_GET(pRequests[0]->sessionCtx);
+	icp_adf_getQueueMemory(trans_handle,
+			       numberRequests,
+			       (void **)&pCurrentQatMsg);
+	if (NULL == pCurrentQatMsg) {
+		/*
+		 * No space is available on the queue.
+		 */
+		return CPA_STATUS_RETRY;
+	}
+
+	for (i = 0; i < numberRequests; i++) {
+		pSessionDesc =
+		    LAC_SYM_SESSION_DESC_FROM_CTX_GET(pRequests[i]->sessionCtx);
+		callFunc = (write_ringMsgFunc_t)pSessionDesc->writeRingMsgFunc;
+		callFunc(pRequests[i], pCurrentQatMsg);
+		icp_adf_getQueueNext(trans_handle, (void **)&pCurrentQatMsg);
+		qatUtilsAtomicAdd(1, &(pSessionDesc->u.pendingDpCbCount));
+	}
+
+	if (CPA_TRUE == performOpNow) {
+		SalQatMsg_updateQueueTail(trans_handle);
+	}
+
+	return CPA_STATUS_SUCCESS;
+}
+
+CpaStatus
+icp_sal_CyPollDpInstance(const CpaInstanceHandle instanceHandle,
+			 const Cpa32U responseQuota)
+{
+	icp_comms_trans_handle trans_handle = NULL;
+
+	LAC_CHECK_INSTANCE_HANDLE(instanceHandle);
+	SAL_CHECK_INSTANCE_TYPE(instanceHandle,
+				(SAL_SERVICE_TYPE_CRYPTO |
+				 SAL_SERVICE_TYPE_CRYPTO_SYM));
+
+	/* Check if SAL is initialised otherwise return an error */
+	SAL_RUNNING_CHECK(instanceHandle);
+
+	trans_handle =
+	    ((sal_crypto_service_t *)instanceHandle)->trans_handle_sym_rx;
+
+	return icp_adf_pollQueue(trans_handle, responseQuota);
+}
Index: sys/dev/qat/qat_api/common/crypto/sym/lac_sym_hash.c
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/crypto/sym/lac_sym_hash.c
@@ -0,0 +1,783 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+
+/**
+ ***************************************************************************
+ * @file lac_sym_hash.c
+ *
+ * @ingroup LacHash
+ *
+ * Hash specific functionality
+ ***************************************************************************/
+
+/*
+*******************************************************************************
+* Include public/global header files
+*******************************************************************************
+*/
+
+#include "cpa.h"
+#include "cpa_cy_sym.h"
+
+#include "icp_accel_devices.h"
+#include "icp_adf_debug.h"
+
+/*
+*******************************************************************************
+* Include private header files
+*******************************************************************************
+*/
+
+#include "lac_common.h"
+#include "lac_mem.h"
+#include "lac_sym.h"
+#include "lac_session.h"
+#include "lac_sym_hash.h"
+#include "lac_log.h"
+#include "lac_sym_qat_hash.h"
+#include "lac_sym_qat_hash_defs_lookup.h"
+#include "lac_sym_cb.h"
+#include "lac_sync.h"
+
+#define LAC_HASH_ALG_MODE_NOT_SUPPORTED(alg, mode) \
+	((((CPA_CY_SYM_HASH_KASUMI_F9 == (alg)) || \
+	   (CPA_CY_SYM_HASH_SNOW3G_UIA2 == (alg)) || \
+	   (CPA_CY_SYM_HASH_AES_XCBC == (alg)) || \
+	   (CPA_CY_SYM_HASH_AES_CCM == (alg)) || \
+	   (CPA_CY_SYM_HASH_AES_GCM == (alg)) || \
+	   (CPA_CY_SYM_HASH_AES_GMAC == (alg)) || \
+	   (CPA_CY_SYM_HASH_AES_CMAC == (alg)) || \
+	   (CPA_CY_SYM_HASH_ZUC_EIA3 == (alg))) && \
+	  (CPA_CY_SYM_HASH_MODE_AUTH != (mode))) || \
+	 (((CPA_CY_SYM_HASH_SHA3_224 == (alg)) || \
+	   (CPA_CY_SYM_HASH_SHA3_256 == (alg)) || \
+	   (CPA_CY_SYM_HASH_SHA3_384 == (alg)) || \
+	   (CPA_CY_SYM_HASH_SHA3_512 == (alg))) && \
+	  (CPA_CY_SYM_HASH_MODE_NESTED == (mode))) || \
+	 (((CPA_CY_SYM_HASH_SHAKE_128 == (alg)) || \
+	   (CPA_CY_SYM_HASH_SHAKE_256 == (alg))) && \
+	  (CPA_CY_SYM_HASH_MODE_AUTH == (mode))))
+
+/**< Macro to detect an unsupported algorithm-mode combination */
+
+/**
+ * @ingroup LacHash
+ * This callback function will be invoked whenever a synchronous
+ * hash precompute operation completes. It will set the wait
+ * queue flag for the synchronous operation.
+ *
+ * @param[in] pCallbackTag  Opaque value provided by user. This will
+ *                          be a pointer to a wait queue flag.
+ *
+ * @retval
+ *     None
+ *
+ */
+static void
+LacHash_SyncPrecomputeDoneCb(void *pCallbackTag)
+{
+	LacSync_GenWakeupSyncCaller(pCallbackTag, CPA_STATUS_SUCCESS);
+}
+
+/** @ingroup LacHash */
+CpaStatus
+LacHash_StatePrefixAadBufferInit(
+    sal_service_t *pService,
+    const CpaCySymHashSetupData *pHashSetupData,
+    icp_qat_la_bulk_req_ftr_t *pReq,
+    icp_qat_hw_auth_mode_t qatHashMode,
+    Cpa8U *pHashStateBuffer,
+    lac_sym_qat_hash_state_buffer_info_t *pHashStateBufferInfo)
+{
+	/* set up the hash state prefix buffer info structure */
+	pHashStateBufferInfo->pData = pHashStateBuffer;
+
+	pHashStateBufferInfo->pDataPhys = LAC_MEM_CAST_PTR_TO_UINT64(
+	    LAC_OS_VIRT_TO_PHYS_EXTERNAL((*pService), pHashStateBuffer));
+
+	if (pHashStateBufferInfo->pDataPhys == 0) {
+		LAC_LOG_ERROR("Unable to get the physical address of "
+			      "the hash state buffer");
+		return CPA_STATUS_FAIL;
+	}
+
+	LacSymQat_HashStatePrefixAadBufferSizeGet(pReq, pHashStateBufferInfo);
+
+	/* Prefix data gets copied to the hash state buffer for nested mode */
+	if (CPA_CY_SYM_HASH_MODE_NESTED == pHashSetupData->hashMode) {
+		LacSymQat_HashStatePrefixAadBufferPopulate(
+		    pHashStateBufferInfo,
+		    pReq,
+		    pHashSetupData->nestedModeSetupData.pInnerPrefixData,
+		    pHashSetupData->nestedModeSetupData.innerPrefixLenInBytes,
+		    pHashSetupData->nestedModeSetupData.pOuterPrefixData,
+		    pHashSetupData->nestedModeSetupData.outerPrefixLenInBytes);
+	}
+	/* For mode2 HMAC the key gets copied into both the inner and
+	 * outer prefix fields */
+	else if (IS_HASH_MODE_2_AUTH(qatHashMode, pHashSetupData->hashMode)) {
+		LacSymQat_HashStatePrefixAadBufferPopulate(
+		    pHashStateBufferInfo,
+		    pReq,
+		    pHashSetupData->authModeSetupData.authKey,
+		    pHashSetupData->authModeSetupData.authKeyLenInBytes,
+		    pHashSetupData->authModeSetupData.authKey,
+		    pHashSetupData->authModeSetupData.authKeyLenInBytes);
+	}
+	/* else do nothing for the other cases */
+	return CPA_STATUS_SUCCESS;
+}
+
+/** @ingroup LacHash */
+CpaStatus
+LacHash_PrecomputeDataCreate(const CpaInstanceHandle instanceHandle,
+			     CpaCySymSessionSetupData *pSessionSetup,
+			     lac_hash_precompute_done_cb_t callbackFn,
+			     void *pCallbackTag,
+			     Cpa8U *pWorkingBuffer,
+			     Cpa8U *pState1,
+			     Cpa8U *pState2)
+{
+	CpaStatus status = CPA_STATUS_SUCCESS;
+	Cpa8U *pAuthKey = NULL;
+	Cpa32U authKeyLenInBytes = 0;
+	CpaCySymHashAlgorithm hashAlgorithm =
+	    pSessionSetup->hashSetupData.hashAlgorithm;
+	CpaCySymHashAuthModeSetupData *pAuthModeSetupData =
+	    &pSessionSetup->hashSetupData.authModeSetupData;
+
+	/* synchronous operation */
+	if (NULL == callbackFn) {
+		lac_sync_op_data_t *pSyncCallbackData = NULL;
+
+		status = LacSync_CreateSyncCookie(&pSyncCallbackData);
+
+		if (CPA_STATUS_SUCCESS == status) {
+			status = LacHash_PrecomputeDataCreate(
+			    instanceHandle,
+			    pSessionSetup,
+			    LacHash_SyncPrecomputeDoneCb,
+			    /* wait queue condition from sync cookie */
+			    pSyncCallbackData,
+			    pWorkingBuffer,
+			    pState1,
+			    pState2);
+		} else {
+			return status;
+		}
+
+		if (CPA_STATUS_SUCCESS == status) {
+			CpaStatus syncStatus = CPA_STATUS_SUCCESS;
+
+			syncStatus = LacSync_WaitForCallback(
+			    pSyncCallbackData,
+			    LAC_SYM_SYNC_CALLBACK_TIMEOUT,
+			    &status,
+			    NULL);
+
+			/* If the callback doesn't come back */
+			if (CPA_STATUS_SUCCESS != syncStatus) {
+				QAT_UTILS_LOG(
+				    "callback functions for precomputes did not return\n");
+				status = syncStatus;
+			}
+		} else {
+			/* As the request was not sent the callback will never
+			 * be called, so we need to indicate that we're
+			 * finished with the cookie so it can be destroyed. */
+			LacSync_SetSyncCookieComplete(pSyncCallbackData);
+		}
+		LacSync_DestroySyncCookie(&pSyncCallbackData);
+
+		return status;
+	}
+
+	/* set up convenience pointers */
+	pAuthKey = pAuthModeSetupData->authKey;
+	authKeyLenInBytes = pAuthModeSetupData->authKeyLenInBytes;
+
+	/* Pre-compute data state pointers must already be set up
+	 * by LacSymQat_HashSetupBlockInit()
+	 */
+
+	/* state1 is not allocated for AES XCBC/CCM/GCM/Kasumi/UIA2
+	 * so for these algorithms set state2 only */
+	if (CPA_CY_SYM_HASH_AES_XCBC == hashAlgorithm) {
+		status = LacSymHash_AesECBPreCompute(instanceHandle,
+						     hashAlgorithm,
+						     authKeyLenInBytes,
+						     pAuthKey,
+						     pWorkingBuffer,
+						     pState2,
+						     callbackFn,
+						     pCallbackTag);
+	} else if (CPA_CY_SYM_HASH_AES_CMAC == hashAlgorithm) {
+		/* First, copy the original key to pState2 */
+		memcpy(pState2, pAuthKey, authKeyLenInBytes);
+		/* Then precompute */
+		status = LacSymHash_AesECBPreCompute(instanceHandle,
+						     hashAlgorithm,
+						     authKeyLenInBytes,
+						     pAuthKey,
+						     pWorkingBuffer,
+						     pState2,
+						     callbackFn,
+						     pCallbackTag);
+	} else if (CPA_CY_SYM_HASH_AES_CCM == hashAlgorithm) {
+		/*
+		 * The Inner Hash Initial State2 block must contain K
+		 * (the cipher key) and 16 zeroes which will be replaced with
+		 * EK(Ctr0) by the QAT-ME.
+		 */
+
+		/* write the auth key which for CCM is equivalent to cipher key
+		 */
+		memcpy(pState2,
+		       pSessionSetup->cipherSetupData.pCipherKey,
+		       pSessionSetup->cipherSetupData.cipherKeyLenInBytes);
+
+		/* initialize remaining buffer space to all zeroes */
+		LAC_OS_BZERO(
+		    pState2 +
+			pSessionSetup->cipherSetupData.cipherKeyLenInBytes,
+		    ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ);
+
+		/* There is no request sent to the QAT for this operation,
+		 * so just invoke the user's callback directly to signal
+		 * completion of the precompute
+		 */
+		callbackFn(pCallbackTag);
+	} else if (CPA_CY_SYM_HASH_AES_GCM == hashAlgorithm ||
+		   CPA_CY_SYM_HASH_AES_GMAC == hashAlgorithm) {
+		/*
+		 * The Inner Hash Initial State2 block contains the following:
+		 * H (the Galois Hash Multiplier),
+		 * len(A) (the length of A, before padding), and
+		 * 16 zeroes which will be replaced with EK(Ctr0) by the
+		 * QAT.
+		 */
+
+		/* Memset state2 to 0 */
+		LAC_OS_BZERO(pState2,
+			     ICP_QAT_HW_GALOIS_H_SZ +
+				 ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				 ICP_QAT_HW_GALOIS_E_CTR0_SZ);
+
+		/* write H (the Galois Hash Multiplier) where H = E(K, 0...0)
+		 * This will only write bytes 0-15 of pState2
+		 */
+		status = LacSymHash_AesECBPreCompute(
+		    instanceHandle,
+		    hashAlgorithm,
+		    pSessionSetup->cipherSetupData.cipherKeyLenInBytes,
+		    pSessionSetup->cipherSetupData.pCipherKey,
+		    pWorkingBuffer,
+		    pState2,
+		    callbackFn,
+		    pCallbackTag);
+
+		if (CPA_STATUS_SUCCESS == status) {
+			/* write len(A) (the length of A) into bytes 16-19 of
+			 * pState2 in big-endian format. This field is 8 bytes
+			 */
+			*(Cpa32U *)&pState2[ICP_QAT_HW_GALOIS_H_SZ] =
+			    LAC_MEM_WR_32(pAuthModeSetupData->aadLenInBytes);
+		}
+	} else if (CPA_CY_SYM_HASH_KASUMI_F9 == hashAlgorithm) {
+		Cpa32U wordIndex = 0;
+		Cpa32U *pTempKey = (Cpa32U *)(pState2 + authKeyLenInBytes);
+		/*
+		 * The Inner Hash Initial State2 block must contain IK
+		 * (Initialisation Key), followed by IK XOR-ed with KM
+		 * (Key Modifier): IK||(IK^KM).
+		 */
+
+		/* write the auth key */
+		memcpy(pState2, pAuthKey, authKeyLenInBytes);
+		/* initialise temp key with auth key */
+		memcpy(pTempKey, pAuthKey, authKeyLenInBytes);
+
+		/* XOR Key with KASUMI F9 key modifier at 4 bytes level */
+		for (wordIndex = 0;
+		     wordIndex < LAC_BYTES_TO_LONGWORDS(authKeyLenInBytes);
+		     wordIndex++) {
+			pTempKey[wordIndex] ^=
+			    LAC_HASH_KASUMI_F9_KEY_MODIFIER_4_BYTES;
+		}
+		/* There is no request sent to the QAT for this operation,
+		 * so just invoke the user's callback directly to signal
+		 * completion of the precompute
+		 */
+		callbackFn(pCallbackTag);
+	} else if (CPA_CY_SYM_HASH_SNOW3G_UIA2 == hashAlgorithm) {
+		/*
+		 * The Inner Hash Initial State2 should be all zeros
+		 */
+		LAC_OS_BZERO(pState2, ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ);
+
+		/* There is no request sent to the QAT for this operation,
+		 * so just invoke the user's callback directly to signal
+		 * completion of the precompute
+		 */
+		callbackFn(pCallbackTag);
+	} else if (CPA_CY_SYM_HASH_ZUC_EIA3 == hashAlgorithm) {
+		/*
+		 * The Inner Hash Initial State2 should contain the key
+		 * and zero the rest of the state.
+ */ + LAC_OS_BZERO(pState2, ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ); + memcpy(pState2, pAuthKey, authKeyLenInBytes); + + /* There is no request sent to the QAT for this operation, + * so just invoke the user's callback directly to signal + * completion of the precompute + */ + callbackFn(pCallbackTag); + } else if (CPA_CY_SYM_HASH_POLY == hashAlgorithm) { + /* There is no request sent to the QAT for this operation, + * so just invoke the user's callback directly to signal + * completion of the precompute + */ + callbackFn(pCallbackTag); + } else /* For Hmac Precomputes */ + { + status = LacSymHash_HmacPreComputes(instanceHandle, + hashAlgorithm, + authKeyLenInBytes, + pAuthKey, + pWorkingBuffer, + pState1, + pState2, + callbackFn, + pCallbackTag); + } + + return status; +} + + +/** @ingroup LacHash */ +CpaStatus +LacHash_HashContextCheck(CpaInstanceHandle instanceHandle, + const CpaCySymHashSetupData *pHashSetupData) +{ + lac_sym_qat_hash_alg_info_t *pHashAlgInfo = NULL; + lac_sym_qat_hash_alg_info_t *pOuterHashAlgInfo = NULL; + CpaCySymCapabilitiesInfo capInfo; + + /*Protect against value of hash outside the bitmap*/ + if ((pHashSetupData->hashAlgorithm) >= + CPA_CY_SYM_HASH_CAP_BITMAP_SIZE) { + LAC_INVALID_PARAM_LOG("hashAlgorithm"); + return CPA_STATUS_INVALID_PARAM; + } + cpaCySymQueryCapabilities(instanceHandle, &capInfo); + if (!CPA_BITMAP_BIT_TEST(capInfo.hashes, + pHashSetupData->hashAlgorithm) && + pHashSetupData->hashAlgorithm != CPA_CY_SYM_HASH_AES_CBC_MAC) { + /* Ensure SHAKE algorithms are not supported */ + if ((CPA_CY_SYM_HASH_SHAKE_128 == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_SHAKE_256 == + pHashSetupData->hashAlgorithm)) { + LAC_INVALID_PARAM_LOG( + "Hash algorithms SHAKE-128 and SHAKE-256 " + "are not supported."); + return CPA_STATUS_UNSUPPORTED; + } + + LAC_INVALID_PARAM_LOG("hashAlgorithm"); + return CPA_STATUS_INVALID_PARAM; + } + + switch (pHashSetupData->hashMode) { + case CPA_CY_SYM_HASH_MODE_PLAIN: + case 
CPA_CY_SYM_HASH_MODE_AUTH: + case CPA_CY_SYM_HASH_MODE_NESTED: + break; + + default: { + LAC_INVALID_PARAM_LOG("hashMode"); + return CPA_STATUS_INVALID_PARAM; + } + } + + if (LAC_HASH_ALG_MODE_NOT_SUPPORTED(pHashSetupData->hashAlgorithm, + pHashSetupData->hashMode)) { + LAC_INVALID_PARAM_LOG("hashAlgorithm and hashMode combination"); + return CPA_STATUS_INVALID_PARAM; + } + + LacSymQat_HashAlgLookupGet(instanceHandle, + pHashSetupData->hashAlgorithm, + &pHashAlgInfo); + + /* note: nested hash mode checks digest length against outer algorithm + */ + if ((CPA_CY_SYM_HASH_MODE_PLAIN == pHashSetupData->hashMode) || + (CPA_CY_SYM_HASH_MODE_AUTH == pHashSetupData->hashMode)) { + /* Check Digest Length is permitted by the algorithm */ + if ((0 == pHashSetupData->digestResultLenInBytes) || + (pHashSetupData->digestResultLenInBytes > + pHashAlgInfo->digestLength)) { + LAC_INVALID_PARAM_LOG("digestResultLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } + + if (CPA_CY_SYM_HASH_MODE_AUTH == pHashSetupData->hashMode) { + if (CPA_CY_SYM_HASH_AES_GCM == pHashSetupData->hashAlgorithm || + CPA_CY_SYM_HASH_AES_GMAC == pHashSetupData->hashAlgorithm) { + Cpa32U aadDataSize = 0; + + /* RFC 4106: Implementations MUST support a full-length + * 16-octet + * ICV, and MAY support 8 or 12 octet ICVs, and MUST NOT + * support + * other ICV lengths. */ + if ((pHashSetupData->digestResultLenInBytes != + LAC_HASH_AES_GCM_ICV_SIZE_8) && + (pHashSetupData->digestResultLenInBytes != + LAC_HASH_AES_GCM_ICV_SIZE_12) && + (pHashSetupData->digestResultLenInBytes != + LAC_HASH_AES_GCM_ICV_SIZE_16)) { + LAC_INVALID_PARAM_LOG("digestResultLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + + /* ensure aadLen is within maximum limit imposed by QAT + */ + aadDataSize = + pHashSetupData->authModeSetupData.aadLenInBytes; + + /* round the aad size to the multiple of GCM hash block + * size. 
*/ + aadDataSize = + LAC_ALIGN_POW2_ROUNDUP(aadDataSize, + LAC_HASH_AES_GCM_BLOCK_SIZE); + + if (aadDataSize > ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX && + CPA_CY_SYM_HASH_AES_GMAC != + pHashSetupData->hashAlgorithm) { + LAC_INVALID_PARAM_LOG("aadLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (CPA_CY_SYM_HASH_AES_CCM == + pHashSetupData->hashAlgorithm) { + Cpa32U aadDataSize = 0; + + /* RFC 3610: Valid values are 4, 6, 8, 10, 12, 14, and + * 16 octets */ + if ((pHashSetupData->digestResultLenInBytes >= + LAC_HASH_AES_CCM_ICV_SIZE_MIN) && + (pHashSetupData->digestResultLenInBytes <= + LAC_HASH_AES_CCM_ICV_SIZE_MAX)) { + if ((pHashSetupData->digestResultLenInBytes & + 0x01) != 0) { + LAC_INVALID_PARAM_LOG( + "digestResultLenInBytes must be a multiple of 2"); + return CPA_STATUS_INVALID_PARAM; + } + } else { + LAC_INVALID_PARAM_LOG("digestResultLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + + /* ensure aadLen is within maximum limit imposed by QAT + */ + /* at the beginning of the buffer there is B0 block */ + aadDataSize = LAC_HASH_AES_CCM_BLOCK_SIZE; + + /* then, if there is some 'a' data, the buffer will + * store encoded + * length of 'a' and 'a' itself */ + if (pHashSetupData->authModeSetupData.aadLenInBytes > + 0) { + /* as the QAT API puts the requirement on the + * pAdditionalAuthData not to be bigger than 240 + * bytes then we + * just need 2 bytes to store encoded length of + * 'a' */ + aadDataSize += sizeof(Cpa16U); + aadDataSize += pHashSetupData->authModeSetupData + .aadLenInBytes; + } + + /* round the aad size to the multiple of CCM block + * size.*/ + aadDataSize = + LAC_ALIGN_POW2_ROUNDUP(aadDataSize, + LAC_HASH_AES_CCM_BLOCK_SIZE); + if (aadDataSize > ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX) { + LAC_INVALID_PARAM_LOG("aadLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (CPA_CY_SYM_HASH_KASUMI_F9 == + pHashSetupData->hashAlgorithm) { + /* QAT-FW only supports 128 bit Integrity Key size for + * Kasumi f9 + * Ref: 3GPP TS 35.201 
version 7.0.0 Release 7 */ + if (pHashSetupData->authModeSetupData + .authKeyLenInBytes != + ICP_QAT_HW_KASUMI_KEY_SZ) { + LAC_INVALID_PARAM_LOG("authKeyLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (CPA_CY_SYM_HASH_SNOW3G_UIA2 == + pHashSetupData->hashAlgorithm) { + + /* QAT-FW only supports 128 bits Integrity Key size for + * Snow3g */ + if (pHashSetupData->authModeSetupData + .authKeyLenInBytes != + ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ) { + LAC_INVALID_PARAM_LOG("authKeyLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + /* For Snow3g hash aad field contains IV - it needs to + * be 16 + * bytes long + */ + if (pHashSetupData->authModeSetupData.aadLenInBytes != + ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ) { + LAC_INVALID_PARAM_LOG("aadLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (CPA_CY_SYM_HASH_AES_XCBC == + pHashSetupData->hashAlgorithm || + CPA_CY_SYM_HASH_AES_CMAC == + pHashSetupData->hashAlgorithm || + CPA_CY_SYM_HASH_AES_CBC_MAC == + pHashSetupData->hashAlgorithm) { + /* ensure auth key len is valid (128-bit keys supported) + */ + if ((pHashSetupData->authModeSetupData + .authKeyLenInBytes != + ICP_QAT_HW_AES_128_KEY_SZ)) { + LAC_INVALID_PARAM_LOG("authKeyLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (CPA_CY_SYM_HASH_ZUC_EIA3 == + pHashSetupData->hashAlgorithm) { + + /* QAT-FW only supports 128 bits Integrity Key size for + * ZUC */ + if (pHashSetupData->authModeSetupData + .authKeyLenInBytes != + ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ) { + LAC_INVALID_PARAM_LOG("authKeyLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + /* For ZUC EIA3 hash aad field contains IV - it needs to + * be 16 + * bytes long + */ + if (pHashSetupData->authModeSetupData.aadLenInBytes != + ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ) { + LAC_INVALID_PARAM_LOG("aadLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } else if (CPA_CY_SYM_HASH_POLY == + pHashSetupData->hashAlgorithm) { + if (pHashSetupData->digestResultLenInBytes != + ICP_QAT_HW_SPC_CTR_SZ) { + 
LAC_INVALID_PARAM_LOG("Digest Length for CCP"); + return CPA_STATUS_INVALID_PARAM; + } + if (pHashSetupData->authModeSetupData.aadLenInBytes > + ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX) { + LAC_INVALID_PARAM_LOG("AAD Length for CCP"); + return CPA_STATUS_INVALID_PARAM; + } + } else { + /* The key size must be less than or equal the block + * length */ + if (pHashSetupData->authModeSetupData + .authKeyLenInBytes > + pHashAlgInfo->blockLength) { + LAC_INVALID_PARAM_LOG("authKeyLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* when the key size is greater than 0 check pointer is not null + */ + if (CPA_CY_SYM_HASH_AES_CCM != pHashSetupData->hashAlgorithm && + CPA_CY_SYM_HASH_AES_GCM != pHashSetupData->hashAlgorithm && + pHashSetupData->authModeSetupData.authKeyLenInBytes > 0) { + LAC_CHECK_NULL_PARAM( + pHashSetupData->authModeSetupData.authKey); + } + } else if (CPA_CY_SYM_HASH_MODE_NESTED == pHashSetupData->hashMode) { + if (!CPA_BITMAP_BIT_TEST(capInfo.hashes, + pHashSetupData->nestedModeSetupData + .outerHashAlgorithm)) { + /* Ensure SHAKE algorithms are not supported */ + if ((CPA_CY_SYM_HASH_SHAKE_128 == + pHashSetupData->nestedModeSetupData + .outerHashAlgorithm) || + (CPA_CY_SYM_HASH_SHAKE_256 == + pHashSetupData->nestedModeSetupData + .outerHashAlgorithm)) { + LAC_INVALID_PARAM_LOG( + "Hash algorithms SHAKE-128 and SHAKE-256 " + "are not supported."); + return CPA_STATUS_UNSUPPORTED; + } + + LAC_INVALID_PARAM_LOG("outerHashAlgorithm"); + return CPA_STATUS_INVALID_PARAM; + } + + if (LAC_HASH_ALG_MODE_NOT_SUPPORTED( + pHashSetupData->nestedModeSetupData.outerHashAlgorithm, + pHashSetupData->hashMode)) { + LAC_INVALID_PARAM_LOG( + "outerHashAlgorithm and hashMode combination"); + return CPA_STATUS_INVALID_PARAM; + } + + LacSymQat_HashAlgLookupGet( + instanceHandle, + pHashSetupData->nestedModeSetupData.outerHashAlgorithm, + &pOuterHashAlgInfo); + + /* Check Digest Length is permitted by the algorithm */ + if ((0 == pHashSetupData->digestResultLenInBytes) || 
+ (pHashSetupData->digestResultLenInBytes > + pOuterHashAlgInfo->digestLength)) { + LAC_INVALID_PARAM_LOG("digestResultLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pHashSetupData->nestedModeSetupData.innerPrefixLenInBytes > + LAC_MAX_INNER_OUTER_PREFIX_SIZE_BYTES) { + LAC_INVALID_PARAM_LOG("innerPrefixLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pHashSetupData->nestedModeSetupData.innerPrefixLenInBytes > + 0) { + LAC_CHECK_NULL_PARAM(pHashSetupData->nestedModeSetupData + .pInnerPrefixData); + } + + if (pHashSetupData->nestedModeSetupData.outerPrefixLenInBytes > + LAC_MAX_INNER_OUTER_PREFIX_SIZE_BYTES) { + LAC_INVALID_PARAM_LOG("outerPrefixLenInBytes"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pHashSetupData->nestedModeSetupData.outerPrefixLenInBytes > + 0) { + LAC_CHECK_NULL_PARAM(pHashSetupData->nestedModeSetupData + .pOuterPrefixData); + } + } + + return CPA_STATUS_SUCCESS; +} + +/** @ingroup LacHash */ +CpaStatus +LacHash_PerformParamCheck(CpaInstanceHandle instanceHandle, + lac_session_desc_t *pSessionDesc, + const CpaCySymOpData *pOpData, + Cpa64U srcPktSize, + const CpaBoolean *pVerifyResult) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + lac_sym_qat_hash_alg_info_t *pHashAlgInfo = NULL; + + /* digestVerify and digestIsAppended on Hash-Only operation not + * supported */ + if (pSessionDesc->digestIsAppended && pSessionDesc->digestVerify && + (CPA_CY_SYM_OP_HASH == pSessionDesc->symOperation)) { + LAC_INVALID_PARAM_LOG( + "digestVerify and digestIsAppended set " + "on Hash-Only operation is not supported"); + return CPA_STATUS_INVALID_PARAM; + } + + /* check the digest result pointer */ + if ((CPA_CY_SYM_PACKET_TYPE_PARTIAL != pOpData->packetType) && + !pSessionDesc->digestIsAppended && + (NULL == pOpData->pDigestResult)) { + LAC_INVALID_PARAM_LOG("pDigestResult is NULL"); + return CPA_STATUS_INVALID_PARAM; + } + + /* + * Check if the pVerifyResult pointer is not null for hash operation + * when + * the packet is the last 
one and user has set verifyDigest flag + * Also, this is only needed for synchronous operation, so check if the + * callback pointer is the internal synchronous one rather than a user- + * supplied one. + */ + if ((CPA_TRUE == pSessionDesc->digestVerify) && + (CPA_CY_SYM_PACKET_TYPE_PARTIAL != pOpData->packetType) && + (LacSync_GenBufListVerifyCb == pSessionDesc->pSymCb)) { + if (NULL == pVerifyResult) { + LAC_INVALID_PARAM_LOG( + "Null pointer pVerifyResult for hash op"); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* verify start offset + messageLenToDigest is inside the source packet. + * this also verifies that the start offset is inside the packet + * Note: digest is specified as a pointer therefore it can be + * written anywhere so we cannot check for this being inside a buffer + * CCM/GCM specify the auth region using just the cipher params as this + * region is the same for auth and cipher. It is not checked here */ + if ((CPA_CY_SYM_HASH_AES_CCM == pSessionDesc->hashAlgorithm) || + (CPA_CY_SYM_HASH_AES_GCM == pSessionDesc->hashAlgorithm)) { + /* ensure AAD data pointer is non-NULL if AAD len > 0 */ + if ((pSessionDesc->aadLenInBytes > 0) && + (NULL == pOpData->pAdditionalAuthData)) { + LAC_INVALID_PARAM_LOG("pAdditionalAuthData is NULL"); + return CPA_STATUS_INVALID_PARAM; + } + } else { + if ((pOpData->hashStartSrcOffsetInBytes + + pOpData->messageLenToHashInBytes) > srcPktSize) { + LAC_INVALID_PARAM_LOG( + "hashStartSrcOffsetInBytes + " + "messageLenToHashInBytes > Src Buffer Packet Length"); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* For Snow3g & ZUC hash pAdditionalAuthData field + * of OpData should contain IV */ + if ((CPA_CY_SYM_HASH_SNOW3G_UIA2 == pSessionDesc->hashAlgorithm) || + (CPA_CY_SYM_HASH_ZUC_EIA3 == pSessionDesc->hashAlgorithm)) { + if (NULL == pOpData->pAdditionalAuthData) { + LAC_INVALID_PARAM_LOG("pAdditionalAuthData is NULL"); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* partial packets need to be multiples of the
algorithm block size in + * hash + * only mode (except for final partial packet) */ + if ((CPA_CY_SYM_PACKET_TYPE_PARTIAL == pOpData->packetType) && + (CPA_CY_SYM_OP_HASH == pSessionDesc->symOperation)) { + LacSymQat_HashAlgLookupGet(instanceHandle, + pSessionDesc->hashAlgorithm, + &pHashAlgInfo); + + /* check if the message is a multiple of the block size. */ + if ((pOpData->messageLenToHashInBytes % + pHashAlgInfo->blockLength) != 0) { + LAC_INVALID_PARAM_LOG( + "messageLenToHashInBytes not block size"); + return CPA_STATUS_INVALID_PARAM; + } + } + + return status; +} + Index: sys/dev/qat/qat_api/common/crypto/sym/lac_sym_hash_sw_precomputes.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/lac_sym_hash_sw_precomputes.c @@ -0,0 +1,353 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_hash_sw_precomputes.c + * + * @ingroup LacHashDefs + * + * Hash Software + ***************************************************************************/ + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" + +#include "icp_accel_devices.h" +#include "icp_adf_init.h" +#include "icp_adf_transport.h" +#include "icp_adf_debug.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ + +#include "qat_utils.h" +#include "lac_mem.h" +#include "lac_sym.h" +#include "lac_log.h" +#include "lac_mem_pools.h" +#include "lac_list.h" +#include "lac_sym_hash_defs.h" +#include "lac_sym_qat_hash_defs_lookup.h" +#include 
"lac_sal_types_crypto.h" +#include "lac_sal.h" +#include "lac_session.h" +#include "lac_sym_hash_precomputes.h" + +static CpaStatus +LacSymHash_Compute(CpaCySymHashAlgorithm hashAlgorithm, + lac_sym_qat_hash_alg_info_t *pHashAlgInfo, + Cpa8U *in, + Cpa8U *out) +{ + /* + * Note: from SHA hashes appropriate endian swapping is required. + * For sha1, sha224 and sha256 double words based swapping. + * For sha384 and sha512 quad words swapping. + * No endianes swapping for md5 is required. + */ + CpaStatus status = CPA_STATUS_FAIL; + Cpa32U i = 0; + switch (hashAlgorithm) { + case CPA_CY_SYM_HASH_MD5: + if (CPA_STATUS_SUCCESS != qatUtilsHashMD5(in, out)) { + LAC_LOG_ERROR("qatUtilsHashMD5 Failed\n"); + return status; + } + status = CPA_STATUS_SUCCESS; + break; + case CPA_CY_SYM_HASH_SHA1: + if (CPA_STATUS_SUCCESS != qatUtilsHashSHA1(in, out)) { + LAC_LOG_ERROR("qatUtilsHashSHA1 Failed\n"); + return status; + } + for (i = 0; i < LAC_BYTES_TO_LONGWORDS(pHashAlgInfo->stateSize); + i++) { + ((Cpa32U *)(out))[i] = + LAC_MEM_WR_32(((Cpa32U *)(out))[i]); + } + status = CPA_STATUS_SUCCESS; + break; + case CPA_CY_SYM_HASH_SHA224: + if (CPA_STATUS_SUCCESS != qatUtilsHashSHA224(in, out)) { + LAC_LOG_ERROR("qatUtilsHashSHA224 Failed\n"); + return status; + } + for (i = 0; i < LAC_BYTES_TO_LONGWORDS(pHashAlgInfo->stateSize); + i++) { + ((Cpa32U *)(out))[i] = + LAC_MEM_WR_32(((Cpa32U *)(out))[i]); + } + status = CPA_STATUS_SUCCESS; + break; + case CPA_CY_SYM_HASH_SHA256: + if (CPA_STATUS_SUCCESS != qatUtilsHashSHA256(in, out)) { + LAC_LOG_ERROR("qatUtilsHashSHA256 Failed\n"); + return status; + } + for (i = 0; i < LAC_BYTES_TO_LONGWORDS(pHashAlgInfo->stateSize); + i++) { + ((Cpa32U *)(out))[i] = + LAC_MEM_WR_32(((Cpa32U *)(out))[i]); + } + status = CPA_STATUS_SUCCESS; + break; + case CPA_CY_SYM_HASH_SHA384: + if (CPA_STATUS_SUCCESS != qatUtilsHashSHA384(in, out)) { + LAC_LOG_ERROR("qatUtilsHashSHA384 Failed\n"); + return status; + } + for (i = 0; i < 
LAC_BYTES_TO_QUADWORDS(pHashAlgInfo->stateSize); + i++) { + ((Cpa64U *)(out))[i] = + LAC_MEM_WR_64(((Cpa64U *)(out))[i]); + } + status = CPA_STATUS_SUCCESS; + break; + case CPA_CY_SYM_HASH_SHA512: + if (CPA_STATUS_SUCCESS != qatUtilsHashSHA512(in, out)) { + LAC_LOG_ERROR("qatUtilsHashSHA512 Failed\n"); + return status; + } + for (i = 0; i < LAC_BYTES_TO_QUADWORDS(pHashAlgInfo->stateSize); + i++) { + ((Cpa64U *)(out))[i] = + LAC_MEM_WR_64(((Cpa64U *)(out))[i]); + } + status = CPA_STATUS_SUCCESS; + break; + default: + return CPA_STATUS_INVALID_PARAM; + } + return status; +} + +CpaStatus +LacSymHash_HmacPreComputes(CpaInstanceHandle instanceHandle, + CpaCySymHashAlgorithm hashAlgorithm, + Cpa32U authKeyLenInBytes, + Cpa8U *pAuthKey, + Cpa8U *pWorkingMemory, + Cpa8U *pState1, + Cpa8U *pState2, + lac_hash_precompute_done_cb_t callbackFn, + void *pCallbackTag) +{ + Cpa8U *pIpadData = NULL; + Cpa8U *pOpadData = NULL; + CpaStatus status = CPA_STATUS_FAIL; + lac_sym_hash_precomp_op_data_t *pHmacIpadOpData = + (lac_sym_hash_precomp_op_data_t *)pWorkingMemory; + lac_sym_hash_precomp_op_data_t *pHmacOpadOpData = pHmacIpadOpData + 1; + + /* Convenience pointers */ + lac_sym_hash_hmac_precomp_qat_t *pHmacIpadQatData = + &pHmacIpadOpData->u.hmacQatData; + lac_sym_hash_hmac_precomp_qat_t *pHmacOpadQatData = + &pHmacOpadOpData->u.hmacQatData; + + lac_sym_qat_hash_alg_info_t *pHashAlgInfo = NULL; + Cpa32U i = 0; + Cpa32U padLenBytes = 0; + + LacSymQat_HashAlgLookupGet(instanceHandle, + hashAlgorithm, + &pHashAlgInfo); + pHmacIpadOpData->stateSize = pHashAlgInfo->stateSize; + pHmacOpadOpData->stateSize = pHashAlgInfo->stateSize; + + /* Copy HMAC key into buffers */ + if (authKeyLenInBytes > 0) { + memcpy(pHmacIpadQatData->data, pAuthKey, authKeyLenInBytes); + memcpy(pHmacOpadQatData->data, pAuthKey, authKeyLenInBytes); + } + + padLenBytes = pHashAlgInfo->blockLength - authKeyLenInBytes; + + /* Clear the remaining buffer space */ + if (padLenBytes > 0) { + 
LAC_OS_BZERO(pHmacIpadQatData->data + authKeyLenInBytes, + padLenBytes); + LAC_OS_BZERO(pHmacOpadQatData->data + authKeyLenInBytes, + padLenBytes); + } + + /* XOR the key with the ipad/opad constants, byte by byte */ + for (i = 0; i < pHashAlgInfo->blockLength; i++) { + Cpa8U *ipad = pHmacIpadQatData->data + i; + Cpa8U *opad = pHmacOpadQatData->data + i; + + *ipad ^= LAC_HASH_IPAD_BYTE; + *opad ^= LAC_HASH_OPAD_BYTE; + } + pIpadData = (Cpa8U *)pHmacIpadQatData->data; + pOpadData = (Cpa8U *)pHmacOpadQatData->data; + + status = LacSymHash_Compute(hashAlgorithm, + pHashAlgInfo, + (Cpa8U *)pIpadData, + pState1); + + if (CPA_STATUS_SUCCESS == status) { + status = LacSymHash_Compute(hashAlgorithm, + pHashAlgInfo, + (Cpa8U *)pOpadData, + pState2); + } + + if (CPA_STATUS_SUCCESS == status) { + callbackFn(pCallbackTag); + } + return status; +} + +CpaStatus +LacSymHash_AesECBPreCompute(CpaInstanceHandle instanceHandle, + CpaCySymHashAlgorithm hashAlgorithm, + Cpa32U authKeyLenInBytes, + Cpa8U *pAuthKey, + Cpa8U *pWorkingMemory, + Cpa8U *pState, + lac_hash_precompute_done_cb_t callbackFn, + void *pCallbackTag) +{ + CpaStatus status = CPA_STATUS_FAIL; + Cpa32U stateSize = 0, x = 0; + lac_sym_qat_hash_alg_info_t *pHashAlgInfo = NULL; + + if (CPA_CY_SYM_HASH_AES_XCBC == hashAlgorithm) { + Cpa8U *in = pWorkingMemory; + Cpa8U *out = pState; + LacSymQat_HashAlgLookupGet(instanceHandle, + hashAlgorithm, + &pHashAlgInfo); + stateSize = pHashAlgInfo->stateSize; + memcpy(pWorkingMemory, pHashAlgInfo->initState, stateSize); + + for (x = 0; x < LAC_HASH_XCBC_PRECOMP_KEY_NUM; x++) { + if (CPA_STATUS_SUCCESS != + qatUtilsAESEncrypt( + pAuthKey, authKeyLenInBytes, in, out)) { + return status; + } + in += LAC_HASH_XCBC_MAC_BLOCK_SIZE; + out += LAC_HASH_XCBC_MAC_BLOCK_SIZE; + } + status = CPA_STATUS_SUCCESS; + } else if (CPA_CY_SYM_HASH_AES_CMAC == hashAlgorithm) { + Cpa8U *out = pState; + Cpa8U k1[LAC_HASH_CMAC_BLOCK_SIZE], + k2[LAC_HASH_CMAC_BLOCK_SIZE]; + Cpa8U *ptr = NULL, i = 0; + stateSize = 
LAC_HASH_CMAC_BLOCK_SIZE; + LacSymQat_HashAlgLookupGet(instanceHandle, + hashAlgorithm, + &pHashAlgInfo); + /* Original state size includes K, K1 and K2, which are of equal + * length. + * For the precompute, the state size is only the length of K, + * which is equal + * to the block size for CPA_CY_SYM_HASH_AES_CMAC. + * The algorithm is described in RFC 4493. + * K is just copied; K1 and K2 each need a single in-place + * encrypt with AES. + * */ + memcpy(out, pHashAlgInfo->initState, stateSize); + memcpy(out, pAuthKey, authKeyLenInBytes); + out += LAC_HASH_CMAC_BLOCK_SIZE; + + for (x = 0; x < LAC_HASH_XCBC_PRECOMP_KEY_NUM - 1; x++) { + if (CPA_STATUS_SUCCESS != + qatUtilsAESEncrypt( + pAuthKey, authKeyLenInBytes, out, out)) { + return status; + } + out += LAC_HASH_CMAC_BLOCK_SIZE; + } + + ptr = pState + LAC_HASH_CMAC_BLOCK_SIZE; + + /* Derived keys (k1 and k2), copy them to + * pPrecompOpData->pState, + * but remember that at the beginning is original key (K0) + */ + /* Calculating K1 */ + for (i = 0; i < LAC_HASH_CMAC_BLOCK_SIZE; i++, ptr++) { + k1[i] = (*ptr) << 1; + if (i != 0) { + k1[i - 1] |= + (*ptr) >> (LAC_NUM_BITS_IN_BYTE - 1); + } + if (i + 1 == LAC_HASH_CMAC_BLOCK_SIZE) { + /* If msb of pState + LAC_HASH_CMAC_BLOCK_SIZE + is set xor + with RB. 
Because only the final byte of RB is + non-zero + this is all we need to xor */ + if ((*(pState + LAC_HASH_CMAC_BLOCK_SIZE)) & + LAC_SYM_HASH_MSBIT_MASK) { + k1[i] ^= LAC_SYM_AES_CMAC_RB_128; + } + } + } + + /* Calculating K2 */ + for (i = 0; i < LAC_HASH_CMAC_BLOCK_SIZE; i++) { + k2[i] = (k1[i]) << 1; + if (i != 0) { + k2[i - 1] |= + (k1[i]) >> (LAC_NUM_BITS_IN_BYTE - 1); + } + if (i + 1 == LAC_HASH_CMAC_BLOCK_SIZE) { + /* If msb of k1 is set xor last byte with RB */ + if (k1[0] & LAC_SYM_HASH_MSBIT_MASK) { + k2[i] ^= LAC_SYM_AES_CMAC_RB_128; + } + } + } + /* Now, when we have K1 & K2 lets copy them to the state2 */ + ptr = pState + LAC_HASH_CMAC_BLOCK_SIZE; + memcpy(ptr, k1, LAC_HASH_CMAC_BLOCK_SIZE); + ptr += LAC_HASH_CMAC_BLOCK_SIZE; + memcpy(ptr, k2, LAC_HASH_CMAC_BLOCK_SIZE); + status = CPA_STATUS_SUCCESS; + } else if (CPA_CY_SYM_HASH_AES_GCM == hashAlgorithm || + CPA_CY_SYM_HASH_AES_GMAC == hashAlgorithm) { + Cpa8U *in = pWorkingMemory; + Cpa8U *out = pState; + LAC_OS_BZERO(pWorkingMemory, ICP_QAT_HW_GALOIS_H_SZ); + + if (CPA_STATUS_SUCCESS != + qatUtilsAESEncrypt(pAuthKey, authKeyLenInBytes, in, out)) { + return status; + } + status = CPA_STATUS_SUCCESS; + } else { + return CPA_STATUS_INVALID_PARAM; + } + callbackFn(pCallbackTag); + return status; +} + +CpaStatus +LacSymHash_HmacPrecompInit(CpaInstanceHandle instanceHandle) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + return status; +} + +void +LacSymHash_HmacPrecompShutdown(CpaInstanceHandle instanceHandle) +{ + return; +} Index: sys/dev/qat/qat_api/common/crypto/sym/lac_sym_partial.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/lac_sym_partial.c @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_partial.c common partial packet functions + * + * 
@ingroup LacSym + * + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" + +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" + +#include "lac_log.h" +#include "lac_sym.h" +#include "cpa_cy_sym.h" +#include "lac_common.h" + +#include "lac_sym_partial.h" + +CpaStatus +LacSym_PartialPacketStateCheck(CpaCySymPacketType packetType, + CpaCySymPacketType partialState) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + /* ASSUMPTION - partial requests on a given session must be issued + * sequentially to guarantee ordering + * (i.e. issuing partials on concurrent threads for a particular session + * just wouldn't work) + */ + + /* state is no partial - only a partial is allowed */ + if (((CPA_CY_SYM_PACKET_TYPE_FULL == partialState) && + (CPA_CY_SYM_PACKET_TYPE_PARTIAL == packetType)) || + + /* state is partial - only a partial or final partial is allowed */ + ((CPA_CY_SYM_PACKET_TYPE_PARTIAL == partialState) && + ((CPA_CY_SYM_PACKET_TYPE_PARTIAL == packetType) || + (CPA_CY_SYM_PACKET_TYPE_LAST_PARTIAL == packetType)))) { + status = CPA_STATUS_SUCCESS; + } else /* invalid sequence */ + { + LAC_INVALID_PARAM_LOG("invalid partial packet sequence"); + status = CPA_STATUS_INVALID_PARAM; + } + + return status; +} + +void +LacSym_PartialPacketStateUpdate(CpaCySymPacketType packetType, + CpaCySymPacketType *pPartialState) +{ + /* if previous packet was either a full or ended a partial stream, + * update + * state to partial to indicate a new partial stream was created */ + if (CPA_CY_SYM_PACKET_TYPE_FULL == *pPartialState) { + *pPartialState = CPA_CY_SYM_PACKET_TYPE_PARTIAL; + } else { + /* if packet type is final - reset the partial state */ + if (CPA_CY_SYM_PACKET_TYPE_LAST_PARTIAL == packetType) { + *pPartialState = 
CPA_CY_SYM_PACKET_TYPE_FULL; + } + } +} Index: sys/dev/qat/qat_api/common/crypto/sym/lac_sym_queue.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/lac_sym_queue.c @@ -0,0 +1,165 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_queue.c Functions for sending/queuing symmetric requests + * + * @ingroup LacSym + * + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" +#include "cpa_cy_sym.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "icp_accel_devices.h" +#include "icp_adf_init.h" +#include "icp_adf_debug.h" +#include "icp_adf_transport.h" +#include "lac_sym_queue.h" +#include "lac_sym_qat.h" +#include "lac_session.h" +#include "lac_sym.h" +#include "lac_log.h" +#include "icp_qat_fw_la.h" +#include "lac_sal_types_crypto.h" + +#define GetSingleBitFromByte(byte, bit) ((byte) & (1 << (bit))) + +/* +******************************************************************************* +* Define public/global function definitions +******************************************************************************* +*/ + +CpaStatus +LacSymQueue_RequestSend(const CpaInstanceHandle instanceHandle, + lac_sym_bulk_cookie_t *pRequest, + lac_session_desc_t *pSessionDesc) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaBoolean enqueued = CPA_FALSE; + sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle; + /* Enqueue 
the message instead of sending directly if: + * (i) a blocking operation is in progress + * (ii) there are previous requests already in the queue + */ + if ((CPA_FALSE == pSessionDesc->nonBlockingOpsInProgress) || + (NULL != pSessionDesc->pRequestQueueTail)) { + if (CPA_STATUS_SUCCESS != + LAC_SPINLOCK(&pSessionDesc->requestQueueLock)) { + LAC_LOG_ERROR("Failed to lock request queue"); + return CPA_STATUS_RESOURCE; + } + + /* Re-check blockingOpsInProgress and pRequestQueueTail in case + * either + * changed before the lock was acquired. The lock is shared + * with + * the callback context which drains this queue + */ + if ((CPA_FALSE == pSessionDesc->nonBlockingOpsInProgress) || + (NULL != pSessionDesc->pRequestQueueTail)) { + /* Enqueue the message and exit */ + /* The FIFO queue is made up of a head and tail pointer. + * The head pointer points to the first/oldest, entry + * in the queue, and the tail pointer points to the + * last/newest + * entry in the queue + */ + + if (NULL != pSessionDesc->pRequestQueueTail) { + /* Queue is non-empty. Add this request to the + * list */ + pSessionDesc->pRequestQueueTail->pNext = + pRequest; + } else { + /* Queue is empty. 
Initialise the head pointer + * as well */ + pSessionDesc->pRequestQueueHead = pRequest; + } + + pSessionDesc->pRequestQueueTail = pRequest; + + /* request is queued, don't send to QAT here */ + enqueued = CPA_TRUE; + } + if (CPA_STATUS_SUCCESS != + LAC_SPINUNLOCK(&pSessionDesc->requestQueueLock)) { + LAC_LOG_ERROR("Failed to unlock request queue"); + } + } + + if (CPA_FALSE == enqueued) { + /* If we send a partial packet request, set the + * blockingOpsInProgress + * flag for the session to indicate that subsequent requests + * must be + * queued up until this request completes + * + * @assumption + * If we have got here it means that there were no previous + * blocking + * operations in progress and, since multiple partial packet + * requests + * on a given session cannot be issued concurrently, there + * should be + * no need for a critical section around the following code + */ + if (CPA_CY_SYM_PACKET_TYPE_FULL != + pRequest->pOpData->packetType) { + /* Select blocking operations which this request will + * complete */ + pSessionDesc->nonBlockingOpsInProgress = CPA_FALSE; + } + + /* At this point, we're clear to send the request. For cipher + * requests, + * we need to check if the session IV needs to be updated.
This + * can + * only be done when no other partials are in flight for this + * session, + * to ensure the cipherPartialOpState buffer in the session + * descriptor + * is not currently in use + */ + if (CPA_TRUE == pRequest->updateSessionIvOnSend) { + if (LAC_CIPHER_IS_ARC4(pSessionDesc->cipherAlgorithm)) { + memcpy(pSessionDesc->cipherPartialOpState, + pSessionDesc->cipherARC4InitialState, + LAC_CIPHER_ARC4_STATE_LEN_BYTES); + } else { + memcpy(pSessionDesc->cipherPartialOpState, + pRequest->pOpData->pIv, + pRequest->pOpData->ivLenInBytes); + } + } + + /* Send to QAT */ + status = icp_adf_transPutMsg(pService->trans_handle_sym_tx, + (void *)&(pRequest->qatMsg), + LAC_QAT_SYM_REQ_SZ_LW); + + /* if fail to send request, we need to change + * nonBlockingOpsInProgress + * to CPA_TRUE + */ + if ((CPA_STATUS_SUCCESS != status) && + (CPA_CY_SYM_PACKET_TYPE_FULL != + pRequest->pOpData->packetType)) { + pSessionDesc->nonBlockingOpsInProgress = CPA_TRUE; + } + } + return status; +} Index: sys/dev/qat/qat_api/common/crypto/sym/lac_sym_stats.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/lac_sym_stats.c @@ -0,0 +1,139 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_stats.c Implementation of symmetric stats + * + * @ingroup LacSym + * + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" +#include "cpa_cy_sym.h" + +/* +******************************************************************************* +* Include private header files 
+******************************************************************************* +*/ +#include "lac_mem_pools.h" +#include "icp_adf_transport.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +#include "icp_qat_fw_la.h" +#include "lac_sym_qat.h" +#include "lac_sym_stats.h" +#include "lac_sal_types_crypto.h" +#include "sal_statistics.h" + +/* Number of Symmetric Crypto statistics */ +#define LAC_SYM_NUM_STATS (sizeof(CpaCySymStats64) / sizeof(Cpa64U)) + +CpaStatus +LacSym_StatsInit(CpaInstanceHandle instanceHandle) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle; + + pService->pLacSymStatsArr = + LAC_OS_MALLOC(LAC_SYM_NUM_STATS * sizeof(QatUtilsAtomic)); + + if (NULL != pService->pLacSymStatsArr) { + LAC_OS_BZERO((void *)LAC_CONST_VOLATILE_PTR_CAST( + pService->pLacSymStatsArr), + LAC_SYM_NUM_STATS * sizeof(QatUtilsAtomic)); + } else { + status = CPA_STATUS_RESOURCE; + } + return status; +} + +void +LacSym_StatsFree(CpaInstanceHandle instanceHandle) +{ + sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle; + if (NULL != pService->pLacSymStatsArr) { + LAC_OS_FREE(pService->pLacSymStatsArr); + } +} + +void +LacSym_StatsInc(Cpa32U offset, CpaInstanceHandle instanceHandle) +{ + sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle; + if (CPA_TRUE == + pService->generic_service_info.stats->bSymStatsEnabled) { + qatUtilsAtomicInc( + &pService->pLacSymStatsArr[offset / sizeof(Cpa64U)]); + } +} + +void +LacSym_Stats32CopyGet(CpaInstanceHandle instanceHandle, + struct _CpaCySymStats *const pSymStats) +{ + int i = 0; + sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle; + + for (i = 0; i < LAC_SYM_NUM_STATS; i++) { + ((Cpa32U *)pSymStats)[i] = + (Cpa32U)qatUtilsAtomicGet(&pService->pLacSymStatsArr[i]); + } +} + +void +LacSym_Stats64CopyGet(CpaInstanceHandle instanceHandle, + CpaCySymStats64 *const pSymStats) +{ + int i = 0; + 
sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle; + + for (i = 0; i < LAC_SYM_NUM_STATS; i++) { + ((Cpa64U *)pSymStats)[i] = + qatUtilsAtomicGet(&pService->pLacSymStatsArr[i]); + } +} + +void +LacSym_StatsShow(CpaInstanceHandle instanceHandle) +{ + CpaCySymStats64 symStats = { 0 }; + + LacSym_Stats64CopyGet(instanceHandle, &symStats); + + QAT_UTILS_LOG(SEPARATOR BORDER + " Symmetric Stats " BORDER + "\n" SEPARATOR); + + /* Session Info */ + QAT_UTILS_LOG(BORDER " Sessions Initialized: %16llu " BORDER + "\n" BORDER + " Sessions Removed: %16llu " BORDER + "\n" BORDER + " Session Errors: %16llu " BORDER + "\n" SEPARATOR, + (unsigned long long)symStats.numSessionsInitialized, + (unsigned long long)symStats.numSessionsRemoved, + (unsigned long long)symStats.numSessionErrors); + + /* Session info */ + QAT_UTILS_LOG( + BORDER " Symmetric Requests: %16llu " BORDER "\n" BORDER + " Symmetric Request Errors: %16llu " BORDER "\n" BORDER + " Symmetric Completed: %16llu " BORDER "\n" BORDER + " Symmetric Completed Errors: %16llu " BORDER "\n" BORDER + " Symmetric Verify Failures: %16llu " BORDER + "\n" SEPARATOR, + (unsigned long long)symStats.numSymOpRequests, + (unsigned long long)symStats.numSymOpRequestErrors, + (unsigned long long)symStats.numSymOpCompleted, + (unsigned long long)symStats.numSymOpCompletedErrors, + (unsigned long long)symStats.numSymOpVerifyFailures); +} Index: sys/dev/qat/qat_api/common/crypto/sym/qat/lac_sym_qat.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/qat/lac_sym_qat.c @@ -0,0 +1,227 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_sym_qat.c Interfaces for populating the symmetric qat structures + * + * @ingroup LacSymQat + * + 
*****************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "icp_accel_devices.h" +#include "icp_adf_cfg.h" +#include "lac_log.h" +#include "lac_sym.h" +#include "lac_sym_qat.h" +#include "lac_sal_types_crypto.h" +#include "sal_string_parse.h" +#include "lac_sym_key.h" +#include "lac_sym_qat_hash_defs_lookup.h" +#include "lac_sym_qat_cipher.h" +#include "lac_sym_qat_hash.h" + +#define EMBEDDED_CIPHER_KEY_MAX_SIZE 16 +static void +LacSymQat_SymLogSliceHangError(icp_qat_fw_la_cmd_id_t symCmdId) +{ + Cpa8U cmdId = symCmdId; + + switch (cmdId) { + case ICP_QAT_FW_LA_CMD_CIPHER: + case ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP: + LAC_LOG_ERROR("slice hang detected on CPM cipher slice."); + break; + + case ICP_QAT_FW_LA_CMD_AUTH: + case ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP: + LAC_LOG_ERROR("slice hang detected on CPM auth slice."); + break; + + case ICP_QAT_FW_LA_CMD_CIPHER_HASH: + case ICP_QAT_FW_LA_CMD_HASH_CIPHER: + case ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE: + case ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE: + case ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE: + case ICP_QAT_FW_LA_CMD_MGF1: + default: + LAC_LOG_ERROR( + "slice hang detected on CPM cipher or auth slice."); + } + return; +} + +/* sym crypto response handlers */ +static sal_qat_resp_handler_func_t + respHandlerSymTbl[ICP_QAT_FW_LA_CMD_DELIMITER]; + +void +LacSymQat_SymRespHandler(void *pRespMsg) +{ + Cpa8U lacCmdId = 0; + void *pOpaqueData = NULL; + icp_qat_fw_la_resp_t *pRespMsgFn = NULL; + Cpa8U opStatus = ICP_QAT_FW_COMN_STATUS_FLAG_OK; + Cpa8U comnErr = ERR_CODE_NO_ERROR; + + 
pRespMsgFn = (icp_qat_fw_la_resp_t *)pRespMsg; + LAC_MEM_SHARED_READ_TO_PTR(pRespMsgFn->opaque_data, pOpaqueData); + + lacCmdId = pRespMsgFn->comn_resp.cmd_id; + opStatus = pRespMsgFn->comn_resp.comn_status; + comnErr = pRespMsgFn->comn_resp.comn_error.s.comn_err_code; + + /* log the slice hang and endpoint push/pull error inside the response + */ + if (ERR_CODE_SSM_ERROR == (Cpa8S)comnErr) { + LacSymQat_SymLogSliceHangError(lacCmdId); + } else if (ERR_CODE_ENDPOINT_ERROR == (Cpa8S)comnErr) { + LAC_LOG_ERROR("The PCIe End Point Push/Pull or" + " TI/RI Parity error detected."); + } + + /* call the response message handler registered for the command ID */ + respHandlerSymTbl[lacCmdId]((icp_qat_fw_la_cmd_id_t)lacCmdId, + pOpaqueData, + (icp_qat_fw_comn_flags)opStatus); +} + +CpaStatus +LacSymQat_Init(CpaInstanceHandle instanceHandle) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + /* Initialise the Hash lookup table */ + status = LacSymQat_HashLookupInit(instanceHandle); + + return status; +} + +void +LacSymQat_RespHandlerRegister(icp_qat_fw_la_cmd_id_t lacCmdId, + sal_qat_resp_handler_func_t pCbHandler) +{ + if (lacCmdId >= ICP_QAT_FW_LA_CMD_DELIMITER) { + QAT_UTILS_LOG("Invalid Command ID\n"); + return; + } + + /* set the response handler for the command ID */ + respHandlerSymTbl[lacCmdId] = pCbHandler; +} + +void +LacSymQat_LaPacketCommandFlagSet(Cpa32U qatPacketType, + icp_qat_fw_la_cmd_id_t laCmdId, + CpaCySymCipherAlgorithm cipherAlgorithm, + Cpa16U *pLaCommandFlags, + Cpa32U ivLenInBytes) +{ + /* For Chacha ciphers set command flag as partial none to proceed + * with stateless processing */ + if (LAC_CIPHER_IS_CHACHA(cipherAlgorithm) || + LAC_CIPHER_IS_SM4(cipherAlgorithm)) { + ICP_QAT_FW_LA_PARTIAL_SET(*pLaCommandFlags, + ICP_QAT_FW_LA_PARTIAL_NONE); + return; + } + ICP_QAT_FW_LA_PARTIAL_SET(*pLaCommandFlags, qatPacketType); + + /* For ECB-mode ciphers, IV is NULL so update-state flag + * must be disabled always. 
+ * For all other ciphers and auth + * update state is disabled for full packets and final partials */ + if (((laCmdId != ICP_QAT_FW_LA_CMD_AUTH) && + LAC_CIPHER_IS_ECB_MODE(cipherAlgorithm)) || + (ICP_QAT_FW_LA_PARTIAL_NONE == qatPacketType) || + (ICP_QAT_FW_LA_PARTIAL_END == qatPacketType)) { + ICP_QAT_FW_LA_UPDATE_STATE_SET(*pLaCommandFlags, + ICP_QAT_FW_LA_NO_UPDATE_STATE); + } + /* For first or middle partials set the update state command flag */ + else { + ICP_QAT_FW_LA_UPDATE_STATE_SET(*pLaCommandFlags, + ICP_QAT_FW_LA_UPDATE_STATE); + + if (laCmdId == ICP_QAT_FW_LA_CMD_AUTH) { + /* For hash only partial - verify and return auth result + * are + * disabled */ + ICP_QAT_FW_LA_RET_AUTH_SET( + *pLaCommandFlags, ICP_QAT_FW_LA_NO_RET_AUTH_RES); + + ICP_QAT_FW_LA_CMP_AUTH_SET( + *pLaCommandFlags, ICP_QAT_FW_LA_NO_CMP_AUTH_RES); + } + } + + if ((LAC_CIPHER_IS_GCM(cipherAlgorithm)) && + (LAC_CIPHER_IV_SIZE_GCM_12 == ivLenInBytes)) + + { + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + *pLaCommandFlags, ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); + } +} + +void +LacSymQat_packetTypeGet(CpaCySymPacketType packetType, + CpaCySymPacketType packetState, + Cpa32U *pQatPacketType) +{ + /* partial */ + if (CPA_CY_SYM_PACKET_TYPE_PARTIAL == packetType) { + /* if the previous state was full, then this is the first packet + */ + if (CPA_CY_SYM_PACKET_TYPE_FULL == packetState) { + *pQatPacketType = ICP_QAT_FW_LA_PARTIAL_START; + } else { + *pQatPacketType = ICP_QAT_FW_LA_PARTIAL_MID; + } + } + /* final partial */ + else if (CPA_CY_SYM_PACKET_TYPE_LAST_PARTIAL == packetType) { + *pQatPacketType = ICP_QAT_FW_LA_PARTIAL_END; + } + /* full packet - CPA_CY_SYM_PACKET_TYPE_FULL */ + else { + *pQatPacketType = ICP_QAT_FW_LA_PARTIAL_NONE; + } +} + +void +LacSymQat_LaSetDefaultFlags(icp_qat_fw_serv_specif_flags *laCmdFlags, + CpaCySymOp symOp) +{ + + ICP_QAT_FW_LA_PARTIAL_SET(*laCmdFlags, ICP_QAT_FW_LA_PARTIAL_NONE); + + ICP_QAT_FW_LA_UPDATE_STATE_SET(*laCmdFlags, + ICP_QAT_FW_LA_NO_UPDATE_STATE); + + 
if (symOp != CPA_CY_SYM_OP_CIPHER) { + ICP_QAT_FW_LA_RET_AUTH_SET(*laCmdFlags, + ICP_QAT_FW_LA_RET_AUTH_RES); + } else { + ICP_QAT_FW_LA_RET_AUTH_SET(*laCmdFlags, + ICP_QAT_FW_LA_NO_RET_AUTH_RES); + } + + ICP_QAT_FW_LA_CMP_AUTH_SET(*laCmdFlags, ICP_QAT_FW_LA_NO_CMP_AUTH_RES); + + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + *laCmdFlags, ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS); +} Index: sys/dev/qat/qat_api/common/crypto/sym/qat/lac_sym_qat_cipher.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/qat/lac_sym_qat_cipher.c @@ -0,0 +1,889 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_qat_cipher.c QAT-related support functions for Cipher + * + * @ingroup LacSymQat_Cipher + * + * @description Functions to support the QAT related operations for Cipher + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ + +#include "cpa.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +#include "lac_sym_qat.h" +#include "lac_sym_qat_cipher.h" +#include "lac_mem.h" +#include "lac_common.h" +#include "cpa_cy_sym.h" +#include "lac_sym_qat.h" +#include "lac_sym_cipher_defs.h" +#include "icp_qat_hw.h" +#include "icp_qat_fw_la.h" + +/***************************************************************************** + * Internal data + *****************************************************************************/ + +typedef enum _icp_qat_hw_key_depend { + IS_KEY_DEP_NO = 0, + IS_KEY_DEP_YES, +} icp_qat_hw_key_depend; + +/* LAC_CIPHER_IS_XTS_MODE */ +static const uint8_t key_size_xts[] = { + 0, + 0, + 0, + 0, + 0, + 0, + 
0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + ICP_QAT_HW_CIPHER_ALGO_AES128, // ICP_QAT_HW_AES_128_XTS_KEY_SZ + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + ICP_QAT_HW_CIPHER_ALGO_AES256 // ICP_QAT_HW_AES_256_XTS_KEY_SZ +}; +/* LAC_CIPHER_IS_AES */ +static const uint8_t key_size_aes[] = { + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + ICP_QAT_HW_CIPHER_ALGO_AES128, // ICP_QAT_HW_AES_128_KEY_SZ + 0, + 0, + 0, + 0, + 0, + 0, + 0, + ICP_QAT_HW_CIPHER_ALGO_AES192, // ICP_QAT_HW_AES_192_KEY_SZ + 0, + 0, + 0, + 0, + 0, + 0, + 0, + ICP_QAT_HW_CIPHER_ALGO_AES256 // ICP_QAT_HW_AES_256_KEY_SZ +}; +/* LAC_CIPHER_IS_AES_F8 */ +static const uint8_t key_size_f8[] = { + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + ICP_QAT_HW_CIPHER_ALGO_AES128, // ICP_QAT_HW_AES_128_F8_KEY_SZ + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + ICP_QAT_HW_CIPHER_ALGO_AES192, // ICP_QAT_HW_AES_192_F8_KEY_SZ + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + ICP_QAT_HW_CIPHER_ALGO_AES256 // ICP_QAT_HW_AES_256_F8_KEY_SZ +}; +/* LAC_CIPHER_IS_SM4 */ +static const uint8_t key_size_sm4[] = { + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + ICP_QAT_HW_CIPHER_ALGO_SM4 // ICP_QAT_HW_SM4_KEY_SZ +}; + +typedef struct _icp_qat_hw_cipher_info { + icp_qat_hw_cipher_algo_t algorithm; + icp_qat_hw_cipher_mode_t mode; + icp_qat_hw_cipher_convert_t key_convert[2]; + icp_qat_hw_cipher_dir_t dir[2]; + icp_qat_hw_key_depend isKeyLenDepend; + const uint8_t *pAlgByKeySize; +} icp_qat_hw_cipher_info; + +static const icp_qat_hw_cipher_info icp_qat_alg_info[] = + { + /* CPA_CY_SYM_CIPHER_NULL */ + { + 
ICP_QAT_HW_CIPHER_ALGO_NULL, + ICP_QAT_HW_CIPHER_ECB_MODE, + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_NO, + NULL, + }, + /* CPA_CY_SYM_CIPHER_ARC4 */ + { + ICP_QAT_HW_CIPHER_ALGO_ARC4, + ICP_QAT_HW_CIPHER_ECB_MODE, + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + /* Streaming ciphers are a special case. Decrypt = encrypt */ + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_ENCRYPT }, + IS_KEY_DEP_NO, + NULL, + }, + /* CPA_CY_SYM_CIPHER_AES_ECB */ + { + ICP_QAT_HW_CIPHER_ALGO_AES128, + ICP_QAT_HW_CIPHER_ECB_MODE, + /* AES decrypt key needs to be reversed. Instead of reversing the key + * at session registration, it is instead reversed on-the-fly by + * setting the KEY_CONVERT bit here + */ + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_KEY_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_YES, + key_size_aes, + }, + /* CPA_CY_SYM_CIPHER_AES_CBC */ + { + ICP_QAT_HW_CIPHER_ALGO_AES128, + ICP_QAT_HW_CIPHER_CBC_MODE, + /* AES decrypt key needs to be reversed. Instead of reversing the key + * at session registration, it is instead reversed on-the-fly by + * setting the KEY_CONVERT bit here + */ + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_KEY_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_YES, + key_size_aes, + }, + /* CPA_CY_SYM_CIPHER_AES_CTR */ + { + ICP_QAT_HW_CIPHER_ALGO_AES128, + ICP_QAT_HW_CIPHER_CTR_MODE, + /* AES decrypt key needs to be reversed. Instead of reversing the key + * at session registration, it is instead reversed on-the-fly by + * setting the KEY_CONVERT bit here + */ + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + /* Streaming ciphers are a special case. 
Decrypt = encrypt + * Overriding default values previously set for AES + */ + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_ENCRYPT }, + IS_KEY_DEP_YES, + key_size_aes, + }, + /* CPA_CY_SYM_CIPHER_AES_CCM */ + { + ICP_QAT_HW_CIPHER_ALGO_AES128, + ICP_QAT_HW_CIPHER_CTR_MODE, + /* AES decrypt key needs to be reversed. Instead of reversing the key + * at session registration, it is instead reversed on-the-fly by + * setting the KEY_CONVERT bit here + */ + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + /* Streaming ciphers are a special case. Decrypt = encrypt + * Overriding default values previously set for AES + */ + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_ENCRYPT }, + IS_KEY_DEP_YES, + key_size_aes, + }, + /* CPA_CY_SYM_CIPHER_AES_GCM */ + { + ICP_QAT_HW_CIPHER_ALGO_AES128, + ICP_QAT_HW_CIPHER_CTR_MODE, + /* AES decrypt key needs to be reversed. Instead of reversing the key + * at session registration, it is instead reversed on-the-fly by + * setting the KEY_CONVERT bit here + */ + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + /* Streaming ciphers are a special case. 
Decrypt = encrypt + * Overriding default values previously set for AES + */ + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_ENCRYPT }, + IS_KEY_DEP_YES, + key_size_aes, + }, + /* CPA_CY_SYM_CIPHER_DES_ECB */ + { + ICP_QAT_HW_CIPHER_ALGO_DES, + ICP_QAT_HW_CIPHER_ECB_MODE, + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_NO, + NULL, + }, + /* CPA_CY_SYM_CIPHER_DES_CBC */ + { + ICP_QAT_HW_CIPHER_ALGO_DES, + ICP_QAT_HW_CIPHER_CBC_MODE, + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_NO, + NULL, + }, + /* CPA_CY_SYM_CIPHER_3DES_ECB */ + { + ICP_QAT_HW_CIPHER_ALGO_3DES, + ICP_QAT_HW_CIPHER_ECB_MODE, + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_NO, + NULL, + }, + /* CPA_CY_SYM_CIPHER_3DES_CBC */ + { + ICP_QAT_HW_CIPHER_ALGO_3DES, + ICP_QAT_HW_CIPHER_CBC_MODE, + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_NO, + NULL, + }, + /* CPA_CY_SYM_CIPHER_3DES_CTR */ + { + ICP_QAT_HW_CIPHER_ALGO_3DES, + ICP_QAT_HW_CIPHER_CTR_MODE, + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + /* Streaming ciphers are a special case. Decrypt = encrypt + * Overriding default values previously set for AES + */ + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_ENCRYPT }, + IS_KEY_DEP_NO, + NULL, + }, + /* CPA_CY_SYM_CIPHER_KASUMI_F8 */ + { + ICP_QAT_HW_CIPHER_ALGO_KASUMI, + ICP_QAT_HW_CIPHER_F8_MODE, + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + /* Streaming ciphers are a special case. 
Decrypt = encrypt */ + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_ENCRYPT }, + IS_KEY_DEP_NO, + NULL, + }, + /* CPA_CY_SYM_CIPHER_SNOW3G_UEA2 */ + { + /* The KEY_CONVERT bit has to be set for Snow_3G operation */ + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2, + ICP_QAT_HW_CIPHER_ECB_MODE, + { ICP_QAT_HW_CIPHER_KEY_CONVERT, ICP_QAT_HW_CIPHER_KEY_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_NO, + NULL, + }, + /* CPA_CY_SYM_CIPHER_AES_F8 */ + { + ICP_QAT_HW_CIPHER_ALGO_AES128, + ICP_QAT_HW_CIPHER_F8_MODE, + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + /* Streaming ciphers are a special case. Decrypt = encrypt */ + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_ENCRYPT }, + IS_KEY_DEP_YES, + key_size_f8, + }, + /* CPA_CY_SYM_CIPHER_AES_XTS */ + { + ICP_QAT_HW_CIPHER_ALGO_AES128, + ICP_QAT_HW_CIPHER_XTS_MODE, + /* AES decrypt key needs to be reversed. Instead of reversing the key + * at session registration, it is instead reversed on-the-fly by + * setting the KEY_CONVERT bit here + */ + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_KEY_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_YES, + key_size_xts, + }, + /* CPA_CY_SYM_CIPHER_ZUC_EEA3 */ + { + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3, + ICP_QAT_HW_CIPHER_ECB_MODE, + { ICP_QAT_HW_CIPHER_KEY_CONVERT, ICP_QAT_HW_CIPHER_KEY_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_NO, + NULL, + }, + /* CPA_CY_SYM_CIPHER_CHACHA */ + { + ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305, + ICP_QAT_HW_CIPHER_CTR_MODE, + { ICP_QAT_HW_CIPHER_KEY_CONVERT, ICP_QAT_HW_CIPHER_KEY_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_NO, + NULL, + }, + /* CPA_CY_SYM_CIPHER_SM4_ECB */ + { + ICP_QAT_HW_CIPHER_ALGO_SM4, + ICP_QAT_HW_CIPHER_ECB_MODE, + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_KEY_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_YES, 
+ key_size_sm4, + }, + /* CPA_CY_SYM_CIPHER_SM4_CBC */ + { + ICP_QAT_HW_CIPHER_ALGO_SM4, + ICP_QAT_HW_CIPHER_CBC_MODE, + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_KEY_CONVERT }, + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_DECRYPT }, + IS_KEY_DEP_YES, + key_size_sm4, + }, + /* CPA_CY_SYM_CIPHER_SM4_CTR */ + { + ICP_QAT_HW_CIPHER_ALGO_SM4, + ICP_QAT_HW_CIPHER_CTR_MODE, + { ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_NO_CONVERT }, + /* Streaming ciphers are a special case. Decrypt = encrypt */ + { ICP_QAT_HW_CIPHER_ENCRYPT, ICP_QAT_HW_CIPHER_ENCRYPT }, + IS_KEY_DEP_YES, + key_size_sm4, + }, + }; + +/***************************************************************************** + * Internal functions + *****************************************************************************/ + +void +LacSymQat_CipherCtrlBlockWrite(icp_qat_la_bulk_req_ftr_t *pMsg, + Cpa32U cipherAlgorithm, + Cpa32U targetKeyLenInBytes, + icp_qat_fw_slice_t nextSlice, + Cpa8U cipherCfgOffsetInQuadWord) +{ + icp_qat_fw_cipher_cd_ctrl_hdr_t *cd_ctrl = + (icp_qat_fw_cipher_cd_ctrl_hdr_t *)&(pMsg->cd_ctrl); + + /* state_padding_sz is nonzero for f8 mode only */ + cd_ctrl->cipher_padding_sz = 0; + + /* Base Key is not passed down to QAT in the case of ARC4 or NULL */ + if (LAC_CIPHER_IS_ARC4(cipherAlgorithm) || + LAC_CIPHER_IS_NULL(cipherAlgorithm)) { + cd_ctrl->cipher_key_sz = 0; + } else if (LAC_CIPHER_IS_KASUMI(cipherAlgorithm)) { + cd_ctrl->cipher_key_sz = + LAC_BYTES_TO_QUADWORDS(ICP_QAT_HW_KASUMI_F8_KEY_SZ); + cd_ctrl->cipher_padding_sz = + ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR; + } else if (LAC_CIPHER_IS_SNOW3G_UEA2(cipherAlgorithm)) { + /* For Snow3G UEA2 content descriptor key size is + key size plus iv size */ + cd_ctrl->cipher_key_sz = + LAC_BYTES_TO_QUADWORDS(ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ + + ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ); + } else if (LAC_CIPHER_IS_AES_F8(cipherAlgorithm)) { + cd_ctrl->cipher_key_sz = + LAC_BYTES_TO_QUADWORDS(targetKeyLenInBytes); + cd_ctrl->cipher_padding_sz 
= + 2 * ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR; + } else if (LAC_CIPHER_IS_ZUC_EEA3(cipherAlgorithm)) { + /* For ZUC EEA3 content descriptor key size is + key size plus iv size */ + cd_ctrl->cipher_key_sz = + LAC_BYTES_TO_QUADWORDS(ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ + + ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ); + } else { + cd_ctrl->cipher_key_sz = + LAC_BYTES_TO_QUADWORDS(targetKeyLenInBytes); + } + + cd_ctrl->cipher_state_sz = LAC_BYTES_TO_QUADWORDS( + LacSymQat_CipherIvSizeBytesGet(cipherAlgorithm)); + + cd_ctrl->cipher_cfg_offset = cipherCfgOffsetInQuadWord; + + ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, nextSlice); + ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_CIPHER); +} + +void +LacSymQat_CipherGetCfgData(lac_session_desc_t *pSession, + icp_qat_hw_cipher_algo_t *pAlgorithm, + icp_qat_hw_cipher_mode_t *pMode, + icp_qat_hw_cipher_dir_t *pDir, + icp_qat_hw_cipher_convert_t *pKey_convert) +{ + + CpaCySymCipherAlgorithm cipherAlgorithm = 0; + icp_qat_hw_cipher_dir_t cipherDirection = 0; + + /* Set defaults */ + *pKey_convert = ICP_QAT_HW_CIPHER_NO_CONVERT; + *pAlgorithm = ICP_QAT_HW_CIPHER_ALGO_NULL; + *pMode = ICP_QAT_HW_CIPHER_ECB_MODE; + *pDir = ICP_QAT_HW_CIPHER_ENCRYPT; + + /* decrease since it's numbered from 1 instead of 0 */ + cipherAlgorithm = pSession->cipherAlgorithm - 1; + cipherDirection = + pSession->cipherDirection == CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT ? 
ICP_QAT_HW_CIPHER_ENCRYPT : + ICP_QAT_HW_CIPHER_DECRYPT; + + *pAlgorithm = icp_qat_alg_info[cipherAlgorithm].algorithm; + *pMode = icp_qat_alg_info[cipherAlgorithm].mode; + *pDir = icp_qat_alg_info[cipherAlgorithm].dir[cipherDirection]; + *pKey_convert = + icp_qat_alg_info[cipherAlgorithm].key_convert[cipherDirection]; + + if (IS_KEY_DEP_NO != icp_qat_alg_info[cipherAlgorithm].isKeyLenDepend) { + *pAlgorithm = icp_qat_alg_info[cipherAlgorithm] + .pAlgByKeySize[pSession->cipherKeyLenInBytes]; + } + /* Set the mode */ + if (LAC_CIPHER_IS_CTR_MODE(pSession->cipherAlgorithm)) { + *pMode = ICP_QAT_HW_CIPHER_CTR_MODE; + *pKey_convert = ICP_QAT_HW_CIPHER_NO_CONVERT; + /* CCP and AES_GCM single pass, despite being limited to + * CTR/AEAD mode, + * support both Encrypt/Decrypt modes - this is because of the + * differences in the hash computation/verification paths in + * encrypt/decrypt modes respectively. + * By default CCP is set as CTR Mode. Set AEAD Mode for AES_GCM. + */ + if (pSession->isSinglePass) { + if (LAC_CIPHER_IS_GCM(pSession->cipherAlgorithm)) + *pMode = ICP_QAT_HW_CIPHER_AEAD_MODE; + if (cipherDirection == ICP_QAT_HW_CIPHER_DECRYPT) + *pDir = ICP_QAT_HW_CIPHER_DECRYPT; + } + } +} + +void +LacSymQat_CipherHwBlockPopulateCfgData(lac_session_desc_t *pSession, + const void *pCipherHwBlock, + Cpa32U *pSizeInBytes) +{ + icp_qat_hw_cipher_algo_t algorithm = ICP_QAT_HW_CIPHER_ALGO_NULL; + icp_qat_hw_cipher_mode_t mode = ICP_QAT_HW_CIPHER_ECB_MODE; + icp_qat_hw_cipher_dir_t dir = ICP_QAT_HW_CIPHER_ENCRYPT; + icp_qat_hw_cipher_convert_t key_convert; + icp_qat_hw_cipher_config_t *pCipherConfig = + (icp_qat_hw_cipher_config_t *)pCipherHwBlock; + Cpa32U aed_hash_cmp_length = 0; + + *pSizeInBytes = 0; + + LacSymQat_CipherGetCfgData( + pSession, &algorithm, &mode, &dir, &key_convert); + + /* Build the cipher config into the hardware setup block */ + if (pSession->isSinglePass) { + aed_hash_cmp_length = pSession->hashResultSize; + pCipherConfig->reserved =
ICP_QAT_HW_CIPHER_CONFIG_BUILD_UPPER( + pSession->aadLenInBytes); + } else { + pCipherConfig->reserved = 0; + } + + pCipherConfig->val = ICP_QAT_HW_CIPHER_CONFIG_BUILD( + mode, algorithm, key_convert, dir, aed_hash_cmp_length); + + *pSizeInBytes = sizeof(icp_qat_hw_cipher_config_t); +} + +void +LacSymQat_CipherHwBlockPopulateKeySetup( + const CpaCySymCipherSetupData *pCipherSetupData, + Cpa32U targetKeyLenInBytes, + const void *pCipherHwBlock, + Cpa32U *pSizeInBytes) +{ + Cpa8U *pCipherKey = (Cpa8U *)pCipherHwBlock; + Cpa32U actualKeyLenInBytes = pCipherSetupData->cipherKeyLenInBytes; + + *pSizeInBytes = 0; + + /* Key is copied into content descriptor for all cases except for + * Arc4 and Null cipher */ + if (!(LAC_CIPHER_IS_ARC4(pCipherSetupData->cipherAlgorithm) || + LAC_CIPHER_IS_NULL(pCipherSetupData->cipherAlgorithm))) { + /* Set the Cipher key field in the cipher block */ + memcpy(pCipherKey, + pCipherSetupData->pCipherKey, + actualKeyLenInBytes); + /* Pad the key with 0's if required */ + if (0 < (targetKeyLenInBytes - actualKeyLenInBytes)) { + LAC_OS_BZERO(pCipherKey + actualKeyLenInBytes, + targetKeyLenInBytes - actualKeyLenInBytes); + } + *pSizeInBytes += targetKeyLenInBytes; + + /* For Kasumi in F8 mode Cipher Key is concatenated with + * Cipher Key XOR-ed with Key Modifier (CK||CK^KM) */ + if (LAC_CIPHER_IS_KASUMI(pCipherSetupData->cipherAlgorithm)) { + Cpa32U wordIndex = 0; + Cpa32U *pu32CipherKey = + (Cpa32U *)pCipherSetupData->pCipherKey; + Cpa32U *pTempKey = + (Cpa32U *)(pCipherKey + targetKeyLenInBytes); + + /* XOR Key with KASUMI F8 key modifier at 4 bytes level + */ + for (wordIndex = 0; wordIndex < + LAC_BYTES_TO_LONGWORDS(targetKeyLenInBytes); + wordIndex++) { + pTempKey[wordIndex] = pu32CipherKey[wordIndex] ^ + LAC_CIPHER_KASUMI_F8_KEY_MODIFIER_4_BYTES; + } + + *pSizeInBytes += targetKeyLenInBytes; + + /* also add padding for F8 */ + *pSizeInBytes += LAC_QUADWORDS_TO_BYTES( + ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR); + LAC_OS_BZERO((Cpa8U 
*)pTempKey + targetKeyLenInBytes, + LAC_QUADWORDS_TO_BYTES( + ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR)); + } + /* For AES in F8 mode Cipher Key is concatenated with + * Cipher Key XOR-ed with Key Mask (CK||CK^KM) */ + else if (LAC_CIPHER_IS_AES_F8( + pCipherSetupData->cipherAlgorithm)) { + Cpa32U index = 0; + Cpa8U *pTempKey = + pCipherKey + (targetKeyLenInBytes / 2); + *pSizeInBytes += targetKeyLenInBytes; + /* XOR Key with key Mask */ + for (index = 0; index < targetKeyLenInBytes; index++) { + pTempKey[index] = + pCipherKey[index] ^ pTempKey[index]; + } + pTempKey = (pCipherKey + targetKeyLenInBytes); + /* also add padding for AES F8 */ + *pSizeInBytes += 2 * targetKeyLenInBytes; + LAC_OS_BZERO(pTempKey, 2 * targetKeyLenInBytes); + } else if (LAC_CIPHER_IS_SNOW3G_UEA2( + pCipherSetupData->cipherAlgorithm)) { + /* For Snow3G zero area after the key for FW */ + LAC_OS_BZERO(pCipherKey + targetKeyLenInBytes, + ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ); + + *pSizeInBytes += ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ; + } else if (LAC_CIPHER_IS_ZUC_EEA3( + pCipherSetupData->cipherAlgorithm)) { + /* For ZUC zero area after the key for FW */ + LAC_OS_BZERO(pCipherKey + targetKeyLenInBytes, + ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ); + + *pSizeInBytes += ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ; + } + } +} + +/***************************************************************************** + * External functions + *****************************************************************************/ + +Cpa8U +LacSymQat_CipherBlockSizeBytesGet(CpaCySymCipherAlgorithm cipherAlgorithm) +{ + if (LAC_CIPHER_IS_ARC4(cipherAlgorithm)) { + return LAC_CIPHER_ARC4_BLOCK_LEN_BYTES; + } else if (LAC_CIPHER_IS_AES(cipherAlgorithm) || + LAC_CIPHER_IS_AES_F8(cipherAlgorithm)) { + return ICP_QAT_HW_AES_BLK_SZ; + } else if (LAC_CIPHER_IS_DES(cipherAlgorithm)) { + return ICP_QAT_HW_DES_BLK_SZ; + } else if (LAC_CIPHER_IS_TRIPLE_DES(cipherAlgorithm)) { + return ICP_QAT_HW_3DES_BLK_SZ; + } else if (LAC_CIPHER_IS_KASUMI(cipherAlgorithm)) { + return 
ICP_QAT_HW_KASUMI_BLK_SZ; + } else if (LAC_CIPHER_IS_SNOW3G_UEA2(cipherAlgorithm)) { + return ICP_QAT_HW_SNOW_3G_BLK_SZ; + } else if (LAC_CIPHER_IS_ZUC_EEA3(cipherAlgorithm)) { + return ICP_QAT_HW_ZUC_3G_BLK_SZ; + } else if (LAC_CIPHER_IS_NULL(cipherAlgorithm)) { + return LAC_CIPHER_NULL_BLOCK_LEN_BYTES; + } else if (LAC_CIPHER_IS_CHACHA(cipherAlgorithm)) { + return ICP_QAT_HW_CHACHAPOLY_BLK_SZ; + } else if (LAC_CIPHER_IS_SM4(cipherAlgorithm)) { + return ICP_QAT_HW_SM4_BLK_SZ; + } else { + QAT_UTILS_LOG("Algorithm not supported in Cipher\n"); + return 0; + } +} + +Cpa32U +LacSymQat_CipherIvSizeBytesGet(CpaCySymCipherAlgorithm cipherAlgorithm) +{ + if (CPA_CY_SYM_CIPHER_ARC4 == cipherAlgorithm) { + return LAC_CIPHER_ARC4_STATE_LEN_BYTES; + } else if (LAC_CIPHER_IS_KASUMI(cipherAlgorithm)) { + return ICP_QAT_HW_KASUMI_BLK_SZ; + } else if (LAC_CIPHER_IS_SNOW3G_UEA2(cipherAlgorithm)) { + return ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ; + } else if (LAC_CIPHER_IS_ZUC_EEA3(cipherAlgorithm)) { + return ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ; + } else if (LAC_CIPHER_IS_CHACHA(cipherAlgorithm)) { + return ICP_QAT_HW_CHACHAPOLY_IV_SZ; + } else if (LAC_CIPHER_IS_ECB_MODE(cipherAlgorithm)) { + return 0; + } else { + return (Cpa32U)LacSymQat_CipherBlockSizeBytesGet( + cipherAlgorithm); + } +} + +inline CpaStatus +LacSymQat_CipherRequestParamsPopulate(icp_qat_fw_la_bulk_req_t *pReq, + Cpa32U cipherOffsetInBytes, + Cpa32U cipherLenInBytes, + Cpa64U ivBufferPhysAddr, + Cpa8U *pIvBufferVirt) +{ + icp_qat_fw_la_cipher_req_params_t *pCipherReqParams; + icp_qat_fw_cipher_cd_ctrl_hdr_t *pCipherCdCtrlHdr; + icp_qat_fw_serv_specif_flags *pCipherSpecificFlags; + + pCipherReqParams = (icp_qat_fw_la_cipher_req_params_t + *)((Cpa8U *)&(pReq->serv_specif_rqpars) + + ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET); + pCipherCdCtrlHdr = (icp_qat_fw_cipher_cd_ctrl_hdr_t *)&(pReq->cd_ctrl); + pCipherSpecificFlags = &(pReq->comn_hdr.serv_specif_flags); + + pCipherReqParams->cipher_offset = cipherOffsetInBytes; + 
pCipherReqParams->cipher_length = cipherLenInBytes; + + /* Don't copy the buffer into the Msg if + * it's too big for the cipher_IV_array + * OR if the FW needs to update it + * OR if there's no buffer supplied + * OR if last partial + */ + if ((pCipherCdCtrlHdr->cipher_state_sz > + LAC_SYM_QAT_HASH_IV_REQ_MAX_SIZE_QW) || + (ICP_QAT_FW_LA_UPDATE_STATE_GET(*pCipherSpecificFlags) == + ICP_QAT_FW_LA_UPDATE_STATE) || + (pIvBufferVirt == NULL) || + (ICP_QAT_FW_LA_PARTIAL_GET(*pCipherSpecificFlags) == + ICP_QAT_FW_LA_PARTIAL_END)) { + /* Populate the field with a ptr to the flat buffer */ + pCipherReqParams->u.s.cipher_IV_ptr = ivBufferPhysAddr; + pCipherReqParams->u.s.resrvd1 = 0; + /* Set the flag indicating the field format */ + ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET( + *pCipherSpecificFlags, ICP_QAT_FW_CIPH_IV_64BIT_PTR); + } else { + /* Populate the field with the contents of the buffer, + * zero field first as data may be smaller than the field */ + memset(pCipherReqParams->u.cipher_IV_array, + 0, + LAC_LONGWORDS_TO_BYTES(ICP_QAT_FW_NUM_LONGWORDS_4)); + + /* We force a specific compiler optimisation here. The length + * to be copied turns out to be always 16, and by coding a + * memcpy with a literal value the compiler will compile inline + * code (in fact, only two vector instructions) to effect the + * copy. This gives us a huge performance increase.
+ */ + unsigned long cplen = + LAC_QUADWORDS_TO_BYTES(pCipherCdCtrlHdr->cipher_state_sz); + + if (cplen == 16) + memcpy(pCipherReqParams->u.cipher_IV_array, + pIvBufferVirt, + 16); + else + memcpy(pCipherReqParams->u.cipher_IV_array, + pIvBufferVirt, + cplen); + /* Set the flag indicating the field format */ + ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET( + *pCipherSpecificFlags, ICP_QAT_FW_CIPH_IV_16BYTE_DATA); + } + + return CPA_STATUS_SUCCESS; +} + +void +LacSymQat_CipherArc4StateInit(const Cpa8U *pKey, + Cpa32U keyLenInBytes, + Cpa8U *pArc4CipherState) +{ + Cpa32U i = 0; + Cpa32U j = 0; + Cpa32U k = 0; + + for (i = 0; i < LAC_CIPHER_ARC4_KEY_MATRIX_LEN_BYTES; ++i) { + pArc4CipherState[i] = (Cpa8U)i; + } + + for (i = 0; i < LAC_CIPHER_ARC4_KEY_MATRIX_LEN_BYTES; ++i) { + Cpa8U swap = 0; + + if (k >= keyLenInBytes) + k -= keyLenInBytes; + + j = (j + pArc4CipherState[i] + pKey[k]); + if (j >= LAC_CIPHER_ARC4_KEY_MATRIX_LEN_BYTES) + j %= LAC_CIPHER_ARC4_KEY_MATRIX_LEN_BYTES; + ++k; + + /* Swap state[i] & state[j] */ + swap = pArc4CipherState[i]; + pArc4CipherState[i] = pArc4CipherState[j]; + pArc4CipherState[j] = swap; + } + + /* Initialise i & j values for QAT */ + pArc4CipherState[LAC_CIPHER_ARC4_KEY_MATRIX_LEN_BYTES] = 0; + pArc4CipherState[LAC_CIPHER_ARC4_KEY_MATRIX_LEN_BYTES + 1] = 0; +} + +/* Update the cipher_key_sz in the Request cache prepared and stored + * in the session */ +void +LacSymQat_CipherXTSModeUpdateKeyLen(lac_session_desc_t *pSessionDesc, + Cpa32U newKeySizeInBytes) +{ + icp_qat_fw_cipher_cd_ctrl_hdr_t *pCipherControlBlock = NULL; + + pCipherControlBlock = (icp_qat_fw_cipher_cd_ctrl_hdr_t *)&( + pSessionDesc->reqCacheFtr.cd_ctrl); + + pCipherControlBlock->cipher_key_sz = + LAC_BYTES_TO_QUADWORDS(newKeySizeInBytes); +} Index: sys/dev/qat/qat_api/common/crypto/sym/qat/lac_sym_qat_hash.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/qat/lac_sym_qat_hash.c @@ -0,0 +1,942 @@ +/* 
SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_qat_hash.c + * + * @ingroup LacSymQatHash + * + * Implementation for populating QAT data structures for hash operation + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +#include "lac_log.h" +#include "lac_mem.h" +#include "lac_sym.h" +#include "lac_common.h" +#include "lac_sym_qat.h" +#include "lac_list.h" +#include "lac_sal_types.h" +#include "lac_sym_qat_hash.h" +#include "lac_sym_qat_hash_defs_lookup.h" + +/** + * This structure contains pointers into the hash setup block of the + * security descriptor. As the hash setup block contains fields that + * are of variable length, pointers must be calculated to these fields + * and the hash setup block is populated using these pointers. 
*/ +typedef struct lac_hash_blk_ptrs_s { + icp_qat_hw_auth_setup_t *pInHashSetup; + /**< inner hash setup */ + Cpa8U *pInHashInitState1; + /**< inner initial state 1 */ + Cpa8U *pInHashInitState2; + /**< inner initial state 2 */ + icp_qat_hw_auth_setup_t *pOutHashSetup; + /**< outer hash setup */ + Cpa8U *pOutHashInitState1; + /**< outer hash initial state */ +} lac_hash_blk_ptrs_t; + +typedef struct lac_hash_blk_ptrs_optimised_s { + Cpa8U *pInHashInitState1; + /**< inner initial state 1 */ + Cpa8U *pInHashInitState2; + /**< inner initial state 2 */ + +} lac_hash_blk_ptrs_optimised_t; + +/** + * This function calculates the pointers into the hash setup block + * based on the control block + * + * @param[in] pHashControlBlock Pointer to hash control block + * @param[in] pHwBlockBase pointer to base of hardware block + * @param[out] pHashBlkPtrs structure containing pointers to + * various fields in the hash setup block + * + * @return void + */ +static void +LacSymQat_HashHwBlockPtrsInit(icp_qat_fw_auth_cd_ctrl_hdr_t *pHashControlBlock, + void *pHwBlockBase, + lac_hash_blk_ptrs_t *pHashBlkPtrs); + +static void +LacSymQat_HashSetupBlockOptimisedFormatInit( + const CpaCySymHashSetupData *pHashSetupData, + icp_qat_fw_auth_cd_ctrl_hdr_t *pHashControlBlock, + void *pHwBlockBase, + icp_qat_hw_auth_mode_t qatHashMode, + lac_sym_qat_hash_precompute_info_t *pPrecompute, + lac_sym_qat_hash_defs_t *pHashDefs, + lac_sym_qat_hash_defs_t *pOuterHashDefs); + +/** + * This function populates the hash setup block + * + * @param[in] pHashSetupData Pointer to the hash context + * @param[in] pHashControlBlock Pointer to hash control block + * @param[in] pHwBlockBase pointer to base of hardware block + * @param[in] qatHashMode QAT hash mode + * @param[in] pPrecompute For auth mode, this is the pointer + * to the precompute data. Otherwise this + * should be set to NULL + * @param[in] pHashDefs Pointer to Hash definitions + * @param[in] pOuterHashDefs Pointer to Outer Hash definitions. 
+ * Required for nested hash mode only + * + * @return void + */ +static void +LacSymQat_HashSetupBlockInit(const CpaCySymHashSetupData *pHashSetupData, + icp_qat_fw_auth_cd_ctrl_hdr_t *pHashControlBlock, + void *pHwBlockBase, + icp_qat_hw_auth_mode_t qatHashMode, + lac_sym_qat_hash_precompute_info_t *pPrecompute, + lac_sym_qat_hash_defs_t *pHashDefs, + lac_sym_qat_hash_defs_t *pOuterHashDefs); + +/** @ingroup LacSymQatHash */ +void +LacSymQat_HashGetCfgData(CpaInstanceHandle pInstance, + icp_qat_hw_auth_mode_t qatHashMode, + CpaCySymHashMode apiHashMode, + CpaCySymHashAlgorithm apiHashAlgorithm, + icp_qat_hw_auth_algo_t *pQatAlgorithm, + CpaBoolean *pQatNested) +{ + lac_sym_qat_hash_defs_t *pHashDefs = NULL; + + LacSymQat_HashDefsLookupGet(pInstance, apiHashAlgorithm, &pHashDefs); + *pQatAlgorithm = pHashDefs->qatInfo->algoEnc; + + if (IS_HASH_MODE_2(qatHashMode)) { + /* set bit for nested hashing */ + *pQatNested = ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED; + } + /* Nested hash in mode 0. */ + else if (CPA_CY_SYM_HASH_MODE_NESTED == apiHashMode) { + /* set bit for nested hashing */ + *pQatNested = ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED; + } + /* mode0 - plain or mode1 - auth */ + else { + *pQatNested = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED; + } +} + +/** @ingroup LacSymQatHash */ +void +LacSymQat_HashContentDescInit(icp_qat_la_bulk_req_ftr_t *pMsg, + CpaInstanceHandle instanceHandle, + const CpaCySymHashSetupData *pHashSetupData, + void *pHwBlockBase, + Cpa32U hwBlockOffsetInQuadWords, + icp_qat_fw_slice_t nextSlice, + icp_qat_hw_auth_mode_t qatHashMode, + CpaBoolean useSymConstantsTable, + CpaBoolean useOptimisedContentDesc, + lac_sym_qat_hash_precompute_info_t *pPrecompute, + Cpa32U *pHashBlkSizeInBytes) +{ + + icp_qat_fw_auth_cd_ctrl_hdr_t *cd_ctrl = + (icp_qat_fw_auth_cd_ctrl_hdr_t *)&(pMsg->cd_ctrl); + lac_sym_qat_hash_defs_t *pHashDefs = NULL; + lac_sym_qat_hash_defs_t *pOuterHashDefs = NULL; + Cpa32U hashSetupBlkSize = 0; + + /* setup the offset in QuadWords into the hw 
blk */ + cd_ctrl->hash_cfg_offset = hwBlockOffsetInQuadWords; + + ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, nextSlice); + ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_AUTH); + + LacSymQat_HashDefsLookupGet(instanceHandle, + pHashSetupData->hashAlgorithm, + &pHashDefs); + + /* Hmac in mode 2 TLS */ + if (IS_HASH_MODE_2(qatHashMode)) { + /* Set bit for nested hashing. + * Make sure not to overwrite other flags in hash_flags byte. + */ + ICP_QAT_FW_HASH_FLAG_AUTH_HDR_NESTED_SET( + cd_ctrl->hash_flags, ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED); + } + /* Nested hash in mode 0 */ + else if (CPA_CY_SYM_HASH_MODE_NESTED == pHashSetupData->hashMode) { + /* Set bit for nested hashing. + * Make sure not to overwrite other flags in hash_flags byte. + */ + ICP_QAT_FW_HASH_FLAG_AUTH_HDR_NESTED_SET( + cd_ctrl->hash_flags, ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED); + } + /* mode0 - plain or mode1 - auth */ + else { + ICP_QAT_FW_HASH_FLAG_AUTH_HDR_NESTED_SET( + cd_ctrl->hash_flags, ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED); + } + + /* set the final digest size */ + cd_ctrl->final_sz = pHashSetupData->digestResultLenInBytes; + + /* set the state1 size */ + cd_ctrl->inner_state1_sz = + LAC_ALIGN_POW2_ROUNDUP(pHashDefs->qatInfo->state1Length, + LAC_QUAD_WORD_IN_BYTES); + + /* set the inner result size to the digest length */ + cd_ctrl->inner_res_sz = pHashDefs->algInfo->digestLength; + + /* set the state2 size - only for mode 1 Auth algos and AES CBC MAC */ + if (IS_HASH_MODE_1(qatHashMode) || + pHashSetupData->hashAlgorithm == CPA_CY_SYM_HASH_AES_CBC_MAC || + pHashSetupData->hashAlgorithm == CPA_CY_SYM_HASH_ZUC_EIA3) { + cd_ctrl->inner_state2_sz = + LAC_ALIGN_POW2_ROUNDUP(pHashDefs->qatInfo->state2Length, + LAC_QUAD_WORD_IN_BYTES); + } else { + cd_ctrl->inner_state2_sz = 0; + } + + cd_ctrl->inner_state2_offset = cd_ctrl->hash_cfg_offset + + LAC_BYTES_TO_QUADWORDS(sizeof(icp_qat_hw_auth_setup_t) + + cd_ctrl->inner_state1_sz); + + /* size of inner part of hash setup block */ + hashSetupBlkSize = 
sizeof(icp_qat_hw_auth_setup_t) + + cd_ctrl->inner_state1_sz + cd_ctrl->inner_state2_sz; + + /* For nested hashing - Fill in the outer fields */ + if (CPA_CY_SYM_HASH_MODE_NESTED == pHashSetupData->hashMode || + IS_HASH_MODE_2(qatHashMode)) { + /* For nested - use the outer algorithm. This covers TLS and + * nested hash. For HMAC mode2 use inner algorithm again */ + CpaCySymHashAlgorithm outerAlg = + (CPA_CY_SYM_HASH_MODE_NESTED == pHashSetupData->hashMode) ? + pHashSetupData->nestedModeSetupData.outerHashAlgorithm : + pHashSetupData->hashAlgorithm; + + LacSymQat_HashDefsLookupGet(instanceHandle, + outerAlg, + &pOuterHashDefs); + + /* outer config offset */ + cd_ctrl->outer_config_offset = cd_ctrl->inner_state2_offset + + LAC_BYTES_TO_QUADWORDS(cd_ctrl->inner_state2_sz); + + cd_ctrl->outer_state1_sz = + LAC_ALIGN_POW2_ROUNDUP(pOuterHashDefs->algInfo->stateSize, + LAC_QUAD_WORD_IN_BYTES); + + /* outer result size */ + cd_ctrl->outer_res_sz = pOuterHashDefs->algInfo->digestLength; + + /* outer_prefix_offset will be the size of the inner prefix data + * plus the hash state storage size. 
*/ + /* The prefix buffer is part of the ReqParams, so this param + * will be + * setup where ReqParams are set up */ + + /* add on size of outer part of hash block */ + hashSetupBlkSize += + sizeof(icp_qat_hw_auth_setup_t) + cd_ctrl->outer_state1_sz; + } else { + cd_ctrl->outer_config_offset = 0; + cd_ctrl->outer_state1_sz = 0; + cd_ctrl->outer_res_sz = 0; + } + + if (CPA_CY_SYM_HASH_SNOW3G_UIA2 == pHashSetupData->hashAlgorithm) { + /* add the size for the cipher config word, the key and the IV*/ + hashSetupBlkSize += sizeof(icp_qat_hw_cipher_config_t) + + pHashSetupData->authModeSetupData.authKeyLenInBytes + + ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ; + } + + *pHashBlkSizeInBytes = hashSetupBlkSize; + + if (useOptimisedContentDesc) { + LacSymQat_HashSetupBlockOptimisedFormatInit(pHashSetupData, + cd_ctrl, + pHwBlockBase, + qatHashMode, + pPrecompute, + pHashDefs, + pOuterHashDefs); + } else if (!useSymConstantsTable) { + /***************************************************************************** + * Populate Hash Setup block * + *****************************************************************************/ + LacSymQat_HashSetupBlockInit(pHashSetupData, + cd_ctrl, + pHwBlockBase, + qatHashMode, + pPrecompute, + pHashDefs, + pOuterHashDefs); + } +} + +/* This fn populates fields in both the CD ctrl block and the ReqParams block + * which describe the Hash ReqParams: + * cd_ctrl.outer_prefix_offset + * cd_ctrl.outer_prefix_sz + * req_params.inner_prefix_sz/aad_sz + * req_params.hash_state_sz + * req_params.auth_res_sz + * + */ +void +LacSymQat_HashSetupReqParamsMetaData( + icp_qat_la_bulk_req_ftr_t *pMsg, + CpaInstanceHandle instanceHandle, + const CpaCySymHashSetupData *pHashSetupData, + CpaBoolean hashStateBuffer, + icp_qat_hw_auth_mode_t qatHashMode, + CpaBoolean digestVerify) +{ + icp_qat_fw_auth_cd_ctrl_hdr_t *cd_ctrl = NULL; + icp_qat_la_auth_req_params_t *pHashReqParams = NULL; + lac_sym_qat_hash_defs_t *pHashDefs = NULL; + + cd_ctrl = (icp_qat_fw_auth_cd_ctrl_hdr_t 
*)&(pMsg->cd_ctrl); + pHashReqParams = + (icp_qat_la_auth_req_params_t *)(&(pMsg->serv_specif_rqpars)); + + LacSymQat_HashDefsLookupGet(instanceHandle, + pHashSetupData->hashAlgorithm, + &pHashDefs); + + /* Hmac in mode 2 TLS */ + if (IS_HASH_MODE_2(qatHashMode)) { + /* Inner and outer prefixes are the block length */ + pHashReqParams->u2.inner_prefix_sz = + pHashDefs->algInfo->blockLength; + cd_ctrl->outer_prefix_sz = pHashDefs->algInfo->blockLength; + cd_ctrl->outer_prefix_offset = LAC_BYTES_TO_QUADWORDS( + LAC_ALIGN_POW2_ROUNDUP((pHashReqParams->u2.inner_prefix_sz), + LAC_QUAD_WORD_IN_BYTES)); + } + /* Nested hash in mode 0 */ + else if (CPA_CY_SYM_HASH_MODE_NESTED == pHashSetupData->hashMode) { + + /* set inner and outer prefixes */ + pHashReqParams->u2.inner_prefix_sz = + pHashSetupData->nestedModeSetupData.innerPrefixLenInBytes; + cd_ctrl->outer_prefix_sz = + pHashSetupData->nestedModeSetupData.outerPrefixLenInBytes; + cd_ctrl->outer_prefix_offset = LAC_BYTES_TO_QUADWORDS( + LAC_ALIGN_POW2_ROUNDUP((pHashReqParams->u2.inner_prefix_sz), + LAC_QUAD_WORD_IN_BYTES)); + } + /* mode0 - plain or mode1 - auth */ + else { + Cpa16U aadDataSize = 0; + + /* For Auth Encrypt set the aad size */ + if (CPA_CY_SYM_HASH_AES_CCM == pHashSetupData->hashAlgorithm) { + /* at the beginning of the buffer there is B0 block */ + aadDataSize = LAC_HASH_AES_CCM_BLOCK_SIZE; + + /* then, if there is some 'a' data, the buffer will + * store encoded + * length of 'a' and 'a' itself */ + if (pHashSetupData->authModeSetupData.aadLenInBytes > + 0) { + /* as the QAT API puts the requirement on the + * pAdditionalAuthData not to be bigger than 240 + * bytes then we + * just need 2 bytes to store encoded length of + * 'a' */ + aadDataSize += sizeof(Cpa16U); + aadDataSize += pHashSetupData->authModeSetupData + .aadLenInBytes; + } + + /* round the aad size to the multiple of CCM block + * size.*/ + pHashReqParams->u2.aad_sz = + LAC_ALIGN_POW2_ROUNDUP(aadDataSize, + LAC_HASH_AES_CCM_BLOCK_SIZE); + } 
else if (CPA_CY_SYM_HASH_AES_GCM == + pHashSetupData->hashAlgorithm) { + aadDataSize = + pHashSetupData->authModeSetupData.aadLenInBytes; + + /* round the aad size to the multiple of GCM hash block + * size. */ + pHashReqParams->u2.aad_sz = + LAC_ALIGN_POW2_ROUNDUP(aadDataSize, + LAC_HASH_AES_GCM_BLOCK_SIZE); + } else { + pHashReqParams->u2.aad_sz = 0; + } + + cd_ctrl->outer_prefix_sz = 0; + cd_ctrl->outer_prefix_offset = 0; + } + + /* If there is a hash state prefix buffer */ + if (CPA_TRUE == hashStateBuffer) { + /* Note, this sets up size for both aad and non-aad cases */ + pHashReqParams->hash_state_sz = LAC_BYTES_TO_QUADWORDS( + LAC_ALIGN_POW2_ROUNDUP(pHashReqParams->u2.inner_prefix_sz, + LAC_QUAD_WORD_IN_BYTES) + + LAC_ALIGN_POW2_ROUNDUP(cd_ctrl->outer_prefix_sz, + LAC_QUAD_WORD_IN_BYTES)); + } else { + pHashReqParams->hash_state_sz = 0; + } + + if (CPA_TRUE == digestVerify) { + /* auth result size in bytes to be read in for a verify + * operation */ + pHashReqParams->auth_res_sz = + pHashSetupData->digestResultLenInBytes; + } else { + pHashReqParams->auth_res_sz = 0; + } + + pHashReqParams->resrvd1 = 0; +} + +void +LacSymQat_HashHwBlockPtrsInit(icp_qat_fw_auth_cd_ctrl_hdr_t *cd_ctrl, + void *pHwBlockBase, + lac_hash_blk_ptrs_t *pHashBlkPtrs) +{ + /* encoded offset for inner config is converted to a byte offset. 
*/ + pHashBlkPtrs->pInHashSetup = + (icp_qat_hw_auth_setup_t *)((Cpa8U *)pHwBlockBase + + (cd_ctrl->hash_cfg_offset * + LAC_QUAD_WORD_IN_BYTES)); + + pHashBlkPtrs->pInHashInitState1 = (Cpa8U *)pHashBlkPtrs->pInHashSetup + + sizeof(icp_qat_hw_auth_setup_t); + + pHashBlkPtrs->pInHashInitState2 = + (Cpa8U *)(pHashBlkPtrs->pInHashInitState1) + + cd_ctrl->inner_state1_sz; + + pHashBlkPtrs->pOutHashSetup = + (icp_qat_hw_auth_setup_t *)((Cpa8U *)(pHashBlkPtrs + ->pInHashInitState2) + + cd_ctrl->inner_state2_sz); + + pHashBlkPtrs->pOutHashInitState1 = + (Cpa8U *)(pHashBlkPtrs->pOutHashSetup) + + sizeof(icp_qat_hw_auth_setup_t); +} + +static void +LacSymQat_HashSetupBlockInit(const CpaCySymHashSetupData *pHashSetupData, + icp_qat_fw_auth_cd_ctrl_hdr_t *pHashControlBlock, + void *pHwBlockBase, + icp_qat_hw_auth_mode_t qatHashMode, + lac_sym_qat_hash_precompute_info_t *pPrecompute, + lac_sym_qat_hash_defs_t *pHashDefs, + lac_sym_qat_hash_defs_t *pOuterHashDefs) +{ + Cpa32U innerConfig = 0; + lac_hash_blk_ptrs_t hashBlkPtrs = { 0 }; + Cpa32U aed_hash_cmp_length = 0; + + LacSymQat_HashHwBlockPtrsInit(pHashControlBlock, + pHwBlockBase, + &hashBlkPtrs); + + innerConfig = ICP_QAT_HW_AUTH_CONFIG_BUILD( + qatHashMode, + pHashDefs->qatInfo->algoEnc, + pHashSetupData->digestResultLenInBytes); + + /* Set the Inner hash configuration */ + hashBlkPtrs.pInHashSetup->auth_config.config = innerConfig; + hashBlkPtrs.pInHashSetup->auth_config.reserved = 0; + + /* For mode 1 pre-computes for auth algorithms */ + if (IS_HASH_MODE_1(qatHashMode) || + CPA_CY_SYM_HASH_AES_CBC_MAC == pHashSetupData->hashAlgorithm || + CPA_CY_SYM_HASH_ZUC_EIA3 == pHashSetupData->hashAlgorithm) { + /* for HMAC in mode 1 authCounter is the block size + * else the authCounter is 0. 
The firmware expects the counter + * to be + * big endian */ + LAC_MEM_SHARED_WRITE_SWAP( + hashBlkPtrs.pInHashSetup->auth_counter.counter, + pHashDefs->qatInfo->authCounter); + + /* state 1 is set to 0 for the following algorithms */ + if ((CPA_CY_SYM_HASH_AES_XCBC == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_AES_CMAC == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_AES_CBC_MAC == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_KASUMI_F9 == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_SNOW3G_UIA2 == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_AES_CCM == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_AES_GMAC == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_AES_GCM == + pHashSetupData->hashAlgorithm) || + (CPA_CY_SYM_HASH_ZUC_EIA3 == + pHashSetupData->hashAlgorithm)) { + LAC_OS_BZERO(hashBlkPtrs.pInHashInitState1, + pHashDefs->qatInfo->state1Length); + } + + /* Pad remaining bytes of sha1 precomputes */ + if (CPA_CY_SYM_HASH_SHA1 == pHashSetupData->hashAlgorithm) { + Cpa32U state1PadLen = 0; + Cpa32U state2PadLen = 0; + + if (pHashControlBlock->inner_state1_sz > + pHashDefs->algInfo->stateSize) { + state1PadLen = + pHashControlBlock->inner_state1_sz - + pHashDefs->algInfo->stateSize; + } + + if (pHashControlBlock->inner_state2_sz > + pHashDefs->algInfo->stateSize) { + state2PadLen = + pHashControlBlock->inner_state2_sz - + pHashDefs->algInfo->stateSize; + } + + if (state1PadLen > 0) { + + LAC_OS_BZERO(hashBlkPtrs.pInHashInitState1 + + pHashDefs->algInfo->stateSize, + state1PadLen); + } + + if (state2PadLen > 0) { + LAC_OS_BZERO(hashBlkPtrs.pInHashInitState2 + + pHashDefs->algInfo->stateSize, + state2PadLen); + } + } + + pPrecompute->state1Size = pHashDefs->qatInfo->state1Length; + pPrecompute->state2Size = pHashDefs->qatInfo->state2Length; + + /* Set the destination for pre-compute state1 data to be written + */ + pPrecompute->pState1 = hashBlkPtrs.pInHashInitState1; + + /* Set the destination for 
pre-compute state2 data to be written + */ + pPrecompute->pState2 = hashBlkPtrs.pInHashInitState2; + } + /* For digest and nested digest */ + else { + Cpa32U padLen = pHashControlBlock->inner_state1_sz - + pHashDefs->algInfo->stateSize; + + /* counter set to 0 */ + hashBlkPtrs.pInHashSetup->auth_counter.counter = 0; + + /* set the inner hash state 1 */ + memcpy(hashBlkPtrs.pInHashInitState1, + pHashDefs->algInfo->initState, + pHashDefs->algInfo->stateSize); + + if (padLen > 0) { + LAC_OS_BZERO(hashBlkPtrs.pInHashInitState1 + + pHashDefs->algInfo->stateSize, + padLen); + } + } + + hashBlkPtrs.pInHashSetup->auth_counter.reserved = 0; + + /* Fill in the outer part of the hash setup block */ + if ((CPA_CY_SYM_HASH_MODE_NESTED == pHashSetupData->hashMode || + IS_HASH_MODE_2(qatHashMode)) && + (NULL != pOuterHashDefs)) { + Cpa32U outerConfig = ICP_QAT_HW_AUTH_CONFIG_BUILD( + qatHashMode, + pOuterHashDefs->qatInfo->algoEnc, + pHashSetupData->digestResultLenInBytes); + + Cpa32U padLen = pHashControlBlock->outer_state1_sz - + pOuterHashDefs->algInfo->stateSize; + + /* populate the auth config */ + hashBlkPtrs.pOutHashSetup->auth_config.config = outerConfig; + hashBlkPtrs.pOutHashSetup->auth_config.reserved = 0; + + /* outer Counter set to 0 */ + hashBlkPtrs.pOutHashSetup->auth_counter.counter = 0; + hashBlkPtrs.pOutHashSetup->auth_counter.reserved = 0; + + /* set outer hash state 1 */ + memcpy(hashBlkPtrs.pOutHashInitState1, + pOuterHashDefs->algInfo->initState, + pOuterHashDefs->algInfo->stateSize); + + if (padLen > 0) { + LAC_OS_BZERO(hashBlkPtrs.pOutHashInitState1 + + pOuterHashDefs->algInfo->stateSize, + padLen); + } + } + + if (CPA_CY_SYM_HASH_SNOW3G_UIA2 == pHashSetupData->hashAlgorithm) { + icp_qat_hw_cipher_config_t *pCipherConfig = + (icp_qat_hw_cipher_config_t *)hashBlkPtrs.pOutHashSetup; + + pCipherConfig->val = ICP_QAT_HW_CIPHER_CONFIG_BUILD( + ICP_QAT_HW_CIPHER_ECB_MODE, + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2, + ICP_QAT_HW_CIPHER_KEY_CONVERT, +
ICP_QAT_HW_CIPHER_ENCRYPT, + aed_hash_cmp_length); + + pCipherConfig->reserved = 0; + + memcpy((Cpa8U *)pCipherConfig + + sizeof(icp_qat_hw_cipher_config_t), + pHashSetupData->authModeSetupData.authKey, + pHashSetupData->authModeSetupData.authKeyLenInBytes); + + LAC_OS_BZERO( + (Cpa8U *)pCipherConfig + + sizeof(icp_qat_hw_cipher_config_t) + + pHashSetupData->authModeSetupData.authKeyLenInBytes, + ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ); + } else if (CPA_CY_SYM_HASH_ZUC_EIA3 == pHashSetupData->hashAlgorithm) { + icp_qat_hw_cipher_config_t *pCipherConfig = + (icp_qat_hw_cipher_config_t *)hashBlkPtrs.pOutHashSetup; + + pCipherConfig->val = ICP_QAT_HW_CIPHER_CONFIG_BUILD( + ICP_QAT_HW_CIPHER_ECB_MODE, + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3, + ICP_QAT_HW_CIPHER_KEY_CONVERT, + ICP_QAT_HW_CIPHER_ENCRYPT, + aed_hash_cmp_length); + + pCipherConfig->reserved = 0; + + memcpy((Cpa8U *)pCipherConfig + + sizeof(icp_qat_hw_cipher_config_t), + pHashSetupData->authModeSetupData.authKey, + pHashSetupData->authModeSetupData.authKeyLenInBytes); + + LAC_OS_BZERO( + (Cpa8U *)pCipherConfig + + sizeof(icp_qat_hw_cipher_config_t) + + pHashSetupData->authModeSetupData.authKeyLenInBytes, + ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ); + } +} + +static void +LacSymQat_HashOpHwBlockPtrsInit(icp_qat_fw_auth_cd_ctrl_hdr_t *cd_ctrl, + void *pHwBlockBase, + lac_hash_blk_ptrs_optimised_t *pHashBlkPtrs) +{ + pHashBlkPtrs->pInHashInitState1 = (((Cpa8U *)pHwBlockBase) + 16); + pHashBlkPtrs->pInHashInitState2 = + (Cpa8U *)(pHashBlkPtrs->pInHashInitState1) + + cd_ctrl->inner_state1_sz; +} + +static void +LacSymQat_HashSetupBlockOptimisedFormatInit( + const CpaCySymHashSetupData *pHashSetupData, + icp_qat_fw_auth_cd_ctrl_hdr_t *pHashControlBlock, + void *pHwBlockBase, + icp_qat_hw_auth_mode_t qatHashMode, + lac_sym_qat_hash_precompute_info_t *pPrecompute, + lac_sym_qat_hash_defs_t *pHashDefs, + lac_sym_qat_hash_defs_t *pOuterHashDefs) +{ + + Cpa32U state1PadLen = 0; + Cpa32U state2PadLen = 0; + + lac_hash_blk_ptrs_optimised_t 
pHashBlkPtrs = { 0 }; + + LacSymQat_HashOpHwBlockPtrsInit(pHashControlBlock, + pHwBlockBase, + &pHashBlkPtrs); + + if (pHashControlBlock->inner_state1_sz > + pHashDefs->algInfo->stateSize) { + state1PadLen = pHashControlBlock->inner_state1_sz - + pHashDefs->algInfo->stateSize; + } + + if (pHashControlBlock->inner_state2_sz > + pHashDefs->algInfo->stateSize) { + state2PadLen = pHashControlBlock->inner_state2_sz - + pHashDefs->algInfo->stateSize; + } + + if (state1PadLen > 0) { + + LAC_OS_BZERO(pHashBlkPtrs.pInHashInitState1 + + pHashDefs->algInfo->stateSize, + state1PadLen); + } + + if (state2PadLen > 0) { + + LAC_OS_BZERO(pHashBlkPtrs.pInHashInitState2 + + pHashDefs->algInfo->stateSize, + state2PadLen); + } + pPrecompute->state1Size = pHashDefs->qatInfo->state1Length; + pPrecompute->state2Size = pHashDefs->qatInfo->state2Length; + + /* Set the destination for pre-compute state1 data to be written */ + pPrecompute->pState1 = pHashBlkPtrs.pInHashInitState1; + + /* Set the destination for pre-compute state2 data to be written */ + pPrecompute->pState2 = pHashBlkPtrs.pInHashInitState2; +} + +void +LacSymQat_HashStatePrefixAadBufferSizeGet( + icp_qat_la_bulk_req_ftr_t *pMsg, + lac_sym_qat_hash_state_buffer_info_t *pHashStateBuf) +{ + const icp_qat_fw_auth_cd_ctrl_hdr_t *cd_ctrl; + icp_qat_la_auth_req_params_t *pHashReqParams; + + cd_ctrl = (icp_qat_fw_auth_cd_ctrl_hdr_t *)&(pMsg->cd_ctrl); + pHashReqParams = + (icp_qat_la_auth_req_params_t *)(&(pMsg->serv_specif_rqpars)); + + /* hash state storage needed to support partial packets.
Space reserved + * for this in all cases */ + pHashStateBuf->stateStorageSzQuadWords = LAC_BYTES_TO_QUADWORDS( + sizeof(icp_qat_hw_auth_counter_t) + cd_ctrl->inner_state1_sz); + + pHashStateBuf->prefixAadSzQuadWords = pHashReqParams->hash_state_sz; +} + +void +LacSymQat_HashStatePrefixAadBufferPopulate( + lac_sym_qat_hash_state_buffer_info_t *pHashStateBuf, + icp_qat_la_bulk_req_ftr_t *pMsg, + Cpa8U *pInnerPrefixAad, + Cpa8U innerPrefixSize, + Cpa8U *pOuterPrefix, + Cpa8U outerPrefixSize) +{ + const icp_qat_fw_auth_cd_ctrl_hdr_t *cd_ctrl = + (icp_qat_fw_auth_cd_ctrl_hdr_t *)&(pMsg->cd_ctrl); + + icp_qat_la_auth_req_params_t *pHashReqParams = + (icp_qat_la_auth_req_params_t *)(&(pMsg->serv_specif_rqpars)); + + /* + * Let S be the supplied secret + * S1 = S/2 if S is even and (S/2 + 1) if S is odd. + * Set length S2 (inner prefix) = S1 and the start address + * of S2 is S[S1/2] i.e. if S is odd then S2 starts at the last byte of + * S1 + * _____________________________________________________________ + * | outer prefix | padding | + * |________________| | + * | | + * |____________________________________________________________| + * | inner prefix | padding | + * |________________| | + * | | + * |____________________________________________________________| + * + */ + if (NULL != pInnerPrefixAad) { + Cpa8U *pLocalInnerPrefix = + (Cpa8U *)(pHashStateBuf->pData) + + LAC_QUADWORDS_TO_BYTES( + pHashStateBuf->stateStorageSzQuadWords); + Cpa8U padding = + pHashReqParams->u2.inner_prefix_sz - innerPrefixSize; + /* copy the inner prefix or aad data */ + memcpy(pLocalInnerPrefix, pInnerPrefixAad, innerPrefixSize); + + /* Reset with zeroes any area reserved for padding in this block + */ + if (0 < padding) { + LAC_OS_BZERO(pLocalInnerPrefix + innerPrefixSize, + padding); + } + } + + if (NULL != pOuterPrefix) { + Cpa8U *pLocalOuterPrefix = + (Cpa8U *)pHashStateBuf->pData + + LAC_QUADWORDS_TO_BYTES( + pHashStateBuf->stateStorageSzQuadWords + + cd_ctrl->outer_prefix_offset); + 
Cpa8U padding = LAC_QUADWORDS_TO_BYTES( + pHashStateBuf->prefixAadSzQuadWords) - + pHashReqParams->u2.inner_prefix_sz - outerPrefixSize; + + /* copy the outer prefix */ + memcpy(pLocalOuterPrefix, pOuterPrefix, outerPrefixSize); + + /* Reset with zeroes any area reserved for padding in this block + */ + if (0 < padding) { + LAC_OS_BZERO(pLocalOuterPrefix + outerPrefixSize, + padding); + } + } +} + +inline CpaStatus +LacSymQat_HashRequestParamsPopulate( + icp_qat_fw_la_bulk_req_t *pReq, + Cpa32U authOffsetInBytes, + Cpa32U authLenInBytes, + sal_service_t *pService, + lac_sym_qat_hash_state_buffer_info_t *pHashStateBuf, + Cpa32U packetType, + Cpa32U hashResultSize, + CpaBoolean digestVerify, + Cpa8U *pAuthResult, + CpaCySymHashAlgorithm alg, + void *hkdf_secret) +{ + Cpa64U authResultPhys = 0; + icp_qat_fw_la_auth_req_params_t *pHashReqParams; + + pHashReqParams = (icp_qat_fw_la_auth_req_params_t + *)((Cpa8U *)&(pReq->serv_specif_rqpars) + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + + pHashReqParams->auth_off = authOffsetInBytes; + pHashReqParams->auth_len = authLenInBytes; + + /* Set the physical location of secret for HKDF */ + if (NULL != hkdf_secret) { + LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + (*pService), pHashReqParams->u1.aad_adr, hkdf_secret); + + if (pHashReqParams->u1.aad_adr == 0) { + LAC_LOG_ERROR( + "Unable to get the physical address of the" + " HKDF secret\n"); + return CPA_STATUS_FAIL; + } + } + + /* For a Full packet or last partial need to set the digest result + * pointer + * and the auth result field */ + if (NULL != pAuthResult) { + authResultPhys = + LAC_OS_VIRT_TO_PHYS_EXTERNAL((*pService), + (void *)pAuthResult); + + if (authResultPhys == 0) { + LAC_LOG_ERROR( + "Unable to get the physical address of the" + " auth result\n"); + return CPA_STATUS_FAIL; + } + + pHashReqParams->auth_res_addr = authResultPhys; + } else { + pHashReqParams->auth_res_addr = 0; + } + + if (CPA_TRUE == digestVerify) { + /* auth result size in bytes to be 
read in for a verify + * operation */ + pHashReqParams->auth_res_sz = hashResultSize; + } else { + pHashReqParams->auth_res_sz = 0; + } + + /* If there is a hash state prefix buffer */ + if (NULL != pHashStateBuf) { + /* Only write the pointer to the buffer if the size is greater + * than 0 + * this will be the case for plain and auth mode due to the + * state storage required for partial packets and for nested + * mode (when + * the prefix data is > 0) */ + if ((pHashStateBuf->stateStorageSzQuadWords + + pHashStateBuf->prefixAadSzQuadWords) > 0) { + /* For the first partial packet, the QAT expects the + * pointer to the + * inner prefix even if there is no memory allocated for + * this. The + * QAT will internally calculate where to write the + * state back. */ + if ((ICP_QAT_FW_LA_PARTIAL_START == packetType) || + (ICP_QAT_FW_LA_PARTIAL_NONE == packetType)) { + // prefix_addr changed to auth_partial_st_prefix + pHashReqParams->u1.auth_partial_st_prefix = + ((pHashStateBuf->pDataPhys) + + LAC_QUADWORDS_TO_BYTES( + pHashStateBuf + ->stateStorageSzQuadWords)); + } else { + pHashReqParams->u1.auth_partial_st_prefix = + pHashStateBuf->pDataPhys; + } + } + /* nested mode when the prefix data is 0 */ + else { + pHashReqParams->u1.auth_partial_st_prefix = 0; + } + + /* For middle & last partial, state size is the hash state + * storage + * if hash mode 2 this will include the prefix data */ + if ((ICP_QAT_FW_LA_PARTIAL_MID == packetType) || + (ICP_QAT_FW_LA_PARTIAL_END == packetType)) { + pHashReqParams->hash_state_sz = + (pHashStateBuf->stateStorageSzQuadWords + + pHashStateBuf->prefixAadSzQuadWords); + } + /* For full packets and first partials set the state size to + * that of + * the prefix/aad. 
prefix includes both the inner and outer + * prefix */ + else { + pHashReqParams->hash_state_sz = + pHashStateBuf->prefixAadSzQuadWords; + } + } else { + pHashReqParams->u1.auth_partial_st_prefix = 0; + pHashReqParams->hash_state_sz = 0; + } + + /* GMAC only */ + if (CPA_CY_SYM_HASH_AES_GMAC == alg) { + pHashReqParams->hash_state_sz = 0; + pHashReqParams->u1.aad_adr = 0; + } + + /* This field is only used by TLS requests */ + /* In TLS case this is set after this function is called */ + pHashReqParams->resrvd1 = 0; + return CPA_STATUS_SUCCESS; +} Index: sys/dev/qat/qat_api/common/crypto/sym/qat/lac_sym_qat_hash_defs_lookup.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/qat/lac_sym_qat_hash_defs_lookup.c @@ -0,0 +1,491 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_qat_hash_defs_lookup.c Hash Definitions Lookup + * + * @ingroup LacHashDefsLookup + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ + +#include "cpa.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "lac_common.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +#include "icp_adf_transport.h" +#include "lac_sym.h" +#include "icp_qat_fw_la.h" +#include "lac_sym_qat_hash_defs_lookup.h" +#include "lac_sal_types_crypto.h" +#include "lac_sym_hash_defs.h" + +/* state size for xcbc mac consists of 3 * 16 byte keys */ +#define LAC_SYM_QAT_XCBC_STATE_SIZE 
((LAC_HASH_XCBC_MAC_BLOCK_SIZE)*3) + +#define LAC_SYM_QAT_CMAC_STATE_SIZE ((LAC_HASH_CMAC_BLOCK_SIZE)*3) + +/* This type is used for the mapping between the hash algorithm and + * the corresponding hash definitions structure */ +typedef struct lac_sym_qat_hash_def_map_s { + CpaCySymHashAlgorithm hashAlgorithm; + /* hash algorithm */ + lac_sym_qat_hash_defs_t hashDefs; + /* hash definitions pointers */ +} lac_sym_qat_hash_def_map_t; + +/* +******************************************************************************* +* Static Variables +******************************************************************************* +*/ + +/* initialisers as defined in FIPS and RFCs for digest operations */ + +/* md5 16 bytes - Initialiser state can be found in RFC 1321 */ +static Cpa8U md5InitialState[LAC_HASH_MD5_STATE_SIZE] = { + 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef, + 0xfe, 0xdc, 0xba, 0x98, 0x76, 0x54, 0x32, 0x10, +}; + +/* SHA1 - 20 bytes - Initialiser state can be found in FIPS stds 180-2 */ +static Cpa8U sha1InitialState[LAC_HASH_SHA1_STATE_SIZE] = { + 0x67, 0x45, 0x23, 0x01, 0xef, 0xcd, 0xab, 0x89, 0x98, 0xba, + 0xdc, 0xfe, 0x10, 0x32, 0x54, 0x76, 0xc3, 0xd2, 0xe1, 0xf0 +}; + +/* SHA 224 - 32 bytes - Initialiser state can be found in FIPS stds 180-2 */ +static Cpa8U sha224InitialState[LAC_HASH_SHA224_STATE_SIZE] = { + 0xc1, 0x05, 0x9e, 0xd8, 0x36, 0x7c, 0xd5, 0x07, 0x30, 0x70, 0xdd, + 0x17, 0xf7, 0x0e, 0x59, 0x39, 0xff, 0xc0, 0x0b, 0x31, 0x68, 0x58, + 0x15, 0x11, 0x64, 0xf9, 0x8f, 0xa7, 0xbe, 0xfa, 0x4f, 0xa4 +}; + +/* SHA 256 - 32 bytes - Initialiser state can be found in FIPS stds 180-2 */ +static Cpa8U sha256InitialState[LAC_HASH_SHA256_STATE_SIZE] = + { 0x6a, 0x09, 0xe6, 0x67, 0xbb, 0x67, 0xae, 0x85, 0x3c, 0x6e, 0xf3, + 0x72, 0xa5, 0x4f, 0xf5, 0x3a, 0x51, 0x0e, 0x52, 0x7f, 0x9b, 0x05, + 0x68, 0x8c, 0x1f, 0x83, 0xd9, 0xab, 0x5b, 0xe0, 0xcd, 0x19 }; + +/* SHA 384 - 64 bytes - Initialiser state can be found in FIPS stds 180-2 */ +static Cpa8U
sha384InitialState[LAC_HASH_SHA384_STATE_SIZE] = + { 0xcb, 0xbb, 0x9d, 0x5d, 0xc1, 0x05, 0x9e, 0xd8, 0x62, 0x9a, 0x29, + 0x2a, 0x36, 0x7c, 0xd5, 0x07, 0x91, 0x59, 0x01, 0x5a, 0x30, 0x70, + 0xdd, 0x17, 0x15, 0x2f, 0xec, 0xd8, 0xf7, 0x0e, 0x59, 0x39, 0x67, + 0x33, 0x26, 0x67, 0xff, 0xc0, 0x0b, 0x31, 0x8e, 0xb4, 0x4a, 0x87, + 0x68, 0x58, 0x15, 0x11, 0xdb, 0x0c, 0x2e, 0x0d, 0x64, 0xf9, 0x8f, + 0xa7, 0x47, 0xb5, 0x48, 0x1d, 0xbe, 0xfa, 0x4f, 0xa4 }; + +/* SHA 512 - 64 bytes - Initialiser state can be found in FIPS stds 180-2 */ +static Cpa8U sha512InitialState[LAC_HASH_SHA512_STATE_SIZE] = + { 0x6a, 0x09, 0xe6, 0x67, 0xf3, 0xbc, 0xc9, 0x08, 0xbb, 0x67, 0xae, + 0x85, 0x84, 0xca, 0xa7, 0x3b, 0x3c, 0x6e, 0xf3, 0x72, 0xfe, 0x94, + 0xf8, 0x2b, 0xa5, 0x4f, 0xf5, 0x3a, 0x5f, 0x1d, 0x36, 0xf1, 0x51, + 0x0e, 0x52, 0x7f, 0xad, 0xe6, 0x82, 0xd1, 0x9b, 0x05, 0x68, 0x8c, + 0x2b, 0x3e, 0x6c, 0x1f, 0x1f, 0x83, 0xd9, 0xab, 0xfb, 0x41, 0xbd, + 0x6b, 0x5b, 0xe0, 0xcd, 0x19, 0x13, 0x7e, 0x21, 0x79 }; + +/* SHA3 224 - 28 bytes */ +static Cpa8U sha3_224InitialState[LAC_HASH_SHA3_224_STATE_SIZE] = + { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }; + +/* SHA3 256 - 32 bytes */ +static Cpa8U sha3_256InitialState[LAC_HASH_SHA3_256_STATE_SIZE] = + { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }; + +/* SHA3 384 - 48 bytes */ +static Cpa8U sha3_384InitialState[LAC_HASH_SHA3_384_STATE_SIZE] = + { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }; + +/* SHA3 512 - 64 bytes */ +static 
Cpa8U sha3_512InitialState[LAC_HASH_SHA3_512_STATE_SIZE] = + { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }; + +/* SM3 - 32 bytes */ +static Cpa8U sm3InitialState[LAC_HASH_SM3_STATE_SIZE] = + { 0x73, 0x80, 0x16, 0x6f, 0x49, 0x14, 0xb2, 0xb9, 0x17, 0x24, 0x42, + 0xd7, 0xda, 0x8a, 0x06, 0x00, 0xa9, 0x6f, 0x30, 0xbc, 0x16, 0x31, + 0x38, 0xaa, 0xe3, 0x8d, 0xee, 0x4d, 0xb0, 0xfb, 0x0e, 0x4e }; + +/* Constants used in generating K1, K2, K3 from a Key for AES_XCBC_MAC + * State defined in RFC 3566 */ +static Cpa8U aesXcbcKeySeed[LAC_SYM_QAT_XCBC_STATE_SIZE] = { + 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, + 0x01, 0x01, 0x01, 0x01, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, + 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x03, 0x03, 0x03, 0x03, + 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, +}; + +static Cpa8U aesCmacKeySeed[LAC_HASH_CMAC_BLOCK_SIZE] = { 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, + 0x00 }; + +/* Hash Algorithm specific structure */ + +static lac_sym_qat_hash_alg_info_t md5Info = { LAC_HASH_MD5_DIGEST_SIZE, + LAC_HASH_MD5_BLOCK_SIZE, + md5InitialState, + LAC_HASH_MD5_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t sha1Info = { LAC_HASH_SHA1_DIGEST_SIZE, + LAC_HASH_SHA1_BLOCK_SIZE, + sha1InitialState, + LAC_HASH_SHA1_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t sha224Info = { LAC_HASH_SHA224_DIGEST_SIZE, + LAC_HASH_SHA224_BLOCK_SIZE, + sha224InitialState, + LAC_HASH_SHA224_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t sha256Info = { LAC_HASH_SHA256_DIGEST_SIZE, + 
LAC_HASH_SHA256_BLOCK_SIZE, + sha256InitialState, + LAC_HASH_SHA256_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t sha384Info = { LAC_HASH_SHA384_DIGEST_SIZE, + LAC_HASH_SHA384_BLOCK_SIZE, + sha384InitialState, + LAC_HASH_SHA384_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t sha512Info = { LAC_HASH_SHA512_DIGEST_SIZE, + LAC_HASH_SHA512_BLOCK_SIZE, + sha512InitialState, + LAC_HASH_SHA512_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t sha3_224Info = + { LAC_HASH_SHA3_224_DIGEST_SIZE, + LAC_HASH_SHA3_224_BLOCK_SIZE, + sha3_224InitialState, + LAC_HASH_SHA3_224_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t sha3_256Info = + { LAC_HASH_SHA3_256_DIGEST_SIZE, + LAC_HASH_SHA3_256_BLOCK_SIZE, + sha3_256InitialState, + LAC_HASH_SHA3_256_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t sha3_384Info = + { LAC_HASH_SHA3_384_DIGEST_SIZE, + LAC_HASH_SHA3_384_BLOCK_SIZE, + sha3_384InitialState, + LAC_HASH_SHA3_384_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t sha3_512Info = + { LAC_HASH_SHA3_512_DIGEST_SIZE, + LAC_HASH_SHA3_512_BLOCK_SIZE, + sha3_512InitialState, + LAC_HASH_SHA3_512_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t polyInfo = { LAC_HASH_POLY_DIGEST_SIZE, + LAC_HASH_POLY_BLOCK_SIZE, + NULL, /* initial state */ + LAC_HASH_POLY_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t shake_128Info = + { LAC_HASH_SHAKE_128_DIGEST_SIZE, LAC_HASH_SHAKE_128_BLOCK_SIZE, NULL, 0 }; + +static lac_sym_qat_hash_alg_info_t shake_256Info = + { LAC_HASH_SHAKE_256_DIGEST_SIZE, LAC_HASH_SHAKE_256_BLOCK_SIZE, NULL, 0 }; + +static lac_sym_qat_hash_alg_info_t sm3Info = { LAC_HASH_SM3_DIGEST_SIZE, + LAC_HASH_SM3_BLOCK_SIZE, + sm3InitialState, + LAC_HASH_SM3_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t xcbcMacInfo = + { LAC_HASH_XCBC_MAC_128_DIGEST_SIZE, + LAC_HASH_XCBC_MAC_BLOCK_SIZE, + aesXcbcKeySeed, + LAC_SYM_QAT_XCBC_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t aesCmacInfo = + { LAC_HASH_CMAC_128_DIGEST_SIZE, + LAC_HASH_CMAC_BLOCK_SIZE, +
aesCmacKeySeed, + LAC_SYM_QAT_CMAC_STATE_SIZE }; + +static lac_sym_qat_hash_alg_info_t aesCcmInfo = { + LAC_HASH_AES_CCM_DIGEST_SIZE, + LAC_HASH_AES_CCM_BLOCK_SIZE, + NULL, /* initial state */ + 0 /* state size */ +}; + +static lac_sym_qat_hash_alg_info_t aesGcmInfo = { + LAC_HASH_AES_GCM_DIGEST_SIZE, + LAC_HASH_AES_GCM_BLOCK_SIZE, + NULL, /* initial state */ + 0 /* state size */ +}; + +static lac_sym_qat_hash_alg_info_t kasumiF9Info = { + LAC_HASH_KASUMI_F9_DIGEST_SIZE, + LAC_HASH_KASUMI_F9_BLOCK_SIZE, + NULL, /* initial state */ + 0 /* state size */ +}; + +static lac_sym_qat_hash_alg_info_t snow3gUia2Info = { + LAC_HASH_SNOW3G_UIA2_DIGEST_SIZE, + LAC_HASH_SNOW3G_UIA2_BLOCK_SIZE, + NULL, /* initial state */ + 0 /* state size */ +}; + +static lac_sym_qat_hash_alg_info_t aesCbcMacInfo = + { LAC_HASH_AES_CBC_MAC_DIGEST_SIZE, + LAC_HASH_AES_CBC_MAC_BLOCK_SIZE, + NULL, + 0 }; + +static lac_sym_qat_hash_alg_info_t zucEia3Info = { + LAC_HASH_ZUC_EIA3_DIGEST_SIZE, + LAC_HASH_ZUC_EIA3_BLOCK_SIZE, + NULL, /* initial state */ + 0 /* state size */ +}; +/* Hash QAT specific structures */ + +static lac_sym_qat_hash_qat_info_t md5Config = { ICP_QAT_HW_AUTH_ALGO_MD5, + LAC_HASH_MD5_BLOCK_SIZE, + ICP_QAT_HW_MD5_STATE1_SZ, + ICP_QAT_HW_MD5_STATE2_SZ }; + +static lac_sym_qat_hash_qat_info_t sha1Config = { ICP_QAT_HW_AUTH_ALGO_SHA1, + LAC_HASH_SHA1_BLOCK_SIZE, + ICP_QAT_HW_SHA1_STATE1_SZ, + ICP_QAT_HW_SHA1_STATE2_SZ }; + +static lac_sym_qat_hash_qat_info_t sha224Config = + { ICP_QAT_HW_AUTH_ALGO_SHA224, + LAC_HASH_SHA224_BLOCK_SIZE, + ICP_QAT_HW_SHA224_STATE1_SZ, + ICP_QAT_HW_SHA224_STATE2_SZ }; + +static lac_sym_qat_hash_qat_info_t sha256Config = + { ICP_QAT_HW_AUTH_ALGO_SHA256, + LAC_HASH_SHA256_BLOCK_SIZE, + ICP_QAT_HW_SHA256_STATE1_SZ, + ICP_QAT_HW_SHA256_STATE2_SZ }; + +static lac_sym_qat_hash_qat_info_t sha384Config = + { ICP_QAT_HW_AUTH_ALGO_SHA384, + LAC_HASH_SHA384_BLOCK_SIZE, + ICP_QAT_HW_SHA384_STATE1_SZ, + ICP_QAT_HW_SHA384_STATE2_SZ }; + +static
lac_sym_qat_hash_qat_info_t sha512Config = + { ICP_QAT_HW_AUTH_ALGO_SHA512, + LAC_HASH_SHA512_BLOCK_SIZE, + ICP_QAT_HW_SHA512_STATE1_SZ, + ICP_QAT_HW_SHA512_STATE2_SZ }; + +static lac_sym_qat_hash_qat_info_t sha3_224Config = + { ICP_QAT_HW_AUTH_ALGO_SHA3_224, + LAC_HASH_SHA3_224_BLOCK_SIZE, + ICP_QAT_HW_SHA3_224_STATE1_SZ, + ICP_QAT_HW_SHA3_224_STATE2_SZ }; + +static lac_sym_qat_hash_qat_info_t sha3_256Config = + { ICP_QAT_HW_AUTH_ALGO_SHA3_256, + LAC_HASH_SHA3_256_BLOCK_SIZE, + ICP_QAT_HW_SHA3_256_STATE1_SZ, + ICP_QAT_HW_SHA3_256_STATE2_SZ }; + +static lac_sym_qat_hash_qat_info_t sha3_384Config = + { ICP_QAT_HW_AUTH_ALGO_SHA3_384, + LAC_HASH_SHA3_384_BLOCK_SIZE, + ICP_QAT_HW_SHA3_384_STATE1_SZ, + ICP_QAT_HW_SHA3_384_STATE2_SZ }; + +static lac_sym_qat_hash_qat_info_t sha3_512Config = + { ICP_QAT_HW_AUTH_ALGO_SHA3_512, + LAC_HASH_SHA3_512_BLOCK_SIZE, + ICP_QAT_HW_SHA3_512_STATE1_SZ, + ICP_QAT_HW_SHA3_512_STATE2_SZ }; + +static lac_sym_qat_hash_qat_info_t shake_128Config = + { ICP_QAT_HW_AUTH_ALGO_SHAKE_128, LAC_HASH_SHAKE_128_BLOCK_SIZE, 0, 0 }; + +static lac_sym_qat_hash_qat_info_t shake_256Config = + { ICP_QAT_HW_AUTH_ALGO_SHAKE_256, LAC_HASH_SHAKE_256_BLOCK_SIZE, 0, 0 }; + +static lac_sym_qat_hash_qat_info_t polyConfig = { ICP_QAT_HW_AUTH_ALGO_POLY, + LAC_HASH_POLY_BLOCK_SIZE, + 0, + 0 }; + +static lac_sym_qat_hash_qat_info_t sm3Config = { ICP_QAT_HW_AUTH_ALGO_SM3, + LAC_HASH_SM3_BLOCK_SIZE, + ICP_QAT_HW_SM3_STATE1_SZ, + ICP_QAT_HW_SM3_STATE2_SZ }; + +static lac_sym_qat_hash_qat_info_t xcbcMacConfig = + { ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC, + 0, + ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ, + LAC_SYM_QAT_XCBC_STATE_SIZE }; + +static lac_sym_qat_hash_qat_info_t aesCmacConfig = + { ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC, + 0, + ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ, + LAC_SYM_QAT_CMAC_STATE_SIZE }; + +static lac_sym_qat_hash_qat_info_t aesCcmConfig = + { ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC, + 0, + ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ, + ICP_QAT_HW_AES_CBC_MAC_KEY_SZ + 
ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ }; + +static lac_sym_qat_hash_qat_info_t aesGcmConfig = + { ICP_QAT_HW_AUTH_ALGO_GALOIS_128, + 0, + ICP_QAT_HW_GALOIS_128_STATE1_SZ, + ICP_QAT_HW_GALOIS_H_SZ + ICP_QAT_HW_GALOIS_LEN_A_SZ + + ICP_QAT_HW_GALOIS_E_CTR0_SZ }; + +static lac_sym_qat_hash_qat_info_t kasumiF9Config = + { ICP_QAT_HW_AUTH_ALGO_KASUMI_F9, + 0, + ICP_QAT_HW_KASUMI_F9_STATE1_SZ, + ICP_QAT_HW_KASUMI_F9_STATE2_SZ }; + +static lac_sym_qat_hash_qat_info_t snow3gUia2Config = + { ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2, + 0, + ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ, + ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ }; + +static lac_sym_qat_hash_qat_info_t aesCbcMacConfig = + { ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC, + 0, + ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ, + ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ + ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ }; + +static lac_sym_qat_hash_qat_info_t zucEia3Config = + { ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3, + 0, + ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ, + ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ }; + +/* Array of mappings between algorithm and info structure + * This array is used to populate the lookup table */ +static lac_sym_qat_hash_def_map_t lacHashDefsMapping[] = + { { CPA_CY_SYM_HASH_MD5, { &md5Info, &md5Config } }, + { CPA_CY_SYM_HASH_SHA1, { &sha1Info, &sha1Config } }, + { CPA_CY_SYM_HASH_SHA224, { &sha224Info, &sha224Config } }, + { CPA_CY_SYM_HASH_SHA256, { &sha256Info, &sha256Config } }, + { CPA_CY_SYM_HASH_SHA384, { &sha384Info, &sha384Config } }, + { CPA_CY_SYM_HASH_SHA512, { &sha512Info, &sha512Config } }, + { CPA_CY_SYM_HASH_SHA3_224, { &sha3_224Info, &sha3_224Config } }, + { CPA_CY_SYM_HASH_SHA3_256, { &sha3_256Info, &sha3_256Config } }, + { CPA_CY_SYM_HASH_SHA3_384, { &sha3_384Info, &sha3_384Config } }, + { CPA_CY_SYM_HASH_SHA3_512, { &sha3_512Info, &sha3_512Config } }, + { CPA_CY_SYM_HASH_SHAKE_128, { &shake_128Info, &shake_128Config } }, + { CPA_CY_SYM_HASH_SHAKE_256, { &shake_256Info, &shake_256Config } }, + { CPA_CY_SYM_HASH_POLY, { &polyInfo, &polyConfig } }, + { 
CPA_CY_SYM_HASH_SM3, { &sm3Info, &sm3Config } }, + { CPA_CY_SYM_HASH_AES_XCBC, { &xcbcMacInfo, &xcbcMacConfig } }, + { CPA_CY_SYM_HASH_AES_CMAC, { &aesCmacInfo, &aesCmacConfig } }, + { CPA_CY_SYM_HASH_AES_CCM, { &aesCcmInfo, &aesCcmConfig } }, + { CPA_CY_SYM_HASH_AES_GCM, { &aesGcmInfo, &aesGcmConfig } }, + { CPA_CY_SYM_HASH_KASUMI_F9, { &kasumiF9Info, &kasumiF9Config } }, + { CPA_CY_SYM_HASH_SNOW3G_UIA2, { &snow3gUia2Info, &snow3gUia2Config } }, + { CPA_CY_SYM_HASH_AES_GMAC, { &aesGcmInfo, &aesGcmConfig } }, + { CPA_CY_SYM_HASH_ZUC_EIA3, { &zucEia3Info, &zucEia3Config } }, + { CPA_CY_SYM_HASH_AES_CBC_MAC, { &aesCbcMacInfo, &aesCbcMacConfig } } }; + +/* + * LacSymQat_HashLookupInit + */ +CpaStatus +LacSymQat_HashLookupInit(CpaInstanceHandle instanceHandle) +{ + Cpa32U entry = 0; + Cpa32U numEntries = 0; + Cpa32U arraySize = 0; + CpaStatus status = CPA_STATUS_SUCCESS; + CpaCySymHashAlgorithm hashAlg = CPA_CY_SYM_HASH_NONE; + sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle; + + arraySize = + (CPA_CY_HASH_ALG_END + 1) * sizeof(lac_sym_qat_hash_defs_t *); + /* Size round up for performance */ + arraySize = LAC_ALIGN_POW2_ROUNDUP(arraySize, LAC_64BYTE_ALIGNMENT); + + pService->pLacHashLookupDefs = LAC_OS_MALLOC(arraySize); + + if (NULL != pService->pLacHashLookupDefs) { + LAC_OS_BZERO(pService->pLacHashLookupDefs, arraySize); + + numEntries = sizeof(lacHashDefsMapping) / + sizeof(lac_sym_qat_hash_def_map_t); + + /* initialise the hash lookup definitions table so that the + * algorithm + * can be used to index into the table */ + for (entry = 0; entry < numEntries; entry++) { + hashAlg = lacHashDefsMapping[entry].hashAlgorithm; + + pService->pLacHashLookupDefs[hashAlg] = + &(lacHashDefsMapping[entry].hashDefs); + } + } else { + status = CPA_STATUS_RESOURCE; + } + return status; +} + +/* + * LacSymQat_HashAlgLookupGet + */ +void +LacSymQat_HashAlgLookupGet(CpaInstanceHandle instanceHandle, + CpaCySymHashAlgorithm hashAlgorithm, + 
lac_sym_qat_hash_alg_info_t **ppHashAlgInfo) +{ + sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle; + + *ppHashAlgInfo = pService->pLacHashLookupDefs[hashAlgorithm]->algInfo; +} + +/* + * LacSymQat_HashDefsLookupGet + */ +void +LacSymQat_HashDefsLookupGet(CpaInstanceHandle instanceHandle, + CpaCySymHashAlgorithm hashAlgorithm, + lac_sym_qat_hash_defs_t **ppHashDefsInfo) +{ + sal_crypto_service_t *pService = (sal_crypto_service_t *)instanceHandle; + + *ppHashDefsInfo = pService->pLacHashLookupDefs[hashAlgorithm]; +} Index: sys/dev/qat/qat_api/common/crypto/sym/qat/lac_sym_qat_key.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/crypto/sym/qat/lac_sym_qat_key.c @@ -0,0 +1,196 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_sym_qat_key.c Interfaces for populating the symmetric qat key + * structures + * + * @ingroup LacSymQatKey + * + *****************************************************************************/ + +#include "cpa.h" +#include "cpa_cy_key.h" +#include "lac_mem.h" +#include "icp_qat_fw_la.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +#include "lac_list.h" +#include "lac_sal_types.h" +#include "lac_sym_qat_key.h" +#include "lac_sym_hash_defs.h" + +void +LacSymQat_KeySslRequestPopulate(icp_qat_la_bulk_req_hdr_t *pKeyGenReqHdr, + icp_qat_fw_la_key_gen_common_t *pKeyGenReqMid, + Cpa32U generatedKeyLenInBytes, + Cpa32U labelLenInBytes, + Cpa32U secretLenInBytes, + Cpa32U iterations) +{ + /* Rounded to nearest 8 byte boundary */ + Cpa8U outLenRounded = 0; + outLenRounded = LAC_ALIGN_POW2_ROUNDUP(generatedKeyLenInBytes, + LAC_QUAD_WORD_IN_BYTES); + + pKeyGenReqMid->u.secret_lgth_ssl = secretLenInBytes; + pKeyGenReqMid->u1.s1.output_lgth_ssl = outLenRounded; + 
pKeyGenReqMid->u1.s1.label_lgth_ssl = labelLenInBytes; + pKeyGenReqMid->u2.iter_count = iterations; + pKeyGenReqMid->u3.resrvd2 = 0; + pKeyGenReqMid->resrvd3 = 0; + + /* Set up the common LA flags */ + pKeyGenReqHdr->comn_hdr.service_cmd_id = + ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE; + pKeyGenReqHdr->comn_hdr.resrvd1 = 0; +} + +void +LacSymQat_KeyTlsRequestPopulate( + icp_qat_fw_la_key_gen_common_t *pKeyGenReqParams, + Cpa32U generatedKeyLenInBytes, + Cpa32U labelInfo, /* Generic name, can be num of labels or label length */ + Cpa32U secretLenInBytes, + Cpa8U seedLenInBytes, + icp_qat_fw_la_cmd_id_t cmdId) +{ + pKeyGenReqParams->u1.s3.output_lgth_tls = + LAC_ALIGN_POW2_ROUNDUP(generatedKeyLenInBytes, + LAC_QUAD_WORD_IN_BYTES); + + /* For TLS u param of auth_req_params is set to secretLen */ + pKeyGenReqParams->u.secret_lgth_tls = secretLenInBytes; + + switch (cmdId) { + case ICP_QAT_FW_LA_CMD_HKDF_EXTRACT: + pKeyGenReqParams->u2.hkdf_ikm_length = secretLenInBytes; + pKeyGenReqParams->u3.resrvd2 = 0; + break; + case ICP_QAT_FW_LA_CMD_HKDF_EXPAND: + pKeyGenReqParams->u1.hkdf.info_length = labelInfo; + break; + case ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND: + pKeyGenReqParams->u2.hkdf_ikm_length = secretLenInBytes; + pKeyGenReqParams->u1.hkdf.info_length = labelInfo; + break; + case ICP_QAT_FW_LA_CMD_HKDF_EXPAND_LABEL: + /* Num of Labels */ + pKeyGenReqParams->u1.hkdf_label.num_labels = labelInfo; + pKeyGenReqParams->u3.hkdf_num_sublabels = 4; /* 4 subLabels */ + break; + case ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND_LABEL: + pKeyGenReqParams->u2.hkdf_ikm_length = secretLenInBytes; + /* Num of Labels */ + pKeyGenReqParams->u1.hkdf_label.num_labels = labelInfo; + pKeyGenReqParams->u3.hkdf_num_sublabels = 4; /* 4 subLabels */ + break; + default: + pKeyGenReqParams->u1.s3.label_lgth_tls = labelInfo; + pKeyGenReqParams->u2.tls_seed_length = seedLenInBytes; + pKeyGenReqParams->u3.resrvd2 = 0; + break; + } + pKeyGenReqParams->resrvd3 = 0; +} + +void 
+LacSymQat_KeyMgfRequestPopulate(icp_qat_la_bulk_req_hdr_t *pKeyGenReqHdr, + icp_qat_fw_la_key_gen_common_t *pKeyGenReqMid, + Cpa8U seedLenInBytes, + Cpa16U maskLenInBytes, + Cpa8U hashLenInBytes) +{ + pKeyGenReqHdr->comn_hdr.service_cmd_id = ICP_QAT_FW_LA_CMD_MGF1; + pKeyGenReqMid->u.mask_length = + LAC_ALIGN_POW2_ROUNDUP(maskLenInBytes, LAC_QUAD_WORD_IN_BYTES); + + pKeyGenReqMid->u1.s2.hash_length = hashLenInBytes; + pKeyGenReqMid->u1.s2.seed_length = seedLenInBytes; +} + +void +LacSymQat_KeySslKeyMaterialInputPopulate( + sal_service_t *pService, + icp_qat_fw_la_ssl_key_material_input_t *pSslKeyMaterialInput, + void *pSeed, + Cpa64U labelPhysAddr, + void *pSecret) +{ + LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + (*pService), pSslKeyMaterialInput->seed_addr, pSeed); + + pSslKeyMaterialInput->label_addr = labelPhysAddr; + + LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + (*pService), pSslKeyMaterialInput->secret_addr, pSecret); +} + +void +LacSymQat_KeyTlsKeyMaterialInputPopulate( + sal_service_t *pService, + icp_qat_fw_la_tls_key_material_input_t *pTlsKeyMaterialInput, + void *pSeed, + Cpa64U labelPhysAddr) +{ + LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + (*pService), pTlsKeyMaterialInput->seed_addr, pSeed); + + pTlsKeyMaterialInput->label_addr = labelPhysAddr; +} + +void +LacSymQat_KeyTlsHKDFKeyMaterialInputPopulate( + sal_service_t *pService, + icp_qat_fw_la_hkdf_key_material_input_t *pTlsKeyMaterialInput, + CpaCyKeyGenHKDFOpData *pKeyGenTlsOpData, + Cpa64U subLabelsPhysAddr, + icp_qat_fw_la_cmd_id_t cmdId) +{ + switch (cmdId) { + case ICP_QAT_FW_LA_CMD_HKDF_EXTRACT: + LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + (*pService), + pTlsKeyMaterialInput->ikm_addr, + pKeyGenTlsOpData->secret); + break; + case ICP_QAT_FW_LA_CMD_HKDF_EXPAND: + LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + (*pService), + pTlsKeyMaterialInput->labels_addr, + pKeyGenTlsOpData->info); + break; + case ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND: + 
LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + (*pService), + pTlsKeyMaterialInput->ikm_addr, + pKeyGenTlsOpData->secret); + pTlsKeyMaterialInput->labels_addr = + pTlsKeyMaterialInput->ikm_addr + + ((uint64_t)&pKeyGenTlsOpData->info - + (uint64_t)&pKeyGenTlsOpData->secret); + break; + case ICP_QAT_FW_LA_CMD_HKDF_EXPAND_LABEL: + pTlsKeyMaterialInput->sublabels_addr = subLabelsPhysAddr; + LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + (*pService), + pTlsKeyMaterialInput->labels_addr, + pKeyGenTlsOpData->label); + break; + case ICP_QAT_FW_LA_CMD_HKDF_EXTRACT_AND_EXPAND_LABEL: + pTlsKeyMaterialInput->sublabels_addr = subLabelsPhysAddr; + LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + (*pService), + pTlsKeyMaterialInput->ikm_addr, + pKeyGenTlsOpData->secret); + pTlsKeyMaterialInput->labels_addr = + pTlsKeyMaterialInput->ikm_addr + + ((uint64_t)&pKeyGenTlsOpData->label - + (uint64_t)&pKeyGenTlsOpData->secret); + break; + default: + break; + } +} Index: sys/dev/qat/qat_api/common/ctrl/sal_compression.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/ctrl/sal_compression.c @@ -0,0 +1,1554 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file sal_compression.c + * + * @ingroup SalCtrl + * + * @description + * This file contains the sal implementation for compression. 
+ * + *****************************************************************************/ + +/* QAT-API includes */ +#include "cpa.h" +#include "cpa_dc.h" + +/* QAT utils includes */ +#include "qat_utils.h" + +/* ADF includes */ +#include "icp_adf_init.h" +#include "icp_adf_transport.h" +#include "icp_accel_devices.h" +#include "icp_adf_cfg.h" +#include "icp_adf_accel_mgr.h" +#include "icp_adf_poll.h" +#include "icp_adf_debug.h" +#include "icp_adf_esram.h" +#include "icp_qat_hw.h" + +/* SAL includes */ +#include "lac_mem.h" +#include "lac_common.h" +#include "lac_mem_pools.h" +#include "sal_statistics.h" +#include "lac_list.h" +#include "icp_sal_poll.h" +#include "sal_types_compression.h" +#include "dc_session.h" +#include "dc_datapath.h" +#include "dc_stats.h" +#include "lac_sal.h" +#include "lac_sal_ctrl.h" +#include "sal_string_parse.h" +#include "sal_service_state.h" +#include "lac_buffer_desc.h" +#include "icp_qat_fw_comp.h" +#include "icp_sal_versions.h" + +/* C string null terminator size */ +#define SAL_NULL_TERM_SIZE 1 + +/* Type to access extended features bit fields */ +typedef struct dc_extended_features_s { + unsigned is_cnv : 1; /* Bit<0> */ + unsigned padding : 7; + unsigned is_cnvnr : 1; /* Bit<8> */ + unsigned not_used : 23; +} dc_extd_ftrs_t; + +/* + * Prints statistics for a compression instance + */ +static int +SalCtrl_CompresionDebug(void *private_data, char *data, int size, int offset) +{ + sal_compression_service_t *pCompressionService = + (sal_compression_service_t *)private_data; + CpaStatus status = CPA_STATUS_SUCCESS; + CpaDcStats dcStats = { 0 }; + Cpa32S len = 0; + + status = cpaDcGetStats(pCompressionService, &dcStats); + if (status != CPA_STATUS_SUCCESS) { + QAT_UTILS_LOG("cpaDcGetStats returned error.\n"); + return (-1); + } + + /* Engine Info */ + if (NULL != pCompressionService->debug_file) { + len += snprintf(data + len, + size - len, + SEPARATOR BORDER + " Statistics for Instance %24s | \n" SEPARATOR, + 
pCompressionService->debug_file->name); + } + + /* Perform Info */ + len += snprintf(data + len, + size - len, + BORDER " DC comp Requests: %16llu " BORDER + "\n" BORDER + " DC comp Request Errors: %16llu " BORDER + "\n" BORDER + " DC comp Completed: %16llu " BORDER + "\n" BORDER + " DC comp Completed Errors: %16llu " BORDER + "\n" SEPARATOR, + (long long unsigned int)dcStats.numCompRequests, + (long long unsigned int)dcStats.numCompRequestsErrors, + (long long unsigned int)dcStats.numCompCompleted, + (long long unsigned int)dcStats.numCompCompletedErrors); + + /* Perform Info */ + len += snprintf( + data + len, + size - len, + BORDER " DC decomp Requests: %16llu " BORDER "\n" BORDER + " DC decomp Request Errors: %16llu " BORDER "\n" BORDER + " DC decomp Completed: %16llu " BORDER "\n" BORDER + " DC decomp Completed Errors: %16llu " BORDER + "\n" SEPARATOR, + (long long unsigned int)dcStats.numDecompRequests, + (long long unsigned int)dcStats.numDecompRequestsErrors, + (long long unsigned int)dcStats.numDecompCompleted, + (long long unsigned int)dcStats.numDecompCompletedErrors); + return 0; +} + +/* Initialise device specific information needed by compression service */ +static CpaStatus +SalCtrl_CompressionInit_CompData(icp_accel_dev_t *device, + sal_compression_service_t *pCompService) +{ + switch (device->deviceType) { + case DEVICE_DH895XCC: + case DEVICE_DH895XCCVF: + pCompService->generic_service_info.integrityCrcCheck = + CPA_FALSE; + pCompService->numInterBuffs = + DC_QAT_MAX_NUM_INTER_BUFFERS_6COMP_SLICES; + pCompService->comp_device_data.minOutputBuffSize = + DC_DEST_BUFFER_STA_MIN_SIZE; + pCompService->comp_device_data.oddByteDecompNobFinal = CPA_TRUE; + pCompService->comp_device_data.oddByteDecompInterim = CPA_FALSE; + pCompService->comp_device_data.translatorOverflow = CPA_FALSE; + pCompService->comp_device_data.useDevRam = + ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF; + pCompService->comp_device_data.enableDmm = + 
ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_DISABLED; + + pCompService->comp_device_data.inflateContextSize = + DC_INFLATE_CONTEXT_SIZE; + pCompService->comp_device_data.highestHwCompressionDepth = + ICP_QAT_HW_COMPRESSION_DEPTH_16; + + pCompService->comp_device_data.windowSizeMask = + (1 << DC_8K_WINDOW_SIZE | 1 << DC_32K_WINDOW_SIZE); + pCompService->comp_device_data.cnvnrSupported = CPA_FALSE; + break; + case DEVICE_C3XXX: + case DEVICE_C3XXXVF: + case DEVICE_200XX: + case DEVICE_200XXVF: + pCompService->generic_service_info.integrityCrcCheck = + CPA_FALSE; + pCompService->numInterBuffs = + DC_QAT_MAX_NUM_INTER_BUFFERS_6COMP_SLICES; + pCompService->comp_device_data.oddByteDecompNobFinal = + CPA_FALSE; + pCompService->comp_device_data.oddByteDecompInterim = CPA_TRUE; + pCompService->comp_device_data.translatorOverflow = CPA_FALSE; + pCompService->comp_device_data.useDevRam = + ICP_QAT_FW_COMP_DISABLE_SECURE_RAM_USED_AS_INTMD_BUF; + pCompService->comp_device_data.inflateContextSize = + DC_INFLATE_EH_CONTEXT_SIZE; + pCompService->comp_device_data.highestHwCompressionDepth = + ICP_QAT_HW_COMPRESSION_DEPTH_16; + pCompService->comp_device_data.windowSizeMask = + (1 << DC_16K_WINDOW_SIZE | 1 << DC_32K_WINDOW_SIZE); + pCompService->comp_device_data.minOutputBuffSize = + DC_DEST_BUFFER_STA_MIN_SIZE; + pCompService->comp_device_data.enableDmm = + ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED; + + pCompService->comp_device_data.cnvnrSupported = CPA_TRUE; + break; + case DEVICE_C62X: + case DEVICE_C62XVF: + pCompService->generic_service_info.integrityCrcCheck = + CPA_FALSE; + pCompService->numInterBuffs = + DC_QAT_MAX_NUM_INTER_BUFFERS_10COMP_SLICES; + pCompService->comp_device_data.oddByteDecompNobFinal = + CPA_FALSE; + pCompService->comp_device_data.oddByteDecompInterim = CPA_TRUE; + pCompService->comp_device_data.translatorOverflow = CPA_FALSE; + pCompService->comp_device_data.useDevRam = + ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF; + 
pCompService->comp_device_data.inflateContextSize =
+		    DC_INFLATE_EH_CONTEXT_SIZE;
+		pCompService->comp_device_data.highestHwCompressionDepth =
+		    ICP_QAT_HW_COMPRESSION_DEPTH_16;
+		pCompService->comp_device_data.windowSizeMask =
+		    (1 << DC_16K_WINDOW_SIZE | 1 << DC_32K_WINDOW_SIZE);
+		pCompService->comp_device_data.minOutputBuffSize =
+		    DC_DEST_BUFFER_STA_MIN_SIZE;
+		pCompService->comp_device_data.enableDmm =
+		    ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED;
+		pCompService->comp_device_data.cnvnrSupported = CPA_TRUE;
+		break;
+	case DEVICE_C4XXX:
+	case DEVICE_C4XXXVF:
+		pCompService->generic_service_info.integrityCrcCheck = CPA_TRUE;
+		pCompService->numInterBuffs =
+		    DC_QAT_MAX_NUM_INTER_BUFFERS_24COMP_SLICES;
+		pCompService->comp_device_data.minOutputBuffSize =
+		    DC_DEST_BUFFER_MIN_SIZE;
+		pCompService->comp_device_data.oddByteDecompNobFinal = CPA_TRUE;
+		pCompService->comp_device_data.oddByteDecompInterim = CPA_TRUE;
+		pCompService->comp_device_data.translatorOverflow = CPA_TRUE;
+		if (pCompService->generic_service_info.capabilitiesMask &
+		    ICP_ACCEL_CAPABILITIES_INLINE) {
+			pCompService->comp_device_data.useDevRam =
+			    ICP_QAT_FW_COMP_DISABLE_SECURE_RAM_USED_AS_INTMD_BUF;
+		} else {
+			pCompService->comp_device_data.useDevRam =
+			    ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF;
+		}
+		pCompService->comp_device_data.enableDmm =
+		    ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED;
+		pCompService->comp_device_data.inflateContextSize =
+		    DC_INFLATE_EH_CONTEXT_SIZE;
+		pCompService->comp_device_data.highestHwCompressionDepth =
+		    ICP_QAT_HW_COMPRESSION_DEPTH_128;
+		pCompService->comp_device_data.windowSizeMask =
+		    (1 << DC_16K_WINDOW_SIZE | 1 << DC_32K_WINDOW_SIZE);
+		pCompService->comp_device_data.cnvnrSupported = CPA_TRUE;
+		break;
+	default:
+		QAT_UTILS_LOG("Unknown device type!
- %d.\n", + device->deviceType); + return CPA_STATUS_FAIL; + } + return CPA_STATUS_SUCCESS; +} + +CpaStatus +SalCtrl_CompressionInit(icp_accel_dev_t *device, sal_service_t *service) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + Cpa32U numCompConcurrentReq = 0; + Cpa32U request_ring_id = 0; + Cpa32U response_ring_id = 0; + + char adfGetParam[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + char compMemPool[SAL_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + char temp_string[SAL_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + char temp_string2[SAL_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + char *instance_name = NULL; + sal_statistics_collection_t *pStatsCollection = + (sal_statistics_collection_t *)device->pQatStats; + icp_resp_deliv_method rx_resp_type = ICP_RESP_TYPE_IRQ; + sal_compression_service_t *pCompressionService = + (sal_compression_service_t *)service; + Cpa32U msgSize = 0; + char *section = DYN_SEC; + + SAL_SERVICE_GOOD_FOR_INIT(pCompressionService); + + pCompressionService->generic_service_info.state = + SAL_SERVICE_STATE_INITIALIZING; + + if (CPA_FALSE == pCompressionService->generic_service_info.is_dyn) { + section = icpGetProcessName(); + } + + if (pStatsCollection == NULL) { + return CPA_STATUS_FAIL; + } + + /* Get Config Info: Accel Num, bank Num, packageID, + coreAffinity, nodeAffinity and response mode + */ + + pCompressionService->acceleratorNum = 0; + + /* Initialise device specific compression data */ + SalCtrl_CompressionInit_CompData(device, pCompressionService); + + status = Sal_StringParsing( + "Dc", + pCompressionService->generic_service_info.instance, + "BankNumber", + temp_string); + LAC_CHECK_STATUS(status); + status = + icp_adf_cfgGetParamValue(device, section, temp_string, adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration.\n", + temp_string); + return status; + } + + pCompressionService->bankNum = + Sal_Strtoul(adfGetParam, NULL, SAL_CFG_BASE_DEC); + + status = Sal_StringParsing( + "Dc", + 
pCompressionService->generic_service_info.instance, + "IsPolled", + temp_string); + LAC_CHECK_STATUS(status); + status = + icp_adf_cfgGetParamValue(device, section, temp_string, adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration.\n", + temp_string); + return status; + } + pCompressionService->isPolled = + (Cpa8U)Sal_Strtoul(adfGetParam, NULL, SAL_CFG_BASE_DEC); + + /* User instances only support poll and epoll mode */ + if (SAL_RESP_POLL_CFG_FILE != pCompressionService->isPolled) { + QAT_UTILS_LOG( + "IsPolled %u is not supported for user instance %s.\n", + pCompressionService->isPolled, + temp_string); + return CPA_STATUS_FAIL; + } + + if (SAL_RESP_POLL_CFG_FILE == pCompressionService->isPolled) { + rx_resp_type = ICP_RESP_TYPE_POLL; + } + + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + ADF_DEV_PKG_ID, + adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration.\n", + ADF_DEV_PKG_ID); + return status; + } + pCompressionService->pkgID = + (Cpa16U)Sal_Strtoul(adfGetParam, NULL, SAL_CFG_BASE_DEC); + + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + ADF_DEV_NODE_ID, + adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration.\n", + ADF_DEV_NODE_ID); + return status; + } + pCompressionService->nodeAffinity = + (Cpa32U)Sal_Strtoul(adfGetParam, NULL, SAL_CFG_BASE_DEC); + + /* In case of interrupt instance, use the bank affinity set by adf_ctl + * Otherwise, use the instance affinity for backwards compatibility */ + if (SAL_RESP_POLL_CFG_FILE != pCompressionService->isPolled) { + /* Next need to read the [AcceleratorX] section of the config + * file */ + status = Sal_StringParsing("Accelerator", + pCompressionService->acceleratorNum, + "", + temp_string2); + LAC_CHECK_STATUS(status); + + status = Sal_StringParsing("Bank", + pCompressionService->bankNum, + "CoreAffinity", + 
temp_string); + LAC_CHECK_STATUS(status); + } else { + strncpy(temp_string2, + section, + sizeof(temp_string2) - SAL_NULL_TERM_SIZE); + temp_string2[SAL_CFG_MAX_VAL_LEN_IN_BYTES - + SAL_NULL_TERM_SIZE] = '\0'; + + status = Sal_StringParsing( + "Dc", + pCompressionService->generic_service_info.instance, + "CoreAffinity", + temp_string); + LAC_CHECK_STATUS(status); + } + + status = icp_adf_cfgGetParamValue(device, + temp_string2, + temp_string, + adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration.\n", + temp_string); + return status; + } + pCompressionService->coreAffinity = + (Cpa32U)Sal_Strtoul(adfGetParam, NULL, SAL_CFG_BASE_DEC); + + status = Sal_StringParsing( + "Dc", + pCompressionService->generic_service_info.instance, + "NumConcurrentRequests", + temp_string); + LAC_CHECK_STATUS(status); + status = + icp_adf_cfgGetParamValue(device, section, temp_string, adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration.\n", + temp_string); + return status; + } + + numCompConcurrentReq = + (Cpa32U)Sal_Strtoul(adfGetParam, NULL, SAL_CFG_BASE_DEC); + if (validateConcurrRequest(numCompConcurrentReq)) { + QAT_UTILS_LOG( + "Invalid NumConcurrentRequests, valid values are: {64, 128, 256, ... 32768, 65536}.\n"); + return CPA_STATUS_FAIL; + } + + /* ADF does not allow us to completely fill the ring for batch requests + */ + pCompressionService->maxNumCompConcurrentReq = + (numCompConcurrentReq - SAL_BATCH_SUBMIT_FREE_SPACE); + + /* 1. 
Create transport handles */ + status = Sal_StringParsing( + "Dc", + pCompressionService->generic_service_info.instance, + "RingTx", + temp_string); + LAC_CHECK_STATUS(status); + + msgSize = LAC_QAT_DC_REQ_SZ_LW * LAC_LONG_WORD_IN_BYTES; + status = icp_adf_transCreateHandle( + device, + ICP_TRANS_TYPE_ETR, + section, + pCompressionService->acceleratorNum, + pCompressionService->bankNum, + temp_string, + lac_getRingType(SAL_RING_TYPE_DC), + NULL, + ICP_RESP_TYPE_NONE, + numCompConcurrentReq, + msgSize, + (icp_comms_trans_handle *)&( + pCompressionService->trans_handle_compression_tx)); + LAC_CHECK_STATUS(status); + + if (icp_adf_transGetRingNum( + pCompressionService->trans_handle_compression_tx, + &request_ring_id) != CPA_STATUS_SUCCESS) { + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + + QAT_UTILS_LOG("Failed to get DC TX ring number.\n"); + return CPA_STATUS_FAIL; + } + + status = Sal_StringParsing( + "Dc", + pCompressionService->generic_service_info.instance, + "RingRx", + temp_string); + if (CPA_STATUS_SUCCESS != status) { + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + return status; + } + + msgSize = LAC_QAT_DC_RESP_SZ_LW * LAC_LONG_WORD_IN_BYTES; + status = icp_adf_transCreateHandle( + device, + ICP_TRANS_TYPE_ETR, + section, + pCompressionService->acceleratorNum, + pCompressionService->bankNum, + temp_string, + lac_getRingType(SAL_RING_TYPE_NONE), + (icp_trans_callback)dcCompression_ProcessCallback, + rx_resp_type, + numCompConcurrentReq, + msgSize, + (icp_comms_trans_handle *)&( + pCompressionService->trans_handle_compression_rx)); + if (CPA_STATUS_SUCCESS != status) { + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + return status; + } + + if (icp_adf_transGetRingNum( + pCompressionService->trans_handle_compression_rx, + &response_ring_id) != CPA_STATUS_SUCCESS) { + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + + 
icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_rx); + + QAT_UTILS_LOG("Failed to get DC RX ring number.\n"); + return CPA_STATUS_FAIL; + } + + /* 2. Allocates memory pools */ + + /* Valid initialisation value for a pool ID */ + pCompressionService->compression_mem_pool = LAC_MEM_POOL_INIT_POOL_ID; + + status = Sal_StringParsing( + "Comp", + pCompressionService->generic_service_info.instance, + "_MemPool", + compMemPool); + if (CPA_STATUS_SUCCESS != status) { + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_rx); + + return status; + } + + status = Lac_MemPoolCreate(&pCompressionService->compression_mem_pool, + compMemPool, + (numCompConcurrentReq + 1), + sizeof(dc_compression_cookie_t), + LAC_64BYTE_ALIGNMENT, + CPA_FALSE, + pCompressionService->nodeAffinity); + if (CPA_STATUS_SUCCESS != status) { + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_rx); + + return status; + } + + /* Init compression statistics */ + status = dcStatsInit(pCompressionService); + if (CPA_STATUS_SUCCESS != status) { + Lac_MemPoolDestroy(pCompressionService->compression_mem_pool); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_rx); + + return status; + } + if (CPA_TRUE == pStatsCollection->bDcStatsEnabled) { + /* Get instance name for stats */ + instance_name = LAC_OS_MALLOC(ADF_CFG_MAX_VAL_LEN_IN_BYTES); + if (NULL == instance_name) { + Lac_MemPoolDestroy( + pCompressionService->compression_mem_pool); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_rx); + + return CPA_STATUS_RESOURCE; + } + + status = 
Sal_StringParsing( + "Dc", + pCompressionService->generic_service_info.instance, + "Name", + temp_string); + if (CPA_STATUS_SUCCESS != status) { + Lac_MemPoolDestroy( + pCompressionService->compression_mem_pool); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_rx); + LAC_OS_FREE(instance_name); + return status; + } + status = icp_adf_cfgGetParamValue(device, + section, + temp_string, + adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration.\n", + temp_string); + + Lac_MemPoolDestroy( + pCompressionService->compression_mem_pool); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_rx); + LAC_OS_FREE(instance_name); + return status; + } + + snprintf(instance_name, + ADF_CFG_MAX_VAL_LEN_IN_BYTES, + "%s", + adfGetParam); + + pCompressionService->debug_file = + LAC_OS_MALLOC(sizeof(debug_file_info_t)); + if (NULL == pCompressionService->debug_file) { + Lac_MemPoolDestroy( + pCompressionService->compression_mem_pool); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_rx); + LAC_OS_FREE(instance_name); + return CPA_STATUS_RESOURCE; + } + + memset(pCompressionService->debug_file, + 0, + sizeof(debug_file_info_t)); + pCompressionService->debug_file->name = instance_name; + pCompressionService->debug_file->seq_read = + SalCtrl_CompresionDebug; + pCompressionService->debug_file->private_data = + pCompressionService; + pCompressionService->debug_file->parent = + pCompressionService->generic_service_info.debug_parent_dir; + + status = icp_adf_debugAddFile(device, + pCompressionService->debug_file); + if (CPA_STATUS_SUCCESS != status) { + Lac_MemPoolDestroy( + 
pCompressionService->compression_mem_pool); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + + icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_rx); + LAC_OS_FREE(instance_name); + LAC_OS_FREE(pCompressionService->debug_file); + return status; + } + } + pCompressionService->generic_service_info.stats = pStatsCollection; + pCompressionService->generic_service_info.state = + SAL_SERVICE_STATE_INITIALIZED; + + return status; +} + +CpaStatus +SalCtrl_CompressionStart(icp_accel_dev_t *device, sal_service_t *service) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + sal_compression_service_t *pCompressionService = + (sal_compression_service_t *)service; + + if (SAL_SERVICE_STATE_INITIALIZED != + pCompressionService->generic_service_info.state) { + QAT_UTILS_LOG("Not in the correct state to call start.\n"); + return CPA_STATUS_FAIL; + } + /**************************************************************/ + /* Obtain Extended Features. I.e. 
Compress And Verify */ + /**************************************************************/ + pCompressionService->generic_service_info.dcExtendedFeatures = + device->dcExtendedFeatures; + pCompressionService->generic_service_info.state = + SAL_SERVICE_STATE_RUNNING; + + return status; +} + +CpaStatus +SalCtrl_CompressionStop(icp_accel_dev_t *device, sal_service_t *service) +{ + sal_compression_service_t *pCompressionService = + (sal_compression_service_t *)service; + + if (SAL_SERVICE_STATE_RUNNING != + pCompressionService->generic_service_info.state) { + QAT_UTILS_LOG("Not in the correct state to call stop.\n"); + return CPA_STATUS_FAIL; + } + + if (icp_adf_is_dev_in_reset(device)) { + pCompressionService->generic_service_info.state = + SAL_SERVICE_STATE_RESTARTING; + return CPA_STATUS_SUCCESS; + } + + pCompressionService->generic_service_info.state = + SAL_SERVICE_STATE_SHUTTING_DOWN; + return CPA_STATUS_RETRY; +} + +CpaStatus +SalCtrl_CompressionShutdown(icp_accel_dev_t *device, sal_service_t *service) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + sal_compression_service_t *pCompressionService = + (sal_compression_service_t *)service; + sal_statistics_collection_t *pStatsCollection = + (sal_statistics_collection_t *)device->pQatStats; + + if ((SAL_SERVICE_STATE_INITIALIZED != + pCompressionService->generic_service_info.state) && + (SAL_SERVICE_STATE_SHUTTING_DOWN != + pCompressionService->generic_service_info.state) && + (SAL_SERVICE_STATE_RESTARTING != + pCompressionService->generic_service_info.state)) { + QAT_UTILS_LOG("Not in the correct state to call shutdown.\n"); + return CPA_STATUS_FAIL; + } + + Lac_MemPoolDestroy(pCompressionService->compression_mem_pool); + + status = icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_tx); + LAC_CHECK_STATUS(status); + + status = icp_adf_transReleaseHandle( + pCompressionService->trans_handle_compression_rx); + LAC_CHECK_STATUS(status); + + if (CPA_TRUE == pStatsCollection->bDcStatsEnabled) { 
+ /* Clean stats */ + if (NULL != pCompressionService->debug_file) { + icp_adf_debugRemoveFile( + pCompressionService->debug_file); + LAC_OS_FREE(pCompressionService->debug_file->name); + LAC_OS_FREE(pCompressionService->debug_file); + pCompressionService->debug_file = NULL; + } + } + pCompressionService->generic_service_info.stats = NULL; + dcStatsFree(pCompressionService); + + if (icp_adf_is_dev_in_reset(device)) { + pCompressionService->generic_service_info.state = + SAL_SERVICE_STATE_RESTARTING; + return CPA_STATUS_SUCCESS; + } + pCompressionService->generic_service_info.state = + SAL_SERVICE_STATE_SHUTDOWN; + return status; +} + +CpaStatus +cpaDcGetStatusText(const CpaInstanceHandle dcInstance, + const CpaStatus errStatus, + Cpa8S *pStatusText) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + LAC_CHECK_NULL_PARAM(pStatusText); + + switch (errStatus) { + case CPA_STATUS_SUCCESS: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_SUCCESS); + break; + case CPA_STATUS_FAIL: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_FAIL); + break; + case CPA_STATUS_RETRY: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_RETRY); + break; + case CPA_STATUS_RESOURCE: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_RESOURCE); + break; + case CPA_STATUS_INVALID_PARAM: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_INVALID_PARAM); + break; + case CPA_STATUS_FATAL: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_FATAL); + break; + case CPA_STATUS_UNSUPPORTED: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_UNSUPPORTED); + break; + default: + status = CPA_STATUS_INVALID_PARAM; + break; + } + + return status; +} + +CpaStatus +cpaDcGetNumIntermediateBuffers(CpaInstanceHandle dcInstance, + Cpa16U *pNumBuffers) +{ + CpaInstanceHandle insHandle = NULL; + sal_compression_service_t *pService = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + LAC_CHECK_NULL_PARAM(insHandle); + LAC_CHECK_NULL_PARAM(pNumBuffers); + + 
pService = (sal_compression_service_t *)insHandle; + *pNumBuffers = pService->numInterBuffs; + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcStartInstance(CpaInstanceHandle instanceHandle, + Cpa16U numBuffers, + CpaBufferList **pIntermediateBufferPtrsArray) +{ + icp_qat_addr_width_t *pInterBuffPtrsArray = NULL; + icp_qat_addr_width_t pArrayBufferListDescPhyAddr = 0; + icp_qat_addr_width_t bufListDescPhyAddr; + icp_qat_addr_width_t bufListAlignedPhyAddr; + CpaFlatBuffer *pClientCurrFlatBuffer = NULL; + icp_buffer_list_desc_t *pBufferListDesc = NULL; + icp_flat_buffer_desc_t *pCurrFlatBufDesc = NULL; + CpaInstanceInfo2 info = { 0 }; + icp_accel_dev_t *dev = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + sal_compression_service_t *pService = NULL; + CpaInstanceHandle insHandle = NULL; + Cpa16U bufferIndex = 0; + Cpa32U numFlatBuffers = 0; + Cpa64U clientListSize = 0; + CpaBufferList *pClientCurrentIntermediateBuffer = NULL; + Cpa32U bufferIndex2 = 0; + CpaBufferList **pTempIntermediateBufferPtrsArray; + Cpa64U lastClientListSize = 0; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = instanceHandle; + } + LAC_CHECK_NULL_PARAM(insHandle); + + status = cpaDcInstanceGetInfo2(insHandle, &info); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Can not get instance info.\n"); + return status; + } + + dev = icp_adf_getAccelDevByAccelId(info.physInstId.packageId); + if (NULL == dev) { + QAT_UTILS_LOG("Can not find device for the instance\n"); + return CPA_STATUS_FAIL; + } + + if (NULL == pIntermediateBufferPtrsArray) { + /* Increment dev ref counter and return - DRAM is not used */ + icp_qa_dev_get(dev); + return CPA_STATUS_SUCCESS; + } + + if (0 == numBuffers) { + /* Increment dev ref counter and return - DRAM is not used */ + icp_qa_dev_get(dev); + return CPA_STATUS_SUCCESS; + } + + pService = (sal_compression_service_t *)insHandle; + + LAC_CHECK_NULL_PARAM(insHandle); + + if ((numBuffers > 0) && 
(NULL == pIntermediateBufferPtrsArray)) { + QAT_UTILS_LOG("Invalid Intermediate Buffers Array pointer\n"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Check number of intermediate buffers allocated by user */ + if ((pService->numInterBuffs != numBuffers)) { + QAT_UTILS_LOG("Invalid number of buffers\n"); + return CPA_STATUS_INVALID_PARAM; + } + + pTempIntermediateBufferPtrsArray = pIntermediateBufferPtrsArray; + for (bufferIndex = 0; bufferIndex < numBuffers; bufferIndex++) { + if (NULL == *pTempIntermediateBufferPtrsArray) { + QAT_UTILS_LOG( + "Intermediate Buffer - Invalid Buffer List pointer\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (NULL == (*pTempIntermediateBufferPtrsArray)->pBuffers) { + QAT_UTILS_LOG( + "Intermediate Buffer - Invalid Flat Buffer descriptor pointer\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (NULL == + (*pTempIntermediateBufferPtrsArray)->pPrivateMetaData) { + QAT_UTILS_LOG( + "Intermediate Buffer - Invalid Private MetaData descriptor pointer\n"); + return CPA_STATUS_INVALID_PARAM; + } + + clientListSize = 0; + for (bufferIndex2 = 0; bufferIndex2 < + (*pTempIntermediateBufferPtrsArray)->numBuffers; + bufferIndex2++) { + + if ((0 != + (*pTempIntermediateBufferPtrsArray) + ->pBuffers[bufferIndex2] + .dataLenInBytes) && + NULL == + (*pTempIntermediateBufferPtrsArray) + ->pBuffers[bufferIndex2] + .pData) { + QAT_UTILS_LOG( + "Intermediate Buffer - Invalid Flat Buffer pointer\n"); + return CPA_STATUS_INVALID_PARAM; + } + + clientListSize += (*pTempIntermediateBufferPtrsArray) + ->pBuffers[bufferIndex2] + .dataLenInBytes; + } + + if (bufferIndex != 0) { + if (lastClientListSize != clientListSize) { + QAT_UTILS_LOG( + "SGLs have to be of the same size.\n"); + return CPA_STATUS_INVALID_PARAM; + } + } else { + lastClientListSize = clientListSize; + } + pTempIntermediateBufferPtrsArray++; + } + + /* Allocate array of physical pointers to icp_buffer_list_desc_t */ + status = LAC_OS_CAMALLOC(&pInterBuffPtrsArray, + (numBuffers * 
sizeof(icp_qat_addr_width_t)), + LAC_64BYTE_ALIGNMENT, + pService->nodeAffinity); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Can not allocate Intermediate Buffers array.\n"); + return status; + } + + /* Get physical address of the intermediate buffer pointers array */ + pArrayBufferListDescPhyAddr = LAC_MEM_CAST_PTR_TO_UINT64( + LAC_OS_VIRT_TO_PHYS_INTERNAL(pInterBuffPtrsArray)); + + pService->pInterBuffPtrsArray = pInterBuffPtrsArray; + pService->pInterBuffPtrsArrayPhyAddr = pArrayBufferListDescPhyAddr; + + /* Get the full size of the buffer list */ + /* Assumption: all the SGLs allocated by the user have the same size */ + clientListSize = 0; + for (bufferIndex = 0; + bufferIndex < (*pIntermediateBufferPtrsArray)->numBuffers; + bufferIndex++) { + clientListSize += ((*pIntermediateBufferPtrsArray) + ->pBuffers[bufferIndex] + .dataLenInBytes); + } + pService->minInterBuffSizeInBytes = clientListSize; + + for (bufferIndex = 0; bufferIndex < numBuffers; bufferIndex++) { + + /* Get pointer to the client Intermediate Buffer List + * (CpaBufferList) */ + pClientCurrentIntermediateBuffer = + *pIntermediateBufferPtrsArray; + + /* Get number of flat buffers in the buffer list */ + numFlatBuffers = pClientCurrentIntermediateBuffer->numBuffers; + + /* Get pointer to the client array of CpaFlatBuffers */ + pClientCurrFlatBuffer = + pClientCurrentIntermediateBuffer->pBuffers; + + /* Calculate Physical address of current private SGL */ + bufListDescPhyAddr = LAC_OS_VIRT_TO_PHYS_EXTERNAL( + (*pService), + pClientCurrentIntermediateBuffer->pPrivateMetaData); + if (bufListDescPhyAddr == 0) { + QAT_UTILS_LOG( + "Unable to get the physical address of the metadata.\n"); + return CPA_STATUS_FAIL; + } + + /* Align SGL physical address */ + bufListAlignedPhyAddr = + LAC_ALIGN_POW2_ROUNDUP(bufListDescPhyAddr, + ICP_DESCRIPTOR_ALIGNMENT_BYTES); + + /* Set physical address of the Intermediate Buffer SGL in the + * SGLs array + */ + *pInterBuffPtrsArray = + 
LAC_MEM_CAST_PTR_TO_UINT64(bufListAlignedPhyAddr); + + /* Calculate (virtual) offset to the buffer list descriptor */ + pBufferListDesc = + (icp_buffer_list_desc_t + *)((LAC_ARCH_UINT)pClientCurrentIntermediateBuffer + ->pPrivateMetaData + + (LAC_ARCH_UINT)(bufListAlignedPhyAddr - + bufListDescPhyAddr)); + + /* Set number of flat buffers in the physical Buffer List + * descriptor */ + pBufferListDesc->numBuffers = numFlatBuffers; + + /* Go past the Buffer List descriptor to the list of buffer + * descriptors + */ + pCurrFlatBufDesc = + (icp_flat_buffer_desc_t *)((pBufferListDesc->phyBuffers)); + + /* Loop for each flat buffer in the SGL */ + while (0 != numFlatBuffers) { + /* Set length of the current flat buffer */ + pCurrFlatBufDesc->dataLenInBytes = + pClientCurrFlatBuffer->dataLenInBytes; + + /* Set physical address of the flat buffer */ + pCurrFlatBufDesc->phyBuffer = + LAC_MEM_CAST_PTR_TO_UINT64( + LAC_OS_VIRT_TO_PHYS_EXTERNAL( + (*pService), pClientCurrFlatBuffer->pData)); + + if (pCurrFlatBufDesc->phyBuffer == 0) { + QAT_UTILS_LOG( + "Unable to get the physical address of the flat buffer.\n"); + return CPA_STATUS_FAIL; + } + + pCurrFlatBufDesc++; + pClientCurrFlatBuffer++; + numFlatBuffers--; + } + pIntermediateBufferPtrsArray++; + pInterBuffPtrsArray++; + } + + pService->generic_service_info.isInstanceStarted = CPA_TRUE; + + /* Increment dev ref counter */ + icp_qa_dev_get(dev); + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcStopInstance(CpaInstanceHandle instanceHandle) +{ + CpaInstanceHandle insHandle = NULL; + CpaInstanceInfo2 info = { 0 }; + icp_accel_dev_t *dev = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + sal_compression_service_t *pService = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = instanceHandle; + } + + LAC_CHECK_NULL_PARAM(insHandle); + pService = (sal_compression_service_t *)insHandle; + + /* Free Intermediate Buffer Pointers Array */ + if 
(pService->pInterBuffPtrsArray != NULL) { + LAC_OS_CAFREE(pService->pInterBuffPtrsArray); + pService->pInterBuffPtrsArray = 0; + } + + pService->pInterBuffPtrsArrayPhyAddr = 0; + + status = cpaDcInstanceGetInfo2(insHandle, &info); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Can not get instance info.\n"); + return status; + } + dev = icp_adf_getAccelDevByAccelId(info.physInstId.packageId); + if (NULL == dev) { + QAT_UTILS_LOG("Can not find device for the instance.\n"); + return CPA_STATUS_FAIL; + } + + pService->generic_service_info.isInstanceStarted = CPA_FALSE; + + /* Decrement dev ref counter */ + icp_qa_dev_put(dev); + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcGetNumInstances(Cpa16U *pNumInstances) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + icp_accel_dev_t **pAdfInsts = NULL; + icp_accel_dev_t *dev_addr = NULL; + sal_t *base_addr = NULL; + sal_list_t *list_temp = NULL; + Cpa16U num_accel_dev = 0; + Cpa16U num = 0; + Cpa16U i = 0; + + LAC_CHECK_NULL_PARAM(pNumInstances); + + /* Get the number of accel_dev in the system */ + status = icp_amgr_getNumInstances(&num_accel_dev); + LAC_CHECK_STATUS(status); + + /* Allocate memory to store addr of accel_devs */ + pAdfInsts = + malloc(num_accel_dev * sizeof(icp_accel_dev_t *), M_QAT, M_WAITOK); + num_accel_dev = 0; + + /* Get ADF to return accel_devs with dc enabled */ + status = icp_amgr_getAllAccelDevByCapabilities( + ICP_ACCEL_CAPABILITIES_COMPRESSION, pAdfInsts, &num_accel_dev); + if (CPA_STATUS_SUCCESS == status) { + for (i = 0; i < num_accel_dev; i++) { + dev_addr = (icp_accel_dev_t *)pAdfInsts[i]; + if (NULL != dev_addr) { + base_addr = dev_addr->pSalHandle; + if (NULL != base_addr) { + list_temp = + base_addr->compression_services; + while (NULL != list_temp) { + num++; + list_temp = + SalList_next(list_temp); + } + } + } + } + + *pNumInstances = num; + } + + free(pAdfInsts, M_QAT); + + return status; +} + +CpaStatus +cpaDcGetInstances(Cpa16U numInstances, CpaInstanceHandle *dcInstances) +{ 
+	CpaStatus status = CPA_STATUS_SUCCESS;
+	icp_accel_dev_t **pAdfInsts = NULL;
+	icp_accel_dev_t *dev_addr = NULL;
+	sal_t *base_addr = NULL;
+	sal_list_t *list_temp = NULL;
+	Cpa16U num_accel_dev = 0;
+	Cpa16U index = 0;
+	Cpa16U i = 0;
+
+	LAC_CHECK_NULL_PARAM(dcInstances);
+	if (0 == numInstances) {
+		QAT_UTILS_LOG("numInstances is 0.\n");
+		return CPA_STATUS_INVALID_PARAM;
+	}
+
+	/* Get the number of accel_dev in the system */
+	status = icp_amgr_getNumInstances(&num_accel_dev);
+	LAC_CHECK_STATUS(status);
+
+	/* Allocate memory to store addr of accel_devs */
+	pAdfInsts =
+	    malloc(num_accel_dev * sizeof(icp_accel_dev_t *), M_QAT, M_WAITOK);
+
+	num_accel_dev = 0;
+	/* Get ADF to return accel_devs with dc enabled */
+	status = icp_amgr_getAllAccelDevByCapabilities(
+	    ICP_ACCEL_CAPABILITIES_COMPRESSION, pAdfInsts, &num_accel_dev);
+
+	if (CPA_STATUS_SUCCESS == status) {
+		/* First check the number of instances in the system */
+		for (i = 0; i < num_accel_dev; i++) {
+			dev_addr = (icp_accel_dev_t *)pAdfInsts[i];
+			if (NULL != dev_addr) {
+				base_addr = dev_addr->pSalHandle;
+				if (NULL != base_addr) {
+					list_temp =
+					    base_addr->compression_services;
+					while (NULL != list_temp) {
+						if (index >
+						    (numInstances - 1)) {
+							break;
+						}
+
+						dcInstances[index] =
+						    SalList_getObject(
+							list_temp);
+						list_temp =
+						    SalList_next(list_temp);
+						index++;
+					}
+				}
+			}
+		}
+
+		if (numInstances > index) {
+			QAT_UTILS_LOG("Only %d dc instances available.\n",
+				      index);
+			status = CPA_STATUS_RESOURCE;
+		}
+	}
+
+	if (CPA_STATUS_SUCCESS == status) {
+		index = 0;
+		for (i = 0; i < num_accel_dev; i++) {
+			dev_addr = (icp_accel_dev_t *)pAdfInsts[i];
+			/* Note dev_addr cannot be NULL here: numInstances=0 is
+			   not valid, and if dev_addr were NULL then index=0
+			   (which is less than numInstances) and status was set
+			   to _RESOURCE above. */
+			base_addr = dev_addr->pSalHandle;
+			if (NULL != base_addr) {
+				list_temp = base_addr->compression_services;
+				while (NULL != list_temp) {
+					if (index >
(numInstances - 1)) { + break; + } + + dcInstances[index] = + SalList_getObject(list_temp); + list_temp = SalList_next(list_temp); + index++; + } + } + } + } + + free(pAdfInsts, M_QAT); + + return status; +} + +CpaStatus +cpaDcInstanceGetInfo2(const CpaInstanceHandle instanceHandle, + CpaInstanceInfo2 *pInstanceInfo2) +{ + sal_compression_service_t *pCompressionService = NULL; + CpaInstanceHandle insHandle = NULL; + icp_accel_dev_t *dev = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + char keyStr[ADF_CFG_MAX_KEY_LEN_IN_BYTES] = { 0 }; + char valStr[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + char *section = DYN_SEC; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = instanceHandle; + } + + LAC_CHECK_NULL_PARAM(insHandle); + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + LAC_CHECK_NULL_PARAM(pInstanceInfo2); + + LAC_OS_BZERO(pInstanceInfo2, sizeof(CpaInstanceInfo2)); + pInstanceInfo2->accelerationServiceType = + CPA_ACC_SVC_TYPE_DATA_COMPRESSION; + + snprintf((char *)pInstanceInfo2->vendorName, + CPA_INST_VENDOR_NAME_SIZE, + "%s", + SAL_INFO2_VENDOR_NAME); + pInstanceInfo2->vendorName[CPA_INST_VENDOR_NAME_SIZE - 1] = '\0'; + + snprintf((char *)pInstanceInfo2->swVersion, + CPA_INST_SW_VERSION_SIZE, + "Version %d.%d", + SAL_INFO2_DRIVER_SW_VERSION_MAJ_NUMBER, + SAL_INFO2_DRIVER_SW_VERSION_MIN_NUMBER); + pInstanceInfo2->swVersion[CPA_INST_SW_VERSION_SIZE - 1] = '\0'; + + /* Note we can safely read the contents of the compression service + instance + here because icp_amgr_getAccelDevByCapabilities() only returns devs + that have started */ + pCompressionService = (sal_compression_service_t *)insHandle; + pInstanceInfo2->physInstId.packageId = pCompressionService->pkgID; + pInstanceInfo2->physInstId.acceleratorId = + pCompressionService->acceleratorNum; + pInstanceInfo2->physInstId.executionEngineId = 0; + pInstanceInfo2->physInstId.busAddress = + 
icp_adf_get_busAddress(pInstanceInfo2->physInstId.packageId); + + /* set coreAffinity to zero before use */ + LAC_OS_BZERO(pInstanceInfo2->coreAffinity, + sizeof(pInstanceInfo2->coreAffinity)); + CPA_BITMAP_BIT_SET(pInstanceInfo2->coreAffinity, + pCompressionService->coreAffinity); + + pInstanceInfo2->nodeAffinity = pCompressionService->nodeAffinity; + + if (CPA_TRUE == + pCompressionService->generic_service_info.isInstanceStarted) { + pInstanceInfo2->operState = CPA_OPER_STATE_UP; + } else { + pInstanceInfo2->operState = CPA_OPER_STATE_DOWN; + } + + pInstanceInfo2->requiresPhysicallyContiguousMemory = CPA_TRUE; + + if (SAL_RESP_POLL_CFG_FILE == pCompressionService->isPolled) { + pInstanceInfo2->isPolled = CPA_TRUE; + } else { + pInstanceInfo2->isPolled = CPA_FALSE; + } + + pInstanceInfo2->isOffloaded = CPA_TRUE; + /* Get the instance name and part name from the config file */ + dev = icp_adf_getAccelDevByAccelId(pCompressionService->pkgID); + if (NULL == dev) { + QAT_UTILS_LOG("Can not find device for the instance.\n"); + LAC_OS_BZERO(pInstanceInfo2, sizeof(CpaInstanceInfo2)); + return CPA_STATUS_FAIL; + } + snprintf((char *)pInstanceInfo2->partName, + CPA_INST_PART_NAME_SIZE, + SAL_INFO2_PART_NAME, + dev->deviceName); + pInstanceInfo2->partName[CPA_INST_PART_NAME_SIZE - 1] = '\0'; + + if (CPA_FALSE == pCompressionService->generic_service_info.is_dyn) { + section = icpGetProcessName(); + } + + status = Sal_StringParsing( + "Dc", + pCompressionService->generic_service_info.instance, + "Name", + keyStr); + LAC_CHECK_STATUS(status); + status = icp_adf_cfgGetParamValue(dev, section, keyStr, valStr); + LAC_CHECK_STATUS(status); + strncpy((char *)pInstanceInfo2->instName, + valStr, + sizeof(pInstanceInfo2->instName) - 1); + pInstanceInfo2->instName[CPA_INST_NAME_SIZE - 1] = '\0'; + +#if __GNUC__ >= 7 +#pragma GCC diagnostic push +#pragma GCC diagnostic ignored "-Wformat-truncation" +#endif + snprintf((char *)pInstanceInfo2->instID, + CPA_INST_ID_SIZE, + "%s_%s", + 
section, + valStr); +#if __GNUC__ >= 7 +#pragma GCC diagnostic pop +#endif + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcQueryCapabilities(CpaInstanceHandle dcInstance, + CpaDcInstanceCapabilities *pInstanceCapabilities) +{ + CpaInstanceHandle insHandle = NULL; + sal_compression_service_t *pService = NULL; + Cpa32U capabilitiesMask = 0; + dc_extd_ftrs_t *pExtendedFtrs = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + if (NULL == insHandle) { + QAT_UTILS_LOG("Can not get the instance.\n"); + return CPA_STATUS_FAIL; + } + } else { + insHandle = dcInstance; + } + + pService = (sal_compression_service_t *)insHandle; + + LAC_CHECK_NULL_PARAM(insHandle); + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + LAC_CHECK_NULL_PARAM(pInstanceCapabilities); + + memset(pInstanceCapabilities, 0, sizeof(CpaDcInstanceCapabilities)); + + capabilitiesMask = pService->generic_service_info.capabilitiesMask; + + /* Set compression capabilities */ + if (capabilitiesMask & ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY) { + pInstanceCapabilities->integrityCrcs = CPA_TRUE; + } + + pInstanceCapabilities->endOfLastBlock = CPA_TRUE; + pInstanceCapabilities->statefulDeflateCompression = CPA_FALSE; + pInstanceCapabilities->statefulDeflateDecompression = CPA_TRUE; + pInstanceCapabilities->statelessDeflateCompression = CPA_TRUE; + pInstanceCapabilities->statelessDeflateDecompression = CPA_TRUE; + pInstanceCapabilities->checksumCRC32 = CPA_TRUE; + pInstanceCapabilities->checksumAdler32 = CPA_TRUE; + pInstanceCapabilities->dynamicHuffman = CPA_TRUE; + pInstanceCapabilities->precompiledHuffman = CPA_FALSE; + pInstanceCapabilities->dynamicHuffmanBufferReq = CPA_TRUE; + pInstanceCapabilities->autoSelectBestHuffmanTree = CPA_TRUE; + + pInstanceCapabilities->validWindowSizeMaskCompression = + pService->comp_device_data.windowSizeMask; + pInstanceCapabilities->validWindowSizeMaskDecompression = + pService->comp_device_data.windowSizeMask; + 
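cpaDcQueryCapabilities() reduces the device's capability mask to a set of per-feature booleans: most deflate features are advertised unconditionally, while integrityCrcs is gated on ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY. A minimal sketch of that mask-to-struct translation, using a hypothetical capability bit and struct in place of the driver's types:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical capability bit for illustration; the real
 * ICP_ACCEL_CAPABILITIES_* values are defined in the driver headers. */
#define CAP_CNV_INTEGRITY (1u << 1)

struct dc_caps {
	bool integrity_crcs;    /* set only when the device advertises it */
	bool stateless_deflate; /* advertised unconditionally */
};

/* Translate a device capability mask into per-feature booleans, the same
 * shape cpaDcQueryCapabilities() uses to fill CpaDcInstanceCapabilities. */
void
query_dc_caps(uint32_t capabilities_mask, struct dc_caps *caps)
{
	caps->integrity_crcs = (capabilities_mask & CAP_CNV_INTEGRITY) != 0;
	caps->stateless_deflate = true;
}
```

Keeping the struct all-booleans (rather than exposing the raw mask) lets callers test features without knowing the bit layout, which is why the CPA API takes this shape.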
pExtendedFtrs = (dc_extd_ftrs_t *)&( + ((sal_service_t *)insHandle)->dcExtendedFeatures); + pInstanceCapabilities->batchAndPack = CPA_FALSE; + pInstanceCapabilities->compressAndVerify = + (CpaBoolean)pExtendedFtrs->is_cnv; + pInstanceCapabilities->compressAndVerifyStrict = CPA_TRUE; + pInstanceCapabilities->compressAndVerifyAndRecover = + (CpaBoolean)pExtendedFtrs->is_cnvnr; + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcSetAddressTranslation(const CpaInstanceHandle instanceHandle, + CpaVirtualToPhysical virtual2Physical) +{ + sal_service_t *pService = NULL; + CpaInstanceHandle insHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = instanceHandle; + } + + LAC_CHECK_NULL_PARAM(insHandle); + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + LAC_CHECK_NULL_PARAM(virtual2Physical); + + pService = (sal_service_t *)insHandle; + + pService->virt2PhysClient = virtual2Physical; + + return CPA_STATUS_SUCCESS; +} + +/** + ****************************************************************************** + * @ingroup cpaDcCommon + * Data compression specific polling function which polls a DC instance. 
+ *****************************************************************************/ + +CpaStatus +icp_sal_DcPollInstance(CpaInstanceHandle instanceHandle_in, + Cpa32U response_quota) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_compression_service_t *dc_handle = NULL; + sal_service_t *gen_handle = NULL; + icp_comms_trans_handle trans_hndTable[DC_NUM_RX_RINGS]; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + dc_handle = (sal_compression_service_t *)dcGetFirstHandle(); + } else { + dc_handle = (sal_compression_service_t *)instanceHandle_in; + } + + LAC_CHECK_NULL_PARAM(dc_handle); + SAL_RUNNING_CHECK(dc_handle); + + gen_handle = &(dc_handle->generic_service_info); + if (SAL_SERVICE_TYPE_COMPRESSION != gen_handle->type) { + QAT_UTILS_LOG("Instance handle type is incorrect.\n"); + return CPA_STATUS_FAIL; + } + + /* + * From the instanceHandle we must get the trans_handle and send + * down to adf for polling. + * Populate our trans handle table with the appropriate handles. + */ + trans_hndTable[0] = dc_handle->trans_handle_compression_rx; + + /* Call adf to do the polling. 
*/ + status = icp_adf_pollInstance(trans_hndTable, + DC_NUM_RX_RINGS, + response_quota); + return status; +} + +/** + ****************************************************************************** + * @ingroup cpaDcCommon + *****************************************************************************/ +CpaStatus +cpaDcInstanceSetNotificationCb( + const CpaInstanceHandle instanceHandle, + const CpaDcInstanceNotificationCbFunc pInstanceNotificationCb, + void *pCallbackTag) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_service_t *gen_handle = instanceHandle; + + LAC_CHECK_NULL_PARAM(gen_handle); + gen_handle->notification_cb = pInstanceNotificationCb; + gen_handle->cb_tag = pCallbackTag; + return status; +} + +CpaInstanceHandle +dcGetFirstHandle(void) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + static icp_accel_dev_t *adfInsts[ADF_MAX_DEVICES] = { 0 }; + CpaInstanceHandle dcInst = NULL; + icp_accel_dev_t *dev_addr = NULL; + sal_t *base_addr = NULL; + sal_list_t *list_temp = NULL; + Cpa16U i, num_dc = 0; + + /* Only need 1 dev with compression enabled - so check all devices */ + status = icp_amgr_getAllAccelDevByCapabilities( + ICP_ACCEL_CAPABILITIES_COMPRESSION, adfInsts, &num_dc); + if ((0 == num_dc) || (CPA_STATUS_SUCCESS != status)) { + QAT_UTILS_LOG( + "No compression devices enabled in the system.\n"); + return dcInst; + } + + for (i = 0; i < num_dc; i++) { + dev_addr = (icp_accel_dev_t *)adfInsts[i]; + if (NULL != dev_addr) { + base_addr = dev_addr->pSalHandle; + if (NULL != base_addr) { + list_temp = base_addr->compression_services; + if (NULL != list_temp) { + dcInst = SalList_getObject(list_temp); + break; + } + } + } + } + return dcInst; +} Index: sys/dev/qat/qat_api/common/ctrl/sal_create_services.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/ctrl/sal_create_services.c @@ -0,0 +1,105 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* 
$FreeBSD$ */ +/** + ***************************************************************************** + * @file sal_create_services.c + * + * @defgroup SalCtrl Service Access Layer Controller + * + * @ingroup SalCtrl + * + * @description + * This file contains the main function to create a specific service. + * + *****************************************************************************/ + +#include "cpa.h" +#include "lac_mem.h" +#include "lac_mem_pools.h" +#include "qat_utils.h" +#include "lac_list.h" +#include "icp_adf_transport.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" + +#include "icp_qat_fw_la.h" +#include "lac_sym_qat.h" +#include "sal_types_compression.h" +#include "lac_sal_types_crypto.h" + +#include "icp_adf_init.h" + +#include "lac_sal.h" +#include "lac_sal_ctrl.h" + +CpaStatus +SalCtrl_ServiceCreate(sal_service_type_t serviceType, + Cpa32U instance, + sal_service_t **ppInst) +{ + sal_crypto_service_t *pCrypto_service = NULL; + sal_compression_service_t *pCompression_service = NULL; + + switch ((sal_service_type_t)serviceType) { + case SAL_SERVICE_TYPE_CRYPTO_ASYM: + case SAL_SERVICE_TYPE_CRYPTO_SYM: + case SAL_SERVICE_TYPE_CRYPTO: { + pCrypto_service = + malloc(sizeof(sal_crypto_service_t), M_QAT, M_WAITOK); + + /* Zero memory */ + memset(pCrypto_service, 0, sizeof(sal_crypto_service_t)); + + pCrypto_service->generic_service_info.type = + (sal_service_type_t)serviceType; + pCrypto_service->generic_service_info.state = + SAL_SERVICE_STATE_UNINITIALIZED; + pCrypto_service->generic_service_info.instance = instance; + + pCrypto_service->generic_service_info.init = SalCtrl_CryptoInit; + pCrypto_service->generic_service_info.start = + SalCtrl_CryptoStart; + pCrypto_service->generic_service_info.stop = SalCtrl_CryptoStop; + pCrypto_service->generic_service_info.shutdown = + SalCtrl_CryptoShutdown; + + *(ppInst) = &(pCrypto_service->generic_service_info); + + return CPA_STATUS_SUCCESS; + } + case SAL_SERVICE_TYPE_COMPRESSION: { + 
pCompression_service = + malloc(sizeof(sal_compression_service_t), M_QAT, M_WAITOK); + + /* Zero memory */ + memset(pCompression_service, + 0, + sizeof(sal_compression_service_t)); + + pCompression_service->generic_service_info.type = + (sal_service_type_t)serviceType; + pCompression_service->generic_service_info.state = + SAL_SERVICE_STATE_UNINITIALIZED; + pCompression_service->generic_service_info.instance = instance; + + pCompression_service->generic_service_info.init = + SalCtrl_CompressionInit; + pCompression_service->generic_service_info.start = + SalCtrl_CompressionStart; + pCompression_service->generic_service_info.stop = + SalCtrl_CompressionStop; + pCompression_service->generic_service_info.shutdown = + SalCtrl_CompressionShutdown; + + *(ppInst) = &(pCompression_service->generic_service_info); + return CPA_STATUS_SUCCESS; + } + + default: { + QAT_UTILS_LOG("Not a valid service type\n"); + (*ppInst) = NULL; + return CPA_STATUS_FAIL; + } + } +} Index: sys/dev/qat/qat_api/common/ctrl/sal_crypto.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/ctrl/sal_crypto.c @@ -0,0 +1,1837 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file sal_crypto.c Instance handling functions for crypto + * + * @ingroup SalCtrl + * + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ + +/* QAT-API includes */ +#include "cpa.h" +#include "cpa_types.h" +#include "cpa_cy_common.h" +#include "cpa_cy_im.h" +#include "cpa_cy_key.h" +#include "cpa_cy_sym.h" + +#include "qat_utils.h" + +/* ADF includes */ +#include "icp_adf_init.h" +#include 
"icp_adf_transport.h" +#include "icp_accel_devices.h" +#include "icp_adf_cfg.h" +#include "icp_adf_accel_mgr.h" +#include "icp_adf_poll.h" +#include "icp_adf_debug.h" + +/* SAL includes */ +#include "lac_log.h" +#include "lac_mem.h" +#include "lac_mem_pools.h" +#include "sal_statistics.h" +#include "lac_common.h" +#include "lac_list.h" +#include "lac_hooks.h" +#include "lac_sym_qat_hash_defs_lookup.h" +#include "lac_sym.h" +#include "lac_sym_key.h" +#include "lac_sym_hash.h" +#include "lac_sym_cb.h" +#include "lac_sym_stats.h" +#include "lac_sal_types_crypto.h" +#include "lac_sal.h" +#include "lac_sal_ctrl.h" +#include "sal_string_parse.h" +#include "sal_service_state.h" +#include "icp_sal_poll.h" +#include "lac_sync.h" +#include "lac_sym_qat.h" +#include "icp_sal_versions.h" +#include "icp_sal_user.h" + +#define TH_CY_RX_0 0 +#define TH_CY_RX_1 1 +#define MAX_CY_RX_RINGS 2 + +#define DOUBLE_INCR 2 + +#define TH_SINGLE_RX 0 +#define NUM_CRYPTO_SYM_RX_RINGS 1 +#define NUM_CRYPTO_ASYM_RX_RINGS 1 +#define NUM_CRYPTO_NRBG_RX_RINGS 1 + +static CpaInstanceHandle +Lac_CryptoGetFirstHandle(void) +{ + CpaInstanceHandle instHandle; + instHandle = Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO); + if (!instHandle) { + instHandle = Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + if (!instHandle) { + instHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_ASYM); + } + } + return instHandle; +} + + +/* Function to release the sym handles. 
*/ +static CpaStatus +SalCtrl_SymReleaseTransHandle(sal_service_t *service) +{ + + CpaStatus status = CPA_STATUS_SUCCESS; + CpaStatus ret_status = CPA_STATUS_SUCCESS; + sal_crypto_service_t *pCryptoService = (sal_crypto_service_t *)service; + + if (NULL != pCryptoService->trans_handle_sym_tx) { + status = icp_adf_transReleaseHandle( + pCryptoService->trans_handle_sym_tx); + if (CPA_STATUS_SUCCESS != status) { + ret_status = status; + } + } + if (NULL != pCryptoService->trans_handle_sym_rx) { + status = icp_adf_transReleaseHandle( + pCryptoService->trans_handle_sym_rx); + if (CPA_STATUS_SUCCESS != status) { + ret_status = status; + } + } + + return ret_status; +} + + +/* + * @ingroup sal_crypto + * Frees resources (memory and transhandles) if allocated + * + * @param[in] pCryptoService Pointer to sym service instance + * @retval SUCCESS if transhandles released + * successfully. +*/ +static CpaStatus +SalCtrl_SymFreeResources(sal_crypto_service_t *pCryptoService) +{ + + CpaStatus status = CPA_STATUS_SUCCESS; + + /* Free memory pools if not NULL */ + Lac_MemPoolDestroy(pCryptoService->lac_sym_cookie_pool); + + /* Free misc memory if allocated */ + /* Frees memory allocated for Hmac precomputes */ + LacSymHash_HmacPrecompShutdown(pCryptoService); + /* Free memory allocated for key labels + Also clears key stats */ + LacSymKey_Shutdown(pCryptoService); + /* Free hash lookup table if allocated */ + if (NULL != pCryptoService->pLacHashLookupDefs) { + LAC_OS_FREE(pCryptoService->pLacHashLookupDefs); + } + + /* Free statistics */ + LacSym_StatsFree(pCryptoService); + + /* Free transport handles */ + status = SalCtrl_SymReleaseTransHandle((sal_service_t *)pCryptoService); + return status; +} + + +/** + *********************************************************************** + * @ingroup SalCtrl + * This macro verifies that the status is _SUCCESS + * If status is not _SUCCESS then Sym Instance resources are + * freed before the function returns the error + * + * @param[in] 
status status we are checking + * + * @return void status is ok (CPA_STATUS_SUCCESS) + * @return status The value in the status parameter is an error one + * + ****************************************************************************/ +#define LAC_CHECK_STATUS_SYM_INIT(status) \ + do { \ + if (CPA_STATUS_SUCCESS != status) { \ + SalCtrl_SymFreeResources(pCryptoService); \ + return status; \ + } \ + } while (0) + + +/* Function that creates the Sym Handles. */ +static CpaStatus +SalCtrl_SymCreateTransHandle(icp_accel_dev_t *device, + sal_service_t *service, + Cpa32U numSymRequests, + char *section) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + char temp_string[SAL_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + sal_crypto_service_t *pCryptoService = (sal_crypto_service_t *)service; + icp_resp_deliv_method rx_resp_type = ICP_RESP_TYPE_IRQ; + Cpa32U msgSize = 0; + + if (SAL_RESP_POLL_CFG_FILE == pCryptoService->isPolled) { + rx_resp_type = ICP_RESP_TYPE_POLL; + } + + if (CPA_FALSE == pCryptoService->generic_service_info.is_dyn) { + section = icpGetProcessName(); + } + + /* Parse Sym ring details */ + status = + Sal_StringParsing("Cy", + pCryptoService->generic_service_info.instance, + "RingSymTx", + temp_string); + + /* Need to free resources in case not _SUCCESS from here */ + LAC_CHECK_STATUS_SYM_INIT(status); + + msgSize = LAC_QAT_SYM_REQ_SZ_LW * LAC_LONG_WORD_IN_BYTES; + status = + icp_adf_transCreateHandle(device, + ICP_TRANS_TYPE_ETR, + section, + pCryptoService->acceleratorNum, + pCryptoService->bankNum, + temp_string, + lac_getRingType(SAL_RING_TYPE_A_SYM_HI), + NULL, + ICP_RESP_TYPE_NONE, + numSymRequests, + msgSize, + (icp_comms_trans_handle *)&( + pCryptoService->trans_handle_sym_tx)); + LAC_CHECK_STATUS_SYM_INIT(status); + + status = + Sal_StringParsing("Cy", + pCryptoService->generic_service_info.instance, + "RingSymRx", + temp_string); + LAC_CHECK_STATUS_SYM_INIT(status); + + msgSize = LAC_QAT_SYM_RESP_SZ_LW * LAC_LONG_WORD_IN_BYTES; + status = 
icp_adf_transCreateHandle( + device, + ICP_TRANS_TYPE_ETR, + section, + pCryptoService->acceleratorNum, + pCryptoService->bankNum, + temp_string, + lac_getRingType(SAL_RING_TYPE_NONE), + (icp_trans_callback)LacSymQat_SymRespHandler, + rx_resp_type, + numSymRequests, + msgSize, + (icp_comms_trans_handle *)&(pCryptoService->trans_handle_sym_rx)); + LAC_CHECK_STATUS_SYM_INIT(status); + + return status; +} + +static int +SalCtrl_CryptoDebug(void *private_data, char *data, int size, int offset) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + Cpa32U len = 0; + sal_crypto_service_t *pCryptoService = + (sal_crypto_service_t *)private_data; + + switch (offset) { + case SAL_STATS_SYM: { + CpaCySymStats64 symStats = { 0 }; + if (CPA_TRUE != + pCryptoService->generic_service_info.stats + ->bSymStatsEnabled) { + break; + } + status = cpaCySymQueryStats64(pCryptoService, &symStats); + if (status != CPA_STATUS_SUCCESS) { + LAC_LOG_ERROR("cpaCySymQueryStats64 returned error\n"); + return 0; + } + + /* Engine Info */ + len += snprintf( + data + len, + size - len, + SEPARATOR BORDER + " Statistics for Instance %24s |\n" BORDER + " Symmetric Stats " BORDER + "\n" SEPARATOR, + pCryptoService->debug_file->name); + + /* Session Info */ + len += snprintf( + data + len, + size - len, + BORDER " Sessions Initialized: %16llu " BORDER + "\n" BORDER + " Sessions Removed: %16llu " BORDER + "\n" BORDER + " Session Errors: %16llu " BORDER + "\n" SEPARATOR, + (long long unsigned int)symStats.numSessionsInitialized, + (long long unsigned int)symStats.numSessionsRemoved, + (long long unsigned int)symStats.numSessionErrors); + + /* Session info */ + len += snprintf( + data + len, + size - len, + BORDER " Symmetric Requests: %16llu " BORDER + "\n" BORDER + " Symmetric Request Errors: %16llu " BORDER + "\n" BORDER + " Symmetric Completed: %16llu " BORDER + "\n" BORDER + " Symmetric Completed Errors: %16llu " BORDER + "\n" BORDER + " Symmetric Verify Failures: %16llu " BORDER + "\n", + (long long 
unsigned int)symStats.numSymOpRequests, + (long long unsigned int)symStats.numSymOpRequestErrors, + (long long unsigned int)symStats.numSymOpCompleted, + (long long unsigned int)symStats.numSymOpCompletedErrors, + (long long unsigned int)symStats.numSymOpVerifyFailures); + break; + } + default: { + len += snprintf(data + len, size - len, SEPARATOR); + return 0; + } + } + return ++offset; +} + + +static CpaStatus +SalCtrl_SymInit(icp_accel_dev_t *device, sal_service_t *service) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + Cpa32U numSymConcurrentReq = 0; + char adfGetParam[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + char temp_string[SAL_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + sal_crypto_service_t *pCryptoService = (sal_crypto_service_t *)service; + char *section = DYN_SEC; + + /*Instance may not in the DYN section*/ + if (CPA_FALSE == pCryptoService->generic_service_info.is_dyn) { + section = icpGetProcessName(); + } + + + /* Register callbacks for the symmetric services + * (Hash, Cipher, Algorithm-Chaining) (returns void)*/ + LacSymCb_CallbacksRegister(); + + /* Get num concurrent requests from config file */ + status = + Sal_StringParsing("Cy", + pCryptoService->generic_service_info.instance, + "NumConcurrentSymRequests", + temp_string); + LAC_CHECK_STATUS(status); + status = + icp_adf_cfgGetParamValue(device, section, temp_string, adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration file\n", + temp_string); + return status; + } + + numSymConcurrentReq = + (Cpa32U)Sal_Strtoul(adfGetParam, NULL, SAL_CFG_BASE_DEC); + if (CPA_STATUS_FAIL == validateConcurrRequest(numSymConcurrentReq)) { + LAC_LOG_ERROR("Invalid NumConcurrentSymRequests, valid " + "values {64, 128, 256, ... 
32768, 65536}"); + return CPA_STATUS_FAIL; + } + + /* ADF does not allow us to completely fill the ring for batch requests + */ + pCryptoService->maxNumSymReqBatch = + (numSymConcurrentReq - SAL_BATCH_SUBMIT_FREE_SPACE); + + /* Create transport handles */ + status = SalCtrl_SymCreateTransHandle(device, + service, + numSymConcurrentReq, + section); + LAC_CHECK_STATUS(status); + + /* Allocates memory pools */ + + /* Create and initialise symmetric cookie memory pool */ + pCryptoService->lac_sym_cookie_pool = LAC_MEM_POOL_INIT_POOL_ID; + status = + Sal_StringParsing("Cy", + pCryptoService->generic_service_info.instance, + "SymPool", + temp_string); + LAC_CHECK_STATUS_SYM_INIT(status); + /* Note we need twice (i.e. <<1) the number of sym cookies to + support sym ring pairs (and some, for partials) */ + status = + Lac_MemPoolCreate(&pCryptoService->lac_sym_cookie_pool, + temp_string, + ((numSymConcurrentReq + numSymConcurrentReq + 1) + << 1), + sizeof(lac_sym_cookie_t), + LAC_64BYTE_ALIGNMENT, + CPA_FALSE, + pCryptoService->nodeAffinity); + LAC_CHECK_STATUS_SYM_INIT(status); + /* For all sym cookies fill out the physical address of data that + will be set to QAT */ + Lac_MemPoolInitSymCookiesPhyAddr(pCryptoService->lac_sym_cookie_pool); + + /* Clear stats */ + /* Clears Key stats and allocate memory of SSL and TLS labels + These labels are initialised to standard values */ + status = LacSymKey_Init(pCryptoService); + LAC_CHECK_STATUS_SYM_INIT(status); + + /* Initialises the hash lookup table*/ + status = LacSymQat_Init(pCryptoService); + LAC_CHECK_STATUS_SYM_INIT(status); + + /* Fills out content descriptor for precomputes and registers the + hash precompute callback */ + status = LacSymHash_HmacPrecompInit(pCryptoService); + LAC_CHECK_STATUS_SYM_INIT(status); + + /* Init the Sym stats */ + status = LacSym_StatsInit(pCryptoService); + LAC_CHECK_STATUS_SYM_INIT(status); + + return status; +} + +static void +SalCtrl_DebugShutdown(icp_accel_dev_t *device, sal_service_t 
*service) +{ + sal_crypto_service_t *pCryptoService = (sal_crypto_service_t *)service; + sal_statistics_collection_t *pStatsCollection = + (sal_statistics_collection_t *)device->pQatStats; + + if (CPA_TRUE == pStatsCollection->bStatsEnabled) { + /* Clean stats */ + if (NULL != pCryptoService->debug_file) { + icp_adf_debugRemoveFile(pCryptoService->debug_file); + LAC_OS_FREE(pCryptoService->debug_file->name); + LAC_OS_FREE(pCryptoService->debug_file); + pCryptoService->debug_file = NULL; + } + } + pCryptoService->generic_service_info.stats = NULL; +} + +static CpaStatus +SalCtrl_DebugInit(icp_accel_dev_t *device, sal_service_t *service) +{ + char adfGetParam[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + char temp_string[SAL_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + char *instance_name = NULL; + sal_crypto_service_t *pCryptoService = (sal_crypto_service_t *)service; + sal_statistics_collection_t *pStatsCollection = + (sal_statistics_collection_t *)device->pQatStats; + CpaStatus status = CPA_STATUS_SUCCESS; + char *section = DYN_SEC; + + /*Instance may not in the DYN section*/ + if (CPA_FALSE == pCryptoService->generic_service_info.is_dyn) { + section = icpGetProcessName(); + } + + if (CPA_TRUE == pStatsCollection->bStatsEnabled) { + /* Get instance name for stats */ + instance_name = LAC_OS_MALLOC(ADF_CFG_MAX_VAL_LEN_IN_BYTES); + if (NULL == instance_name) { + return CPA_STATUS_RESOURCE; + } + + status = Sal_StringParsing( + "Cy", + pCryptoService->generic_service_info.instance, + "Name", + temp_string); + if (CPA_STATUS_SUCCESS != status) { + LAC_OS_FREE(instance_name); + return status; + } + status = icp_adf_cfgGetParamValue(device, + section, + temp_string, + adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG( + "Failed to get %s from configuration file\n", + temp_string); + LAC_OS_FREE(instance_name); + return status; + } + snprintf(instance_name, + ADF_CFG_MAX_VAL_LEN_IN_BYTES, + "%s", + adfGetParam); + + pCryptoService->debug_file = + 
LAC_OS_MALLOC(sizeof(debug_file_info_t)); + if (NULL == pCryptoService->debug_file) { + LAC_OS_FREE(instance_name); + return CPA_STATUS_RESOURCE; + } + + memset(pCryptoService->debug_file, + 0, + sizeof(debug_file_info_t)); + pCryptoService->debug_file->name = instance_name; + pCryptoService->debug_file->seq_read = SalCtrl_CryptoDebug; + pCryptoService->debug_file->private_data = pCryptoService; + pCryptoService->debug_file->parent = + pCryptoService->generic_service_info.debug_parent_dir; + + status = + icp_adf_debugAddFile(device, pCryptoService->debug_file); + if (CPA_STATUS_SUCCESS != status) { + LAC_OS_FREE(instance_name); + LAC_OS_FREE(pCryptoService->debug_file); + return status; + } + } + pCryptoService->generic_service_info.stats = pStatsCollection; + + return status; +} + +static CpaStatus +SalCtr_InstInit(icp_accel_dev_t *device, sal_service_t *service) +{ + char adfGetParam[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + char temp_string[SAL_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + char temp_string2[SAL_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + sal_crypto_service_t *pCryptoService = (sal_crypto_service_t *)service; + CpaStatus status = CPA_STATUS_SUCCESS; + char *section = DYN_SEC; + + /*Instance may not in the DYN section*/ + if (CPA_FALSE == pCryptoService->generic_service_info.is_dyn) { + section = icpGetProcessName(); + } + + + /* Get Config Info: Accel Num, bank Num, packageID, + coreAffinity, nodeAffinity and response mode */ + + pCryptoService->acceleratorNum = 0; + + status = + Sal_StringParsing("Cy", + pCryptoService->generic_service_info.instance, + "BankNumber", + temp_string); + LAC_CHECK_STATUS(status); + status = + icp_adf_cfgGetParamValue(device, section, temp_string, adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration file\n", + temp_string); + return status; + } + pCryptoService->bankNum = + (Cpa16U)Sal_Strtoul(adfGetParam, NULL, SAL_CFG_BASE_DEC); + + status = + Sal_StringParsing("Cy", + 
pCryptoService->generic_service_info.instance, + "IsPolled", + temp_string); + LAC_CHECK_STATUS(status); + status = + icp_adf_cfgGetParamValue(device, section, temp_string, adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration file\n", + temp_string); + return status; + } + pCryptoService->isPolled = + (Cpa8U)Sal_Strtoul(adfGetParam, NULL, SAL_CFG_BASE_DEC); + + /* Kernel instances do not support epoll mode */ + if (SAL_RESP_EPOLL_CFG_FILE == pCryptoService->isPolled) { + QAT_UTILS_LOG( + "IsPolled %u is not supported for kernel instance %s", + pCryptoService->isPolled, + temp_string); + return CPA_STATUS_FAIL; + } + + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + ADF_DEV_PKG_ID, + adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration file\n", + ADF_DEV_PKG_ID); + return status; + } + pCryptoService->pkgID = + (Cpa16U)Sal_Strtoul(adfGetParam, NULL, SAL_CFG_BASE_DEC); + + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + ADF_DEV_NODE_ID, + adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration file\n", + ADF_DEV_NODE_ID); + return status; + } + pCryptoService->nodeAffinity = + (Cpa32U)Sal_Strtoul(adfGetParam, NULL, SAL_CFG_BASE_DEC); + /* In case of interrupt instance, use the bank affinity set by adf_ctl + * Otherwise, use the instance affinity for backwards compatibility */ + if (SAL_RESP_POLL_CFG_FILE != pCryptoService->isPolled) { + /* Next need to read the [AcceleratorX] section of the config + * file */ + status = Sal_StringParsing("Accelerator", + pCryptoService->acceleratorNum, + "", + temp_string2); + LAC_CHECK_STATUS(status); + status = Sal_StringParsing("Bank", + pCryptoService->bankNum, + "CoreAffinity", + temp_string); + LAC_CHECK_STATUS(status); + } else { + strncpy(temp_string2, section, (strlen(section) + 1)); + status = Sal_StringParsing( + "Cy", + 
pCryptoService->generic_service_info.instance, + "CoreAffinity", + temp_string); + LAC_CHECK_STATUS(status); + } + + status = icp_adf_cfgGetParamValue(device, + temp_string2, + temp_string, + adfGetParam); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration file\n", + temp_string); + return status; + } + pCryptoService->coreAffinity = + (Cpa32U)Sal_Strtoul(adfGetParam, NULL, SAL_CFG_BASE_DEC); + + /*No Execution Engine in DH895xcc, so make sure it is zero*/ + pCryptoService->executionEngine = 0; + + return status; +} + +/* This function: + * 1. Creates sym and asym transport handles + * 2. Allocates memory pools required by sym and asym services +.* 3. Clears the sym and asym stats counters + * 4. In case service asym or sym is enabled then this function + * only allocates resources for these services. i.e if the + * service asym is enabled then only asym transport handles + * are created and vice versa. + */ +CpaStatus +SalCtrl_CryptoInit(icp_accel_dev_t *device, sal_service_t *service) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_crypto_service_t *pCryptoService = (sal_crypto_service_t *)service; + sal_service_type_t svc_type = service->type; + + SAL_SERVICE_GOOD_FOR_INIT(pCryptoService); + pCryptoService->generic_service_info.state = + SAL_SERVICE_STATE_INITIALIZING; + + /* Set up the instance parameters such as bank number, + * coreAffinity, pkgId and node affinity etc + */ + status = SalCtr_InstInit(device, service); + LAC_CHECK_STATUS(status); + /* Create debug directory for service */ + status = SalCtrl_DebugInit(device, service); + LAC_CHECK_STATUS(status); + + switch (svc_type) { + case SAL_SERVICE_TYPE_CRYPTO_ASYM: + break; + case SAL_SERVICE_TYPE_CRYPTO_SYM: + status = SalCtrl_SymInit(device, service); + if (CPA_STATUS_SUCCESS != status) { + SalCtrl_DebugShutdown(device, service); + return status; + } + break; + case SAL_SERVICE_TYPE_CRYPTO: + status = SalCtrl_SymInit(device, service); + if (CPA_STATUS_SUCCESS 
!= status) { + SalCtrl_DebugShutdown(device, service); + return status; + } + break; + default: + LAC_LOG_ERROR("Invalid service type\n"); + status = CPA_STATUS_FAIL; + break; + } + + pCryptoService->generic_service_info.state = + SAL_SERVICE_STATE_INITIALIZED; + + return status; +} + +CpaStatus +SalCtrl_CryptoStart(icp_accel_dev_t *device, sal_service_t *service) +{ + sal_crypto_service_t *pCryptoService = (sal_crypto_service_t *)service; + CpaStatus status = CPA_STATUS_SUCCESS; + + if (pCryptoService->generic_service_info.state != + SAL_SERVICE_STATE_INITIALIZED) { + LAC_LOG_ERROR("Not in the correct state to call start\n"); + return CPA_STATUS_FAIL; + } + + pCryptoService->generic_service_info.state = SAL_SERVICE_STATE_RUNNING; + return status; +} + +CpaStatus +SalCtrl_CryptoStop(icp_accel_dev_t *device, sal_service_t *service) +{ + sal_crypto_service_t *pCryptoService = (sal_crypto_service_t *)service; + + if (SAL_SERVICE_STATE_RUNNING != + pCryptoService->generic_service_info.state) { + LAC_LOG_ERROR("Not in the correct state to call stop"); + } + + pCryptoService->generic_service_info.state = + SAL_SERVICE_STATE_SHUTTING_DOWN; + return CPA_STATUS_SUCCESS; +} + +CpaStatus +SalCtrl_CryptoShutdown(icp_accel_dev_t *device, sal_service_t *service) +{ + sal_crypto_service_t *pCryptoService = (sal_crypto_service_t *)service; + CpaStatus status = CPA_STATUS_SUCCESS; + sal_service_type_t svc_type = service->type; + + if ((SAL_SERVICE_STATE_INITIALIZED != + pCryptoService->generic_service_info.state) && + (SAL_SERVICE_STATE_SHUTTING_DOWN != + pCryptoService->generic_service_info.state)) { + LAC_LOG_ERROR("Not in the correct state to call shutdown \n"); + return CPA_STATUS_FAIL; + } + + + /* Free memory and transhandles */ + switch (svc_type) { + case SAL_SERVICE_TYPE_CRYPTO_ASYM: + break; + case SAL_SERVICE_TYPE_CRYPTO_SYM: + if (SalCtrl_SymFreeResources(pCryptoService)) { + status = CPA_STATUS_FAIL; + } + break; + case SAL_SERVICE_TYPE_CRYPTO: + if 
(SalCtrl_SymFreeResources(pCryptoService)) { + status = CPA_STATUS_FAIL; + } + break; + default: + LAC_LOG_ERROR("Invalid service type\n"); + status = CPA_STATUS_FAIL; + break; + } + + SalCtrl_DebugShutdown(device, service); + + pCryptoService->generic_service_info.state = SAL_SERVICE_STATE_SHUTDOWN; + + return status; +} + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + *****************************************************************************/ +CpaStatus +cpaCyGetStatusText(const CpaInstanceHandle instanceHandle, + CpaStatus errStatus, + Cpa8S *pStatusText) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + + LAC_CHECK_NULL_PARAM(pStatusText); + + switch (errStatus) { + case CPA_STATUS_SUCCESS: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_SUCCESS); + break; + case CPA_STATUS_FAIL: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_FAIL); + break; + case CPA_STATUS_RETRY: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_RETRY); + break; + case CPA_STATUS_RESOURCE: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_RESOURCE); + break; + case CPA_STATUS_INVALID_PARAM: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_INVALID_PARAM); + break; + case CPA_STATUS_FATAL: + LAC_COPY_STRING(pStatusText, CPA_STATUS_STR_FATAL); + break; + default: + status = CPA_STATUS_INVALID_PARAM; + break; + } + return status; +} + +void +SalCtrl_CyQueryCapabilities(sal_service_t *pGenericService, + CpaCyCapabilitiesInfo *pCapInfo) +{ + memset(pCapInfo, 0, sizeof(CpaCyCapabilitiesInfo)); + + if (SAL_SERVICE_TYPE_CRYPTO == pGenericService->type || + SAL_SERVICE_TYPE_CRYPTO_SYM == pGenericService->type) { + pCapInfo->symSupported = CPA_TRUE; + if (pGenericService->capabilitiesMask & + ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN) { + pCapInfo->extAlgchainSupported = CPA_TRUE; + } + + if (pGenericService->capabilitiesMask & + ICP_ACCEL_CAPABILITIES_HKDF) { + pCapInfo->hkdfSupported = CPA_TRUE; + } + } + + if (pGenericService->capabilitiesMask & + 
ICP_ACCEL_CAPABILITIES_ECEDMONT) { + pCapInfo->ecEdMontSupported = CPA_TRUE; + } + + if (pGenericService->capabilitiesMask & + ICP_ACCEL_CAPABILITIES_RANDOM_NUMBER) { + pCapInfo->nrbgSupported = CPA_TRUE; + } + + pCapInfo->drbgSupported = CPA_FALSE; + pCapInfo->randSupported = CPA_FALSE; + pCapInfo->nrbgSupported = CPA_FALSE; +} + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + *****************************************************************************/ +CpaStatus +cpaCyStartInstance(CpaInstanceHandle instanceHandle_in) +{ + CpaInstanceHandle instanceHandle = NULL; +/* Structure initializer is supported by C99, but it is + * not supported by some former Intel compilers. + */ + CpaInstanceInfo2 info = { 0 }; + icp_accel_dev_t *dev = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + sal_crypto_service_t *pService = NULL; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO); + if (!instanceHandle) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } + } else { + instanceHandle = instanceHandle_in; + } + LAC_CHECK_NULL_PARAM(instanceHandle); + + pService = (sal_crypto_service_t *)instanceHandle; + + status = cpaCyInstanceGetInfo2(instanceHandle, &info); + if (CPA_STATUS_SUCCESS != status) { + LAC_LOG_ERROR("Can not get instance info\n"); + return status; + } + dev = icp_adf_getAccelDevByAccelId(info.physInstId.packageId); + if (NULL == dev) { + LAC_LOG_ERROR("Can not find device for the instance\n"); + return CPA_STATUS_FAIL; + } + + pService->generic_service_info.isInstanceStarted = CPA_TRUE; + + /* Increment dev ref counter */ + icp_qa_dev_get(dev); + return CPA_STATUS_SUCCESS; +} + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + *****************************************************************************/ +CpaStatus 
+cpaCyStopInstance(CpaInstanceHandle instanceHandle_in) +{ + CpaInstanceHandle instanceHandle = NULL; +/* Structure initializer is supported by C99, but it is + * not supported by some former Intel compilers. + */ + CpaInstanceInfo2 info = { 0 }; + icp_accel_dev_t *dev = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + sal_crypto_service_t *pService = NULL; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = Lac_CryptoGetFirstHandle(); + } else { + instanceHandle = instanceHandle_in; + } + LAC_CHECK_NULL_PARAM(instanceHandle); + + status = cpaCyInstanceGetInfo2(instanceHandle, &info); + if (CPA_STATUS_SUCCESS != status) { + LAC_LOG_ERROR("Can not get instance info\n"); + return status; + } + dev = icp_adf_getAccelDevByAccelId(info.physInstId.packageId); + if (NULL == dev) { + LAC_LOG_ERROR("Can not find device for the instance\n"); + return CPA_STATUS_FAIL; + } + + pService = (sal_crypto_service_t *)instanceHandle; + + pService->generic_service_info.isInstanceStarted = CPA_FALSE; + + /* Decrement dev ref counter */ + icp_qa_dev_put(dev); + return CPA_STATUS_SUCCESS; +} + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + *****************************************************************************/ +CpaStatus +cpaCyInstanceSetNotificationCb( + const CpaInstanceHandle instanceHandle, + const CpaCyInstanceNotificationCbFunc pInstanceNotificationCb, + void *pCallbackTag) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_service_t *gen_handle = instanceHandle; + + + LAC_CHECK_NULL_PARAM(gen_handle); + gen_handle->notification_cb = pInstanceNotificationCb; + gen_handle->cb_tag = pCallbackTag; + return status; +} + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + *****************************************************************************/ +CpaStatus +cpaCyGetNumInstances(Cpa16U *pNumInstances) +{ + CpaStatus status = 
CPA_STATUS_SUCCESS; + CpaInstanceHandle cyInstanceHandle; + CpaInstanceInfo2 info; + icp_accel_dev_t **pAdfInsts = NULL; + icp_accel_dev_t *dev_addr = NULL; + sal_t *base_addr = NULL; + sal_list_t *list_temp = NULL; + Cpa16U num_accel_dev = 0; + Cpa16U num_inst = 0; + Cpa16U i = 0; + + LAC_CHECK_NULL_PARAM(pNumInstances); + + /* Get the number of accel_dev in the system */ + status = icp_amgr_getNumInstances(&num_accel_dev); + LAC_CHECK_STATUS(status); + + /* Allocate memory to store addr of accel_devs */ + pAdfInsts = + malloc(num_accel_dev * sizeof(icp_accel_dev_t *), M_QAT, M_WAITOK); + num_accel_dev = 0; + /* Get ADF to return all accel_devs that support either + * symmetric or asymmetric crypto */ + status = icp_amgr_getAllAccelDevByCapabilities( + (ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC | + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC), + pAdfInsts, + &num_accel_dev); + if (CPA_STATUS_SUCCESS != status) { + LAC_LOG_ERROR("No support for crypto\n"); + *pNumInstances = 0; + free(pAdfInsts, M_QAT); + return status; + } + + for (i = 0; i < num_accel_dev; i++) { + dev_addr = (icp_accel_dev_t *)pAdfInsts[i]; + if (NULL == dev_addr || NULL == dev_addr->pSalHandle) { + continue; + } + + base_addr = dev_addr->pSalHandle; + list_temp = base_addr->crypto_services; + while (NULL != list_temp) { + cyInstanceHandle = SalList_getObject(list_temp); + status = cpaCyInstanceGetInfo2(cyInstanceHandle, &info); + if (CPA_STATUS_SUCCESS == status && + CPA_TRUE == info.isPolled) { + num_inst++; + } + list_temp = SalList_next(list_temp); + } + list_temp = base_addr->asym_services; + while (NULL != list_temp) { + cyInstanceHandle = SalList_getObject(list_temp); + status = cpaCyInstanceGetInfo2(cyInstanceHandle, &info); + if (CPA_STATUS_SUCCESS == status && + CPA_TRUE == info.isPolled) { + num_inst++; + } + list_temp = SalList_next(list_temp); + } + list_temp = base_addr->sym_services; + while (NULL != list_temp) { + cyInstanceHandle = SalList_getObject(list_temp); + status = 
cpaCyInstanceGetInfo2(cyInstanceHandle, &info); + if (CPA_STATUS_SUCCESS == status && + CPA_TRUE == info.isPolled) { + num_inst++; + } + list_temp = SalList_next(list_temp); + } + } + *pNumInstances = num_inst; + free(pAdfInsts, M_QAT); + + + return status; +} + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + *****************************************************************************/ +CpaStatus +cpaCyGetInstances(Cpa16U numInstances, CpaInstanceHandle *pCyInstances) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaInstanceHandle cyInstanceHandle; + CpaInstanceInfo2 info; + icp_accel_dev_t **pAdfInsts = NULL; + icp_accel_dev_t *dev_addr = NULL; + sal_t *base_addr = NULL; + sal_list_t *list_temp = NULL; + Cpa16U num_accel_dev = 0; + Cpa16U num_allocated_instances = 0; + Cpa16U index = 0; + Cpa16U i = 0; + + + LAC_CHECK_NULL_PARAM(pCyInstances); + if (0 == numInstances) { + LAC_INVALID_PARAM_LOG("NumInstances is 0"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Get the number of crypto instances */ + status = cpaCyGetNumInstances(&num_allocated_instances); + if (CPA_STATUS_SUCCESS != status) { + return status; + } + + if (numInstances > num_allocated_instances) { + QAT_UTILS_LOG("Only %d crypto instances available\n", + num_allocated_instances); + return CPA_STATUS_RESOURCE; + } + + /* Get the number of accel devices in the system */ + status = icp_amgr_getNumInstances(&num_accel_dev); + LAC_CHECK_STATUS(status); + + /* Allocate memory to store addr of accel_devs */ + pAdfInsts = + malloc(num_accel_dev * sizeof(icp_accel_dev_t *), M_QAT, M_WAITOK); + + num_accel_dev = 0; + /* Get ADF to return all accel_devs that support either + * symmetric or asymmetric crypto */ + status = icp_amgr_getAllAccelDevByCapabilities( + (ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC | + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC), + pAdfInsts, + &num_accel_dev); + if (CPA_STATUS_SUCCESS != status) { + LAC_LOG_ERROR("No support 
for crypto\n"); + free(pAdfInsts, M_QAT); + return status; + } + + for (i = 0; i < num_accel_dev; i++) { + dev_addr = (icp_accel_dev_t *)pAdfInsts[i]; + /* Note dev_addr cannot be NULL here as numInstances = 0 + * is not valid and if dev_addr = NULL then index = 0 (which + * is less than numInstances and status is set to _RESOURCE + * above + */ + base_addr = dev_addr->pSalHandle; + if (NULL == base_addr) { + continue; + } + list_temp = base_addr->crypto_services; + while (NULL != list_temp) { + if (index > (numInstances - 1)) { + break; + } + cyInstanceHandle = SalList_getObject(list_temp); + status = cpaCyInstanceGetInfo2(cyInstanceHandle, &info); + list_temp = SalList_next(list_temp); + if (CPA_STATUS_SUCCESS != status || + CPA_TRUE != info.isPolled) { + continue; + } + pCyInstances[index] = cyInstanceHandle; + index++; + } + list_temp = base_addr->asym_services; + while (NULL != list_temp) { + if (index > (numInstances - 1)) { + break; + } + cyInstanceHandle = SalList_getObject(list_temp); + status = cpaCyInstanceGetInfo2(cyInstanceHandle, &info); + list_temp = SalList_next(list_temp); + if (CPA_STATUS_SUCCESS != status || + CPA_TRUE != info.isPolled) { + continue; + } + pCyInstances[index] = cyInstanceHandle; + index++; + } + list_temp = base_addr->sym_services; + while (NULL != list_temp) { + if (index > (numInstances - 1)) { + break; + } + cyInstanceHandle = SalList_getObject(list_temp); + status = cpaCyInstanceGetInfo2(cyInstanceHandle, &info); + list_temp = SalList_next(list_temp); + if (CPA_STATUS_SUCCESS != status || + CPA_TRUE != info.isPolled) { + continue; + } + pCyInstances[index] = cyInstanceHandle; + index++; + } + } + free(pAdfInsts, M_QAT); + + return status; +} + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + *****************************************************************************/ +CpaStatus +cpaCyInstanceGetInfo(const CpaInstanceHandle instanceHandle_in, + struct 
_CpaInstanceInfo *pInstanceInfo) +{ + CpaInstanceHandle instanceHandle = NULL; + sal_crypto_service_t *pCryptoService = NULL; + sal_service_t *pGenericService = NULL; + + Cpa8U name[CPA_INST_NAME_SIZE] = + "Intel(R) DH89XXCC instance number: %02x, type: Crypto"; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = Lac_CryptoGetFirstHandle(); + } else { + instanceHandle = instanceHandle_in; + } + + LAC_CHECK_NULL_PARAM(instanceHandle); + LAC_CHECK_NULL_PARAM(pInstanceInfo); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_ASYM | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + + pCryptoService = (sal_crypto_service_t *)instanceHandle; + + pInstanceInfo->type = CPA_INSTANCE_TYPE_CRYPTO; + + /* According to cpa.h instance state is initialized and ready for use + * or shutdown. Therefore need to map our running state to initialised + * or shutdown */ + if (SAL_SERVICE_STATE_RUNNING == + pCryptoService->generic_service_info.state) { + pInstanceInfo->state = CPA_INSTANCE_STATE_INITIALISED; + } else { + pInstanceInfo->state = CPA_INSTANCE_STATE_SHUTDOWN; + } + + pGenericService = (sal_service_t *)instanceHandle; + snprintf((char *)pInstanceInfo->name, + CPA_INST_NAME_SIZE, + (char *)name, + pGenericService->instance); + + pInstanceInfo->name[CPA_INST_NAME_SIZE - 1] = '\0'; + + snprintf((char *)pInstanceInfo->version, + CPA_INSTANCE_MAX_NAME_SIZE_IN_BYTES, + "%d.%d", + CPA_CY_API_VERSION_NUM_MAJOR, + CPA_CY_API_VERSION_NUM_MINOR); + + pInstanceInfo->version[CPA_INSTANCE_MAX_VERSION_SIZE_IN_BYTES - 1] = + '\0'; + return CPA_STATUS_SUCCESS; +} + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + *****************************************************************************/ +CpaStatus +cpaCyInstanceGetInfo2(const CpaInstanceHandle instanceHandle_in, + CpaInstanceInfo2 *pInstanceInfo2) +{ + CpaInstanceHandle instanceHandle = NULL; + sal_crypto_service_t 
*pCryptoService = NULL; + icp_accel_dev_t *dev = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + char keyStr[ADF_CFG_MAX_KEY_LEN_IN_BYTES] = { 0 }; + char valStr[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + char *section = DYN_SEC; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = Lac_CryptoGetFirstHandle(); + } else { + instanceHandle = instanceHandle_in; + } + + LAC_CHECK_NULL_PARAM(instanceHandle); + LAC_CHECK_NULL_PARAM(pInstanceInfo2); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_ASYM | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + + LAC_OS_BZERO(pInstanceInfo2, sizeof(CpaInstanceInfo2)); + pInstanceInfo2->accelerationServiceType = CPA_ACC_SVC_TYPE_CRYPTO; + snprintf((char *)pInstanceInfo2->vendorName, + CPA_INST_VENDOR_NAME_SIZE, + "%s", + SAL_INFO2_VENDOR_NAME); + pInstanceInfo2->vendorName[CPA_INST_VENDOR_NAME_SIZE - 1] = '\0'; + + snprintf((char *)pInstanceInfo2->swVersion, + CPA_INST_SW_VERSION_SIZE, + "Version %d.%d", + SAL_INFO2_DRIVER_SW_VERSION_MAJ_NUMBER, + SAL_INFO2_DRIVER_SW_VERSION_MIN_NUMBER); + pInstanceInfo2->swVersion[CPA_INST_SW_VERSION_SIZE - 1] = '\0'; + + /* Note we can safely read the contents of the crypto service instance + here because icp_amgr_getAllAccelDevByCapabilities() only returns + devs + that have started */ + pCryptoService = (sal_crypto_service_t *)instanceHandle; + pInstanceInfo2->physInstId.packageId = pCryptoService->pkgID; + pInstanceInfo2->physInstId.acceleratorId = + pCryptoService->acceleratorNum; + pInstanceInfo2->physInstId.executionEngineId = + pCryptoService->executionEngine; + pInstanceInfo2->physInstId.busAddress = + icp_adf_get_busAddress(pInstanceInfo2->physInstId.packageId); + + /*set coreAffinity to zero before use */ + LAC_OS_BZERO(pInstanceInfo2->coreAffinity, + sizeof(pInstanceInfo2->coreAffinity)); + CPA_BITMAP_BIT_SET(pInstanceInfo2->coreAffinity, + pCryptoService->coreAffinity); + pInstanceInfo2->nodeAffinity = 
pCryptoService->nodeAffinity; + + if (SAL_SERVICE_STATE_RUNNING == + pCryptoService->generic_service_info.state) { + pInstanceInfo2->operState = CPA_OPER_STATE_UP; + } else { + pInstanceInfo2->operState = CPA_OPER_STATE_DOWN; + } + + pInstanceInfo2->requiresPhysicallyContiguousMemory = CPA_TRUE; + if (SAL_RESP_POLL_CFG_FILE == pCryptoService->isPolled) { + pInstanceInfo2->isPolled = CPA_TRUE; + } else { + pInstanceInfo2->isPolled = CPA_FALSE; + } + pInstanceInfo2->isOffloaded = CPA_TRUE; + + /* Get the instance name and part name*/ + dev = icp_adf_getAccelDevByAccelId(pCryptoService->pkgID); + if (NULL == dev) { + LAC_LOG_ERROR("Can not find device for the instance\n"); + LAC_OS_BZERO(pInstanceInfo2, sizeof(CpaInstanceInfo2)); + return CPA_STATUS_FAIL; + } + snprintf((char *)pInstanceInfo2->partName, + CPA_INST_PART_NAME_SIZE, + SAL_INFO2_PART_NAME, + dev->deviceName); + pInstanceInfo2->partName[CPA_INST_PART_NAME_SIZE - 1] = '\0'; + + status = + Sal_StringParsing("Cy", + pCryptoService->generic_service_info.instance, + "Name", + keyStr); + LAC_CHECK_STATUS(status); + + if (CPA_FALSE == pCryptoService->generic_service_info.is_dyn) { + section = icpGetProcessName(); + } + + status = icp_adf_cfgGetParamValue(dev, section, keyStr, valStr); + LAC_CHECK_STATUS(status); + + snprintf((char *)pInstanceInfo2->instName, + CPA_INST_NAME_SIZE, + "%s", + valStr); + snprintf((char *)pInstanceInfo2->instID, + CPA_INST_ID_SIZE, + "%s_%s", + section, + valStr); + return CPA_STATUS_SUCCESS; +} + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + *****************************************************************************/ + +CpaStatus +cpaCyQueryCapabilities(const CpaInstanceHandle instanceHandle_in, + CpaCyCapabilitiesInfo *pCapInfo) +{ + /* Verify Instance exists */ + CpaInstanceHandle instanceHandle = NULL; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = Lac_CryptoGetFirstHandle(); + } 
else { + instanceHandle = instanceHandle_in; + } + + LAC_CHECK_NULL_PARAM(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_ASYM | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + LAC_CHECK_NULL_PARAM(pCapInfo); + + SalCtrl_CyQueryCapabilities((sal_service_t *)instanceHandle, pCapInfo); + + return CPA_STATUS_SUCCESS; +} + +/** + ****************************************************************************** + * @ingroup cpaCySym + *****************************************************************************/ +CpaStatus +cpaCySymQueryCapabilities(const CpaInstanceHandle instanceHandle_in, + CpaCySymCapabilitiesInfo *pCapInfo) +{ + sal_crypto_service_t *pCryptoService = NULL; + sal_service_t *pGenericService = NULL; + CpaInstanceHandle instanceHandle = NULL; + + /* Verify Instance exists */ + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO); + if (!instanceHandle) { + instanceHandle = + Lac_GetFirstHandle(SAL_SERVICE_TYPE_CRYPTO_SYM); + } + } else { + instanceHandle = instanceHandle_in; + } + + LAC_CHECK_NULL_PARAM(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_ASYM | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + LAC_CHECK_NULL_PARAM(pCapInfo); + + pCryptoService = (sal_crypto_service_t *)instanceHandle; + pGenericService = &(pCryptoService->generic_service_info); + + memset(pCapInfo, '\0', sizeof(CpaCySymCapabilitiesInfo)); + /* An asym crypto instance does not support sym service */ + if (SAL_SERVICE_TYPE_CRYPTO_ASYM == pGenericService->type) { + return CPA_STATUS_SUCCESS; + } + + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_NULL); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_ARC4); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_AES_ECB); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_AES_CBC); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, 
CPA_CY_SYM_CIPHER_AES_CTR); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_AES_CCM); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_AES_GCM); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_DES_ECB); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_DES_CBC); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_3DES_ECB); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_3DES_CBC); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_3DES_CTR); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_KASUMI_F8); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_SNOW3G_UEA2); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_AES_F8); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_AES_XTS); + + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_MD5); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_SHA1); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_SHA224); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_SHA256); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_SHA384); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_SHA512); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_AES_XCBC); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_AES_CCM); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_AES_GCM); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_KASUMI_F9); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_SNOW3G_UIA2); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_AES_CMAC); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_AES_GMAC); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_AES_CBC_MAC); + + if (pGenericService->capabilitiesMask & + ICP_ACCEL_CAPABILITIES_CRYPTO_ZUC) { + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, + CPA_CY_SYM_CIPHER_ZUC_EEA3); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_ZUC_EIA3); + } + + if (pGenericService->capabilitiesMask & + 
ICP_ACCEL_CAPABILITIES_CHACHA_POLY) { + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_POLY); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, CPA_CY_SYM_CIPHER_CHACHA); + } + + if (pGenericService->capabilitiesMask & ICP_ACCEL_CAPABILITIES_SM3) { + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_SM3); + } + + pCapInfo->partialPacketSupported = CPA_TRUE; + + if (pGenericService->capabilitiesMask & ICP_ACCEL_CAPABILITIES_SHA3) { + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_SHA3_256); + pCapInfo->partialPacketSupported = CPA_FALSE; + } + + if (pGenericService->capabilitiesMask & + ICP_ACCEL_CAPABILITIES_SHA3_EXT) { + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_SHA3_224); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_SHA3_256); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_SHA3_384); + CPA_BITMAP_BIT_SET(pCapInfo->hashes, CPA_CY_SYM_HASH_SHA3_512); + pCapInfo->partialPacketSupported = CPA_FALSE; + } + + if (pGenericService->capabilitiesMask & ICP_ACCEL_CAPABILITIES_SM4) { + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, + CPA_CY_SYM_CIPHER_SM4_ECB); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, + CPA_CY_SYM_CIPHER_SM4_CBC); + CPA_BITMAP_BIT_SET(pCapInfo->ciphers, + CPA_CY_SYM_CIPHER_SM4_CTR); + pCapInfo->partialPacketSupported = CPA_FALSE; + } + + return CPA_STATUS_SUCCESS; +} + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + *****************************************************************************/ +CpaStatus +cpaCySetAddressTranslation(const CpaInstanceHandle instanceHandle_in, + CpaVirtualToPhysical virtual2physical) +{ + + CpaInstanceHandle instanceHandle = NULL; + sal_service_t *pService = NULL; + + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + instanceHandle = Lac_CryptoGetFirstHandle(); + } else { + instanceHandle = instanceHandle_in; + } + + LAC_CHECK_NULL_PARAM(instanceHandle); + SAL_CHECK_INSTANCE_TYPE(instanceHandle, + (SAL_SERVICE_TYPE_CRYPTO | + 
SAL_SERVICE_TYPE_CRYPTO_ASYM | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + LAC_CHECK_NULL_PARAM(virtual2physical); + + pService = (sal_service_t *)instanceHandle; + + pService->virt2PhysClient = virtual2physical; + + return CPA_STATUS_SUCCESS; +} + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + * Crypto specific polling function which polls a crypto instance. + *****************************************************************************/ +CpaStatus +icp_sal_CyPollInstance(CpaInstanceHandle instanceHandle_in, + Cpa32U response_quota) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_crypto_service_t *crypto_handle = NULL; + sal_service_t *gen_handle = NULL; + icp_comms_trans_handle trans_hndTable[MAX_CY_RX_RINGS] = { 0 }; + Cpa32U num_rx_rings = 0; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + crypto_handle = + (sal_crypto_service_t *)Lac_CryptoGetFirstHandle(); + } else { + crypto_handle = (sal_crypto_service_t *)instanceHandle_in; + } + LAC_CHECK_NULL_PARAM(crypto_handle); + SAL_RUNNING_CHECK(crypto_handle); + SAL_CHECK_INSTANCE_TYPE(crypto_handle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_ASYM | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + + gen_handle = &(crypto_handle->generic_service_info); + + /* + * From the instanceHandle we must get the trans_handle and send + * down to adf for polling. + * Populate our trans handle table with the appropriate handles. 
+ */ + + switch (gen_handle->type) { + case SAL_SERVICE_TYPE_CRYPTO_ASYM: + trans_hndTable[TH_CY_RX_0] = + crypto_handle->trans_handle_asym_rx; + num_rx_rings = 1; + break; + case SAL_SERVICE_TYPE_CRYPTO_SYM: + trans_hndTable[TH_CY_RX_0] = crypto_handle->trans_handle_sym_rx; + num_rx_rings = 1; + break; + case SAL_SERVICE_TYPE_CRYPTO: + trans_hndTable[TH_CY_RX_0] = crypto_handle->trans_handle_sym_rx; + trans_hndTable[TH_CY_RX_1] = + crypto_handle->trans_handle_asym_rx; + num_rx_rings = MAX_CY_RX_RINGS; + break; + default: + break; + } + + /* Call adf to do the polling. */ + status = + icp_adf_pollInstance(trans_hndTable, num_rx_rings, response_quota); + + return status; +} + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + * Crypto specific polling function which polls sym crypto ring. + *****************************************************************************/ +CpaStatus +icp_sal_CyPollSymRing(CpaInstanceHandle instanceHandle_in, + Cpa32U response_quota) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_crypto_service_t *crypto_handle = NULL; + icp_comms_trans_handle trans_hndTable[NUM_CRYPTO_SYM_RX_RINGS] = { 0 }; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle_in) { + crypto_handle = (sal_crypto_service_t *)Lac_GetFirstHandle( + SAL_SERVICE_TYPE_CRYPTO_SYM); + } else { + crypto_handle = (sal_crypto_service_t *)instanceHandle_in; + } + LAC_CHECK_NULL_PARAM(crypto_handle); + SAL_CHECK_INSTANCE_TYPE(crypto_handle, + (SAL_SERVICE_TYPE_CRYPTO | + SAL_SERVICE_TYPE_CRYPTO_SYM)); + SAL_RUNNING_CHECK(crypto_handle); + + /* + * From the instanceHandle we must get the trans_handle and send + * down to adf for polling. + * Populate our trans handle table with the appropriate handles. + */ + trans_hndTable[TH_SINGLE_RX] = crypto_handle->trans_handle_sym_rx; + /* Call adf to do the polling. 
*/ + status = icp_adf_pollInstance(trans_hndTable, + NUM_CRYPTO_SYM_RX_RINGS, + response_quota); + return status; +} + + +/** + ****************************************************************************** + * @ingroup cpaCyCommon + * Crypto specific polling function which polls an nrbg crypto ring. + *****************************************************************************/ +CpaStatus +icp_sal_CyPollNRBGRing(CpaInstanceHandle instanceHandle_in, + Cpa32U response_quota) +{ + return CPA_STATUS_UNSUPPORTED; +} + +/* Returns the handle to the first asym crypto instance */ +static CpaInstanceHandle +Lac_GetFirstAsymHandle(icp_accel_dev_t *adfInsts[ADF_MAX_DEVICES], + Cpa16U num_dev) +{ + icp_accel_dev_t *dev_addr = NULL; + sal_t *base_addr = NULL; + sal_list_t *list_temp = NULL; + CpaInstanceHandle cyInst = NULL; + Cpa16U i = 0; + + for (i = 0; i < num_dev; i++) { + dev_addr = (icp_accel_dev_t *)adfInsts[i]; + base_addr = dev_addr->pSalHandle; + if ((NULL != base_addr) && (NULL != base_addr->asym_services)) { + list_temp = base_addr->asym_services; + cyInst = SalList_getObject(list_temp); + break; + } + } + + return cyInst; +} + +/* Returns the handle to the first sym crypto instance */ +static CpaInstanceHandle +Lac_GetFirstSymHandle(icp_accel_dev_t *adfInsts[ADF_MAX_DEVICES], + Cpa16U num_dev) +{ + icp_accel_dev_t *dev_addr = NULL; + sal_t *base_addr = NULL; + sal_list_t *list_temp = NULL; + CpaInstanceHandle cyInst = NULL; + Cpa16U i = 0; + + for (i = 0; i < num_dev; i++) { + dev_addr = (icp_accel_dev_t *)adfInsts[i]; + base_addr = dev_addr->pSalHandle; + if ((NULL != base_addr) && (NULL != base_addr->sym_services)) { + list_temp = base_addr->sym_services; + cyInst = SalList_getObject(list_temp); + break; + } + } + + return cyInst; +} + +/* Returns the handle to the first crypto instance + * Note that the crypto instance in this case supports + * both asym and sym services */ +static CpaInstanceHandle +Lac_GetFirstCyHandle(icp_accel_dev_t 
*adfInsts[ADF_MAX_DEVICES], Cpa16U num_dev) +{ + icp_accel_dev_t *dev_addr = NULL; + sal_t *base_addr = NULL; + sal_list_t *list_temp = NULL; + CpaInstanceHandle cyInst = NULL; + Cpa16U i = 0; + + for (i = 0; i < num_dev; i++) { + dev_addr = (icp_accel_dev_t *)adfInsts[i]; + base_addr = dev_addr->pSalHandle; + if ((NULL != base_addr) && + (NULL != base_addr->crypto_services)) { + list_temp = base_addr->crypto_services; + cyInst = SalList_getObject(list_temp); + break; + } + } + return cyInst; +} + +CpaInstanceHandle +Lac_GetFirstHandle(sal_service_type_t svc_type) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + static icp_accel_dev_t *adfInsts[ADF_MAX_DEVICES] = { 0 }; + CpaInstanceHandle cyInst = NULL; + Cpa16U num_cy_dev = 0; + Cpa32U capabilities = 0; + + switch (svc_type) { + case SAL_SERVICE_TYPE_CRYPTO_ASYM: + capabilities = ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC; + break; + case SAL_SERVICE_TYPE_CRYPTO_SYM: + capabilities = ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC; + break; + case SAL_SERVICE_TYPE_CRYPTO: + capabilities = ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC; + capabilities |= ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC; + break; + default: + LAC_LOG_ERROR("Invalid service type\n"); + return NULL; + break; + } + /* Only need 1 dev with crypto enabled - so check all devices*/ + status = icp_amgr_getAllAccelDevByEachCapability(capabilities, + adfInsts, + &num_cy_dev); + if ((0 == num_cy_dev) || (CPA_STATUS_SUCCESS != status)) { + LAC_LOG_ERROR("No crypto devices enabled in the system\n"); + return NULL; + } + + switch (svc_type) { + case SAL_SERVICE_TYPE_CRYPTO_ASYM: + /* Try to find an asym only instance first */ + cyInst = Lac_GetFirstAsymHandle(adfInsts, num_cy_dev); + /* Try to find a cy instance since it also supports asym */ + if (NULL == cyInst) { + cyInst = Lac_GetFirstCyHandle(adfInsts, num_cy_dev); + } + break; + case SAL_SERVICE_TYPE_CRYPTO_SYM: + /* Try to find a sym only instance first */ + cyInst = Lac_GetFirstSymHandle(adfInsts, num_cy_dev); + /* 
Try to find a cy instance since it also supports sym */ + if (NULL == cyInst) { + cyInst = Lac_GetFirstCyHandle(adfInsts, num_cy_dev); + } + break; + case SAL_SERVICE_TYPE_CRYPTO: + /* Try to find a cy instance */ + cyInst = Lac_GetFirstCyHandle(adfInsts, num_cy_dev); + break; + default: + break; + } + if (NULL == cyInst) { + LAC_LOG_ERROR("No remaining crypto instances available\n"); + } + return cyInst; +} + +CpaStatus +icp_sal_NrbgGetInflightRequests(CpaInstanceHandle instanceHandle_in, + Cpa32U *maxInflightRequests, + Cpa32U *numInflightRequests) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +icp_sal_SymGetInflightRequests(CpaInstanceHandle instanceHandle, + Cpa32U *maxInflightRequests, + Cpa32U *numInflightRequests) +{ + sal_crypto_service_t *crypto_handle = NULL; + + crypto_handle = (sal_crypto_service_t *)instanceHandle; + + LAC_CHECK_NULL_PARAM(crypto_handle); + LAC_CHECK_NULL_PARAM(maxInflightRequests); + LAC_CHECK_NULL_PARAM(numInflightRequests); + SAL_RUNNING_CHECK(crypto_handle); + + return icp_adf_getInflightRequests(crypto_handle->trans_handle_sym_tx, + maxInflightRequests, + numInflightRequests); +} + + +CpaStatus +icp_sal_dp_SymGetInflightRequests(CpaInstanceHandle instanceHandle, + Cpa32U *maxInflightRequests, + Cpa32U *numInflightRequests) +{ + sal_crypto_service_t *crypto_handle = NULL; + + crypto_handle = (sal_crypto_service_t *)instanceHandle; + + return icp_adf_dp_getInflightRequests( + crypto_handle->trans_handle_sym_tx, + maxInflightRequests, + numInflightRequests); +} + Index: sys/dev/qat/qat_api/common/ctrl/sal_ctrl_services.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/ctrl/sal_ctrl_services.c @@ -0,0 +1,1344 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file sal_ctrl_services.c + * + * @ingroup SalCtrl + * + * 
@description + * This file contains the core of the service controller implementation. + * + *****************************************************************************/ + +/* QAT-API includes */ +#include "cpa.h" +#include "cpa_cy_key.h" +#include "cpa_cy_ln.h" +#include "cpa_cy_dh.h" +#include "cpa_cy_dsa.h" +#include "cpa_cy_rsa.h" +#include "cpa_cy_ec.h" +#include "cpa_cy_prime.h" +#include "cpa_cy_sym.h" +#include "cpa_dc.h" + +/* QAT utils includes */ +#include "qat_utils.h" + +/* ADF includes */ +#include "icp_adf_init.h" +#include "icp_adf_transport.h" +#include "icp_accel_devices.h" +#include "icp_adf_cfg.h" +#include "icp_adf_accel_mgr.h" +#include "icp_adf_debug.h" + +/* FW includes */ +#include "icp_qat_fw_la.h" + +/* SAL includes */ +#include "lac_mem.h" +#include "lac_mem_pools.h" +#include "lac_list.h" +#include "lac_hooks.h" +#include "sal_string_parse.h" +#include "lac_common.h" +#include "lac_sal_types.h" +#include "lac_sal.h" +#include "lac_sal_ctrl.h" +#include "icp_sal_versions.h" + +#define MAX_SUBSYSTEM_RETRY 64 + +static char *subsystem_name = "SAL"; +/**< Name used by ADF to identify this component. */ +static char *cy_dir_name = "cy"; +static char *asym_dir_name = "asym"; +static char *sym_dir_name = "sym"; +static char *dc_dir_name = "dc"; +/**< Stats dir names. */ +static char *ver_file_name = "version"; + +static subservice_registation_handle_t sal_service_reg_handle; +/**< Data structure used by ADF to keep a reference to this component. */ + +/* + * @ingroup SalCtrl + * @description + * This function is used to parse the results from ADF + * in response to the ServicesEnabled query. The results are + * semicolon-separated. Internally, the bitmask represented + * by enabled_service is used to track which features are enabled. + * + * @context + * This function is called from the SalCtrl_ServiceEventInit function.
+ * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device pointer to icp_accel_dev_t structure + * @param[in] pEnabledServices pointer to memory where enabled services will + * be written. + * @retval Status + */ +CpaStatus +SalCtrl_GetEnabledServices(icp_accel_dev_t *device, Cpa32U *pEnabledServices) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + char param_value[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + char *token = NULL; + char *running = NULL; + + *pEnabledServices = 0; + + memset(param_value, 0, ADF_CFG_MAX_VAL_LEN_IN_BYTES); + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + "ServicesEnabled", + param_value); + + if (CPA_STATUS_SUCCESS == status) { + running = param_value; + + token = strsep(&running, ";"); + + while (NULL != token) { + do { + if (strncmp(token, "asym", strlen("asym")) == + 0) { + *pEnabledServices |= + SAL_SERVICE_TYPE_CRYPTO_ASYM; + break; + } + if (strncmp(token, "sym", strlen("sym")) == 0) { + *pEnabledServices |= + SAL_SERVICE_TYPE_CRYPTO_SYM; + break; + } + if (strncmp(token, "cy", strlen("cy")) == 0) { + *pEnabledServices |= + SAL_SERVICE_TYPE_CRYPTO; + break; + } + if (strncmp(token, "dc", strlen("dc")) == 0) { + *pEnabledServices |= + SAL_SERVICE_TYPE_COMPRESSION; + break; + } + + QAT_UTILS_LOG( + "Error parsing enabled services from ADF.\n"); + return CPA_STATUS_FAIL; + + } while (0); + token = strsep(&running, ";"); + } + } else { + QAT_UTILS_LOG("Failed to get enabled services from ADF.\n"); + } + return status; +} + +/* + * @ingroup SalCtrl + * @description + * This function is used to check whether a service is enabled + * + * @context + * This functions is called from the SalCtrl_ServiceEventInit function. 
+ * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] enabled_services It is the bitmask for the enabled services + * @param[in] service It is the service we want to check for + */ +CpaBoolean +SalCtrl_IsServiceEnabled(Cpa32U enabled_services, sal_service_type_t service) +{ + return (CpaBoolean)((enabled_services & (Cpa32U)(service)) != 0); +} + +/* + * @ingroup SalCtrl + * @description + * This function is used to check whether the enabled services have associated + * hardware capability support + * + * @context + * This function is called from the SalCtrl_ServiceEventInit function. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device A pointer to an icp_accel_dev_t + * @param[in] enabled_services It is the bitmask for the enabled services + */ + +CpaStatus +SalCtrl_GetSupportedServices(icp_accel_dev_t *device, Cpa32U enabled_services) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + Cpa32U capabilitiesMask = 0; + + status = icp_amgr_getAccelDevCapabilities(device, &capabilitiesMask); + + if (CPA_STATUS_SUCCESS == status) { + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO)) { + if (!(capabilitiesMask & + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC) || + !(capabilitiesMask & + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC)) { + QAT_UTILS_LOG( + "Device does not support Crypto service\n"); + status = CPA_STATUS_FAIL; + } + } + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO_ASYM)) { + if (!(capabilitiesMask & + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC)) { + QAT_UTILS_LOG( + "Device does not support Asym service\n"); + status = CPA_STATUS_FAIL; + } + } + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO_SYM)) { + if (!(capabilitiesMask & + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC)) { + QAT_UTILS_LOG( + "Device does not support Sym service\n"); + status = CPA_STATUS_FAIL; + }
+ } + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_COMPRESSION)) { + if (!(capabilitiesMask & + ICP_ACCEL_CAPABILITIES_COMPRESSION)) { + QAT_UTILS_LOG( + "Device does not support Compression service.\n"); + status = CPA_STATUS_FAIL; + } + } + } + + return status; +} + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to check if a service is supported + * on the device. The key difference between this and + * SalCtrl_GetSupportedServices() is that the latter treats it as + * an error if the service is unsupported. + * + * @context + * This can be called anywhere. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * param[in] device + * param[in] service service or services to check + * + *************************************************************************/ +CpaBoolean +SalCtrl_IsServiceSupported(icp_accel_dev_t *device, + sal_service_type_t service_to_check) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + Cpa32U capabilitiesMask = 0; + CpaBoolean service_supported = CPA_TRUE; + + if (!(SalCtrl_IsServiceEnabled((Cpa32U)service_to_check, + SAL_SERVICE_TYPE_CRYPTO)) && + !(SalCtrl_IsServiceEnabled((Cpa32U)service_to_check, + SAL_SERVICE_TYPE_CRYPTO_ASYM)) && + !(SalCtrl_IsServiceEnabled((Cpa32U)service_to_check, + SAL_SERVICE_TYPE_CRYPTO_SYM)) && + !(SalCtrl_IsServiceEnabled((Cpa32U)service_to_check, + SAL_SERVICE_TYPE_COMPRESSION))) { + QAT_UTILS_LOG("Invalid service type\n"); + service_supported = CPA_FALSE; + } + + status = icp_amgr_getAccelDevCapabilities(device, &capabilitiesMask); + + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Can not get device capabilities.\n"); + return CPA_FALSE; + } + + if (SalCtrl_IsServiceEnabled((Cpa32U)service_to_check, + SAL_SERVICE_TYPE_CRYPTO)) { + if (!(capabilitiesMask & + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC) || + !(capabilitiesMask & + 
ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC)) { + QAT_UTILS_LOG( + "Device does not support Crypto service\n"); + service_supported = CPA_FALSE; + } + } + if (SalCtrl_IsServiceEnabled((Cpa32U)service_to_check, + SAL_SERVICE_TYPE_CRYPTO_ASYM)) { + if (!(capabilitiesMask & + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC)) { + QAT_UTILS_LOG("Device does not support Asym service\n"); + service_supported = CPA_FALSE; + } + } + if (SalCtrl_IsServiceEnabled((Cpa32U)service_to_check, + SAL_SERVICE_TYPE_CRYPTO_SYM)) { + if (!(capabilitiesMask & + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC)) { + QAT_UTILS_LOG("Device does not support Sym service\n"); + service_supported = CPA_FALSE; + } + } + if (SalCtrl_IsServiceEnabled((Cpa32U)service_to_check, + SAL_SERVICE_TYPE_COMPRESSION)) { + if (!(capabilitiesMask & ICP_ACCEL_CAPABILITIES_COMPRESSION)) { + QAT_UTILS_LOG( + "Device does not support Compression service.\n"); + service_supported = CPA_FALSE; + } + } + + return service_supported; +} + +/* + * @ingroup SalCtrl + * @description + * This function is used to retrieve how many instances are + * to be configured for process specific service. + * + * @context + * This functions is called from the SalCtrl_ServiceEventInit function. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device A pointer to an icp_accel_dev_t + * @param[in] key Represents the parameter's name we want to query + * @param[out] pCount Pointer to memory where num instances will be stored + * @retval status returned status from ADF or _FAIL if number of instances + * is out of range for the device. 
+ */ +static CpaStatus +SalCtrl_GetInstanceCount(icp_accel_dev_t *device, char *key, Cpa32U *pCount) +{ + CpaStatus status = CPA_STATUS_FAIL; + char param_value[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + + memset(param_value, 0, ADF_CFG_MAX_VAL_LEN_IN_BYTES); + status = icp_adf_cfgGetParamValue(device, + icpGetProcessName(), + key, + param_value); + if (CPA_STATUS_SUCCESS == status) { + *pCount = + (Cpa32U)(Sal_Strtoul(param_value, NULL, SAL_CFG_BASE_DEC)); + if (*pCount > SAL_MAX_NUM_INSTANCES_PER_DEV) { + QAT_UTILS_LOG("Number of instances is out of range.\n"); + status = CPA_STATUS_FAIL; + } + } + return status; +} + +/************************************************************************** + * @ingroup SalCtrl + * @description + * This function calls the shutdown function on all the + * service instances. + * It also frees all service instance memory allocated at Init. + * + * @context + * This function is called from the SalCtrl_ServiceEventShutdown + * function. + * + * @assumptions + * params[in] should not be NULL + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] services A pointer to the container of services + * @param[in] dbg_dir A pointer to the debug directory + * @param[in] svc_type The type of the service instance + * + ****************************************************************************/ +static CpaStatus +SalCtrl_ServiceShutdown(icp_accel_dev_t *device, + sal_list_t **services, + debug_dir_info_t **debug_dir, + sal_service_type_t svc_type) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_list_t *dyn_service = NULL; + sal_service_t *inst = NULL; + + /* Call Shutdown function for each service instance */ + SAL_FOR_EACH(*services, sal_service_t, device, shutdown, status); + + if (*debug_dir) { + icp_adf_debugRemoveDir(*debug_dir); + LAC_OS_FREE(*debug_dir); + *debug_dir = NULL; + } + + if (!icp_adf_is_dev_in_reset(device)) { + dyn_service = *services; + while 
(dyn_service) { + inst = (sal_service_t *)SalList_getObject(dyn_service); + if (CPA_TRUE == inst->is_dyn) { + icp_adf_putDynInstance(device, + (adf_service_type_t) + svc_type, + inst->instance); + } + dyn_service = SalList_next(dyn_service); + } + /* Free Sal services controller memory */ + SalList_free(services); + } else { + sal_list_t *curr_element = NULL; + sal_service_t *service = NULL; + curr_element = *services; + while (NULL != curr_element) { + service = + (sal_service_t *)SalList_getObject(curr_element); + service->state = SAL_SERVICE_STATE_RESTARTING; + curr_element = SalList_next(curr_element); + } + } + + return status; +} + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to initialise the service instances. + * It allocates memory for service instances and invokes the + * Init function on them. + * + * @context + * This function is called from the SalCtrl_ServiceEventInit function. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] services A pointer to the container of services + * @param[in] dbg_dir A pointer to the debug directory + * @param[in] dbg_dir_name Name of the debug directory + * @param[in] tail_list SAL's list of services + * @param[in] instance_count Number of instances + * @param[in] svc_type The type of the service instance + * + *************************************************************************/ +static CpaStatus +SalCtrl_ServiceInit(icp_accel_dev_t *device, + sal_list_t **services, + debug_dir_info_t **dbg_dir, + char *dbg_dir_name, + sal_list_t *tail_list, + Cpa32U instance_count, + sal_service_type_t svc_type) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_service_t *pInst = NULL; + Cpa32U i = 0; + debug_dir_info_t *debug_dir = NULL; + + debug_dir = LAC_OS_MALLOC(sizeof(debug_dir_info_t)); + if (NULL == debug_dir) { 
+ QAT_UTILS_LOG("Failed to allocate memory for debug dir.\n"); + return CPA_STATUS_RESOURCE; + } + debug_dir->name = dbg_dir_name; + debug_dir->parent = NULL; + status = icp_adf_debugAddDir(device, debug_dir); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to add debug dir.\n"); + LAC_OS_FREE(debug_dir); + debug_dir = NULL; + return status; + } + + if (!icp_adf_is_dev_in_reset(device)) { + for (i = 0; i < instance_count; i++) { + status = SalCtrl_ServiceCreate(svc_type, i, &pInst); + if (CPA_STATUS_SUCCESS != status) { + break; + } + pInst->debug_parent_dir = debug_dir; + pInst->capabilitiesMask = device->accelCapabilitiesMask; + status = SalList_add(services, &tail_list, pInst); + if (CPA_STATUS_SUCCESS != status) { + free(pInst, M_QAT); + } + } + } else { + sal_list_t *curr_element = *services; + sal_service_t *service = NULL; + while (NULL != curr_element) { + service = + (sal_service_t *)SalList_getObject(curr_element); + service->debug_parent_dir = debug_dir; + + if (CPA_TRUE == service->isInstanceStarted) { + icp_qa_dev_get(device); + } + + curr_element = SalList_next(curr_element); + } + } + + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to allocate all instances.\n"); + icp_adf_debugRemoveDir(debug_dir); + LAC_OS_FREE(debug_dir); + debug_dir = NULL; + SalList_free(services); + return status; + } + + /* Call init function for each service instance */ + SAL_FOR_EACH(*services, sal_service_t, device, init, status); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to initialise all service instances.\n"); + /* shutdown all instances initialised before error */ + SAL_FOR_EACH_STATE(*services, + sal_service_t, + device, + shutdown, + SAL_SERVICE_STATE_INITIALIZED); + icp_adf_debugRemoveDir(debug_dir); + LAC_OS_FREE(debug_dir); + debug_dir = NULL; + SalList_free(services); + return status; + } + /* initialize the debug directory for relevant service */ + *dbg_dir = debug_dir; + + return status; +} + 
+/************************************************************************** + * @ingroup SalCtrl + * @description + * This function calls the start function on all the service instances. + * + * @context + * This function is called from the SalCtrl_ServiceEventStart function. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] services A pointer to the container of services + * + **************************************************************************/ +static CpaStatus +SalCtrl_ServiceStart(icp_accel_dev_t *device, sal_list_t *services) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + /* Call Start function for each service instance */ + SAL_FOR_EACH(services, sal_service_t, device, start, status); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to start all instances.\n"); + /* stop all instances started before error */ + SAL_FOR_EACH_STATE(services, + sal_service_t, + device, + stop, + SAL_SERVICE_STATE_RUNNING); + return status; + } + + if (icp_adf_is_dev_in_reset(device)) { + sal_list_t *curr_element = services; + sal_service_t *service = NULL; + while (NULL != curr_element) { + service = + (sal_service_t *)SalList_getObject(curr_element); + if (service->notification_cb) { + service->notification_cb( + service, + service->cb_tag, + CPA_INSTANCE_EVENT_RESTARTED); + } + curr_element = SalList_next(curr_element); + } + } + + return status; +} + +/**************************************************************************** + * @ingroup SalCtrl + * @description + * This function calls the stop function on all the + * service instances. + * + * @context + * This function is called from the SalCtrl_ServiceEventStop function. 
+ * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] services A pointer to the container of services + * + *************************************************************************/ +static CpaStatus +SalCtrl_ServiceStop(icp_accel_dev_t *device, sal_list_t *services) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + /* Calling restarting functions */ + if (icp_adf_is_dev_in_reset(device)) { + sal_list_t *curr_element = services; + sal_service_t *service = NULL; + while (NULL != curr_element) { + service = + (sal_service_t *)SalList_getObject(curr_element); + if (service->notification_cb) { + service->notification_cb( + service, + service->cb_tag, + CPA_INSTANCE_EVENT_RESTARTING); + } + curr_element = SalList_next(curr_element); + } + } + + /* Call Stop function for each service instance */ + SAL_FOR_EACH(services, sal_service_t, device, stop, status); + + return status; +} + +/* + * @ingroup SalCtrl + * @description + * This function is used to print hardware and software versions in proc + * filesystem entry via ADF Debug interface + * + * @context + * This functions is called from proc filesystem interface + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] private_data A pointer to a private data passed to the + * function while adding a debug file. + * @param[out] data Pointer to a buffer where version information + * needs to be printed to. + * @param[in] size Size of a buffer pointed by data. 
+ * @param[in] offset Offset in a debug file + * + * @retval 0 This function always returns 0 + */ +static int +SalCtrl_VersionDebug(void *private_data, char *data, int size, int offset) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + Cpa32U len = 0; + icp_accel_dev_t *device = (icp_accel_dev_t *)private_data; + char param_value[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + + len += snprintf( + data + len, + size - len, + SEPARATOR BORDER + " Hardware and Software versions for device %d " BORDER + "\n" SEPARATOR, + device->accelId); + + memset(param_value, 0, ADF_CFG_MAX_VAL_LEN_IN_BYTES); + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + ICP_CFG_HW_REV_ID_KEY, + param_value); + LAC_CHECK_STATUS(status); + + len += snprintf(data + len, + size - len, + " Hardware Version: %s %s \n", + param_value, + get_sku_info(device->sku)); + + memset(param_value, 0, ADF_CFG_MAX_VAL_LEN_IN_BYTES); + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + ICP_CFG_UOF_VER_KEY, + param_value); + LAC_CHECK_STATUS(status); + + len += snprintf(data + len, + size - len, + " Firmware Version: %s \n", + param_value); + memset(param_value, 0, ADF_CFG_MAX_VAL_LEN_IN_BYTES); + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + ICP_CFG_MMP_VER_KEY, + param_value); + LAC_CHECK_STATUS(status); + + len += snprintf(data + len, + size - len, + " MMP Version: %s \n", + param_value); + len += snprintf(data + len, + size - len, + " Driver Version: %d.%d.%d \n", + SAL_INFO2_DRIVER_SW_VERSION_MAJ_NUMBER, + SAL_INFO2_DRIVER_SW_VERSION_MIN_NUMBER, + SAL_INFO2_DRIVER_SW_VERSION_PATCH_NUMBER); + + memset(param_value, 0, ADF_CFG_MAX_VAL_LEN_IN_BYTES); + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + ICP_CFG_LO_COMPATIBLE_DRV_KEY, + param_value); + LAC_CHECK_STATUS(status); + + len += snprintf(data + len, + size - len, + " Lowest Compatible Driver: %s \n", + param_value); + + len += snprintf(data + len, + size - len, + " QuickAssist 
API CY Version: %d.%d \n", + CPA_CY_API_VERSION_NUM_MAJOR, + CPA_CY_API_VERSION_NUM_MINOR); + len += snprintf(data + len, + size - len, + " QuickAssist API DC Version: %d.%d \n", + CPA_DC_API_VERSION_NUM_MAJOR, + CPA_DC_API_VERSION_NUM_MINOR); + + len += snprintf(data + len, size - len, SEPARATOR); + return 0; +} + +/************************************************************************** + * @ingroup SalCtrl + * @description + * This function calls the shutdown function on all the service + * instances. It also frees all service instance memory + * allocated at Init. + * + * @context + * This function is called from the SalCtrl_ServiceEventHandler function. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] enabled_services Services enabled by user + * + ****************************************************************************/ +static CpaStatus +SalCtrl_ServiceEventShutdown(icp_accel_dev_t *device, Cpa32U enabled_services) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaStatus ret_status = CPA_STATUS_SUCCESS; + sal_t *service_container = (sal_t *)device->pSalHandle; + + if (NULL == service_container) { + QAT_UTILS_LOG("Private data is NULL\n"); + return CPA_STATUS_FATAL; + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO)) { + status = + SalCtrl_ServiceShutdown(device, + &service_container->crypto_services, + &service_container->cy_dir, + SAL_SERVICE_TYPE_CRYPTO); + if (CPA_STATUS_SUCCESS != status) { + ret_status = status; + } + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO_ASYM)) { + status = + SalCtrl_ServiceShutdown(device, + &service_container->asym_services, + &service_container->asym_dir, + SAL_SERVICE_TYPE_CRYPTO_ASYM); + if (CPA_STATUS_SUCCESS != status) { + ret_status = status; + } + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO_SYM)) { + 
status = + SalCtrl_ServiceShutdown(device, + &service_container->sym_services, + &service_container->sym_dir, + SAL_SERVICE_TYPE_CRYPTO_SYM); + if (CPA_STATUS_SUCCESS != status) { + ret_status = status; + } + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_COMPRESSION)) { + status = SalCtrl_ServiceShutdown( + device, + &service_container->compression_services, + &service_container->dc_dir, + SAL_SERVICE_TYPE_COMPRESSION); + if (CPA_STATUS_SUCCESS != status) { + ret_status = status; + } + } + + if (service_container->ver_file) { + icp_adf_debugRemoveFile(service_container->ver_file); + LAC_OS_FREE(service_container->ver_file); + service_container->ver_file = NULL; + } + + if (!icp_adf_is_dev_in_reset(device)) { + /* Free container also */ + free(service_container, M_QAT); + device->pSalHandle = NULL; + } + + return ret_status; +} + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to initialize the service instances. + * It first checks (via ADF query) which services are enabled in the + * system and the number of each services. + * It then invokes the init function on them which creates the + * instances and allocates memory for them. + * + * @context + * This function is called from the SalCtrl_ServiceEventHandler function. 
+ * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] enabled_services Services enabled by user + * + *************************************************************************/ +static CpaStatus +SalCtrl_ServiceEventInit(icp_accel_dev_t *device, Cpa32U enabled_services) +{ + sal_t *service_container = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + sal_list_t *tail_list = NULL; + Cpa32U instance_count = 0; + + status = SalCtrl_GetSupportedServices(device, enabled_services); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get supported services.\n"); + return status; + } + + if (!icp_adf_is_dev_in_reset(device)) { + service_container = malloc(sizeof(sal_t), M_QAT, M_WAITOK); + device->pSalHandle = service_container; + service_container->asym_services = NULL; + service_container->sym_services = NULL; + service_container->crypto_services = NULL; + service_container->compression_services = NULL; + } else { + service_container = device->pSalHandle; + } + service_container->asym_dir = NULL; + service_container->sym_dir = NULL; + service_container->cy_dir = NULL; + service_container->dc_dir = NULL; + service_container->ver_file = NULL; + + service_container->ver_file = LAC_OS_MALLOC(sizeof(debug_file_info_t)); + if (NULL == service_container->ver_file) { + free(service_container, M_QAT); + return CPA_STATUS_RESOURCE; + } + + memset(service_container->ver_file, 0, sizeof(debug_file_info_t)); + service_container->ver_file->name = ver_file_name; + service_container->ver_file->seq_read = SalCtrl_VersionDebug; + service_container->ver_file->private_data = device; + service_container->ver_file->parent = NULL; + + status = icp_adf_debugAddFile(device, service_container->ver_file); + if (CPA_STATUS_SUCCESS != status) { + LAC_OS_FREE(service_container->ver_file); + free(service_container, M_QAT); + return status; + } + + if 
(SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO_ASYM)) { + status = SalCtrl_GetInstanceCount(device, + "NumberCyInstances", + &instance_count); + if (CPA_STATUS_SUCCESS != status) { + instance_count = 0; + } + status = SalCtrl_ServiceInit(device, + &service_container->asym_services, + &service_container->asym_dir, + asym_dir_name, + tail_list, + instance_count, + SAL_SERVICE_TYPE_CRYPTO_ASYM); + if (CPA_STATUS_SUCCESS != status) { + goto err_init; + } + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO_SYM)) { + status = SalCtrl_GetInstanceCount(device, + "NumberCyInstances", + &instance_count); + if (CPA_STATUS_SUCCESS != status) { + instance_count = 0; + } + status = SalCtrl_ServiceInit(device, + &service_container->sym_services, + &service_container->sym_dir, + sym_dir_name, + tail_list, + instance_count, + SAL_SERVICE_TYPE_CRYPTO_SYM); + if (CPA_STATUS_SUCCESS != status) { + goto err_init; + } + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO)) { + status = SalCtrl_GetInstanceCount(device, + "NumberCyInstances", + &instance_count); + if (CPA_STATUS_SUCCESS != status) { + instance_count = 0; + } + status = + SalCtrl_ServiceInit(device, + &service_container->crypto_services, + &service_container->cy_dir, + cy_dir_name, + tail_list, + instance_count, + SAL_SERVICE_TYPE_CRYPTO); + if (CPA_STATUS_SUCCESS != status) { + goto err_init; + } + } + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_COMPRESSION)) { + status = SalCtrl_GetInstanceCount(device, + "NumberDcInstances", + &instance_count); + if (CPA_STATUS_SUCCESS != status) { + instance_count = 0; + } + status = SalCtrl_ServiceInit( + device, + &service_container->compression_services, + &service_container->dc_dir, + dc_dir_name, + tail_list, + instance_count, + SAL_SERVICE_TYPE_COMPRESSION); + if (CPA_STATUS_SUCCESS != status) { + goto err_init; + } + } + + return status; + +err_init: + 
SalCtrl_ServiceEventShutdown(device, enabled_services); + return status; +} + +/**************************************************************************** + * @ingroup SalCtrl + * @description + * This function calls the stop function on all the service instances. + * + * @context + * This function is called from the SalCtrl_ServiceEventHandler function. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] enabled_services Enabled services by user + * + *************************************************************************/ +static CpaStatus +SalCtrl_ServiceEventStop(icp_accel_dev_t *device, Cpa32U enabled_services) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaStatus ret_status = CPA_STATUS_SUCCESS; + sal_t *service_container = device->pSalHandle; + + if (service_container == NULL) { + QAT_UTILS_LOG("Private data is NULL.\n"); + return CPA_STATUS_FATAL; + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO_ASYM)) { + status = SalCtrl_ServiceStop(device, + service_container->asym_services); + if (CPA_STATUS_SUCCESS != status) { + ret_status = status; + } + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO_SYM)) { + status = SalCtrl_ServiceStop(device, + service_container->sym_services); + if (CPA_STATUS_SUCCESS != status) { + ret_status = status; + } + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO)) { + status = + SalCtrl_ServiceStop(device, + service_container->crypto_services); + if (CPA_STATUS_SUCCESS != status) { + ret_status = status; + } + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_COMPRESSION)) { + status = SalCtrl_ServiceStop( + device, service_container->compression_services); + if (CPA_STATUS_SUCCESS != status) { + ret_status = status; + } + } + + return ret_status; +} + 
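`SalCtrl_ServiceEventStop()` above, like the shutdown path, deliberately keeps iterating after a failure and only latches the bad status into `ret_status`, so every service still receives its stop call. A self-contained sketch of that keep-going aggregation over a singly linked service list (types and names here are illustrative stand-ins, not the driver's `sal_list_t`/`SAL_FOR_EACH`):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for CpaStatus values. */
#define STATUS_SUCCESS 0
#define STATUS_FAIL    (-1)

struct svc {
	int (*stop)(struct svc *);
	struct svc *next;
};

/* Call stop() on every node; never bail out early, but report failure
 * if any node failed - the ret_status idiom from the event handlers. */
static int
stop_all(struct svc *head)
{
	int ret = STATUS_SUCCESS;

	for (struct svc *s = head; s != NULL; s = s->next) {
		int st = s->stop(s);
		if (st != STATUS_SUCCESS)
			ret = st;
	}
	return ret;
}

/* Tiny demo: three services, the middle one fails to stop. */
static int n_calls;
static int stop_ok(struct svc *s)  { (void)s; n_calls++; return STATUS_SUCCESS; }
static int stop_bad(struct svc *s) { (void)s; n_calls++; return STATUS_FAIL; }

static int
demo_stop_all(void)
{
	struct svc c = { stop_ok, NULL };
	struct svc b = { stop_bad, &c };
	struct svc a = { stop_ok, &b };

	n_calls = 0;
	return stop_all(&a);
}
```

The design point is that stop/shutdown must be best-effort across all instances; aborting on the first failure would leak the remaining instances' resources.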
+/************************************************************************** + * @ingroup SalCtrl + * @description + * This function calls the start function on all the service instances. + * + * @context + * This function is called from the SalCtrl_ServiceEventHandler function. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] enabled_services Enabled services by user + * + **************************************************************************/ +static CpaStatus +SalCtrl_ServiceEventStart(icp_accel_dev_t *device, Cpa32U enabled_services) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_t *service_container = device->pSalHandle; + + if (service_container == NULL) { + QAT_UTILS_LOG("Private data is NULL.\n"); + return CPA_STATUS_FATAL; + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO_ASYM)) { + status = SalCtrl_ServiceStart(device, + service_container->asym_services); + if (CPA_STATUS_SUCCESS != status) { + goto err_start; + } + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO_SYM)) { + status = SalCtrl_ServiceStart(device, + service_container->sym_services); + if (CPA_STATUS_SUCCESS != status) { + goto err_start; + } + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO)) { + status = + SalCtrl_ServiceStart(device, + service_container->crypto_services); + if (CPA_STATUS_SUCCESS != status) { + goto err_start; + } + } + + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_COMPRESSION)) { + status = SalCtrl_ServiceStart( + device, service_container->compression_services); + if (CPA_STATUS_SUCCESS != status) { + goto err_start; + } + } + + return status; +err_start: + SalCtrl_ServiceEventStop(device, enabled_services); + return status; +} + +/************************************************************************* + * @ingroup SalCtrl + * 
@description
+ *    This function is the event handler registered with ADF
+ *    for the QA API services (cy, dc) - kernel and user
+ *
+ * @context
+ *    This function is called from an ADF context.
+ *
+ * @assumptions
+ *    None
+ * @sideEffects
+ *    None
+ * @reentrant
+ *    No
+ * @threadSafe
+ *    Yes
+ *
+ * @param[in] device           An icp_accel_dev_t* type
+ * @param[in] event            Event from ADF
+ * @param[in] param            Parameter used for backward compatibility
+ *
+ ***********************************************************************/
+static CpaStatus
+SalCtrl_ServiceEventHandler(icp_accel_dev_t *device,
+			    icp_adf_subsystemEvent_t event,
+			    void *param)
+{
+	CpaStatus status = CPA_STATUS_SUCCESS;
+	CpaStatus stats_status = CPA_STATUS_SUCCESS;
+	Cpa32U enabled_services = 0;
+
+	status = SalCtrl_GetEnabledServices(device, &enabled_services);
+	if (CPA_STATUS_SUCCESS != status) {
+		QAT_UTILS_LOG("Failed to get enabled services.\n");
+		return status;
+	}
+
+	switch (event) {
+	case ICP_ADF_EVENT_INIT: {
+		/* If there are no QAT stats yet, SAL needs to call InitStats */
+		if (NULL == device->pQatStats) {
+			status = SalStatistics_InitStatisticsCollection(device);
+		}
+		if (CPA_STATUS_SUCCESS != status) {
+			return status;
+		}
+
+		status = SalCtrl_ServiceEventInit(device, enabled_services);
+		break;
+	}
+	case ICP_ADF_EVENT_START: {
+		status = SalCtrl_ServiceEventStart(device, enabled_services);
+		break;
+	}
+	case ICP_ADF_EVENT_STOP: {
+		status = SalCtrl_ServiceEventStop(device, enabled_services);
+		break;
+	}
+	case ICP_ADF_EVENT_SHUTDOWN: {
+		status = SalCtrl_ServiceEventShutdown(device, enabled_services);
+		stats_status = SalStatistics_CleanStatisticsCollection(device);
+		if (CPA_STATUS_SUCCESS != status ||
+		    CPA_STATUS_SUCCESS != stats_status) {
+			return CPA_STATUS_FAIL;
+		}
+		break;
+	}
+	default:
+		status = CPA_STATUS_SUCCESS;
+		break;
+	}
+	return status;
+}
+
+CpaStatus
+SalCtrl_AdfServicesRegister(void)
+{
+	/* Fill out the global sal_service_reg_handle structure */
+	sal_service_reg_handle.subserviceEventHandler =
+	    SalCtrl_ServiceEventHandler;
+	/* Set subsystem name to globally defined name */
+	sal_service_reg_handle.subsystem_name = subsystem_name;
+
+	return icp_adf_subsystemRegister(&sal_service_reg_handle);
+}
+
+CpaStatus
+SalCtrl_AdfServicesUnregister(void)
+{
+	return icp_adf_subsystemUnregister(&sal_service_reg_handle);
+}
+
+CpaStatus
+SalCtrl_AdfServicesStartedCheck(void)
+{
+	CpaStatus status = CPA_STATUS_SUCCESS;
+	Cpa32U retry_num = 0;
+	CpaBoolean state = CPA_FALSE;
+
+	do {
+		state = icp_adf_isSubsystemStarted(&sal_service_reg_handle);
+		retry_num++;
+	} while ((CPA_FALSE == state) && (retry_num < MAX_SUBSYSTEM_RETRY));
+
+	if (CPA_FALSE == state) {
+		QAT_UTILS_LOG("Sal Ctrl failed to start in the given time.\n");
+		status = CPA_STATUS_FAIL;
+	}
+
+	return status;
+}
+
+CpaStatus
+validateConcurrRequest(Cpa32U numConcurrRequests)
+{
+	Cpa32U baseReq = SAL_64_CONCURR_REQUESTS;
+
+	if (SAL_64_CONCURR_REQUESTS > numConcurrRequests) {
+		QAT_UTILS_LOG(
+		    "Invalid numConcurrRequests, it is less than min value.\n");
+		return CPA_STATUS_FAIL;
+	}
+
+	while (SAL_MAX_CONCURR_REQUESTS >= baseReq) {
+		if (baseReq != numConcurrRequests) {
+			baseReq = baseReq << 1;
+		} else {
+			break;
+		}
+	}
+	if (SAL_MAX_CONCURR_REQUESTS < baseReq) {
+		QAT_UTILS_LOG(
+		    "Invalid baseReq, it is greater than max value.\n");
+		return CPA_STATUS_FAIL;
+	}
+
+	return CPA_STATUS_SUCCESS;
+}
Index: sys/dev/qat/qat_api/common/ctrl/sal_list.c
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/ctrl/sal_list.c
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+/**
+ *****************************************************************************
+ * @file sal_list.c
+ *
+ * @ingroup SalCtrl
+ *
+ * List implementations for SAL
+ *
+ *****************************************************************************/
+
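The validateConcurrRequest routine in sal_ctrl_services.c above doubles baseReq from SAL_64_CONCURR_REQUESTS until it either matches the requested count or passes SAL_MAX_CONCURR_REQUESTS, so only power-of-two request counts between 64 and 65536 are accepted. A standalone sketch of the equivalent check, with the SAL constants inlined for illustration:

```c
#define MIN_CONCURR_REQUESTS 64U    /* SAL_64_CONCURR_REQUESTS */
#define MAX_CONCURR_REQUESTS 65536U /* SAL_MAX_CONCURR_REQUESTS */

/* Return 0 if n is a power of two in [64, 65536], -1 otherwise. */
int
validate_concurr_requests(unsigned int n)
{
	unsigned int base = MIN_CONCURR_REQUESTS;

	if (n < MIN_CONCURR_REQUESTS)
		return -1;
	/* Double base until it matches n or overshoots the maximum. */
	while (base <= MAX_CONCURR_REQUESTS && base != n)
		base <<= 1;
	return (base > MAX_CONCURR_REQUESTS) ? -1 : 0;
}
```

So 64, 512, and 65536 pass, while 100 (not a power of two) and 131072 (above the maximum) are rejected, matching the SAL_*_CONCURR_REQUESTS ring-sizing options defined in lac_common.h.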
+#include "lac_mem.h" +#include "lac_list.h" + +CpaStatus +SalList_add(sal_list_t **list, sal_list_t **tail, void *pObj) +{ + sal_list_t *new_element = NULL; + + if (NULL == *list) { + /* First element in list */ + *list = malloc(sizeof(sal_list_t), M_QAT, M_WAITOK); + (*list)->next = NULL; + (*list)->pObj = pObj; + *tail = *list; + } else { + /* add to tail of the list */ + new_element = malloc(sizeof(sal_list_t), M_QAT, M_WAITOK); + new_element->pObj = pObj; + new_element->next = NULL; + + (*tail)->next = new_element; + + *tail = new_element; + } + + return CPA_STATUS_SUCCESS; +} + +void * +SalList_getObject(sal_list_t *list) +{ + if (list == NULL) { + return NULL; + } + + return list->pObj; +} + +void +SalList_delObject(sal_list_t **list) +{ + if (*list == NULL) { + return; + } + + (*list)->pObj = NULL; + return; +} + +void * +SalList_next(sal_list_t *list) +{ + return list->next; +} + +void +SalList_free(sal_list_t **list) +{ + sal_list_t *next_element = NULL; + void *pObj = NULL; + while (NULL != (*list)) { + next_element = SalList_next(*list); + pObj = SalList_getObject((*list)); + LAC_OS_FREE(pObj); + LAC_OS_FREE(*list); + *list = next_element; + } +} + +void +SalList_del(sal_list_t **head_list, sal_list_t **pre_list, sal_list_t *list) +{ + void *pObj = NULL; + if ((NULL == *head_list) || (NULL == *pre_list) || (NULL == list)) { + return; + } + if (*head_list == list) { /* delete the first node in list */ + *head_list = list->next; + } else { + (*pre_list)->next = list->next; + } + pObj = SalList_getObject(list); + LAC_OS_FREE(pObj); + LAC_OS_FREE(list); + return; +} Index: sys/dev/qat/qat_api/common/include/lac_buffer_desc.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/include/lac_buffer_desc.h @@ -0,0 +1,252 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + 
*************************************************************************** + * @file lac_buffer_desc.h + * + * @defgroup LacBufferDesc Buffer Descriptors + * + * @ingroup LacCommon + * + * Functions which handle updating a user supplied buffer with the QAT + * descriptor representation. + * + ***************************************************************************/ + +/***************************************************************************/ + +#ifndef LAC_BUFFER_DESC_H +#define LAC_BUFFER_DESC_H + +/*************************************************************************** + * Include header files + ***************************************************************************/ +#include "cpa.h" +#include "icp_buffer_desc.h" +#include "cpa_cy_sym.h" +#include "lac_common.h" + +/** +******************************************************************************* +* @ingroup LacBufferDesc +* Write the buffer descriptor in QAT friendly format. +* +* @description +* Updates the Meta Data associated with the pUserBufferList CpaBufferList +* This function will also return the (aligned) physical address +* associated with this CpaBufferList. +* +* @param[in] pUserBufferList A pointer to the buffer list to +* create the meta data for the QAT. +* @param[out] pBufferListAlignedPhyAddr The pointer to the aligned physical +* address. +* @param[in] isPhysicalAddress Type of address +* @param[in] pService Pointer to generic service +* +*****************************************************************************/ +CpaStatus LacBuffDesc_BufferListDescWrite(const CpaBufferList *pUserBufferList, + Cpa64U *pBufferListAlignedPhyAddr, + CpaBoolean isPhysicalAddress, + sal_service_t *pService); + +/** +******************************************************************************* +* @ingroup LacBufferDesc +* Write the buffer descriptor in QAT friendly format. 
+* +* @description +* Updates the Meta Data associated with the pUserBufferList CpaBufferList +* This function will also return the (aligned) physical address +* associated with this CpaBufferList. Zero length buffers are allowed. +* Should be used for CHA-CHA-POLY and GCM algorithms. +* +* @param[in] pUserBufferList A pointer to the buffer list to +* create the meta data for the QAT. +* @param[out] pBufferListAlignedPhyAddr The pointer to the aligned physical +* address. +* @param[in] isPhysicalAddress Type of address +* @param[in] pService Pointer to generic service +* +*****************************************************************************/ +CpaStatus LacBuffDesc_BufferListDescWriteAndAllowZeroBuffer( + const CpaBufferList *pUserBufferList, + Cpa64U *pBufferListAlignedPhyAddr, + CpaBoolean isPhysicalAddress, + sal_service_t *pService); + +/** +******************************************************************************* + * @ingroup LacBufferDesc + * Write the buffer descriptor in QAT friendly format. + * + * @description + * Updates the Meta Data associated with the PClientList CpaBufferList + * This function will also return the (aligned) physical address + * associated with this CpaBufferList and the total data length of the + * buffer list. + * + * @param[in] pUserBufferList A pointer to the buffer list to + * create the meta data for the QAT. + * @param[out] pBufListAlignedPhyAddr The pointer to the aligned physical + * address. 
+ * @param[in] isPhysicalAddress Type of address + * @param[out] totalDataLenInBytes The pointer to the total data length + * of the buffer list + * @param[in] pService Pointer to generic service + * + *****************************************************************************/ +CpaStatus +LacBuffDesc_BufferListDescWriteAndGetSize(const CpaBufferList *pUserBufferList, + Cpa64U *pBufListAlignedPhyAddr, + CpaBoolean isPhysicalAddress, + Cpa64U *totalDataLenInBytes, + sal_service_t *pService); + +/** +******************************************************************************* + * @ingroup LacBufferDesc + * Ensure the CpaFlatBuffer is correctly formatted. + * + * @description + * Ensures the CpaFlatBuffer is correctly formatted + * This function will also return the total size of the buffers + * in the scatter gather list. + * + * @param[in] pUserFlatBuffer A pointer to the flat buffer to + * validate. + * @param[out] pPktSize The total size of the packet. + * @param[in] alignmentShiftExpected The expected alignment shift of each + * of the elements of the scatter gather + * + * @retval CPA_STATUS_INVALID_PARAM BufferList failed checks + * @retval CPA_STATUS_SUCCESS Function executed successfully + * + *****************************************************************************/ +CpaStatus +LacBuffDesc_FlatBufferVerify(const CpaFlatBuffer *pUserFlatBuffer, + Cpa64U *pPktSize, + lac_aligment_shift_t alignmentShiftExpected); + +/** +******************************************************************************* + * @ingroup LacBufferDesc + * Ensure the CpaFlatBuffer is correctly formatted. + * This function will allow a size of zero bytes to any of the Flat + * buffers. + * + * @description + * Ensures the CpaFlatBuffer is correctly formatted + * This function will also return the total size of the buffers + * in the scatter gather list. + * + * @param[in] pUserFlatBuffer A pointer to the flat buffer to + * validate. 
+ * @param[out] pPktSize The total size of the packet. + * @param[in] alignmentShiftExpected The expected alignment shift of each + * of the elements of the scatter gather + * + * @retval CPA_STATUS_INVALID_PARAM BufferList failed checks + * @retval CPA_STATUS_SUCCESS Function executed successfully + * + *****************************************************************************/ +CpaStatus +LacBuffDesc_FlatBufferVerifyNull(const CpaFlatBuffer *pUserFlatBuffer, + Cpa64U *pPktSize, + lac_aligment_shift_t alignmentShiftExpected); + +/** +******************************************************************************* + * @ingroup LacBufferDesc + * Ensure the CpaBufferList is correctly formatted. + * + * @description + * Ensures the CpaBufferList pUserBufferList is correctly formatted + * including the user supplied metaData. + * This function will also return the total size of the buffers + * in the scatter gather list. + * + * @param[in] pUserBufferList A pointer to the buffer list to + * validate. + * @param[out] pPktSize The total size of the buffers in the + * scatter gather list. + * @param[in] alignmentShiftExpected The expected alignment shift of each + * of the elements of the scatter gather + * list. + * @retval CPA_STATUS_INVALID_PARAM BufferList failed checks + * @retval CPA_STATUS_SUCCESS Function executed successfully + * + *****************************************************************************/ +CpaStatus +LacBuffDesc_BufferListVerify(const CpaBufferList *pUserBufferList, + Cpa64U *pPktSize, + lac_aligment_shift_t alignmentShiftExpected); + +/** +******************************************************************************* + * @ingroup LacBufferDesc + * Ensure the CpaBufferList is correctly formatted. + * + * @description + * Ensures the CpaBufferList pUserBufferList is correctly formatted + * including the user supplied metaData. + * This function will also return the total size of the buffers + * in the scatter gather list. 
+ * + * @param[in] pUserBufferList A pointer to the buffer list to + * validate. + * @param[out] pPktSize The total size of the buffers in the + * scatter gather list. + * @param[in] alignmentShiftExpected The expected alignment shift of each + * of the elements of the scatter gather + * list. + * @retval CPA_STATUS_INVALID_PARAM BufferList failed checks + * @retval CPA_STATUS_SUCCESS Function executed successfully + * + *****************************************************************************/ +CpaStatus +LacBuffDesc_BufferListVerifyNull(const CpaBufferList *pUserBufferList, + Cpa64U *pPktSize, + lac_aligment_shift_t alignmentShiftExpected); + +/** +******************************************************************************* + * @ingroup LacBufferDesc + * Get the total size of a CpaBufferList. + * + * @description + * This function returns the total size of the buffers + * in the scatter gather list. + * + * @param[in] pUserBufferList A pointer to the buffer list to + * calculate the total size for. + * @param[out] pPktSize The total size of the buffers in the + * scatter gather list. + * + *****************************************************************************/ +void LacBuffDesc_BufferListTotalSizeGet(const CpaBufferList *pUserBufferList, + Cpa64U *pPktSize); + +/** +******************************************************************************* + * @ingroup LacBufferDesc + * Zero some of the CpaBufferList. + * + * @description + * Zero a section of data within the CpaBufferList from an offset for + * a specific length. + * + * @param[in] pBuffList A pointer to the buffer list to + * zero an area of. + * @param[in] offset Number of bytes from start of buffer to where + * to start zeroing. + * + * @param[in] lenToZero Number of bytes that will be set to zero + * after the call to this function. 
+ *****************************************************************************/ + +void LacBuffDesc_BufferListZeroFromOffset(CpaBufferList *pBuffList, + Cpa32U offset, + Cpa32U lenToZero); + +#endif /* LAC_BUFFER_DESC_H */ Index: sys/dev/qat/qat_api/common/include/lac_common.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/include/lac_common.h @@ -0,0 +1,847 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file lac_common.h Common macros + * + * @defgroup Lac Look Aside Crypto LLD Doc + * + *****************************************************************************/ + +/** + ***************************************************************************** + * @defgroup LacCommon LAC Common + * Common code for Lac which includes init/shutdown, memory, logging and + * hooks. 
+ * + * @ingroup Lac + * + *****************************************************************************/ + +/***************************************************************************/ + +#ifndef LAC_COMMON_H +#define LAC_COMMON_H + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "qat_utils.h" +#include "cpa_cy_common.h" +#include "icp_adf_init.h" + +#define LAC_ARCH_UINT uintptr_t +#define LAC_ARCH_INT intptr_t + +/* +***************************************************************************** +* Max range values for some primitive param checking +***************************************************************************** +*/ + +/**< Maximum number of instances */ +#define SAL_MAX_NUM_INSTANCES_PER_DEV 512 + +#define SAL_DEFAULT_RING_SIZE 256 +/**< Default ring size */ + +#define SAL_64_CONCURR_REQUESTS 64 +#define SAL_128_CONCURR_REQUESTS 128 +#define SAL_256_CONCURR_REQUESTS 256 +#define SAL_512_CONCURR_REQUESTS 512 +#define SAL_1024_CONCURR_REQUESTS 1024 +#define SAL_2048_CONCURR_REQUESTS 2048 +#define SAL_4096_CONCURR_REQUESTS 4096 +#define SAL_MAX_CONCURR_REQUESTS 65536 +/**< Valid options for the num of concurrent requests per ring pair read + from the config file. 
These values are used to size the rings */
+
+#define SAL_BATCH_SUBMIT_FREE_SPACE 2
+/**< For data plane batch submissions ADF leaves 2 spaces free on the ring */
+
+/*
+******************************************************************************
+* Some common settings for QA API queries
+******************************************************************************
+*/
+
+#define SAL_INFO2_VENDOR_NAME "Intel(R)"
+/**< @ingroup LacCommon
+ * Name of vendor of this driver */
+#define SAL_INFO2_PART_NAME "%s with Intel(R) QuickAssist Technology"
+/**< @ingroup LacCommon
+ */
+
+/*
+********************************************************************************
+* User process name defines and functions
+********************************************************************************
+*/
+
+#define LAC_USER_PROCESS_NAME_MAX_LEN 32
+/**< @ingroup LacCommon
+ * Max length of user process name */
+
+#define LAC_KERNEL_PROCESS_NAME "KERNEL_QAT"
+/**< @ingroup LacCommon
+ * Default name for kernel process */
+
+/*
+********************************************************************************
+* response mode indicator from Config file
+********************************************************************************
+*/
+
+#define SAL_RESP_POLL_CFG_FILE 1
+#define SAL_RESP_EPOLL_CFG_FILE 2
+
+/*
+ * @ingroup LacCommon
+ * @description
+ *      This function sets the process name
+ *
+ * @context
+ *      This function is called from module_init or from a user space
+ *      process initialization function
+ *
+ * @assumptions
+ *      None
+ * @sideEffects
+ *      None
+ * @reentrant
+ *      No
+ * @threadSafe
+ *      No
+ *
+ * @param[in] processName    Process name to be set
+*/
+CpaStatus icpSetProcessName(const char *processName);
+
+/*
+ * @ingroup LacCommon
+ * @description
+ *      This function gets the process name
+ *
+ * @context
+ *      This function is called from a LAC context
+ *
+ * @assumptions
+ *      None
+ * @sideEffects
+ *      None
+ * @reentrant
+ *      Yes
+ * @threadSafe
+ *      Yes
+ *
+*/
+char
*icpGetProcessName(void); + +/* Sections of the config file */ +#define LAC_CFG_SECTION_GENERAL "GENERAL" +#define LAC_CFG_SECTION_INTERNAL "INTERNAL" + +/* +******************************************************************************** +* Debug Macros and settings +******************************************************************************** +*/ + +#define SEPARATOR "+--------------------------------------------------+\n" +/**< @ingroup LacCommon + * separator used for printing stats to standard output*/ + +#define BORDER "|" +/**< @ingroup LacCommon + * separator used for printing stats to standard output*/ + +/** +***************************************************************************** + * @ingroup LacCommon + * Component state + * + * @description + * This enum is used to indicate the state that the component is in. Its + * purpose is to prevent components from being initialised or shutdown + * incorrectly. + * + *****************************************************************************/ +typedef enum { + LAC_COMP_SHUT_DOWN = 0, + /**< Component in the Shut Down state */ + LAC_COMP_SHUTTING_DOWN, + /**< Component in the Process of Shutting down */ + LAC_COMP_INITIALISING, + /**< Component in the Process of being initialised */ + LAC_COMP_INITIALISED, + /**< Component in the initialised state */ +} lac_comp_state_t; + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro checks if a parameter is NULL + * + * @param[in] param Parameter + * + * @return CPA_STATUS_INVALID_PARAM Parameter is NULL + * @return void Parameter is not NULL + ******************************************************************************/ +#define LAC_CHECK_NULL_PARAM(param) \ + do { \ + if (NULL == (param)) { \ + return CPA_STATUS_INVALID_PARAM; \ + } \ + } while (0) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro checks if a 
parameter is within a specified range + * + * @param[in] param Parameter + * @param[in] min Parameter must be greater than OR equal to + *min + * @param[in] max Parameter must be less than max + * + * @return CPA_STATUS_INVALID_PARAM Parameter is outside range + * @return void Parameter is within range + ******************************************************************************/ +#define LAC_CHECK_PARAM_RANGE(param, min, max) \ + do { \ + if (((param) < (min)) || ((param) >= (max))) { \ + return CPA_STATUS_INVALID_PARAM; \ + } \ + } while (0) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This checks if a param is 8 byte aligned. + * + ******************************************************************************/ +#define LAC_CHECK_8_BYTE_ALIGNMENT(param) \ + do { \ + if ((Cpa64U)param % 8 != 0) { \ + return CPA_STATUS_INVALID_PARAM; \ + } \ + } while (0) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This checks if a param is 64 byte aligned. + * + ******************************************************************************/ +#define LAC_CHECK_64_BYTE_ALIGNMENT(param) \ + do { \ + if ((LAC_ARCH_UINT)param % 64 != 0) { \ + return CPA_STATUS_INVALID_PARAM; \ + } \ + } while (0) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro returns the size of the buffer list structure given the + * number of elements in the buffer list - note: only the sizeof the + * buffer list structure is returned. 
+ * + * @param[in] numBuffers The number of flatbuffers in a buffer list + * + * @return size of the buffer list structure + ******************************************************************************/ +#define LAC_BUFFER_LIST_SIZE_GET(numBuffers) \ + (sizeof(CpaBufferList) + (numBuffers * sizeof(CpaFlatBuffer))) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro checks that a flatbuffer is valid i.e. that it is not + * null and the data it points to is not null + * + * @param[in] pFlatBuffer Pointer to flatbuffer + * + * @return CPA_STATUS_INVALID_PARAM Invalid flatbuffer pointer + * @return void flatbuffer is ok + ******************************************************************************/ +#define LAC_CHECK_FLAT_BUFFER(pFlatBuffer) \ + do { \ + LAC_CHECK_NULL_PARAM((pFlatBuffer)); \ + LAC_CHECK_NULL_PARAM((pFlatBuffer)->pData); \ + } while (0) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro verifies that the status is ok i.e. equal to CPA_STATUS_SUCCESS + * + * @param[in] status status we are checking + * + * @return void status is ok (CPA_STATUS_SUCCESS) + * @return status The value in the status parameter is an error one + * + ******************************************************************************/ +#define LAC_CHECK_STATUS(status) \ + do { \ + if (CPA_STATUS_SUCCESS != (status)) { \ + return status; \ + } \ + } while (0) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro verifies that the Instance Handle is valid. 
+ * + * @param[in] instanceHandle Instance Handle + * + * @return CPA_STATUS_INVALID_PARAM Parameter is NULL + * @return void Parameter is not NULL + * + ******************************************************************************/ +#define LAC_CHECK_INSTANCE_HANDLE(instanceHandle) \ + do { \ + if (NULL == (instanceHandle)) { \ + return CPA_STATUS_INVALID_PARAM; \ + } \ + } while (0) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro copies a string from one location to another + * + * @param[out] pDestinationBuffer Pointer to destination buffer + * @param[in] pSource Pointer to source buffer + * + ******************************************************************************/ +#define LAC_COPY_STRING(pDestinationBuffer, pSource) \ + do { \ + memcpy(pDestinationBuffer, pSource, (sizeof(pSource) - 1)); \ + pDestinationBuffer[(sizeof(pSource) - 1)] = '\0'; \ + } while (0) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro fills a memory zone with ZEROES + * + * @param[in] pBuffer Pointer to buffer + * @param[in] count Buffer length + * + * @return void + * + ******************************************************************************/ +#define LAC_OS_BZERO(pBuffer, count) memset(pBuffer, 0, count); + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro calculates the position of the given member in a struct + * Only for use on a struct where all members are of equal size to map + * the struct member position to an array index + * + * @param[in] structType the struct + * @param[in] member the member of the given struct + * + ******************************************************************************/ +#define LAC_IDX_OF(structType, member) \ + (offsetof(structType, member) / sizeof(((structType *)0)->member)) + +/* 
+********************************************************************************
+* Alignment, Bit define and Bit Operation Macros
+********************************************************************************
+*/
+
+#define LAC_BIT31_SET 0x80000000 /**< bit 31 == 1 */
+#define LAC_BIT7_SET 0x80 /**< bit 7 == 1 */
+#define LAC_BIT6_SET 0x40 /**< bit 6 == 1 */
+#define LAC_BIT5_SET 0x20 /**< bit 5 == 1 */
+#define LAC_BIT4_SET 0x10 /**< bit 4 == 1 */
+#define LAC_BIT3_SET 0x08 /**< bit 3 == 1 */
+#define LAC_BIT2_SET 0x04 /**< bit 2 == 1 */
+#define LAC_BIT1_SET 0x02 /**< bit 1 == 1 */
+#define LAC_BIT0_SET 0x01 /**< bit 0 == 1 */
+
+#define LAC_NUM_BITS_IN_BYTE (8)
+/**< @ingroup LacCommon
+ * Number of bits in a byte */
+
+#define LAC_LONG_WORD_IN_BYTES (4)
+/**< @ingroup LacCommon
+ * Number of bytes in an IA word */
+
+#define LAC_QUAD_WORD_IN_BYTES (8)
+/**< @ingroup LacCommon
+ * Number of bytes in a QUAD word */
+
+#define LAC_QAT_MAX_MSG_SZ_LW (32)
+/**< @ingroup LacCommon
+ * Maximum size in Long Words for a QAT message */
+
+/**
+*****************************************************************************
+ * @ingroup LacCommon
+ *      Alignment shift requirements of a buffer.
+ *
+ * @description
+ *      This enum is used to indicate the alignment shift of a buffer.
+ * All alignments are to power of 2 + * + *****************************************************************************/ +typedef enum lac_aligment_shift_s { + LAC_NO_ALIGNMENT_SHIFT = 0, + /**< No alignment shift (to a power of 2)*/ + LAC_8BYTE_ALIGNMENT_SHIFT = 3, + /**< 8 byte alignment shift (to a power of 2)*/ + LAC_16BYTE_ALIGNMENT_SHIFT = 4, + /**< 16 byte alignment shift (to a power of 2)*/ + LAC_64BYTE_ALIGNMENT_SHIFT = 6, + /**< 64 byte alignment shift (to a power of 2)*/ + LAC_4KBYTE_ALIGNMENT_SHIFT = 12, + /**< 4k byte alignment shift (to a power of 2)*/ +} lac_aligment_shift_t; + +/** +***************************************************************************** + * @ingroup LacCommon + * Alignment of a buffer. + * + * @description + * This enum is used to indicate the alignment requirements of a buffer. + * + *****************************************************************************/ +typedef enum lac_aligment_s { + LAC_NO_ALIGNMENT = 0, + /**< No alignment */ + LAC_1BYTE_ALIGNMENT = 1, + /**< 1 byte alignment */ + LAC_8BYTE_ALIGNMENT = 8, + /**< 8 byte alignment*/ + LAC_64BYTE_ALIGNMENT = 64, + /**< 64 byte alignment*/ + LAC_4KBYTE_ALIGNMENT = 4096, + /**< 4k byte alignment */ +} lac_aligment_t; + +/** +***************************************************************************** + * @ingroup LacCommon + * Size of a buffer. + * + * @description + * This enum is used to indicate the required size. + * The buffer must be a multiple of the required size. 
+ * + *****************************************************************************/ +typedef enum lac_expected_size_s { + LAC_NO_LENGTH_REQUIREMENTS = 0, + /**< No requirement for size */ + LAC_4KBYTE_MULTIPLE_REQUIRED = 4096, + /**< 4k multiple requirement for size */ +} lac_expected_size_t; + +#define LAC_OPTIMAL_ALIGNMENT_SHIFT LAC_64BYTE_ALIGNMENT_SHIFT +/**< @ingroup LacCommon + * optimal alignment to a power of 2 */ + +#define LAC_SHIFT_8 (1 << LAC_8BYTE_ALIGNMENT_SHIFT) +/**< shift by 8 bits */ +#define LAC_SHIFT_24 \ + ((1 << LAC_8BYTE_ALIGNMENT_SHIFT) + (1 << LAC_16BYTE_ALIGNMENT_SHIFT)) +/**< shift by 24 bits */ + +#define LAC_MAX_16_BIT_VALUE ((1 << 16) - 1) +/**< @ingroup LacCommon + * maximum value a 16 bit type can hold */ + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro can be used to avoid an unused variable warning from the + * compiler + * + * @param[in] variable unused variable + * + ******************************************************************************/ +#define LAC_UNUSED_VARIABLE(x) (void)(x) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro checks if an address is aligned to the specified power of 2 + * Returns 0 if alignment is ok, or non-zero otherwise + * + * @param[in] address the address we are checking + * + * @param[in] alignment the byte alignment to check (specified as power of 2) + * + ******************************************************************************/ +#define LAC_ADDRESS_ALIGNED(address, alignment) \ + (!((LAC_ARCH_UINT)(address) & ((1 << (alignment)) - 1))) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro rounds up a number to a be a multiple of the alignment when + * the alignment is a power of 2. 
+ * + * @param[in] num Number + * @param[in] align Alignment (must be a power of 2) + * + ******************************************************************************/ +#define LAC_ALIGN_POW2_ROUNDUP(num, align) (((num) + (align)-1) & ~((align)-1)) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro generates a bit mask to select a particular bit + * + * @param[in] bitPos Bit position to select + * + ******************************************************************************/ +#define LAC_BIT(bitPos) (0x1 << (bitPos)) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro converts a size in bits to the equivalent size in bytes, + * using a bit shift to divide by 8 + * + * @param[in] x size in bits + * + ******************************************************************************/ +#define LAC_BITS_TO_BYTES(x) ((x) >> 3) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro converts a size in bytes to the equivalent size in bits, + * using a bit shift to multiply by 8 + * + * @param[in] x size in bytes + * + ******************************************************************************/ +#define LAC_BYTES_TO_BITS(x) ((x) << 3) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro converts a size in bytes to the equivalent size in longwords, + * using a bit shift to divide by 4 + * + * @param[in] x size in bytes + * + ******************************************************************************/ +#define LAC_BYTES_TO_LONGWORDS(x) ((x) >> 2) + +/** + ******************************************************************************* + * @ingroup LacCommon + * This macro converts a size in longwords to the equivalent size in bytes, + * using a bit shift to multiply by 4 + * + * 
@param[in] x  size in long words
+ *
+ ******************************************************************************/
+#define LAC_LONGWORDS_TO_BYTES(x) ((x) << 2)
+
+/**
+ *******************************************************************************
+ * @ingroup LacCommon
+ * This macro converts a size in bytes to the equivalent size in quadwords,
+ * rounding up to the next whole quadword when the byte count is not a
+ * multiple of 8
+ *
+ * @param[in] x  size in bytes
+ *
+ ******************************************************************************/
+#define LAC_BYTES_TO_QUADWORDS(x) (((x) >> 3) + (((x) % 8) ? 1 : 0))
+
+/**
+ *******************************************************************************
+ * @ingroup LacCommon
+ * This macro converts a size in quadwords to the equivalent size in bytes,
+ * using a bit shift to multiply by 8
+ *
+ * @param[in] x  size in quad words
+ *
+ ******************************************************************************/
+#define LAC_QUADWORDS_TO_BYTES(x) ((x) << 3)
+
+
+/******************************************************************************/
+
+/*
+*******************************************************************************
+* Mutex Macros
+*******************************************************************************
+*/
+
+/**
+ *******************************************************************************
+ * @ingroup LacCommon
+ * This macro tries to acquire a mutex and returns the status
+ *
+ * @param[in] pLock    Pointer to Lock
+ * @param[in] timeout  Timeout
+ *
+ * @retval CPA_STATUS_SUCCESS   Function executed successfully.
+ * @retval CPA_STATUS_RESOURCE  Error with Mutex
+ ******************************************************************************/
+#define LAC_LOCK_MUTEX(pLock, timeout) \
+	((CPA_STATUS_SUCCESS != qatUtilsMutexLock((pLock), (timeout))) ?
\
+	     CPA_STATUS_RESOURCE : \
+	     CPA_STATUS_SUCCESS)
+
+/**
+ *******************************************************************************
+ * @ingroup LacCommon
+ * This macro unlocks a mutex and returns the status
+ *
+ * @param[in] pLock  Pointer to Lock
+ *
+ * @retval CPA_STATUS_SUCCESS   Function executed successfully.
+ * @retval CPA_STATUS_RESOURCE  Error with Mutex
+ ******************************************************************************/
+#define LAC_UNLOCK_MUTEX(pLock) \
+	((CPA_STATUS_SUCCESS != qatUtilsMutexUnlock((pLock))) ? \
+	     CPA_STATUS_RESOURCE : \
+	     CPA_STATUS_SUCCESS)
+
+/**
+ *******************************************************************************
+ * @ingroup LacCommon
+ * This macro initialises a mutex and returns the status
+ *
+ * @param[in] pLock  Pointer to Lock
+ *
+ * @retval CPA_STATUS_SUCCESS   Function executed successfully.
+ * @retval CPA_STATUS_RESOURCE  Error with Mutex
+ ******************************************************************************/
+#define LAC_INIT_MUTEX(pLock) \
+	((CPA_STATUS_SUCCESS != qatUtilsMutexInit((pLock))) ? \
+	     CPA_STATUS_RESOURCE : \
+	     CPA_STATUS_SUCCESS)
+
+/**
+ *******************************************************************************
+ * @ingroup LacCommon
+ * This macro destroys a mutex and returns the status
+ *
+ * @param[in] pLock  Pointer to Lock
+ *
+ * @retval CPA_STATUS_SUCCESS   Function executed successfully.
+ * @retval CPA_STATUS_RESOURCE  Error with Mutex
+ ******************************************************************************/
+#define LAC_DESTROY_MUTEX(pLock) \
+	((CPA_STATUS_SUCCESS != qatUtilsMutexDestroy((pLock))) ? \
+	     CPA_STATUS_RESOURCE : \
+	     CPA_STATUS_SUCCESS)
+
+/**
+ *******************************************************************************
+ * @ingroup LacCommon
+ * This macro tries to acquire a mutex without blocking and returns the status
+ *
+ * @param[in] pLock  Pointer to Lock
+ *
+ * @retval CPA_STATUS_SUCCESS   Function executed successfully.
+ * @retval CPA_STATUS_RESOURCE  Error with Mutex
+ ******************************************************************************/
+#define LAC_TRYLOCK_MUTEX(pLock) \
+	((CPA_STATUS_SUCCESS != \
+	  qatUtilsMutexTryLock((pLock), QAT_UTILS_WAIT_NONE)) ? \
+	     CPA_STATUS_RESOURCE : \
+	     CPA_STATUS_SUCCESS)
+
+/*
+*******************************************************************************
+* Semaphore Macros
+*******************************************************************************
+*/
+
+/**
+ *******************************************************************************
+ * @ingroup LacCommon
+ * This macro waits on a semaphore and returns the status
+ *
+ * @param[in] sid      The semaphore
+ * @param[in] timeout  Timeout
+ *
+ * @retval CPA_STATUS_SUCCESS   Function executed successfully.
+ * @retval CPA_STATUS_RESOURCE  Error with semaphore
+ ******************************************************************************/
+#define LAC_WAIT_SEMAPHORE(sid, timeout) \
+	((CPA_STATUS_SUCCESS != qatUtilsSemaphoreWait(&sid, (timeout))) ? \
+	     CPA_STATUS_RESOURCE : \
+	     CPA_STATUS_SUCCESS)
+
+/**
+ *******************************************************************************
+ * @ingroup LacCommon
+ * This macro checks a semaphore without blocking and returns the status
+ *
+ * @param[in] sid  The semaphore
+ *
+ * @retval CPA_STATUS_SUCCESS  Function executed successfully.
+ * @retval CPA_STATUS_RETRY    Semaphore not available.
+ ******************************************************************************/
+#define LAC_CHECK_SEMAPHORE(sid) \
+	((CPA_STATUS_SUCCESS != qatUtilsSemaphoreTryWait(&sid)) ? \
+	     CPA_STATUS_RETRY : \
+	     CPA_STATUS_SUCCESS)
+
+/**
+ *******************************************************************************
+ * @ingroup LacCommon
+ * This macro posts a semaphore and returns the status
+ *
+ * @param[in] sid  The semaphore
+ *
+ * @retval CPA_STATUS_SUCCESS   Function executed successfully.
+ * @retval CPA_STATUS_RESOURCE  Error with semaphore
+ ******************************************************************************/
+#define LAC_POST_SEMAPHORE(sid) \
+	((CPA_STATUS_SUCCESS != qatUtilsSemaphorePost(&sid)) ? \
+	     CPA_STATUS_RESOURCE : \
+	     CPA_STATUS_SUCCESS)
+
+/**
+ *******************************************************************************
+ * @ingroup LacCommon
+ * This macro initialises a semaphore and returns the status
+ *
+ * @param[in] sid       The semaphore
+ * @param[in] semValue  Initial semaphore value
+ *
+ * @retval CPA_STATUS_SUCCESS   Function executed successfully.
+ * @retval CPA_STATUS_RESOURCE  Error with semaphore
+ ******************************************************************************/
+#define LAC_INIT_SEMAPHORE(sid, semValue) \
+	((CPA_STATUS_SUCCESS != qatUtilsSemaphoreInit(&sid, semValue)) ? \
+	     CPA_STATUS_RESOURCE : \
+	     CPA_STATUS_SUCCESS)
+
+/**
+ *******************************************************************************
+ * @ingroup LacCommon
+ * This macro destroys a semaphore and returns the status
+ *
+ * @param[in] sid  The semaphore
+ *
+ * @retval CPA_STATUS_SUCCESS   Function executed successfully.
+ * @retval CPA_STATUS_RESOURCE  Error with semaphore
+ ******************************************************************************/
+#define LAC_DESTROY_SEMAPHORE(sid) \
+	((CPA_STATUS_SUCCESS != qatUtilsSemaphoreDestroy(&sid)) ? \
+	     CPA_STATUS_RESOURCE : \
+	     CPA_STATUS_SUCCESS)
+
+/*
+*******************************************************************************
+* Spinlock Macros
+*******************************************************************************
+*/
+typedef struct mtx *lac_lock_t;
+#define LAC_SPINLOCK_INIT(lock) \
+	((CPA_STATUS_SUCCESS != qatUtilsLockInit(lock)) ?
\
+	     CPA_STATUS_RESOURCE : \
+	     CPA_STATUS_SUCCESS)
+#define LAC_SPINLOCK(lock) \
+	({ \
+		(void)qatUtilsLock(lock); \
+		CPA_STATUS_SUCCESS; \
+	})
+#define LAC_SPINUNLOCK(lock) \
+	({ \
+		(void)qatUtilsUnlock(lock); \
+		CPA_STATUS_SUCCESS; \
+	})
+#define LAC_SPINLOCK_DESTROY(lock) \
+	({ \
+		(void)qatUtilsLockDestroy(lock); \
+		CPA_STATUS_SUCCESS; \
+	})
+
+#define LAC_CONST_PTR_CAST(castee) ((void *)(LAC_ARCH_UINT)(castee))
+#define LAC_CONST_VOLATILE_PTR_CAST(castee) ((void *)(LAC_ARCH_UINT)(castee))
+
+/* Type of ring */
+#define SAL_RING_TYPE_NONE 0
+#define SAL_RING_TYPE_A_SYM_HI 1
+#define SAL_RING_TYPE_A_SYM_LO 2
+#define SAL_RING_TYPE_A_ASYM 3
+#define SAL_RING_TYPE_B_SYM_HI 4
+#define SAL_RING_TYPE_B_SYM_LO 5
+#define SAL_RING_TYPE_B_ASYM 6
+#define SAL_RING_TYPE_DC 7
+#define SAL_RING_TYPE_ADMIN 8
+#define SAL_RING_TYPE_TRNG 9
+
+/* Maps Ring Service to generic service type */
+static inline icp_adf_ringInfoService_t
+lac_getRingType(int type)
+{
+	switch (type) {
+	case SAL_RING_TYPE_NONE:
+		return ICP_ADF_RING_SERVICE_0;
+	case SAL_RING_TYPE_A_SYM_HI:
+		return ICP_ADF_RING_SERVICE_1;
+	case SAL_RING_TYPE_A_SYM_LO:
+		return ICP_ADF_RING_SERVICE_2;
+	case SAL_RING_TYPE_A_ASYM:
+		return ICP_ADF_RING_SERVICE_3;
+	case SAL_RING_TYPE_B_SYM_HI:
+		return ICP_ADF_RING_SERVICE_4;
+	case SAL_RING_TYPE_B_SYM_LO:
+		return ICP_ADF_RING_SERVICE_5;
+	case SAL_RING_TYPE_B_ASYM:
+		return ICP_ADF_RING_SERVICE_6;
+	case SAL_RING_TYPE_DC:
+		return ICP_ADF_RING_SERVICE_7;
+	case SAL_RING_TYPE_ADMIN:
+		return ICP_ADF_RING_SERVICE_8;
+	case SAL_RING_TYPE_TRNG:
+		return ICP_ADF_RING_SERVICE_9;
+	default:
+		return ICP_ADF_RING_SERVICE_0;
+	}
+	return ICP_ADF_RING_SERVICE_0;
+}
+
+/* Maps generic service type to Ring Service type */
+static inline int
+lac_getServiceType(icp_adf_ringInfoService_t type)
+{
+	switch (type) {
+	case ICP_ADF_RING_SERVICE_0:
+		return SAL_RING_TYPE_NONE;
+	case ICP_ADF_RING_SERVICE_1:
+		return SAL_RING_TYPE_A_SYM_HI;
+	case ICP_ADF_RING_SERVICE_2:
+
return SAL_RING_TYPE_A_SYM_LO;
+	case ICP_ADF_RING_SERVICE_3:
+		return SAL_RING_TYPE_A_ASYM;
+	case ICP_ADF_RING_SERVICE_4:
+		return SAL_RING_TYPE_B_SYM_HI;
+	case ICP_ADF_RING_SERVICE_5:
+		return SAL_RING_TYPE_B_SYM_LO;
+	case ICP_ADF_RING_SERVICE_6:
+		return SAL_RING_TYPE_B_ASYM;
+	case ICP_ADF_RING_SERVICE_7:
+		return SAL_RING_TYPE_DC;
+	case ICP_ADF_RING_SERVICE_8:
+		return SAL_RING_TYPE_ADMIN;
+	default:
+		return SAL_RING_TYPE_NONE;
+	}
+	return SAL_RING_TYPE_NONE;
+}
+
+#endif /* LAC_COMMON_H */
Index: sys/dev/qat/qat_api/common/include/lac_hooks.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/include/lac_hooks.h
@@ -0,0 +1,234 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+/**
+ *******************************************************************************
+ * @file lac_hooks.h
+ *
+ * @defgroup LacHooks Hooks
+ *
+ * @ingroup LacCommon
+ *
+ * Component Init/Shutdown functions.
These are:
+ * - an init function which is called during the initialisation sequence,
+ * - a shutdown function which is called by the overall shutdown function.
+ *
+ ******************************************************************************/
+
+#ifndef LAC_HOOKS_H
+#define LAC_HOOKS_H
+
+/*
+********************************************************************************
+* Include public/global header files
+********************************************************************************
+*/
+
+#include "cpa.h"
+
+/*
+********************************************************************************
+* Include private header files
+********************************************************************************
+*/
+
+/******************************************************************************/
+
+/**
+ *******************************************************************************
+ * @ingroup LacHooks
+ * This function initialises the Large Number (ModExp and ModInv) module
+ *
+ * @description
+ *      This function clears the Large Number statistics
+ *
+ * @param[in] instanceHandle
+ *
+ ******************************************************************************/
+CpaStatus LacLn_Init(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup LacHooks
+ * This function frees statistics array for Large Number module
+ *
+ * @description
+ *      This function frees statistics array for Large Number module
+ *
+ * @param[in] instanceHandle
+ *
+ ******************************************************************************/
+void LacLn_StatsFree(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup LacHooks
+ * This function initialises the Prime module
+ *
+ * @description
+ *      This function clears the Prime statistics
+ *
+ * @param[in] instanceHandle
+ *
+
******************************************************************************/
+CpaStatus LacPrime_Init(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup LacHooks
+ * This function frees the Prime module statistics array
+ *
+ * @description
+ *      This function frees the Prime module statistics array
+ *
+ * @param[in] instanceHandle
+ *
+ ******************************************************************************/
+void LacPrime_StatsFree(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup LacHooks
+ * This function initialises the DSA module
+ *
+ * @param[in] instanceHandle
+ *
+ * @description
+ *      This function clears the DSA statistics
+ *
+ ******************************************************************************/
+CpaStatus LacDsa_Init(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup LacHooks
+ * This function frees the DSA module statistics array
+ *
+ * @param[in] instanceHandle
+ *
+ * @description
+ *      This function frees the DSA statistics array
+ *
+ ******************************************************************************/
+void LacDsa_StatsFree(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup LacHooks
+ * This function initialises the Diffie-Hellman module
+ *
+ * @description
+ *      This function initialises the Diffie-Hellman statistics
+ *
+ * @param[in] instanceHandle
+ *
+ ******************************************************************************/
+CpaStatus LacDh_Init(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup LacHooks
+ * This function frees the Diffie-Hellman module statistics
+ *
+ *
@description
+ *      This function frees the Diffie-Hellman module statistics
+ *
+ * @param[in] instanceHandle
+ *
+ ******************************************************************************/
+void LacDh_StatsFree(CpaInstanceHandle instanceHandle);
+
+/**
+ ******************************************************************************
+ * @ingroup LacSymKey
+ * This function registers the callback handlers to SSL/TLS and MGF,
+ * allocates resources that are needed for the component and clears
+ * the stats.
+ *
+ * @param[in] instanceHandle
+ *
+ * @retval CPA_STATUS_SUCCESS   Status Success
+ * @retval CPA_STATUS_FAIL      General failure
+ * @retval CPA_STATUS_RESOURCE  Resource allocation failure
+ *
+ *****************************************************************************/
+CpaStatus LacSymKey_Init(CpaInstanceHandle instanceHandle);
+
+/**
+ ******************************************************************************
+ * @ingroup LacSymKey
+ * This function frees up resources obtained by the key gen component
+ * and clears the stats
+ *
+ * @param[in] instanceHandle
+ *
+ * @retval CPA_STATUS_SUCCESS  Status Success
+ *
+ *****************************************************************************/
+CpaStatus LacSymKey_Shutdown(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup LacHooks
+ * This function initialises the RSA module
+ *
+ * @description
+ *      This function clears the RSA statistics
+ *
+ * @param[in] instanceHandle
+ *
+ ******************************************************************************/
+CpaStatus LacRsa_Init(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup LacHooks
+ * This function frees the RSA module statistics
+ *
+ * @description
+ *      This function frees the RSA module statistics
+ *
+ * @param[in] instanceHandle
+ *
+
******************************************************************************/
+void LacRsa_StatsFree(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup LacHooks
+ * This function initialises the EC module
+ *
+ * @description
+ *      This function clears the EC statistics
+ *
+ * @param[in] instanceHandle
+ *
+ ******************************************************************************/
+CpaStatus LacEc_Init(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup LacHooks
+ * This function frees the EC module stats array
+ *
+ * @description
+ *      This function frees the EC module stats array
+ *
+ * @param[in] instanceHandle
+ *
+ ******************************************************************************/
+void LacEc_StatsFree(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup LacSymNrbg
+ * Initialise the NRBG module
+ *
+ * @description
+ *      This function registers NRBG callback handlers.
+ *
+ *****************************************************************************/
+void LacSymNrbg_Init(void);
+
+#endif /* LAC_HOOKS_H */
Index: sys/dev/qat/qat_api/common/include/lac_list.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/include/lac_list.h
@@ -0,0 +1,137 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+/**
+ ***************************************************************************
+ * @file lac_list.h
+ *
+ * @defgroup SalList
+ *
+ * @ingroup SalCtrl
+ *
+ * List structure and list functions.
+ *
+ ***************************************************************************/
+
+#ifndef LAC_LIST_H
+#define LAC_LIST_H
+
+/**
+ *****************************************************************************
+ * @ingroup SalList
+ *
+ * @description
+ *      List structure
+ *
+ *****************************************************************************/
+typedef struct sal_list_s {
+
+	struct sal_list_s *next;
+	void *pObj;
+
+} sal_list_t;
+
+/**
+ *******************************************************************************
+ * @ingroup SalList
+ * Add a structure to tail of a list.
+ *
+ * @description
+ *      Adds pObj to the tail of list (if it exists). Allocates and sets a
+ *      new sal_list_t structure.
+ *
+ * @param[in] list      Pointer to the head pointer of the list.
+ *                      Can be NULL if no elements yet in list.
+ * @param[in/out] tail  Pointer to tail pointer of the list.
+ *                      Can be NULL if no elements yet in list.
+ *                      Is updated by the function to point to the tail
+ *                      of the list if pObj has been successfully added.
+ * @param[in] pObj      Pointer to structure to add to tail of
+ *                      the list.
+ * @retval status
+ *
+ *****************************************************************************/
+CpaStatus SalList_add(sal_list_t **list, sal_list_t **tail, void *pObj);
+
+/**
+ *******************************************************************************
+ * @ingroup SalList
+ * Delete an element from the list.
+ *
+ * @description
+ *      Delete an element from the list.
+ *
+ * @param[in/out] head_list  Pointer to the head pointer of the list.
+ *                           Can be NULL if no elements yet in list.
+ *                           Is updated by the function to point to
+ *                           list->next if head_list is list.
+ * @param[in/out] pre_list   Pointer to the previous pointer of the list.
+ *                           Can be NULL if no elements yet in list.
+ *                           (*pre_list)->next is updated by the function
+ *                           to point to list->next
+ * @param[in] list           Pointer to list.
+ *
+ *****************************************************************************/
+void
+SalList_del(sal_list_t **head_list, sal_list_t **pre_list, sal_list_t *list);
+
+/**
+ *******************************************************************************
+ * @ingroup SalList
+ * Returns pObj element in list structure.
+ *
+ * @description
+ *      Returns pObj associated with sal_list_t structure.
+ *
+ * @param[in] list  Pointer to list element.
+ * @retval void*    pObj member of list structure.
+ *
+ *****************************************************************************/
+void *SalList_getObject(sal_list_t *list);
+
+/**
+ *******************************************************************************
+ * @ingroup SalList
+ * Set pObj to be NULL in the list.
+ *
+ * @description
+ *      Set pObj of an element in the list to be NULL.
+ *
+ * @param[in] list  Pointer to list element.
+ *
+ *****************************************************************************/
+void SalList_delObject(sal_list_t **list);
+
+/**
+ *******************************************************************************
+ * @ingroup SalList
+ * Returns next element in list structure.
+ *
+ * @description
+ *      Returns next associated with sal_list_t structure.
+ *
+ * @param[in] list  Pointer to list element.
+ * @retval void*    next member of list structure.
+ *
+ *****************************************************************************/
+void *SalList_next(sal_list_t *);
+
+/**
+ *******************************************************************************
+ * @ingroup SalList
+ * Frees memory associated with list structure.
+ *
+ * @description
+ *      Frees memory associated with list structure and the Obj pointed to by
+ *      the list.
+ *
+ * @param[in] list  Pointer to list.
+ *
+ *****************************************************************************/
+void SalList_free(sal_list_t **);
+
+#endif
Index: sys/dev/qat/qat_api/common/include/lac_log.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/include/lac_log.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+
+/**
+ ***************************************************************************
+ * @file lac_log.h
+ *
+ * @defgroup LacLog Log
+ *
+ * @ingroup LacCommon
+ *
+ * Logging Macros. These macros also log the function name they are called in.
+ *
+ ***************************************************************************/
+
+/***************************************************************************/
+
+#ifndef LAC_LOG_H
+#define LAC_LOG_H
+
+/***************************************************************************
+ * Include public/global header files
+ ***************************************************************************/
+#include "cpa.h"
+#include "lac_common.h"
+#include "icp_accel_devices.h"
+
+#define LAC_INVALID_PARAM_LOG_(log, args...)
\
+	QAT_UTILS_LOG("[error] %s() - : Invalid API Param - " log "\n", \
+		      __func__, \
+		      ##args)
+
+#define LAC_INVALID_PARAM_LOG(log) LAC_INVALID_PARAM_LOG_(log)
+
+#define LAC_INVALID_PARAM_LOG1(log, param1) LAC_INVALID_PARAM_LOG_(log, param1)
+
+#define LAC_INVALID_PARAM_LOG2(log, param1, param2) \
+	LAC_INVALID_PARAM_LOG_(log, param1, param2)
+
+#define LAC_UNSUPPORTED_PARAM_LOG(log) \
+	QAT_UTILS_LOG("%s() - : Unsupported API Param - " log "\n", __func__)
+
+#define LAC_LOG_ERROR(log) QAT_UTILS_LOG("%s() - : " log "\n", __func__)
+
+#endif /* LAC_LOG_H */
Index: sys/dev/qat/qat_api/common/include/lac_mem.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/include/lac_mem.h
@@ -0,0 +1,577 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+/**
+ ***************************************************************************
+ * @file lac_mem.h
+ *
+ * @defgroup LacMem Memory
+ *
+ * @ingroup LacCommon
+ *
+ * Memory re-sizing functions and memory accessor macros.
+ *
+ ***************************************************************************/
+
+#ifndef LAC_MEM_H
+#define LAC_MEM_H
+
+/***************************************************************************
+ * Include header files
+ ***************************************************************************/
+#include "cpa.h"
+#include "qat_utils.h"
+#include "lac_common.h"
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ * These macros byte-swap variables between IA (host) and QAT byte order.
+ *
+ * @param[out] x  The variable to be swapped.
+ *
+ * @retval none
+ ******************************************************************************/
+#if (BYTE_ORDER == LITTLE_ENDIAN)
+#define LAC_MEM_WR_64(x) QAT_UTILS_HOST_TO_NW_64(x)
+#define LAC_MEM_WR_32(x) QAT_UTILS_HOST_TO_NW_32(x)
+#define LAC_MEM_WR_16(x) QAT_UTILS_HOST_TO_NW_16(x)
+#define LAC_MEM_RD_64(x) QAT_UTILS_NW_TO_HOST_64(x)
+#define LAC_MEM_RD_32(x) QAT_UTILS_NW_TO_HOST_32(x)
+#define LAC_MEM_RD_16(x) QAT_UTILS_NW_TO_HOST_16(x)
+#else
+#define LAC_MEM_WR_64(x) (x)
+#define LAC_MEM_WR_32(x) (x)
+#define LAC_MEM_WR_16(x) (x)
+#define LAC_MEM_RD_64(x) (x)
+#define LAC_MEM_RD_32(x) (x)
+#define LAC_MEM_RD_16(x) (x)
+#endif
+
+/*
+*******************************************************************************
+* Shared Memory Macros (memory accessible by Acceleration Engines, e.g. QAT)
+*******************************************************************************
+*/
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ * This macro can be used to write to a variable that will be read by the
+ * QAT. The macro will automatically detect the size of the target variable and
+ * will select the correct method for performing the write. The data is cast to
+ * the type of the field that it will be written to.
+ * This macro swaps data if required.
+ *
+ * @param[out] var  The variable to be written. Can be a field of a struct.
+ *
+ * @param[in] data  The value to be written. Will be cast to the size of the
+ *                  target.
+ *
+ * @retval none
+ ******************************************************************************/
+#define LAC_MEM_SHARED_WRITE_SWAP(var, data) \
+	do { \
+		switch (sizeof(var)) { \
+		case 1: \
+			(var) = (Cpa8U)(data); \
+			break; \
+		case 2: \
+			(var) = (Cpa16U)(data); \
+			(var) = LAC_MEM_WR_16(((Cpa16U)var)); \
+			break; \
+		case 4: \
+			(var) = (Cpa32U)(data); \
+			(var) = LAC_MEM_WR_32(((Cpa32U)var)); \
+			break; \
+		case 8: \
+			(var) = (Cpa64U)(data); \
+			(var) = LAC_MEM_WR_64(((Cpa64U)var)); \
+			break; \
+		default: \
+			break; \
+		} \
+	} while (0)
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ * This macro can be used to read a variable that was written by the QAT.
+ * The macro will automatically detect the size of the data to be read and will
+ * select the correct method for performing the read. The value read from the
+ * variable is cast to the size of the data type it will be stored in.
+ * This macro swaps data if required.
+ *
+ * @param[in] var    The variable to be read. Can be a field of a struct.
+ *
+ * @param[out] data  The variable to hold the result of the read. Data read
+ *                   will be cast to the size of this variable
+ *
+ * @retval none
+ ******************************************************************************/
+#define LAC_MEM_SHARED_READ_SWAP(var, data) \
+	do { \
+		switch (sizeof(var)) { \
+		case 1: \
+			(data) = (var); \
+			break; \
+		case 2: \
+			(data) = LAC_MEM_RD_16(((Cpa16U)var)); \
+			break; \
+		case 4: \
+			(data) = LAC_MEM_RD_32(((Cpa32U)var)); \
+			break; \
+		case 8: \
+			(data) = LAC_MEM_RD_64(((Cpa64U)var)); \
+			break; \
+		default: \
+			break; \
+		} \
+	} while (0)
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ * This macro can be used to write a pointer to a QAT request.
The fields
+ * for pointers in the QAT request and response messages are always 64 bits
+ *
+ * @param[out] var  The variable to be written to. Can be a field of a struct.
+ *
+ * @param[in] data  The value to be written. Will be cast to size of target
+ *                  variable
+ *
+ * @retval none
+ ******************************************************************************/
+/* cast pointer to a scalar of the same size as the native pointer */
+#define LAC_MEM_SHARED_WRITE_FROM_PTR(var, data) \
+	((var) = (Cpa64U)(LAC_ARCH_UINT)(data))
+
+/* Note: any changes to this macro implementation should also be made to the
+ * similar LAC_MEM_CAST_PTR_TO_UINT64 macro
+ */
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ * This macro can be used to read a pointer from a QAT response. The fields
+ * for pointers in the QAT request and response messages are always 64 bits
+ *
+ * @param[in] var    The variable to be read. Can be a field of a struct.
+ *
+ * @param[out] data  The variable to hold the result of the read. Data read
+ *                   will be cast to the size of this variable
+ *
+ * @retval none
+ ******************************************************************************/
+/* Cast back to native pointer */
+#define LAC_MEM_SHARED_READ_TO_PTR(var, data) \
+	((data) = (void *)(LAC_ARCH_UINT)(var))
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ * This macro safely casts a pointer to a Cpa64U type.
+ *
+ * @param[in] pPtr  The pointer to be cast.
+ *
+ * @retval pointer cast to Cpa64U
+ ******************************************************************************/
+#define LAC_MEM_CAST_PTR_TO_UINT64(pPtr) ((Cpa64U)(pPtr))
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ * This macro uses a QAT Utils macro to convert from a virtual address to
+ * a physical address for internally allocated memory.
+ *
+ * @param[in] pVirtAddr  The address to be converted.
+ *
+ * @retval The converted physical address
+ ******************************************************************************/
+#define LAC_OS_VIRT_TO_PHYS_INTERNAL(pVirtAddr) \
+	(QAT_UTILS_MMU_VIRT_TO_PHYS(pVirtAddr))
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ * This macro should be called on all externally allocated memory; it calls
+ * the SalMem_virt2PhysExternal function, which allows a user
+ * to set the virt2phys function used by an instance.
+ * Defaults to virt to phys for kernel.
+ *
+ * @param[in] genService  Generic sal_service_t structure.
+ * @param[in] pVirtAddr   The address to be converted.
+ *
+ * @retval The converted physical address
+ ******************************************************************************/
+#define LAC_OS_VIRT_TO_PHYS_EXTERNAL(genService, pVirtAddr) \
+	((SalMem_virt2PhysExternal(pVirtAddr, &(genService))))
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ * This macro can be used to write an address variable that will be read by
+ * the QAT. The macro will perform the necessary virt2phys address translation.
+ * This macro is only to be called on memory allocated internally by the driver.
+ *
+ * @param[out] var  The address variable to write. Can be a field of a struct.
+ *
+ * @param[in] pPtr  The pointer variable containing the address to be
+ *                  written
+ *
+ * @retval none
+ ******************************************************************************/
+#define LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_INTERNAL(var, pPtr) \
+	do { \
+		Cpa64U physAddr = 0; \
+		physAddr = LAC_MEM_CAST_PTR_TO_UINT64( \
+		    LAC_OS_VIRT_TO_PHYS_INTERNAL(pPtr)); \
+		var = physAddr; \
+	} while (0)
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ * This macro can be used to write an address variable that will be read by
+ * the QAT. The macro will perform the necessary virt2phys address translation.
+ * This macro is to be used on memory allocated externally by the user. It calls
+ * the user supplied virt2phys address translation.
+ *
+ * @param[in] pService  The pointer to the service
+ * @param[out] var      The address variable to write. Can be a field of a
+ *                      struct
+ * @param[in] pPtr      The pointer variable containing the address to be
+ *                      written
+ *
+ * @retval none
+ ******************************************************************************/
+#define LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL(pService, var, pPtr) \
+	do { \
+		Cpa64U physAddr = 0; \
+		physAddr = LAC_MEM_CAST_PTR_TO_UINT64( \
+		    LAC_OS_VIRT_TO_PHYS_EXTERNAL(pService, pPtr)); \
+		var = physAddr; \
+	} while (0)
+
+/*
+*******************************************************************************
+* OS Memory Macros
+*******************************************************************************
+*/
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ * This function and associated macro allocate the memory for the given
+ * size and store the address of the memory allocated in the pointer.
+ *
+ * @param[out] ppMemAddr  address of pointer where address will be stored
+ * @param[in] sizeBytes   the size of the memory to be allocated.
+ *
+ * @retval CPA_STATUS_RESOURCE  Macro failed to allocate Memory
+ * @retval CPA_STATUS_SUCCESS   Macro executed successfully
+ *
+ ******************************************************************************/
+static __inline CpaStatus
+LacMem_OsMemAlloc(void **ppMemAddr, Cpa32U sizeBytes)
+{
+	*ppMemAddr = malloc(sizeBytes, M_QAT, M_WAITOK);
+
+	return CPA_STATUS_SUCCESS;
+}
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ *      This function and associated macro allocate contiguous memory of the
+ *      given size and store the address of the allocated memory in the
+ *      pointer.
+ *
+ * @param[out] ppMemAddr    address of pointer where address will be stored
+ * @param[in] sizeBytes     the size of the memory to be allocated.
+ * @param[in] alignmentBytes the alignment
+ * @param[in] node          node to allocate from
+ *
+ * @retval CPA_STATUS_RESOURCE  Macro failed to allocate Memory
+ * @retval CPA_STATUS_SUCCESS   Macro executed successfully
+ *
+ ******************************************************************************/
+static __inline CpaStatus
+LacMem_OsContigAlignMemAlloc(void **ppMemAddr,
+			     Cpa32U sizeBytes,
+			     Cpa32U alignmentBytes,
+			     Cpa32U node)
+{
+	if ((alignmentBytes & (alignmentBytes - 1)) != 0) {
+		/* not a power of 2 */
+		*ppMemAddr = NULL;
+		QAT_UTILS_LOG("alignmentBytes MUST be a power of 2\n");
+		return CPA_STATUS_INVALID_PARAM;
+	}
+
+	*ppMemAddr =
+	    qatUtilsMemAllocContiguousNUMA(sizeBytes, node, alignmentBytes);
+
+	if (NULL == *ppMemAddr) {
+		return CPA_STATUS_RESOURCE;
+	}
+
+	return CPA_STATUS_SUCCESS;
+}
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ *      Macro wrapping the malloc() function
+ *
+ ******************************************************************************/
+#define LAC_OS_MALLOC(sizeBytes) malloc(sizeBytes, M_QAT, M_WAITOK)
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ *      Macro wrapping the LacMem_OsContigAlignMemAlloc function
+ *
+ ******************************************************************************/
+#define LAC_OS_CAMALLOC(ppMemAddr, sizeBytes, alignmentBytes, node)            \
+	LacMem_OsContigAlignMemAlloc((void *)ppMemAddr,                        \
+				     sizeBytes,                                \
+				     alignmentBytes,                           \
+				     node)
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ *      Macro declaring a static const unsigned int constant which provides
+ *      a compile-time computation of the index of the highest bit set in
+ *      the sizeof(TYPE) value. The constant is placed by the linker in the
+ *      .rodata section by default.
+ *
+ *      E.g. the statement LAC_DECLARE_HIGHEST_BIT_OF(lac_mem_blk_t)
+ *      results in the following entry:
+ *      static const unsigned int highest_bit_of_lac_mem_blk_t = 3
+ *
+ *      CAUTION!!
+ *      The macro is prepared only for type names NOT containing ANY
+ *      special characters. Types such as, amongst others:
+ *      - void *
+ *      - unsigned long
+ *      - unsigned int
+ *      are strictly forbidden and will result in a compilation error.
+ *      Use typedef to provide a one-word type name for the macro's usage.
+ ******************************************************************************/
+#define LAC_DECLARE_HIGHEST_BIT_OF(TYPE)                                       \
+	static const unsigned int highest_bit_of_##TYPE =                      \
+	    (sizeof(TYPE) & 0x80000000 ? 31 :                                  \
+	    (sizeof(TYPE) & 0x40000000 ? 30 :                                  \
+	    (sizeof(TYPE) & 0x20000000 ? 29 :                                  \
+	    (sizeof(TYPE) & 0x10000000 ? 28 :                                  \
+	    (sizeof(TYPE) & 0x08000000 ? 27 :                                  \
+	    (sizeof(TYPE) & 0x04000000 ? 26 :                                  \
+	    (sizeof(TYPE) & 0x02000000 ? 25 :                                  \
+	    (sizeof(TYPE) & 0x01000000 ? 24 :                                  \
+	    (sizeof(TYPE) & 0x00800000 ? 23 :                                  \
+	    (sizeof(TYPE) & 0x00400000 ? 22 :                                  \
+	    (sizeof(TYPE) & 0x00200000 ? 21 :                                  \
+	    (sizeof(TYPE) & 0x00100000 ? 20 :                                  \
+	    (sizeof(TYPE) & 0x00080000 ? 19 :                                  \
+	    (sizeof(TYPE) & 0x00040000 ? 18 :                                  \
+	    (sizeof(TYPE) & 0x00020000 ? 17 :                                  \
+	    (sizeof(TYPE) & 0x00010000 ? 16 :                                  \
+	    (sizeof(TYPE) & 0x00008000 ? 15 :                                  \
+	    (sizeof(TYPE) & 0x00004000 ? 14 :                                  \
+	    (sizeof(TYPE) & 0x00002000 ? 13 :                                  \
+	    (sizeof(TYPE) & 0x00001000 ? 12 :                                  \
+	    (sizeof(TYPE) & 0x00000800 ? 11 :                                  \
+	    (sizeof(TYPE) & 0x00000400 ? 10 :                                  \
+	    (sizeof(TYPE) & 0x00000200 ? 9 :                                   \
+	    (sizeof(TYPE) & 0x00000100 ? 8 :                                   \
+	    (sizeof(TYPE) & 0x00000080 ? 7 :                                   \
+	    (sizeof(TYPE) & 0x00000040 ? 6 :                                   \
+	    (sizeof(TYPE) & 0x00000020 ? 5 :                                   \
+	    (sizeof(TYPE) & 0x00000010 ? 4 :                                   \
+	    (sizeof(TYPE) & 0x00000008 ? 3 :                                   \
+	    (sizeof(TYPE) & 0x00000004 ? 2 :                                   \
+	    (sizeof(TYPE) & 0x00000002 ? 1 :                                   \
+	    (sizeof(TYPE) & 0x00000001 ? 0 : 32))))))))))))))))))))))))))))))))
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ *      This function and associated macro free the memory at the given
+ *      address and reset the pointer to NULL
+ *
+ * @param[out] ppMemAddr    address of pointer where mem address is stored.
+ *                          If pointer is NULL, the function will exit silently
+ *
+ * @retval void
+ *
+ ******************************************************************************/
+static __inline void
+LacMem_OsMemFree(void **ppMemAddr)
+{
+	free(*ppMemAddr, M_QAT);
+	*ppMemAddr = NULL;
+}
+
+/**
+ *******************************************************************************
+ * @ingroup LacMem
+ *      This function and associated macro free the contiguous memory at the
+ *      given address and reset the pointer to NULL
+ *
+ * @param[out] ppMemAddr    address of pointer where mem address is stored.
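Two bit tricks above are worth calling out: LacMem_OsContigAlignMemAlloc rejects an alignment with `x & (x - 1)`, which is non-zero exactly when more than one bit is set, and LAC_DECLARE_HIGHEST_BIT_OF is a compile-time unrolled scan for the highest set bit of `sizeof(TYPE)`. A minimal user-space sketch of both (function names are illustrative, not part of the driver):

```c
#include <assert.h>

/* A power of two has exactly one bit set, so clearing the lowest
 * set bit (x & (x - 1)) must leave zero. */
int is_power_of_two(unsigned int x)
{
	return x != 0 && (x & (x - 1)) == 0;
}

/* Runtime equivalent of LAC_DECLARE_HIGHEST_BIT_OF: the index of the
 * highest set bit, or 32 when no bit is set (the macro's fall-through
 * value). */
unsigned int highest_bit_of(unsigned int x)
{
	unsigned int bit = 32;
	unsigned int i;

	for (i = 0; i < 32; i++)
		if (x & (1u << i))
			bit = i;
	return bit;
}
```

For an 8-byte type this yields 3, matching the highest_bit_of_lac_mem_blk_t example given in the macro's documentation.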
+ *                          If pointer is NULL, the function will exit silently
+ *
+ * @retval void
+ *
+ ******************************************************************************/
+static __inline void
+LacMem_OsContigAlignMemFree(void **ppMemAddr)
+{
+	if (NULL != *ppMemAddr) {
+		qatUtilsMemFreeNUMA(*ppMemAddr);
+		*ppMemAddr = NULL;
+	}
+}
+
+#define LAC_OS_FREE(pMemAddr) LacMem_OsMemFree((void *)&pMemAddr)
+
+#define LAC_OS_CAFREE(pMemAddr) LacMem_OsContigAlignMemFree((void *)&pMemAddr)
+
+/**
+*******************************************************************************
+ * @ingroup LacMem
+ *      Copies user data to a working buffer of the correct size (required by
+ *      PKE services)
+ *
+ * @description
+ *      This function produces a correctly sized working buffer from the input
+ *      user buffer. If the original buffer is too small a new buffer shall
+ *      be allocated and memory is copied (left padded with zeros to the
+ *      required length).
+ *
+ *      The returned working buffer is guaranteed to be of the desired size
+ *      for QAT.
+ *
+ *      When this function is called pInternalMem describes the user_buffer
+ *      and when the function returns pInternalMem describes the working
+ *      buffer. This is because pInternalMem describes the memory that will
+ *      be sent to QAT.
+ *
+ *      The caller must keep the original buffer pointer. The allocated
+ *      buffer is freed (as necessary) using icp_LacBufferRestore().
+ *
+ * @param[in] instanceHandle Handle to crypto instance so pke_resize mem pool
+ *                           can be located
+ * @param[in] pUserBuffer    Pointer to the user buffer
+ * @param[in] userLen        length of the user buffer
+ * @param[in] workingLen     length of the working (correctly sized) buffer
+ * @param[in/out] pInternalMem pointer to boolean. If TRUE on input then
+ *                           user_buffer is internally allocated memory,
+ *                           if FALSE then it is externally allocated.
+ *                           This value gets updated by the function
+ *                           if the returned pointer references internally
+ *                           allocated memory.
+ *
+ * @return a pointer to the working (correctly sized) buffer or NULL if the
+ *      allocation failed
+ *
+ * @note the working length cannot be smaller than the user buffer length
+ *
+ * @warning the working buffer may be the same or different from the original
+ *      user buffer; the caller should make no assumptions in this regard
+ *
+ * @see icp_LacBufferRestore()
+ *
+ ******************************************************************************/
+Cpa8U *icp_LacBufferResize(CpaInstanceHandle instanceHandle,
+			   Cpa8U *pUserBuffer,
+			   Cpa32U userLen,
+			   Cpa32U workingLen,
+			   CpaBoolean *pInternalMemory);
+
+/**
+*******************************************************************************
+ * @ingroup LacMem
+ *      Restores a user buffer
+ *
+ * @description
+ *      This function restores a user buffer and releases its
+ *      corresponding working buffer. The working buffer, assumed to be
+ *      previously obtained using icp_LacBufferResize(), is freed as
+ *      necessary.
+ *
+ *      The contents are copied in the process.
+ *
+ * @note the working length cannot be smaller than the user buffer length
+ *
+ * @param[out] pUserBuffer   Pointer to the user buffer
+ * @param[in] userLen        length of the user buffer
+ * @param[in] pWorkingBuffer Pointer to the working buffer
+ * @param[in] workingLen     working buffer length
+ * @param[in] copyBuf        if set CPA_TRUE the data in the workingBuffer
+ *                           will be copied to the userBuffer before the
+ *                           workingBuffer is freed.
+ *
+ * @return the status of the operation
+ *
+ * @see icp_LacBufferResize()
+ *
+ ******************************************************************************/
+CpaStatus icp_LacBufferRestore(Cpa8U *pUserBuffer,
+			       Cpa32U userLen,
+			       Cpa8U *pWorkingBuffer,
+			       Cpa32U workingLen,
+			       CpaBoolean copyBuf);
+
+/**
+*******************************************************************************
+ * @ingroup LacMem
+ *      Uses an instance specific user supplied virt2phys function to convert
+ *      a virtual address to a physical address.
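The resize/restore contract documented above (grow into a zero-left-padded working buffer, copy the significant tail back on restore) can be sketched with plain malloc/free standing in for the driver's pool-backed allocation; the helper names below are illustrative only:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Grow a user buffer to workingLen, left padded with zeros, the way
 * icp_LacBufferResize() is documented to behave for PKE operands. */
unsigned char *
buffer_resize(const unsigned char *user, size_t userLen, size_t workingLen)
{
	unsigned char *work;

	if (workingLen < userLen)	/* working buffer may not shrink */
		return NULL;
	work = calloc(1, workingLen);	/* calloc zero fill is the left pad */
	if (work != NULL)
		memcpy(work + (workingLen - userLen), user, userLen);
	return work;
}

/* Copy the significant tail back to the user buffer and release the
 * working buffer, mirroring icp_LacBufferRestore() with copyBuf set. */
void
buffer_restore(unsigned char *user, size_t userLen,
    unsigned char *work, size_t workingLen)
{
	memcpy(user, work + (workingLen - userLen), userLen);
	free(work);
}
```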
+ *
+ * @description
+ *      Uses an instance specific user supplied virt2phys function to convert
+ *      a virtual address to a physical address. A client of QA API can set
+ *      the virt2phys function for an instance by using the
+ *      cpaXxSetAddressTranslation() function. If the client does not set the
+ *      virt2phys function and the instance is in kernel space then the OS
+ *      specific virt2phys function will be used. In user space the virt2phys
+ *      function MUST be set by the user.
+ *
+ * @param[in] pVirtAddr     the virtual addr to be converted
+ * @param[in] pServiceGen   Pointer to the sal_service_t structure
+ *                          so the client supplied virt2phys function can be
+ *                          called.
+ *
+ * @return the physical address
+ *
+ ******************************************************************************/
+CpaPhysicalAddr SalMem_virt2PhysExternal(void *pVirtAddr, void *pServiceGen);
+
+#endif /* LAC_MEM_H */
Index: sys/dev/qat/qat_api/common/include/lac_mem_pools.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/include/lac_mem_pools.h
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+/**
+ ***************************************************************************
+ * @file lac_mem_pools.h
+ *
+ * @defgroup LacMemPool Memory Pool Mgmt
+ *
+ * @ingroup LacCommon
+ *
+ * Memory Pool creation and mgmt functions
+ *
+ * @lld_start
+ * @lld_overview
+ *      This component is designed as a set of utility functions for the
+ *      creation of pre-allocated memory pools. Each pool will be created
+ *      using OS memory with a user specified number of elements, element
+ *      size and element alignment (alignment is at byte granularity).
+ * @lld_dependencies
+ *      These utilities rely on QAT Utils for locking mechanisms and memory
+ *      allocation
+ * @lld_initialisation
+ *      Pool creation needs to be done by each component. There is no
+ *      specific initialisation required for this feature.
+ * @lld_module_algorithms
+ *      The following is a diagram of how the memory is laid out for each
+ *      block in a pool. Each element must be aligned on the boundary
+ *      requested for in the create call. In order to hide the management of
+ *      the pools from the user, the memory block data is hidden prior to the
+ *      data pointer. This way it can be accessed easily on a free call with
+ *      pointer arithmetic. The padding at the start is simply there for
+ *      alignment and is unused in the pools.
+ *
+ * -------------------------------------------------------
+ *
+ * |Padding |lac_mem_blk_t | client memory |
+ *
+ * @lld_process_context
+ * @lld_end
+ ***************************************************************************/
+
+/**
+ *******************************************************************************
+ * @ingroup LacMemPool
+ *
+ *
+ ******************************************************************************/
+
+/***************************************************************************/
+
+#ifndef LAC_MEM_POOLS_H
+#define LAC_MEM_POOLS_H
+
+#include "cpa.h"
+#include "lac_common.h"
+struct lac_mem_pool_hdr_s;
+/**< @ingroup LacMemPool
+ * This is a forward declaration of the structure type lac_mem_pool_hdr_s */
+
+typedef LAC_ARCH_UINT lac_memory_pool_id_t;
+/**< @ingroup LacMemPool
+ * Pool ID type to be used by all clients */
+
+/**< @ingroup LacMemPool
+ * This structure is used to link each memory block in the created pool
+ * together and contains the necessary information for deletion of the block
+ */
+typedef struct lac_mem_blk_s {
+	CpaPhysicalAddr physDataPtr;
+	/**< physical address of data pointer for client */
+	void *pMemAllocPtr;
+	/**< virtual address of the memory block actually allocated */
+	CpaBoolean isInUse;
+	/**< indicates if the pool item is in use */
+	struct lac_mem_blk_s *pNext;
+	/**< link to next block in the pool */
+	struct lac_mem_pool_hdr_s *pPoolID;
+	/**< identifier of the pool that this block was allocated from */
+} lac_mem_blk_t;
+
+#define LAC_MEM_POOL_VIRT_TO_PHYS(pVirtAddr)                                   \
+	(((lac_mem_blk_t *)((LAC_ARCH_UINT)pVirtAddr - sizeof(lac_mem_blk_t))) \
+	     ->physDataPtr)
+/**< @ingroup LacMemPool
+ * macro for retrieving the physical address of the memory block. */
+
+#define LAC_MEM_POOL_INIT_POOL_ID 0
+/**< @ingroup LacMemPool
+ * macro which defines the valid initialisation value for a pool ID. This is
+ * used as a level of abstraction for the user of this interface */
+
+/**
+ *******************************************************************************
+ * @ingroup LacMemPool
+ * This function creates a memory pool containing a specified number of
+ * elements of specific size and byte alignment. This function is not
+ * reentrant or thread safe and is only intended to be called during
+ * initialisation.
+ *
+ * @blocking
+ *      Yes
+ * @reentrant
+ *      No
+ * @threadSafe
+ *      No
+ * @param[out] poolID              on successful creation of a pool this will
+ *                                 be the ID used for all subsequent accesses
+ * @param[in] poolName             The name of the memory pool
+ * @param[in] numElementsInPool    number of elements to provision in the pool
+ * @param[in] blkSizeInBytes       size in bytes of each element in the pool
+ * @param[in] blkAlignmentInBytes  byte alignment required for each element
+ * @param[in] trackMemory          track the memory in use by this pool
+ * @param[in] node                 node to allocate from
+ *
+ * @retval CPA_STATUS_INVALID_PARAM  invalid input parameter
+ * @retval CPA_STATUS_RESOURCE       error in provisioning resources
+ * @retval CPA_STATUS_SUCCESS        function executed successfully
+ *
+ ******************************************************************************/
+CpaStatus Lac_MemPoolCreate(lac_memory_pool_id_t *poolID,
+			    char *poolName,
+			    unsigned int numElementsInPool,
+			    unsigned int blkSizeInBytes,
+			    unsigned int blkAlignmentInBytes,
+			    CpaBoolean trackMemory,
+			    Cpa32U node);
+
+/**
+ *******************************************************************************
+ * @ingroup LacMemPool
+ * This function will destroy the memory pool in its current state. All memory
+ * blocks which have been returned to the memory pool will be de-allocated and
+ * the pool identifier will be freed and assigned to NULL. It is the
+ * responsibility of the pool creators to return all memory before a destroy
+ * or memory will be leaked.
+ *
+ * @blocking
+ *      Yes
+ * @reentrant
+ *      No
+ * @threadSafe
+ *      No
+ *
+ * @param[in] poolID    Pointer to the memory pool to destroy
+ *
+ ******************************************************************************/
+void Lac_MemPoolDestroy(lac_memory_pool_id_t poolID);
+
+/**
+ *******************************************************************************
+ * @ingroup LacMemPool
+ * This function allocates a block from the pool which has been previously
+ * created. It does not check the validity of the pool Id prior to accessing
+ * the pool. It is up to the calling code to ensure the value is correct.
+ *
+ * @blocking
+ *      Yes
+ * @reentrant
+ *      Yes
+ * @threadSafe
+ *      Yes
+ * @param[in] poolID    ID of the pool to allocate memory from
+ *
+ * @retval pointer to the memory which has been allocated from the pool
+ *
+ ******************************************************************************/
+void *Lac_MemPoolEntryAlloc(lac_memory_pool_id_t poolID);
+
+/**
+ *******************************************************************************
+ * @ingroup LacMemPool
+ * This function de-allocates the memory passed in back to the pool from which
+ * it was allocated.
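The block layout described in the lac_mem_pools.h overview (| Padding | lac_mem_blk_t | client memory |) and the pointer arithmetic behind LAC_MEM_POOL_VIRT_TO_PHYS reduce to the classic hidden-header trick. A simplified sketch (alignment padding and pool bookkeeping omitted; the names are illustrative, not driver code):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Cut-down stand-in for lac_mem_blk_t: just the physical address and
 * the in-use flag. */
struct blk_hdr {
	uint64_t phys;
	int in_use;
};

/* Hand the caller memory that sits just after a hidden header. */
void *blk_alloc(size_t bytes, uint64_t phys)
{
	struct blk_hdr *h = malloc(sizeof(*h) + bytes);

	if (h == NULL)
		return NULL;
	h->phys = phys;
	h->in_use = 1;
	return (void *)(h + 1);	/* client never sees the header */
}

/* Recover the header from the client pointer by stepping back one
 * header size, just as LAC_MEM_POOL_VIRT_TO_PHYS subtracts
 * sizeof(lac_mem_blk_t) from the virtual address. */
struct blk_hdr *blk_hdr_of(void *client)
{
	return (struct blk_hdr *)client - 1;
}

void blk_free(void *client)
{
	struct blk_hdr *h = blk_hdr_of(client);

	h->in_use = 0;
	free(h);
}
```

Hiding the header this way is what lets Lac_MemPoolEntryFree take only the client pointer and still find the owning pool.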
+ * + * @blocking + * Yes + * @reentrant + * Yes + * @threadSafe + * Yes + * @param[in] entry memory address of the block to be freed + * + ******************************************************************************/ +void Lac_MemPoolEntryFree(void *entry); + +/** + ******************************************************************************* + * @ingroup LacMemPool + * This function returns the number of available entries in a particular pool + * + * @blocking + * No + * @reentrant + * No + * @threadSafe + * No + * @param[in] poolID ID of the pool + * + * @retval number of elements left for allocation from the pool + * + ******************************************************************************/ +unsigned int Lac_MemPoolAvailableEntries(lac_memory_pool_id_t poolID); + +/** + ******************************************************************************* + * @ingroup LacMemPool + * This function displays the stats associated with the memory pools + * + * @blocking + * No + * @reentrant + * No + * @threadSafe + * No + * + ******************************************************************************/ +void Lac_MemPoolStatsShow(void); + +/** + ******************************************************************************* + * @ingroup LacMemPool + * This function initialises the physical addresses of the symmetric cookie + * + * @blocking + * No + * @reentrant + * No + * @threadSafe + * No + * @param[in] poolID ID of the pool + * + * @retval CPA_STATUS_FAIL function failed + * @retval CPA_STATUS_SUCCESS function executed successfully + * + ******************************************************************************/ +CpaStatus Lac_MemPoolInitSymCookiesPhyAddr(lac_memory_pool_id_t poolID); + +/** + ******************************************************************************* + * @ingroup LacMemPool + * This function populates all PKE requests with instance constant parameters + * + * @blocking + * No + * @reentrant + * No + * @threadSafe + * No + * 
@param[in] poolID ID of the pool + * @param[in] instanceHandle instanceHandle + * + * @retval CPA_STATUS_FAIL function failed + * @retval CPA_STATUS_SUCCESS function executed successfully + * + ******************************************************************************/ +CpaStatus Lac_MemPoolInitAsymCookies(lac_memory_pool_id_t poolID, + CpaInstanceHandle instanceHandle); + +/** + ******************************************************************************* + * @ingroup LacMemPool + * This function initialises the physical addresses of the compression cookie + * + * @blocking + * No + * @reentrant + * No + * @threadSafe + * No + * @param[in] poolID ID of the pool + * + * @retval CPA_STATUS_FAIL function failed + * @retval CPA_STATUS_SUCCESS function executed successfully + * + ******************************************************************************/ +CpaStatus Lac_MemPoolInitDcCookiePhyAddr(lac_memory_pool_id_t poolID); + +#endif /*LAC_MEM_POOLS_H*/ Index: sys/dev/qat/qat_api/common/include/lac_module.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/include/lac_module.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef __LAC_MODULE_H__ +#define __LAC_MODULE_H__ + +#include "icp_qat_hw.h" + +/* Lac module getter/setter for TUNABLE_INT in lac_module.c */ +icp_qat_hw_auth_mode_t Lac_GetQatHmacMode(void); +void Lac_SetQatHmacMode(const icp_qat_hw_auth_mode_t); + +#endif Index: sys/dev/qat/qat_api/common/include/lac_sal.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/include/lac_sal.h @@ -0,0 +1,498 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file lac_sal.h + * + * @defgroup SalCtrl 
Service Access Layer Controller
+ *
+ * @ingroup SalCtrl
+ *
+ * @description
+ *      These functions are the functions to be executed for each state
+ *      of the state machine for each service.
+ *
+ *****************************************************************************/
+
+#ifndef LAC_SAL_H
+#define LAC_SAL_H
+
+#include "cpa_cy_im.h"
+
+/**
+*******************************************************************************
+ * @ingroup SalCtrl
+ * @description
+ *      This function allocates memory for a specific instance type.
+ *      Zeros this memory and sets the generic service section of
+ *      the instance memory.
+ *
+ * @context
+ *      This function is called from the generic services init.
+ *
+ * @assumptions
+ *      None
+ * @sideEffects
+ *      None
+ * @reentrant
+ *      No
+ * @threadSafe
+ *      Yes
+ *
+ * @param[in] service       The type of the service to be created
+ *                          (e.g. CRYPTO)
+ * @param[in] instance_num  The logical instance number which will
+ *                          run the service
+ * @param[out] pObj         Pointer to specific service instance memory
+ * @retval CPA_STATUS_SUCCESS   Instance memory successfully allocated
+ * @retval CPA_STATUS_RESOURCE  Instance memory not successfully allocated
+ * @retval CPA_STATUS_FAIL      Unsupported service type
+ *
+ *****************************************************************************/
+CpaStatus SalCtrl_ServiceCreate(sal_service_type_t service,
+				Cpa32U instance_num,
+				sal_service_t **pObj);
+
+/**
+*******************************************************************************
+ * @ingroup SalCtrl
+ * @description
+ *      This macro goes through the 'list' passed in as a parameter. For each
+ *      element found in the list, it performs a cast to the type of the
+ *      element given by the 'type' parameter. Finally, it calls the function
+ *      given by the 'function' parameter, passing itself and the device as
+ *      parameters.
+ *
+ *      In case of error (i.e. 'function' does not return _SUCCESS or _RETRY)
+ *      processing of the 'list' elements will stop and the status_ret will
+ *      be updated.
+ *
+ *      In case of _RETRY status_ret will be updated but the 'list'
+ *      will continue to be processed. _RETRY is only expected when
+ *      'function' is stop.
+ *
+ * @context
+ *      This macro is used by both the service and qat event handlers.
+ *
+ * @assumptions
+ *      None
+ * @sideEffects
+ *      None
+ *
+ * @param[in] list        The list of services or qats as a type of list_t
+ * @param[in] type        It identifies the type of the object inside the
+ *                        list: service or qat
+ * @param[in] device      The ADF accelerator handle for the device
+ * @param[in] function    The function pointer to call
+ * @param[in/out] status_ret  If an error occurred (i.e. status returned from
+ *                        function is not _SUCCESS) then status_ret is
+ *                        overwritten with status returned from function.
+ *
+ *****************************************************************************/
+#define SAL_FOR_EACH(list, type, device, function, status_ret)                 \
+	do {                                                                   \
+		sal_list_t *curr_element = list;                               \
+		CpaStatus status_temp = CPA_STATUS_SUCCESS;                    \
+		typeof(type) *process = NULL;                                  \
+		while (NULL != curr_element) {                                 \
+			process =                                              \
+			    (typeof(type) *)SalList_getObject(curr_element);   \
+			status_temp = process->function(device, process);      \
+			if ((CPA_STATUS_SUCCESS != status_temp) &&             \
+			    (CPA_STATUS_RETRY != status_temp)) {               \
+				status_ret = status_temp;                      \
+				break;                                         \
+			} else {                                               \
+				if (CPA_STATUS_RETRY == status_temp) {         \
+					status_ret = status_temp;              \
+				}                                              \
+			}                                                      \
+			curr_element = SalList_next(curr_element);             \
+		}                                                              \
+	} while (0)
+
+/**
+*******************************************************************************
+ * @ingroup SalCtrl
+ * @description
+ *      This macro goes through the 'list' passed in as a parameter. For each
+ *      element found in the list, it performs a cast to the type of the
+ *      element given by the 'type' parameter. Finally, it checks the state
+ *      of the element and if it is in state 'state_check' then it calls the
+ *      function given by the 'function' parameter, passing itself and the
+ *      device as parameters. If an element is not in 'state_check' the
+ *      iteration stops.
+ *
+ *      In case of error (i.e. 'function' does not return _SUCCESS)
+ *      processing of the 'list' elements will continue.
+ *
+ * @context
+ *      This macro is used by both the service and qat event handlers.
+ *
+ * @assumptions
+ *      None
+ * @sideEffects
+ *      None
+ *
+ * @param[in] list        The list of services or qats as a type of list_t
+ * @param[in] type        It identifies the type of the object
+ *                        inside the list: service or qat
+ * @param[in] device      The ADF accelerator handle for the device
+ * @param[in] function    The function pointer to call
+ * @param[in] state_check The state to check for
+ *
+ *****************************************************************************/
+#define SAL_FOR_EACH_STATE(list, type, device, function, state_check)          \
+	do {                                                                   \
+		sal_list_t *curr_element = list;                               \
+		typeof(type) *process = NULL;                                  \
+		while (NULL != curr_element) {                                 \
+			process =                                              \
+			    (typeof(type) *)SalList_getObject(curr_element);   \
+			if (process->state == state_check) {                   \
+				process->function(device, process);            \
+			} else {                                               \
+				break;                                         \
+			}                                                      \
+			curr_element = SalList_next(curr_element);             \
+		}                                                              \
+	} while (0)
+
+/*************************************************************************
+ * @ingroup SalCtrl
+ * @description
+ *      This function is used to initialize an instance of crypto service.
+ *      It creates a crypto instance's memory pools. It calls ADF to create
+ *      its required transport handles. It calls the sub crypto service init
+ *      functions. Resets the stats.
+ *
+ * @context
+ *      This function is called from the SalCtrl_ServiceEventInit function.
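The control flow that SAL_FOR_EACH encodes (record _RETRY but keep walking the list; stop at the first other failure) can be sketched without the driver's list and service types; everything below is illustrative:

```c
#include <assert.h>
#include <stddef.h>

enum status { ST_SUCCESS, ST_RETRY, ST_FAIL };

struct node {
	enum status (*fn)(void);
	struct node *next;
};

/* Same shape as SAL_FOR_EACH: RETRY is remembered but iteration
 * continues; any other non-success status terminates the walk and
 * becomes the returned status. */
enum status for_each(struct node *list)
{
	enum status ret = ST_SUCCESS;

	for (; list != NULL; list = list->next) {
		enum status st = list->fn();

		if (st != ST_SUCCESS && st != ST_RETRY) {
			ret = st;
			break;
		}
		if (st == ST_RETRY)
			ret = st;
	}
	return ret;
}

/* Sample callbacks standing in for per-service handlers. */
enum status cb_ok(void)    { return ST_SUCCESS; }
enum status cb_retry(void) { return ST_RETRY; }
enum status cb_fail(void)  { return ST_FAIL; }
```

SAL_FOR_EACH_STATE differs only in that it stops as soon as an element is not in the expected state rather than on a callback failure.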
+ * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No (ADF ensures that this function doesn't need to be thread safe) + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] service A crypto instance + * + *************************************************************************/ +CpaStatus SalCtrl_CryptoInit(icp_accel_dev_t *device, sal_service_t *service); + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to start an instance of crypto service. + * It sends the first messages to FW on its crypto instance transport + * handles. For asymmetric crypto it verifies the header on the downloaded + * MMP library. + * + * @context + * This function is called from the SalCtrl_ServiceEventStart function. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No (ADF ensures that this function doesn't need to be thread safe) + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] service A crypto instance + * + *************************************************************************/ +CpaStatus SalCtrl_CryptoStart(icp_accel_dev_t *device, sal_service_t *service); + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to stop an instance of crypto service. + * It checks for inflight messages to the FW. If no messages are pending + * it returns success. If messages are pending it returns retry. + * + * @context + * This function is called from the SalCtrl_ServiceEventStop function. 
+ * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No (ADF ensures that this function doesn't need to be thread safe) + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] service A crypto instance + * + *************************************************************************/ +CpaStatus SalCtrl_CryptoStop(icp_accel_dev_t *device, sal_service_t *service); + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to shutdown an instance of crypto service. + * It frees resources allocated at initialisation - e.g. frees the + * memory pools and ADF transport handles. + * + * @context + * This function is called from the SalCtrl_ServiceEventShutdown function. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No (ADF ensures that this function doesn't need to be thread safe) + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] service A crypto instance + * + *************************************************************************/ +CpaStatus SalCtrl_CryptoShutdown(icp_accel_dev_t *device, + sal_service_t *service); + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function sets the capability info of crypto instances. + * + * @context + * This function is called from the cpaCyQueryCapabilities and + * LacSymSession_ParamCheck function. 
+ *
+ * @assumptions
+ *      None
+ * @sideEffects
+ *      None
+ * @reentrant
+ *      No
+ * @threadSafe
+ *      No (ADF ensures that this function doesn't need to be thread safe)
+ *
+ * @param[in] pGenericService  A sal_service_t* type
+ * @param[in] pCapInfo         A CpaCyCapabilitiesInfo* type
+ *
+ *************************************************************************/
+void SalCtrl_CyQueryCapabilities(sal_service_t *pGenericService,
+				 CpaCyCapabilitiesInfo *pCapInfo);
+
+/*************************************************************************
+ * @ingroup SalCtrl
+ * @description
+ *      This function is used to initialize an instance of compression
+ *      service. It creates a compression instance's memory pools. It calls
+ *      ADF to create its required transport handles. It zeroes the
+ *      instance's stats.
+ *
+ * @context
+ *      This function is called from the SalCtrl_ServiceEventInit function.
+ *
+ * @assumptions
+ *      None
+ * @sideEffects
+ *      None
+ * @reentrant
+ *      No
+ * @threadSafe
+ *      No (ADF ensures that this function doesn't need to be thread safe)
+ *
+ * @param[in] device    An icp_accel_dev_t* type
+ * @param[in] service   A compression instance
+ *
+ *************************************************************************/
+
+CpaStatus SalCtrl_CompressionInit(icp_accel_dev_t *device,
+				  sal_service_t *service);
+
+/*************************************************************************
+ * @ingroup SalCtrl
+ * @description
+ *      This function is used to start an instance of compression service.
+ *
+ * @context
+ *      This function is called from the SalCtrl_ServiceEventStart function.
+ * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No (ADF ensures that this function doesn't need to be thread safe) + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] service A compression instance + * + *************************************************************************/ + +CpaStatus SalCtrl_CompressionStart(icp_accel_dev_t *device, + sal_service_t *service); + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to stop an instance of compression service. + * It checks for inflight messages to the FW. If no messages are pending + * it returns success. If messages are pending it returns retry. + * + * @context + * This function is called from the SalCtrl_ServiceEventStop function. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No (ADF ensures that this function doesn't need to be thread safe) + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] service A compression instance + * + *************************************************************************/ + +CpaStatus SalCtrl_CompressionStop(icp_accel_dev_t *device, + sal_service_t *service); + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to shutdown an instance of compression service. + * It frees resources allocated at initialisation - e.g. frees the + * memory pools and ADF transport handles. + * + * @context + * This function is called from the SalCtrl_ServiceEventShutdown function. 
+ * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No (ADF ensures that this function doesn't need to be thread safe) + * + * @param[in] device An icp_accel_dev_t* type + * @param[in] service A compression instance + * + *************************************************************************/ + +CpaStatus SalCtrl_CompressionShutdown(icp_accel_dev_t *device, + sal_service_t *service); + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to get the number of services enabled + * from the config table. + * + * @context + * This function is called from the SalCtrl_QatInit + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No + * + * param[in] device An icp_accel_dev_t* type + * param[in] pEnabledServices pointer to a variable used to store + * the enabled services + * + *************************************************************************/ + +CpaStatus SalCtrl_GetEnabledServices(icp_accel_dev_t *device, + Cpa32U *pEnabledServices); + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to check if a service is enabled + * + * @context + * This function is called from the SalCtrl_QatInit + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * param[in] enabled_services + * param[in] service + * + *************************************************************************/ + +CpaBoolean SalCtrl_IsServiceEnabled(Cpa32U enabled_services, + sal_service_type_t service); + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to check if a service is supported on the device + * The key difference between this and SalCtrl_GetSupportedServices() is + * that the latter 
treats it as an error if the service is unsupported. + * + * @context + * This can be called anywhere. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device + * @param[in] service Service or services to check + * + *************************************************************************/ +CpaBoolean SalCtrl_IsServiceSupported(icp_accel_dev_t *device, + sal_service_type_t service); + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to check whether the enabled services have + * associated hardware capability support. + * + * @context + * This function is called from the SalCtrl_ServiceEventInit function. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] device A pointer to an icp_accel_dev_t + * @param[in] enabled_services Bitmask of the enabled services + *************************************************************************/ + +CpaStatus SalCtrl_GetSupportedServices(icp_accel_dev_t *device, + Cpa32U enabled_services); + +#endif Index: sys/dev/qat/qat_api/common/include/lac_sal_ctrl.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/include/lac_sal_ctrl.h @@ -0,0 +1,100 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + *************************************************************************** + * @file lac_sal_ctrl.h + * + * @ingroup SalCtrl + * + * Functions to register and deregister QAT and service controllers with ADF.
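The enablement helpers above (SalCtrl_GetEnabledServices, SalCtrl_IsServiceEnabled) treat the configured services as a plain bitmask over the power-of-two sal_service_type_t codes. A minimal self-contained sketch of that check; the types and names here are illustrative stand-ins, not the driver's own:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative power-of-two service codes mirroring sal_service_type_t. */
enum {
	SVC_CRYPTO      = 1,  /* symmetric and asymmetric crypto */
	SVC_COMPRESSION = 2,
	SVC_INLINE      = 4,
	SVC_CRYPTO_ASYM = 8,
	SVC_CRYPTO_SYM  = 16,
};

/*
 * A service is enabled iff its bit is set in the mask derived from the
 * config table (the role SalCtrl_GetEnabledServices plays in the driver).
 */
static bool
svc_is_enabled(uint32_t enabled_services, uint32_t service)
{
	return (enabled_services & service) != 0;
}
```

A device configured for crypto and compression would carry the mask SVC_CRYPTO | SVC_COMPRESSION, so a query for SVC_INLINE against it fails the test.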
+ * + ***************************************************************************/ + +#ifndef LAC_SAL_CTRL_H +#define LAC_SAL_CTRL_H + +/******************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to check whether the service component + * has been successfully started. + * + * @context + * This function is called from the icp_sal_userStart() function. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + ******************************************************************/ + +CpaStatus SalCtrl_AdfServicesStartedCheck(void); + +/******************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to check whether the user's parameter + * for concurrent request is valid. + * + * @context + * This function is called when crypto or compression is init + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * Yes + * @threadSafe + * Yes + * + ******************************************************************/ +CpaStatus validateConcurrRequest(Cpa32U numConcurrRequests); + +/******************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to register adf services + * + * @context + * This function is called from do_userStart() function + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * Yes + * @threadSafe + * Yes + * + ******************************************************************/ +CpaStatus SalCtrl_AdfServicesRegister(void); + +/******************************************************************* + * @ingroup SalCtrl + * @description + * This function is used to unregister adf services. 
+ * + * @context + * This function is called from do_userStart() function + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * Yes + * @threadSafe + * Yes + * + ******************************************************************/ +CpaStatus SalCtrl_AdfServicesUnregister(void); + +#endif Index: sys/dev/qat/qat_api/common/include/lac_sal_types.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/include/lac_sal_types.h @@ -0,0 +1,198 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + *************************************************************************** + * @file lac_sal_types.h + * + * @ingroup SalCtrl + * + * Generic instance type definitions of SAL controller + * + ***************************************************************************/ + +#ifndef LAC_SAL_TYPES_H +#define LAC_SAL_TYPES_H + +#include "lac_sync.h" +#include "lac_list.h" +#include "icp_accel_devices.h" +#include "sal_statistics.h" +#include "icp_adf_debug.h" + +#define SAL_CFG_BASE_DEC 10 +#define SAL_CFG_BASE_HEX 16 + +/** + ***************************************************************************** + * @ingroup SalCtrl + * Instance States + * + * @description + * An enumeration containing the possible states for an instance. + * + *****************************************************************************/ +typedef enum sal_service_state_s { + SAL_SERVICE_STATE_UNINITIALIZED = 0, + SAL_SERVICE_STATE_INITIALIZING, + SAL_SERVICE_STATE_INITIALIZED, + SAL_SERVICE_STATE_RUNNING, + SAL_SERVICE_STATE_SHUTTING_DOWN, + SAL_SERVICE_STATE_SHUTDOWN, + SAL_SERVICE_STATE_RESTARTING, + SAL_SERVICE_STATE_END +} sal_service_state_t; + +/** + ***************************************************************************** + * @ingroup SalCtrl + * Service Instance Types + * + * @description + * An enumeration containing the possible types for a service. 
+ * + *****************************************************************************/ +typedef enum { + SAL_SERVICE_TYPE_UNKNOWN = 0, + /* symmetric and asymmetric crypto service */ + SAL_SERVICE_TYPE_CRYPTO = 1, + /* compression service */ + SAL_SERVICE_TYPE_COMPRESSION = 2, + /* inline service */ + SAL_SERVICE_TYPE_INLINE = 4, + /* asymmetric crypto only service*/ + SAL_SERVICE_TYPE_CRYPTO_ASYM = 8, + /* symmetric crypto only service*/ + SAL_SERVICE_TYPE_CRYPTO_SYM = 16, + SAL_SERVICE_TYPE_QAT = 32 +} sal_service_type_t; + +/** + ***************************************************************************** + * @ingroup SalCtrl + * Generic Instance Container + * + * @description + * Contains all the common information across the different instances. + * + *****************************************************************************/ +typedef struct sal_service_s { + sal_service_type_t type; + /**< Service type (e.g. SAL_SERVICE_TYPE_CRYPTO)*/ + + Cpa8U state; + /**< Status of the service instance + (e.g. 
SAL_SERVICE_STATE_INITIALIZED) */ + + Cpa32U instance; + /**< Instance number */ + + CpaVirtualToPhysical virt2PhysClient; + /**< Function pointer to client supplied virt_to_phys */ + + CpaStatus (*init)(icp_accel_dev_t *device, + struct sal_service_s *service); + /**< Function pointer for instance INIT function */ + CpaStatus (*start)(icp_accel_dev_t *device, + struct sal_service_s *service); + /**< Function pointer for instance START function */ + CpaStatus (*stop)(icp_accel_dev_t *device, + struct sal_service_s *service); + /**< Function pointer for instance STOP function */ + CpaStatus (*shutdown)(icp_accel_dev_t *device, + struct sal_service_s *service); + /**< Function pointer for instance SHUTDOWN function */ + + CpaCyInstanceNotificationCbFunc notification_cb; + /**< Function pointer for instance restarting handler */ + + void *cb_tag; + /**< Restarting handler priv data */ + + sal_statistics_collection_t *stats; + /**< Pointer to device statistics configuration */ + + void *debug_parent_dir; + /**< Pointer to parent proc dir entry */ + + CpaBoolean is_dyn; + + Cpa32U capabilitiesMask; + /**< Capabilities mask of the device */ + + Cpa32U dcExtendedFeatures; + /**< Bit field of features. I.e. Compress And Verify */ + + CpaBoolean isInstanceStarted; + /**< True if user called StartInstance on this instance */ + + CpaBoolean integrityCrcCheck; + /** < True if the device supports end to end data integrity checks */ +} sal_service_t; + +/** + ***************************************************************************** + * @ingroup SalCtrl + * SAL structure + * + * @description + * Contains lists to crypto and compression instances. 
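The state field above, together with the init/start/stop/shutdown function pointers, implies a simple per-instance lifecycle state machine. A hedged sketch of a transition guard under the assumed ordering (names invented; the driver's actual transition rules may differ in detail):

```c
#include <stdbool.h>

/* Illustrative copy of the sal_service_state_t ordering. */
typedef enum {
	SVC_STATE_UNINITIALIZED = 0,
	SVC_STATE_INITIALIZING,
	SVC_STATE_INITIALIZED,
	SVC_STATE_RUNNING,
	SVC_STATE_SHUTTING_DOWN,
	SVC_STATE_SHUTDOWN,
	SVC_STATE_RESTARTING,
} svc_state_t;

/* Lifecycle events matching the four function-pointer hooks. */
typedef enum { EV_INIT, EV_START, EV_STOP, EV_SHUTDOWN } svc_event_t;

/*
 * Assumed legal transitions:
 * UNINITIALIZED -init-> INITIALIZED -start-> RUNNING -stop-> INITIALIZED,
 * and INITIALIZED -shutdown-> SHUTDOWN.
 */
static bool
svc_event_legal(svc_state_t s, svc_event_t ev)
{
	switch (ev) {
	case EV_INIT:
		return s == SVC_STATE_UNINITIALIZED;
	case EV_START:
		return s == SVC_STATE_INITIALIZED;
	case EV_STOP:
		return s == SVC_STATE_RUNNING;
	case EV_SHUTDOWN:
		return s == SVC_STATE_INITIALIZED;
	}
	return false;
}
```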
+ * + *****************************************************************************/ +typedef struct sal_s { + sal_list_t *crypto_services; + /**< Container of sal_crypto_service_t */ + sal_list_t *asym_services; + /**< Container of sal_asym_service_t */ + sal_list_t *sym_services; + /**< Container of sal_sym_service_t */ + sal_list_t *compression_services; + /**< Container of sal_compression_service_t */ + debug_dir_info_t *cy_dir; + /**< Container for crypto proc debug */ + debug_dir_info_t *asym_dir; + /**< Container for asym proc debug */ + debug_dir_info_t *sym_dir; + /**< Container for sym proc debug */ + debug_dir_info_t *dc_dir; + /**< Container for compression proc debug */ + debug_file_info_t *ver_file; + /**< Container for version debug file */ +} sal_t; + +/** + ***************************************************************************** + * @ingroup SalCtrl + * SAL debug structure + * + * @description + * Service debug handler + * + *****************************************************************************/ +typedef struct sal_service_debug_s { + icp_accel_dev_t *accel_dev; + debug_file_info_t debug_file; +} sal_service_debug_t; + +/** + ******************************************************************************* + * @ingroup SalCtrl + * This macro verifies that the right service type has been passed in. + * + * @param[in] pService pointer to service instance + * @param[in] service_type service type to check against.
+ * + * @return CPA_STATUS_FAIL Parameter is incorrect type + * + ******************************************************************************/ +#define SAL_CHECK_INSTANCE_TYPE(pService, service_type) \ + do { \ + sal_service_t *pGenericService = NULL; \ + pGenericService = (sal_service_t *)pService; \ + if (!(service_type & pGenericService->type)) { \ + QAT_UTILS_LOG("Instance handle type is incorrect.\n"); \ + return CPA_STATUS_FAIL; \ + } \ + } while (0) + +#endif Index: sys/dev/qat/qat_api/common/include/lac_sal_types_crypto.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/include/lac_sal_types_crypto.h @@ -0,0 +1,179 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sal_types_crypto.h + * + * @ingroup SalCtrl + * + * Generic crypto instance type definition + * + ***************************************************************************/ + +#ifndef LAC_SAL_TYPES_CRYPTO_H_ +#define LAC_SAL_TYPES_CRYPTO_H_ + +#include "lac_sym_qat_hash_defs_lookup.h" +#include "lac_sym_key.h" +#include "cpa_cy_sym_dp.h" + +#include "icp_adf_debug.h" +#include "lac_sal_types.h" +#include "icp_adf_transport.h" +#include "lac_mem_pools.h" + +#define LAC_PKE_FLOW_ID_TAG 0xFFFFFFFC +#define LAC_PKE_ACCEL_ID_BIT_POS 1 +#define LAC_PKE_SLICE_ID_BIT_POS 0 + +/** + ***************************************************************************** + * @ingroup SalCtrl + * Crypto specific Service Container + * + * @description + * Contains information required per crypto service instance. 
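SAL_CHECK_INSTANCE_TYPE above works because the sal_service_type_t codes are powers of two: a caller can OR several acceptable types together and a single bitwise AND validates the instance, with the do { ... } while (0) wrapper keeping the multi-statement body usable as one statement. A function-style sketch of the same test (type codes and names here are illustrative):

```c
/* Illustrative power-of-two type codes, as in sal_service_type_t. */
#define T_CRYPTO       1u
#define T_COMPRESSION  2u
#define T_CRYPTO_SYM  16u

/*
 * Function-style version of the macro's check: 0 plays the role of
 * CPA_STATUS_SUCCESS, -1 the early-return CPA_STATUS_FAIL path.
 */
static int
check_instance_type(unsigned int actual_type, unsigned int accepted_mask)
{
	return (accepted_mask & actual_type) ? 0 : -1;
}
```

Passing T_CRYPTO | T_CRYPTO_SYM as the accepted mask accepts either kind of crypto instance with one comparison.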
+ * + *****************************************************************************/ +typedef struct sal_crypto_service_s { + sal_service_t generic_service_info; + /**< An instance of the Generic Service Container */ + + lac_memory_pool_id_t lac_sym_cookie_pool; + /**< Memory pool ID used for symmetric operations */ + lac_memory_pool_id_t lac_ec_pool; + /**< Memory pool ID used for asymmetric operations */ + lac_memory_pool_id_t lac_prime_pool; + /**< Memory pool ID used for asymmetric operations */ + lac_memory_pool_id_t lac_pke_req_pool; + /**< Memory pool ID used for asymmetric operations */ + lac_memory_pool_id_t lac_pke_align_pool; + /**< Memory pool ID used for asymmetric operations */ + + QatUtilsAtomic *pLacSymStatsArr; + /**< pointer to an array of atomic stats for symmetric */ + + QatUtilsAtomic *pLacKeyStats; + /**< pointer to an array of atomic stats for key */ + + QatUtilsAtomic *pLacDhStatsArr; + /**< pointer to an array of atomic stats for DH */ + + QatUtilsAtomic *pLacDsaStatsArr; + /**< pointer to an array of atomic stats for Dsa */ + + QatUtilsAtomic *pLacRsaStatsArr; + /**< pointer to an array of atomic stats for Rsa */ + + QatUtilsAtomic *pLacEcStatsArr; + /**< pointer to an array of atomic stats for Ecc */ + + QatUtilsAtomic *pLacEcdhStatsArr; + /**< pointer to an array of atomic stats for Ecc DH */ + + QatUtilsAtomic *pLacEcdsaStatsArr; + /**< pointer to an array of atomic stats for Ecc DSA */ + + QatUtilsAtomic *pLacPrimeStatsArr; + /**< pointer to an array of atomic stats for prime */ + + QatUtilsAtomic *pLacLnStatsArr; + /**< pointer to an array of atomic stats for large number */ + + QatUtilsAtomic *pLacDrbgStatsArr; + /**< pointer to an array of atomic stats for DRBG */ + + Cpa32U pkeFlowId; + /**< Flow ID for all pke requests from this instance - identifies + accelerator + and execution engine to use */ + + icp_comms_trans_handle trans_handle_sym_tx; + icp_comms_trans_handle trans_handle_sym_rx; + + icp_comms_trans_handle 
trans_handle_asym_tx; + icp_comms_trans_handle trans_handle_asym_rx; + + icp_comms_trans_handle trans_handle_nrbg_tx; + icp_comms_trans_handle trans_handle_nrbg_rx; + + Cpa32U maxNumSymReqBatch; + /**< Maximum number of requests that can be placed on the sym tx ring + for any one batch request (DP api) */ + + Cpa16U acceleratorNum; + Cpa16U bankNum; + Cpa16U pkgID; + Cpa8U isPolled; + Cpa8U executionEngine; + Cpa32U coreAffinity; + Cpa32U nodeAffinity; + /**< Config Info */ + + CpaCySymDpCbFunc pSymDpCb; + /**< Sym DP Callback */ + + lac_sym_qat_hash_defs_t **pLacHashLookupDefs; + /**< table of pointers to standard defined information for all hash + algorithms. We support an extra hash algo that is not exported by + cy api which is why we need the extra +1 */ + Cpa8U **ppHmacContentDesc; + /**< table of pointers to CD for Hmac precomputes - used at session init + */ + + Cpa8U *pSslLabel; + /**< pointer to memory holding the standard SSL label ABBCCC.. */ + + lac_sym_key_tls_labels_t *pTlsLabel; + /**< pointer to memory holding the 4 standard TLS labels */ + + QatUtilsAtomic drbgErrorState; + /**< DRBG related variables */ + + lac_sym_key_tls_hkdf_sub_labels_t *pTlsHKDFSubLabel; + /**< pointer to memory holding the 4 HKDFLabels sublabels */ + + debug_file_info_t *debug_file; +/**< Statistics handler */ +} sal_crypto_service_t; + +/************************************************************************* + * @ingroup cpaCyCommon + * @description + * This function returns a valid asym/sym/crypto instance handle for the + * system if it exists. When requesting an instance handle of type sym or + * asym, if either is not found then a crypto instance handle is returned + * if found, since a crypto handle supports both sym and asym services. + * Similarly when requesting a crypto instance handle, if it is not found + * then an asym or sym crypto instance handle is returned. 
+ * + * @performance + * To avoid calling this function the user of the QA api should not use + * instanceHandle = CPA_INSTANCE_HANDLE_SINGLE. + * + * @context + * This function is called whenever instanceHandle = + *CPA_INSTANCE_HANDLE_SINGLE + * at the QA Cy api. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] svc_type Type of crypto service requested. + * + * @retval Pointer to first crypto instance handle or NULL if no crypto + * instances in the system. + * + *************************************************************************/ + +CpaInstanceHandle Lac_GetFirstHandle(sal_service_type_t svc_type); + +#endif /*LAC_SAL_TYPES_CRYPTO_H_*/ Index: sys/dev/qat/qat_api/common/include/lac_sync.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/include/lac_sync.h @@ -0,0 +1,376 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + *************************************************************************** + * @file lac_sync.h + * + * @defgroup LacSync LAC synchronous + * + * @ingroup LacCommon + * + * Function prototypes and defines for synchronous support + * + ***************************************************************************/ + +#ifndef LAC_SYNC_H +#define LAC_SYNC_H + +#include "cpa.h" +#include "qat_utils.h" +#include "lac_mem.h" + +/** + ***************************************************************************** + * @ingroup LacSync + * + * @description + * LAC cookie for synchronous support + * + *****************************************************************************/ +typedef struct lac_sync_op_data_s { + struct sema *sid; + /**< Semaphore to signal */ + CpaStatus status; + /**< Output - Status of the QAT response */ + CpaBoolean opResult; + /**< Output - Verification of the operation/protocol status */ + CpaBoolean complete; + /**< Output - 
Operation is complete */ + CpaBoolean canceled; + /**< Output - Operation canceled */ +} lac_sync_op_data_t; + +#define LAC_PKE_SYNC_CALLBACK_TIMEOUT (5000) +/**< @ingroup LacSync + * Timeout waiting for an async callbacks in msecs. + * This is derived from the max latency of a PKE request + 1 sec + */ + +#define LAC_SYM_DRBG_POLL_AND_WAIT_TIME_MS (10) +/**< @ingroup LacSyn + * Default interval DRBG polling in msecs */ + +#define LAC_SYM_SYNC_CALLBACK_TIMEOUT (300) +/**< @ingroup LacSyn + * Timeout for wait for symmetric response in msecs +*/ + +#define LAC_INIT_MSG_CALLBACK_TIMEOUT (1922) +/**< @ingroup LacSyn + * Timeout for wait for init messages response in msecs +*/ + +#define DC_SYNC_CALLBACK_TIMEOUT (1000) +/**< @ingroup LacSyn + * Timeout for wait for compression response in msecs */ + +#define LAC_SYN_INITIAL_SEM_VALUE (0) +/**< @ingroup LacSyn + * Initial value of the sync waiting semaphore */ + +/** + ******************************************************************************* + * @ingroup LacSync + * This function allocates a sync op data cookie + * and creates and initialises the QAT Utils semaphore + * + * @param[in] ppSyncCallbackCookie Pointer to synch op data + * + * @retval CPA_STATUS_RESOURCE Failed to allocate the memory for the cookie. 
+ * @retval CPA_STATUS_SUCCESS Success + * + ******************************************************************************/ +static __inline CpaStatus +LacSync_CreateSyncCookie(lac_sync_op_data_t **ppSyncCallbackCookie) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + *ppSyncCallbackCookie = + malloc(sizeof(lac_sync_op_data_t), M_QAT, M_WAITOK); + + if (CPA_STATUS_SUCCESS == status) { + status = LAC_INIT_SEMAPHORE((*ppSyncCallbackCookie)->sid, + LAC_SYN_INITIAL_SEM_VALUE); + (*ppSyncCallbackCookie)->complete = CPA_FALSE; + (*ppSyncCallbackCookie)->canceled = CPA_FALSE; + } + + if (CPA_STATUS_SUCCESS != status) { + LAC_OS_FREE(*ppSyncCallbackCookie); + } + + return status; +} + +/** + ******************************************************************************* + * @ingroup LacSync + * This function frees a sync op data cookie and destroys the QAT Utils + * semaphore + * + * @param[in] ppSyncCallbackCookie Pointer to sync op data + * + * @retval CPA_STATUS_SUCCESS Success + * @retval CPA_STATUS_FAIL Cookie is incomplete or the semaphore could + * not be destroyed + ******************************************************************************/ +static __inline CpaStatus +LacSync_DestroySyncCookie(lac_sync_op_data_t **ppSyncCallbackCookie) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + /* + * If the operation has not completed, cancel it instead of destroying + * the + * cookie. Otherwise, the callback might panic. In this case, the cookie + * will leak, but it's better than a panic. + */ + if (!(*ppSyncCallbackCookie)->complete) { + QAT_UTILS_LOG( + "Attempting to destroy an incomplete sync cookie\n"); + (*ppSyncCallbackCookie)->canceled = CPA_TRUE; + return CPA_STATUS_FAIL; + } + + status = LAC_DESTROY_SEMAPHORE((*ppSyncCallbackCookie)->sid); + LAC_OS_FREE(*ppSyncCallbackCookie); + return status; +} + +/** + ***************************************************************************** + * @ingroup LacSync + * Function which will wait for a sync callback on a given cookie.
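The cookie helpers above implement the usual sync-over-async pattern: allocate a cookie holding a zero-count semaphore, have the completion callback store the status and post the semaphore, and have the caller pend on it. A self-contained POSIX sketch of the same idea, with sem_t standing in for the QAT Utils semaphore and all names invented:

```c
#include <semaphore.h>
#include <stdlib.h>

/* Illustrative stand-in for lac_sync_op_data_t. */
struct sync_cookie {
	sem_t sid;    /* counts completions; starts at 0 */
	int status;   /* filled in by the "callback"     */
	int complete;
};

static struct sync_cookie *
cookie_create(void)
{
	struct sync_cookie *c = calloc(1, sizeof(*c));

	if (c != NULL && sem_init(&c->sid, 0, 0) != 0) {
		free(c);
		c = NULL;
	}
	return c;
}

/* Callback side, as in LacSync_GenWakeupSyncCaller: record, then post. */
static void
cookie_wakeup(struct sync_cookie *c, int status)
{
	c->status = status;
	sem_post(&c->sid);
}

/* Caller side, as in LacSync_WaitForCallback: pend, then harvest. */
static int
cookie_wait(struct sync_cookie *c, int *out_status)
{
	if (sem_wait(&c->sid) != 0)
		return -1;
	*out_status = c->status;
	c->complete = 1;
	return 0;
}
```

In the driver the wait also carries a timeout so a lost response cannot hang the caller forever; the sketch omits that for brevity.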
+ * + * @param[in] pSyncCallbackCookie Pointer to sync op data + * @param[in] timeOut Time to wait for callback (msec) + * @param[out] pStatus Status returned by the callback + * @param[out] pOpStatus Operation status returned by callback. + * + * @retval CPA_STATUS_SUCCESS Success + * @retval CPA_STATUS_FAIL Fail waiting for a callback + * + *****************************************************************************/ +static __inline CpaStatus +LacSync_WaitForCallback(lac_sync_op_data_t *pSyncCallbackCookie, + Cpa32S timeOut, + CpaStatus *pStatus, + CpaBoolean *pOpStatus) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + status = LAC_WAIT_SEMAPHORE(pSyncCallbackCookie->sid, timeOut); + + if (CPA_STATUS_SUCCESS == status) { + *pStatus = pSyncCallbackCookie->status; + if (NULL != pOpStatus) { + *pOpStatus = pSyncCallbackCookie->opResult; + } + pSyncCallbackCookie->complete = CPA_TRUE; + } + + return status; +} + +/** + ***************************************************************************** + * @ingroup LacSync + * Function which will check for a sync callback on a given cookie. + * Returns whether the callback has happened or not, no timeout. + * + * @param[in] pSyncCallbackCookie Pointer to sync op data + * @param[out] pStatus Status returned by the callback + * @param[out] pOpStatus Operation status returned by callback.
+ * + * @retval CPA_STATUS_SUCCESS Success + * @retval CPA_STATUS_FAIL Fail waiting for a callback + * + *****************************************************************************/ +static __inline CpaStatus +LacSync_CheckForCallback(lac_sync_op_data_t *pSyncCallbackCookie, + CpaStatus *pStatus, + CpaBoolean *pOpStatus) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + status = LAC_CHECK_SEMAPHORE(pSyncCallbackCookie->sid); + + if (CPA_STATUS_SUCCESS == status) { + *pStatus = pSyncCallbackCookie->status; + if (NULL != pOpStatus) { + *pOpStatus = pSyncCallbackCookie->opResult; + } + pSyncCallbackCookie->complete = CPA_TRUE; + } + + return status; +} + +/** + ***************************************************************************** + * @ingroup LacSync + * Function which will mark a sync cookie as complete. + * If it's known that the callback will not happen it's necessary + * to call this, else the cookie can't be destroyed. + * + * @param[in] pSyncCallbackCookie Pointer to sync op data + * + * @retval CPA_STATUS_SUCCESS Success + * @retval CPA_STATUS_FAIL Failed to mark as complete + * + *****************************************************************************/ +static __inline CpaStatus +LacSync_SetSyncCookieComplete(lac_sync_op_data_t *pSyncCallbackCookie) +{ + CpaStatus status = CPA_STATUS_FAIL; + + if (NULL != pSyncCallbackCookie) { + pSyncCallbackCookie->complete = CPA_TRUE; + status = CPA_STATUS_SUCCESS; + } + return status; +} +/** + ***************************************************************************** + * @ingroup LacSync + * Generic verify callback function. + * @description + * This function is used when the API is called in synchronous mode. + * It's assumed the callbackTag holds a lac_sync_op_data_t type + * and when the callback is received, this callback shall set the + * status element of that cookie structure and kick the sid. + * This function may be used directly as a callback function. 
+ * + * @param[in] callbackTag Callback Tag + * @param[in] status Status of callback + * @param[out] pOpdata Pointer to the Op Data + * @param[out] opResult Boolean to indicate the result of the operation + * + * @return void + *****************************************************************************/ +void LacSync_GenVerifyCb(void *callbackTag, + CpaStatus status, + void *pOpdata, + CpaBoolean opResult); + +/** + ***************************************************************************** + * @ingroup LacSync + * Generic flatbuffer callback function. + * @description + * This function is used when the API is called in synchronous mode. + * It's assumed the callbackTag holds a lac_sync_op_data_t type + * and when the callback is received, this callback shall set the + * status element of that cookie structure and kick the sid. + * This function may be used directly as a callback function. + * + * @param[in] callbackTag Callback Tag + * @param[in] status Status of callback + * @param[in] pOpdata Pointer to the Op Data + * @param[out] pOut Pointer to the flat buffer + * + * @return void + *****************************************************************************/ +void LacSync_GenFlatBufCb(void *callbackTag, + CpaStatus status, + void *pOpdata, + CpaFlatBuffer *pOut); + +/** + ***************************************************************************** + * @ingroup LacSync + * Generic flatbuffer verify callback function. + * @description + * This function is used when the API is called in synchronous mode. + * It's assumed the callbackTag holds a lac_sync_op_data_t type + * and when the callback is received, this callback shall set the + * status and opResult element of that cookie structure and + * kick the sid. + * This function may be used directly as a callback function. 
+ * + * @param[in] callbackTag Callback Tag + * @param[in] status Status of callback + * @param[in] pOpdata Pointer to the Op Data + * @param[out] opResult Boolean to indicate the result of the operation + * @param[out] pOut Pointer to the flat buffer + * + * @return void + *****************************************************************************/ +void LacSync_GenFlatBufVerifyCb(void *callbackTag, + CpaStatus status, + void *pOpdata, + CpaBoolean opResult, + CpaFlatBuffer *pOut); + +/** + ***************************************************************************** + * @ingroup LacSync + * Generic dual flatbuffer verify callback function. + * @description + * This function is used when the API is called in synchronous mode. + * It's assumed the callbackTag holds a lac_sync_op_data_t type + * and when the callback is received, this callback shall set the + * status and opResult element of that cookie structure and + * kick the sid. + * This function may be used directly as a callback function. + * + * @param[in] callbackTag Callback Tag + * @param[in] status Status of callback + * @param[in] pOpdata Pointer to the Op Data + * @param[out] opResult Boolean to indicate the result of the operation + * @param[out] pOut0 Pointer to the flat buffer + * @param[out] pOut1 Pointer to the flat buffer + * + * @return void + *****************************************************************************/ +void LacSync_GenDualFlatBufVerifyCb(void *callbackTag, + CpaStatus status, + void *pOpdata, + CpaBoolean opResult, + CpaFlatBuffer *pOut0, + CpaFlatBuffer *pOut1); + +/** + ***************************************************************************** + * @ingroup LacSync + * Generic wake up function. + * @description + * This function is used when the API is called in synchronous + * mode. 
+ * It's assumed the callbackTag holds a lac_sync_op_data_t type + * and when the callback is received, this callback shall set + * the status element of that cookie structure and kick the + * sid. + * This function maybe called from an async callback. + * + * @param[in] callbackTag Callback Tag + * @param[in] status Status of callback + * + * @return void + *****************************************************************************/ +void LacSync_GenWakeupSyncCaller(void *callbackTag, CpaStatus status); + +/** + ***************************************************************************** + * @ingroup LacSync + * Generic wake up verify function. + * @description + * This function is used when the API is called in synchronous + * mode. + * It's assumed the callbackTag holds a lac_sync_op_data_t type + * and when the callback is received, this callback shall set + * the status element and the opResult of that cookie structure + * and kick the sid. + * This function maybe called from an async callback. + * + * @param[in] callbackTag Callback Tag + * @param[in] status Status of callback + * @param[out] opResult Boolean to indicate the result of the operation + * + * @return void + *****************************************************************************/ +void LacSync_GenVerifyWakeupSyncCaller(void *callbackTag, + CpaStatus status, + CpaBoolean opResult); + +#endif /*LAC_SYNC_H*/ Index: sys/dev/qat/qat_api/common/include/sal_qat_cmn_msg.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/include/sal_qat_cmn_msg.h @@ -0,0 +1,209 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file sal_qat_cmn_msg.c + * + * @ingroup SalQatCmnMessage + * + * @description + * Implementation for populating the common (across services) QAT structures. 
+ * + *****************************************************************************/ +#ifndef SAL_QAT_CMN_MSG_H +#define SAL_QAT_CMN_MSG_H +/* + ******************************************************************************* + * Include public/global header files + ******************************************************************************* + */ +#include "cpa.h" + +/* + ******************************************************************************* + * Include private header files + ******************************************************************************* + */ +#include "lac_common.h" +#include "icp_accel_devices.h" +#include "qat_utils.h" + +#include "cpa_cy_sym.h" +#include "lac_mem.h" +#include "lac_mem_pools.h" +#include "lac_list.h" +#include "icp_adf_transport.h" +#include "icp_adf_transport_dp.h" + +#include "icp_qat_hw.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_la.h" + +/** + ****************************************************************************** + * @ingroup SalQatCmnMessage + * content descriptor info structure + * + * @description + * This structure contains generic information on the content descriptor + * + *****************************************************************************/ +typedef struct sal_qat_content_desc_info_s { + CpaPhysicalAddr hardwareSetupBlockPhys; + /**< Physical address of hardware setup block of the content descriptor + */ + void *pData; + /**< Virtual Pointer to the hardware setup block of the content + * descriptor */ + Cpa8U hwBlkSzQuadWords; + /**< Hardware Setup Block size in quad words */ +} sal_qat_content_desc_info_t; + +/** + ******************************************************************************* + * @ingroup SalQatCmnMessage + * Lookaside response handler function type + * + * @description + * This type definition specifies the function prototype for handling the + * response messages for a specific symmetric operation + * + * @param[in] lacCmdId Look Aside Command ID + * + * 
@param[in] pOpaqueData Pointer to Opaque Data
+ *
+ * @param[in] cmnRespFlags Common Response flags
+ *
+ * @return void
+ *
+ *****************************************************************************/
+typedef void (*sal_qat_resp_handler_func_t)(icp_qat_fw_la_cmd_id_t lacCmdId,
+    void *pOpaqueData,
+    icp_qat_fw_comn_flags cmnRespFlags);
+
+/********************************************************************
+ * @ingroup SalQatMsg_CmnHdrWrite
+ *
+ * @description
+ * This function fills in all fields in the icp_qat_fw_comn_req_hdr_t
+ * section of the Request Msg. Build LW0 + LW1 -
+ * service part of the request
+ *
+ * @param[in] pMsg Pointer to 128B Request Msg buffer
+ * @param[in] serviceType type of service request
+ * @param[in] serviceCmdId id for the type of service request
+ * @param[in] cmnFlags common request flags
+ * @param[in] serviceCmdFlags service command flags
+ *
+ * @return
+ * None
+ *
+ *****************************/
+void SalQatMsg_CmnHdrWrite(icp_qat_fw_comn_req_t *pMsg,
+    icp_qat_fw_comn_request_id_t serviceType,
+    uint8_t serviceCmdId,
+    icp_qat_fw_comn_flags cmnFlags,
+    icp_qat_fw_serv_specif_flags serviceCmdFlags);
+
+/********************************************************************
+ * @ingroup SalQatMsg_CmnMidWrite
+ *
+ * @description
+ * This function fills in all fields in the icp_qat_fw_comn_req_mid_t
+ * section of the Request Msg and the corresponding SGL/Flat flag
+ * in the Hdr.
+ *
+ * @param[in] pReq Pointer to 128B Request Msg buffer
+ * @param[in] pOpaqueData Pointer to opaque data used by callback
+ * @param[in] bufferFormat src and dst Buffers are either SGL or Flat
+ * format
+ * @param[in] srcBuffer Address of source buffer
+ * @param[in] dstBuffer Address of destination buffer
+ * @param[in] srcLength Length of source buffer
+ * @param[in] dstLength Length of destination buffer
+ *
+ * @assumptions
+ * All fields in mid section are zero before fn is called
+ *
+ * @return
+ * None
+ *
+ *****************************/
+void SalQatMsg_CmnMidWrite(icp_qat_fw_la_bulk_req_t *pReq,
+    const void *pOpaqueData,
+    Cpa8U bufferFormat,
+    Cpa64U srcBuffer,
+    Cpa64U dstBuffer,
+    Cpa32U srcLength,
+    Cpa32U dstLength);
+
+/********************************************************************
+ * @ingroup SalQatMsg_ContentDescHdrWrite
+ *
+ * @description
+ * This function fills in all fields in the
+ * icp_qat_fw_comn_req_hdr_cd_pars_t
+ * section of the Request Msg.
+ *
+ * @param[in] pMsg Pointer to 128B Request Msg buffer.
+ * @param[in] pContentDescInfo content descriptor info.
+ *
+ * @return
+ * none
+ *
+ *****************************/
+void SalQatMsg_ContentDescHdrWrite(
+    icp_qat_fw_comn_req_t *pMsg,
+    const sal_qat_content_desc_info_t *pContentDescInfo);
+
+/********************************************************************
+ * @ingroup SalQatMsg_CtrlBlkSetToReserved
+ *
+ * @description
+ * This function sets the whole control block to a reserved state.
+ *
+ * @param[in] _pMsg Pointer to 128B Request Msg buffer.
+ *
+ * @return
+ * none
+ *
+ *****************************/
+void SalQatMsg_CtrlBlkSetToReserved(icp_qat_fw_comn_req_t *_pMsg);
+
+/********************************************************************
+ * @ingroup SalQatMsg_transPutMsg
+ *
+ * @description
+ * Puts a request message on the ring associated with the transport
+ * handle.
+ *
+ * @param[in] trans_handle
+ * @param[in] pqat_msg
+ * @param[in] size_in_lws
+ * @param[in] service
+ *
+ * @return
+ * CpaStatus
+ *
+ *****************************/
+CpaStatus SalQatMsg_transPutMsg(icp_comms_trans_handle trans_handle,
+    void *pqat_msg,
+    Cpa32U size_in_lws,
+    Cpa8U service);
+
+/********************************************************************
+ * @ingroup SalQatMsg_updateQueueTail
+ *
+ * @description
+ * Updates the tail of the ring associated with the transport handle.
+ *
+ * @param[in] trans_handle
+ *
+ * @return
+ * None
+ *
+ *****************************/
+void SalQatMsg_updateQueueTail(icp_comms_trans_handle trans_handle);
+#endif /* SAL_QAT_CMN_MSG_H */
Index: sys/dev/qat/qat_api/common/include/sal_service_state.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/include/sal_service_state.h
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+/**
+ ***************************************************************************
+ * @file sal_service_state.h
+ *
+ * @defgroup SalServiceState
+ *
+ * @ingroup SalCtrl
+ *
+ * Checks state for generic service instance
+ *
+ ***************************************************************************/
+
+#ifndef SAL_SERVICE_STATE_H_
+#define SAL_SERVICE_STATE_H_
+
+/**
+*******************************************************************************
+ * @ingroup SalServiceState
+ * Check to see if the instance is in the running state
+ *
+ * @description
+ * This function checks the state of an instance to see if it is in the
+ * running state
+ *
+ * @param[in] instance Instance handle (assumes this is valid, i.e.
checked
+ * before this function is called)
+ * @retval CPA_TRUE Instance in the RUNNING state
+ * @retval CPA_FALSE Instance not in RUNNING state
+ *
+ *****************************************************************************/
+CpaBoolean Sal_ServiceIsRunning(CpaInstanceHandle instanceHandle);
+
+/**
+*******************************************************************************
+ * @ingroup SalServiceState
+ * Check to see if the instance is being restarted
+ *
+ * @description
+ * This function checks the state of an instance to see if the device it
+ * uses is being restarted because of hardware error.
+ *
+ * @param[in] instance Instance handle (assumes this is valid, i.e. checked
+ * before this function is called)
+ * @retval CPA_TRUE Device the instance is using is restarting.
+ * @retval CPA_FALSE Device the instance is using is not restarting.
+ *
+ *****************************************************************************/
+CpaBoolean Sal_ServiceIsRestarting(CpaInstanceHandle instanceHandle);
+
+/**
+ *******************************************************************************
+ * @ingroup SalServiceState
+ * This macro checks if an instance is running. An error message is logged
+ * if it is not in a running state.
+ *
+ * @return CPA_STATUS_FAIL Instance not in RUNNING state.
+ * @return void Instance is in RUNNING state.
+ ******************************************************************************/
+#define SAL_RUNNING_CHECK(instanceHandle)                                   \
+	do {                                                                \
+		if (unlikely(CPA_TRUE !=                                    \
+			     Sal_ServiceIsRunning(instanceHandle))) {       \
+			if (CPA_TRUE ==                                     \
+			    Sal_ServiceIsRestarting(instanceHandle)) {      \
+				return CPA_STATUS_RESTARTING;               \
+			}                                                   \
+			QAT_UTILS_LOG("Instance not in a Running state\n"); \
+			return CPA_STATUS_FAIL;                             \
+		}                                                           \
+	} while (0)
+
+/**
+ *******************************************************************************
+ * @ingroup SalServiceState
+ * This macro checks if an instance is in a state to get init event.
+ * + * @return CPA_STATUS_FAIL Instance not in good state. + * @return void Instance is in good state. + ******************************************************************************/ +#define SAL_SERVICE_GOOD_FOR_INIT(instanceHandle) \ + do { \ + sal_service_t *pService = (sal_service_t *)instanceHandle; \ + if ((SAL_SERVICE_STATE_UNINITIALIZED != pService->state) && \ + (SAL_SERVICE_STATE_RESTARTING != pService->state)) { \ + QAT_UTILS_LOG( \ + "Not in the correct state to call init\n"); \ + return CPA_STATUS_FAIL; \ + } \ + } while (0) + +#endif /*SAL_SERVICE_STATE_H_*/ Index: sys/dev/qat/qat_api/common/include/sal_statistics.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/include/sal_statistics.h @@ -0,0 +1,102 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file sal_statistics.h + * + * @ingroup SalStats + * + * @description + * Statistics related defines, structures and functions + * + *****************************************************************************/ + +#ifndef SAL_STATISTICS_H +#define SAL_STATISTICS_H + +#include "sal_statistics_strings.h" + +#define SAL_STATS_SYM 0 +#define SAL_STATS_DSA 1 +#define SAL_STATS_DSA2 2 +#define SAL_STATS_RSA 3 +#define SAL_STATS_DH 4 +#define SAL_STATS_KEYGEN 5 +#define SAL_STATS_LN 6 +#define SAL_STATS_PRIME 7 +#define SAL_STATS_ECC 8 +#define SAL_STATS_ECDH 9 +#define SAL_STATS_ECDSA 10 +/**< Numeric values for crypto statistics */ + +#define SAL_STATISTICS_STRING_OFF "0" +/**< String representing the value for disabled statistics */ + +/** +***************************************************************************** + * @ingroup SalStats + * Structure describing stats enabled/disabled in the system + * + * @description + * Structure describing stats enabled/disabled in the system + * + 
*****************************************************************************/
+typedef struct sal_statistics_collection_s {
+	CpaBoolean bStatsEnabled;
+	/**< If CPA_TRUE then statistics functionality is enabled */
+	CpaBoolean bDcStatsEnabled;
+	/**< If CPA_TRUE then Compression statistics are enabled */
+	CpaBoolean bDhStatsEnabled;
+	/**< If CPA_TRUE then Diffie-Hellman statistics are enabled */
+	CpaBoolean bDsaStatsEnabled;
+	/**< If CPA_TRUE then DSA statistics are enabled */
+	CpaBoolean bEccStatsEnabled;
+	/**< If CPA_TRUE then ECC statistics are enabled */
+	CpaBoolean bKeyGenStatsEnabled;
+	/**< If CPA_TRUE then Key Gen statistics are enabled */
+	CpaBoolean bLnStatsEnabled;
+	/**< If CPA_TRUE then Large Number statistics are enabled */
+	CpaBoolean bPrimeStatsEnabled;
+	/**< If CPA_TRUE then Prime statistics are enabled */
+	CpaBoolean bRsaStatsEnabled;
+	/**< If CPA_TRUE then RSA statistics are enabled */
+	CpaBoolean bSymStatsEnabled;
+	/**< If CPA_TRUE then Symmetric Crypto statistics are enabled */
+} sal_statistics_collection_t;
+
+/**
+ ******************************************************************************
+ * @ingroup SalStats
+ *
+ * @description
+ * Initializes structure describing which statistics
+ * are enabled for the acceleration device.
+ *
+ * @param[in] device Pointer to an acceleration device structure
+ *
+ * @retval CPA_STATUS_SUCCESS Operation successful
+ * @retval CPA_STATUS_INVALID_PARAM Invalid param provided
+ * @retval CPA_STATUS_RESOURCE Memory alloc failed
+ * @retval CPA_STATUS_FAIL Operation failed
+ *
+ ******************************************************************************/
+CpaStatus SalStatistics_InitStatisticsCollection(icp_accel_dev_t *device);
+
+/**
+ ******************************************************************************
+ * @ingroup SalStats
+ *
+ * @description
+ * Cleans structure describing which statistics
+ * are enabled for the acceleration device.
+ *
+ * @param[in] device Pointer to an acceleration device structure
+ *
+ * @retval CPA_STATUS_SUCCESS Operation successful
+ * @retval CPA_STATUS_INVALID_PARAM Invalid param provided
+ * @retval CPA_STATUS_FAIL Operation failed
+ *
+ ******************************************************************************/
+CpaStatus SalStatistics_CleanStatisticsCollection(icp_accel_dev_t *device);
+#endif /* SAL_STATISTICS_H */
Index: sys/dev/qat/qat_api/common/include/sal_string_parse.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/include/sal_string_parse.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+/**
+ *****************************************************************************
+ * @file sal_string_parse.h
+ *
+ * @defgroup SalStringParse
+ *
+ * @ingroup SalStringParse
+ *
+ * @description
+ * This file contains string parsing functions
+ *
+ *****************************************************************************/
+
+#ifndef SAL_STRING_PARSE_H
+#define SAL_STRING_PARSE_H
+
+/* Maximum size of the strings used by SAL */
+#define SAL_CFG_MAX_VAL_LEN_IN_BYTES 64
+
+/**
+*******************************************************************************
+ * @ingroup SalStringParse
+ * Builds a string and stores it in result
+ *
+ * @description
+ * The result string will be the concatenation of string1, instanceNumber
+ * and string2. The size of result has to be SAL_CFG_MAX_VAL_LEN_IN_BYTES.
+ * We can't check this in this function; this is the user's responsibility
+ *
+ * @param[in] string1 First string to concatenate
+ * @param[in] instanceNumber Instance number
+ * @param[in] string2 Second string to concatenate
+ * @param[out] result Resulting string of concatenation
+ *
+ * @retval CPA_STATUS_SUCCESS Function executed successfully
+ * @retval CPA_STATUS_FAIL Function failed
+ *
+ *****************************************************************************/
+CpaStatus Sal_StringParsing(char *string1,
+    Cpa32U instanceNumber,
+    char *string2,
+    char *result);
+
+/**
+*******************************************************************************
+ * @ingroup SalStringParse
+ * Convert a string to an unsigned long
+ *
+ * @description
+ * Parses the string cp in the specified base, and returns it as an
+ * unsigned long value.
+ *
+ * @param[in] cp String to be converted
+ * @param[in] endp Pointer to the end of the string. This parameter
+ * can also be NULL and will not be used in this case
+ * @param[in] cfgBase Base to convert the string
+ *
+ * @retval The string converted to an unsigned long
+ *
+ *****************************************************************************/
+Cpa64U Sal_Strtoul(const char *cp, char **endp, unsigned int cfgBase);
+
+#endif /* SAL_STRING_PARSE_H */
Index: sys/dev/qat/qat_api/common/include/sal_types_compression.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/include/sal_types_compression.h
@@ -0,0 +1,150 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+/**
+ ***************************************************************************
+ * @file sal_types_compression.h
+ *
+ * @ingroup SalCtrl
+ *
+ * Generic compression instance type definition
+ *
+ ***************************************************************************/
+#ifndef SAL_TYPES_COMPRESSION_H_
+#define SAL_TYPES_COMPRESSION_H_
+
+#include "cpa_dc.h"
+#include "cpa_dc_dp.h"
+#include "lac_sal_types.h"
+#include "icp_qat_hw.h"
+#include "icp_buffer_desc.h"
+
+#include "lac_mem_pools.h"
+#include "icp_adf_transport.h"
+
+#define DC_NUM_RX_RINGS (1)
+
+/**
+ *****************************************************************************
+ * @ingroup SalCtrl
+ * Compression device specific data
+ *
+ * @description
+ * Contains device specific information for a compression service.
+ *
+ *****************************************************************************/
+typedef struct sal_compression_device_data {
+	/* Device specific minimum output buffer size for static compression */
+	Cpa32U minOutputBuffSize;
+
+	/* Enable/disable secureRam/acceleratorRam for intermediate buffers */
+	Cpa8U useDevRam;
+
+	/* When set, implies device can decompress interim odd byte length
+	 * stateful decompression requests.
+	 */
+	CpaBoolean oddByteDecompInterim;
+
+	/* When set, implies device can decompress odd byte length
+	 * stateful decompression requests when bFinal is absent
+	 */
+	CpaBoolean oddByteDecompNobFinal;
+
+	/* Flag to indicate if translator slice overflow is supported */
+	CpaBoolean translatorOverflow;
+
+	/* Flag to enable/disable delayed match mode */
+	icp_qat_hw_compression_delayed_match_t enableDmm;
+
+	Cpa32U inflateContextSize;
+	Cpa8U highestHwCompressionDepth;
+
+	/* Mask that reports supported window sizes for comp/decomp */
+	Cpa8U windowSizeMask;
+
+	/* Flag to indicate CompressAndVerifyAndRecover feature support */
+	CpaBoolean cnvnrSupported;
+} sal_compression_device_data_t;
+
+/**
+ *****************************************************************************
+ * @ingroup SalCtrl
+ * Compression specific Service Container
+ *
+ * @description
+ * Contains information required per compression service instance.
+ * + *****************************************************************************/ +typedef struct sal_compression_service_s { + /* An instance of the Generic Service Container */ + sal_service_t generic_service_info; + + /* Memory pool ID used for compression */ + lac_memory_pool_id_t compression_mem_pool; + + /* Pointer to an array of atomic stats for compression */ + QatUtilsAtomic *pCompStatsArr; + + /* Size of the DRAM intermediate buffer in bytes */ + Cpa64U minInterBuffSizeInBytes; + + /* Number of DRAM intermediate buffers */ + Cpa16U numInterBuffs; + + /* Address of the array of DRAM intermediate buffers*/ + icp_qat_addr_width_t *pInterBuffPtrsArray; + CpaPhysicalAddr pInterBuffPtrsArrayPhyAddr; + + icp_comms_trans_handle trans_handle_compression_tx; + icp_comms_trans_handle trans_handle_compression_rx; + + /* Maximum number of in flight requests */ + Cpa32U maxNumCompConcurrentReq; + + /* Callback function defined for the DcDp API compression session */ + CpaDcDpCallbackFn pDcDpCb; + + /* Config info */ + Cpa16U acceleratorNum; + Cpa16U bankNum; + Cpa16U pkgID; + Cpa16U isPolled; + Cpa32U coreAffinity; + Cpa32U nodeAffinity; + + sal_compression_device_data_t comp_device_data; + + /* Statistics handler */ + debug_file_info_t *debug_file; +} sal_compression_service_t; + +/************************************************************************* + * @ingroup SalCtrl + * @description + * This function returns a valid compression instance handle for the system + * if it exists. + * + * @performance + * To avoid calling this function the user of the QA api should not use + * instanceHandle = CPA_INSTANCE_HANDLE_SINGLE. + * + * @context + * This function is called whenever instanceHandle = + * CPA_INSTANCE_HANDLE_SINGLE at the QA Dc api. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval Pointer to first compression instance handle or NULL if no + * compression instances in the system. 
+ *
+ *************************************************************************/
+CpaInstanceHandle dcGetFirstHandle(void);
+
+#endif /*SAL_TYPES_COMPRESSION_H_*/
Index: sys/dev/qat/qat_api/common/qat_comms/sal_qat_cmn_msg.c
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/common/qat_comms/sal_qat_cmn_msg.c
@@ -0,0 +1,219 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+/**
+ *****************************************************************************
+ * @file sal_qat_cmn_msg.c
+ *
+ * @defgroup SalQatCmnMessage
+ *
+ * @ingroup SalQatCmnMessage
+ *
+ * Interfaces for populating the common QAT structures for a lookaside
+ * operation.
+ *
+ *****************************************************************************/
+
+/*
+******************************************************************************
+* Include public/global header files
+******************************************************************************
+*/
+#include "cpa.h"
+
+/*
+*******************************************************************************
+* Include private header files
+*******************************************************************************
+*/
+#include "qat_utils.h"
+#include "icp_accel_devices.h"
+#include "icp_qat_fw_la.h"
+#include "icp_qat_hw.h"
+#include "lac_common.h"
+#include "lac_mem.h"
+#include "sal_qat_cmn_msg.h"
+
+/********************************************************************
+ * @ingroup SalQatMsg_CmnHdrWrite
+ *
+ * @description
+ * This function fills in all fields in the icp_qat_fw_comn_req_hdr_t
+ * section of the Request Msg.
Build LW0 + LW1 -
+ * service part of the request
+ *
+ * @param[in] pMsg Pointer to 128B Request Msg buffer
+ * @param[in] serviceType type of service request
+ * @param[in] serviceCmdId id for the type of service request
+ * @param[in] cmnFlags common request flags
+ * @param[in] serviceCmdFlags service command flags
+ *
+ * @return
+ * None
+ *
+ *****************************************/
+void
+SalQatMsg_CmnHdrWrite(icp_qat_fw_comn_req_t *pMsg,
+    icp_qat_fw_comn_request_id_t serviceType,
+    uint8_t serviceCmdId,
+    icp_qat_fw_comn_flags cmnFlags,
+    icp_qat_fw_serv_specif_flags serviceCmdFlags)
+{
+	icp_qat_fw_comn_req_hdr_t *pHeader = &(pMsg->comn_hdr);
+
+	/* LW0 */
+	pHeader->hdr_flags =
+	    ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+	pHeader->service_type = (uint8_t)serviceType;
+	pHeader->service_cmd_id = serviceCmdId;
+	pHeader->resrvd1 = 0;
+	/* LW1 */
+	pHeader->comn_req_flags = cmnFlags;
+	pHeader->serv_specif_flags = serviceCmdFlags;
+}
+
+/********************************************************************
+ * @ingroup SalQatCmnMessage
+ *
+ * @description
+ * This function fills in all fields in the icp_qat_fw_comn_req_mid_t
+ * section of the Request Msg and the corresponding SGL/Flat flag
+ * in the Hdr.
+ *
+ * @param[in] pReq Pointer to 128B Request Msg buffer
+ * @param[in] pOpaqueData Pointer to opaque data used by callback
+ * @param[in] bufferFormat src and dst Buffers are either SGL or Flat
+ * format
+ * @param[in] srcBuffer Address of source buffer
+ * @param[in] dstBuffer Address of destination buffer
+ * @param[in] srcLength Length of source buffer
+ * @param[in] dstLength Length of destination buffer
+ *
+ * @assumptions
+ * All fields in mid section are zero before fn is called
+ *
+ * @return
+ * None
+ *
+ *****************************************/
+void inline SalQatMsg_CmnMidWrite(icp_qat_fw_la_bulk_req_t *pReq,
+    const void *pOpaqueData,
+    Cpa8U bufferFormat,
+    Cpa64U srcBuffer,
+    Cpa64U dstBuffer,
+    Cpa32U srcLength,
+    Cpa32U dstLength)
+{
+	icp_qat_fw_comn_req_mid_t *pMid = &(pReq->comn_mid);
+
+	LAC_MEM_SHARED_WRITE_FROM_PTR(pMid->opaque_data, pOpaqueData);
+	pMid->src_data_addr = srcBuffer;
+
+	/* In place */
+	if (0 == dstBuffer) {
+		pMid->dest_data_addr = srcBuffer;
+	}
+	/* Out of place */
+	else {
+		pMid->dest_data_addr = dstBuffer;
+	}
+
+	if (bufferFormat == QAT_COMN_PTR_TYPE_SGL) {
+		/* Using ScatterGatherLists so set flag in header */
+		ICP_QAT_FW_COMN_PTR_TYPE_SET(pReq->comn_hdr.comn_req_flags,
+		    QAT_COMN_PTR_TYPE_SGL);
+
+		/* Assumption: no need to set src and dest length in this
+		 * case as they are not used */
+
+	} else {
+		/* Using Flat buffers so set flag in header */
+		ICP_QAT_FW_COMN_PTR_TYPE_SET(pReq->comn_hdr.comn_req_flags,
+		    QAT_COMN_PTR_TYPE_FLAT);
+
+		pMid->src_length = srcLength;
+		pMid->dst_length = dstLength;
+	}
+}
+
+/********************************************************************
+ * @ingroup SalQatMsg_ContentDescHdrWrite
+ *
+ * @description
+ * This function fills in all fields in the
+ * icp_qat_fw_comn_req_hdr_cd_pars_t section of the Request Msg.
+ *
+ * @param[in] pMsg Pointer to 128B Request Msg buffer.
+ * @param[in] pContentDescInfo content descriptor info.
+ * + * @return + * none + * + *****************************************/ +void +SalQatMsg_ContentDescHdrWrite( + icp_qat_fw_comn_req_t *pMsg, + const sal_qat_content_desc_info_t *pContentDescInfo) +{ + icp_qat_fw_comn_req_hdr_cd_pars_t *pCd_pars = &(pMsg->cd_pars); + + pCd_pars->s.content_desc_addr = + pContentDescInfo->hardwareSetupBlockPhys; + pCd_pars->s.content_desc_params_sz = pContentDescInfo->hwBlkSzQuadWords; + pCd_pars->s.content_desc_resrvd1 = 0; + pCd_pars->s.content_desc_hdr_resrvd2 = 0; + pCd_pars->s.content_desc_resrvd3 = 0; +} + +/******************************************************************** + * @ingroup SalQatMsg_CtrlBlkSetToReserved + * + * @description + * This function sets the whole control block to a reserved state. + * + * @param[in] _pMsg Pointer to 128B Request Msg buffer. + * + * @return + * none + * + *****************************************/ +void +SalQatMsg_CtrlBlkSetToReserved(icp_qat_fw_comn_req_t *pMsg) +{ + + icp_qat_fw_comn_req_cd_ctrl_t *pCd_ctrl = &(pMsg->cd_ctrl); + + memset(pCd_ctrl, 0, sizeof(icp_qat_fw_comn_req_cd_ctrl_t)); +} + +/******************************************************************** + * @ingroup SalQatMsg_transPutMsg + * + * @description + * + * + * @param[in] trans_handle + * @param[in] pqat_msg + * @param[in] size_in_lws + * @param[in] service + * + * @return + * CpaStatus + * + *****************************************/ +CpaStatus +SalQatMsg_transPutMsg(icp_comms_trans_handle trans_handle, + void *pqat_msg, + Cpa32U size_in_lws, + Cpa8U service) +{ + return icp_adf_transPutMsg(trans_handle, pqat_msg, size_in_lws); +} + +void +SalQatMsg_updateQueueTail(icp_comms_trans_handle trans_handle) +{ + icp_adf_updateQueueTail(trans_handle); +} Index: sys/dev/qat/qat_api/common/stubs/lac_stubs.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/stubs/lac_stubs.c @@ -0,0 +1,413 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 
2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+/**
+ *****************************************************************************
+ *
+ * @file lac_stubs.c
+ *
+ * @defgroup kernel stubs
+ *
+ * PKE and KPT APIs are not supported in the kernel API
+ *
+ *****************************************************************************/
+
+/*
+*******************************************************************************
+* Include public/global header files
+*******************************************************************************
+*/
+
+/* API Includes */
+#include "cpa.h"
+#include "cpa_cy_dh.h"
+#include "cpa_cy_dsa.h"
+#include "cpa_cy_ecdh.h"
+#include "cpa_cy_ecdsa.h"
+#include "cpa_cy_ec.h"
+#include "cpa_cy_prime.h"
+#include "cpa_cy_rsa.h"
+#include "cpa_cy_ln.h"
+#include "cpa_dc.h"
+#include "icp_accel_devices.h"
+#include "icp_adf_init.h"
+#include "icp_adf_transport.h"
+#include "icp_sal_poll.h"
+#include "cpa_cy_sym.h"
+#include "cpa_cy_sym_dp.h"
+#include "cpa_cy_key.h"
+#include "cpa_cy_common.h"
+#include "cpa_cy_im.h"
+#include "icp_sal_user.h"
+
+/* Diffie Hellman */
+CpaStatus
+cpaCyDhKeyGenPhase1(const CpaInstanceHandle instanceHandle,
+    const CpaCyGenFlatBufCbFunc pDhPhase1Cb,
+    void *pCallbackTag,
+    const CpaCyDhPhase1KeyGenOpData *pPhase1KeyGenData,
+    CpaFlatBuffer *pLocalOctetStringPV)
+{
+	return CPA_STATUS_UNSUPPORTED;
+}
+
+CpaStatus
+cpaCyDhKeyGenPhase2Secret(
+    const CpaInstanceHandle instanceHandle,
+    const CpaCyGenFlatBufCbFunc pDhPhase2Cb,
+    void *pCallbackTag,
+    const CpaCyDhPhase2SecretKeyGenOpData *pPhase2SecretKeyGenData,
+    CpaFlatBuffer *pOctetStringSecretKey)
+{
+	return CPA_STATUS_UNSUPPORTED;
+}
+
+CpaStatus
+cpaCyDhQueryStats64(const CpaInstanceHandle instanceHandle,
+    CpaCyDhStats64 *pDhStats)
+{
+	return CPA_STATUS_UNSUPPORTED;
+}
+
+CpaStatus
+cpaCyDhQueryStats(const CpaInstanceHandle instanceHandle,
+    CpaCyDhStats *pDhStats)
+{
+	return CPA_STATUS_UNSUPPORTED;
+}
+
+/* DSA */
+CpaStatus
+cpaCyDsaGenPParam(const
CpaInstanceHandle instanceHandle, + const CpaCyDsaGenCbFunc pCb, + void *pCallbackTag, + const CpaCyDsaPParamGenOpData *pOpData, + CpaBoolean *pProtocolStatus, + CpaFlatBuffer *pP) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyDsaGenGParam(const CpaInstanceHandle instanceHandle, + const CpaCyDsaGenCbFunc pCb, + void *pCallbackTag, + const CpaCyDsaGParamGenOpData *pOpData, + CpaBoolean *pProtocolStatus, + CpaFlatBuffer *pG) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyDsaGenYParam(const CpaInstanceHandle instanceHandle, + const CpaCyDsaGenCbFunc pCb, + void *pCallbackTag, + const CpaCyDsaYParamGenOpData *pOpData, + CpaBoolean *pProtocolStatus, + CpaFlatBuffer *pY) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyDsaSignR(const CpaInstanceHandle instanceHandle, + const CpaCyDsaGenCbFunc pCb, + void *pCallbackTag, + const CpaCyDsaRSignOpData *pOpData, + CpaBoolean *pProtocolStatus, + CpaFlatBuffer *pR) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyDsaSignS(const CpaInstanceHandle instanceHandle, + const CpaCyDsaGenCbFunc pCb, + void *pCallbackTag, + const CpaCyDsaSSignOpData *pOpData, + CpaBoolean *pProtocolStatus, + CpaFlatBuffer *pS) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyDsaSignRS(const CpaInstanceHandle instanceHandle, + const CpaCyDsaRSSignCbFunc pCb, + void *pCallbackTag, + const CpaCyDsaRSSignOpData *pOpData, + CpaBoolean *pProtocolStatus, + CpaFlatBuffer *pR, + CpaFlatBuffer *pS) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyDsaVerify(const CpaInstanceHandle instanceHandle, + const CpaCyDsaVerifyCbFunc pCb, + void *pCallbackTag, + const CpaCyDsaVerifyOpData *pOpData, + CpaBoolean *pVerifyStatus) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyDsaQueryStats(const CpaInstanceHandle instanceHandle, + CpaCyDsaStats *pDsaStats) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyDsaQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyDsaStats64 *pDsaStats) +{ + 
return CPA_STATUS_UNSUPPORTED; +} + +/* ECDH */ +CpaStatus +cpaCyEcdhPointMultiply(const CpaInstanceHandle instanceHandle, + const CpaCyEcdhPointMultiplyCbFunc pCb, + void *pCallbackTag, + const CpaCyEcdhPointMultiplyOpData *pOpData, + CpaBoolean *pMultiplyStatus, + CpaFlatBuffer *pXk, + CpaFlatBuffer *pYk) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyEcdhQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyEcdhStats64 *pEcdhStats) +{ + return CPA_STATUS_UNSUPPORTED; +} + +/* ECDSA */ +CpaStatus +cpaCyEcdsaSignR(const CpaInstanceHandle instanceHandle, + const CpaCyEcdsaGenSignCbFunc pCb, + void *pCallbackTag, + const CpaCyEcdsaSignROpData *pOpData, + CpaBoolean *pSignStatus, + CpaFlatBuffer *pR) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyEcdsaSignS(const CpaInstanceHandle instanceHandle, + const CpaCyEcdsaGenSignCbFunc pCb, + void *pCallbackTag, + const CpaCyEcdsaSignSOpData *pOpData, + CpaBoolean *pSignStatus, + CpaFlatBuffer *pS) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyEcdsaSignRS(const CpaInstanceHandle instanceHandle, + const CpaCyEcdsaSignRSCbFunc pCb, + void *pCallbackTag, + const CpaCyEcdsaSignRSOpData *pOpData, + CpaBoolean *pSignStatus, + CpaFlatBuffer *pR, + CpaFlatBuffer *pS) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyEcdsaVerify(const CpaInstanceHandle instanceHandle, + const CpaCyEcdsaVerifyCbFunc pCb, + void *pCallbackTag, + const CpaCyEcdsaVerifyOpData *pOpData, + CpaBoolean *pVerifyStatus) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyEcdsaQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyEcdsaStats64 *pEcdsaStats) +{ + return CPA_STATUS_UNSUPPORTED; +} + +/* EC */ +CpaStatus +cpaCyEcPointMultiply(const CpaInstanceHandle instanceHandle, + const CpaCyEcPointMultiplyCbFunc pCb, + void *pCallbackTag, + const CpaCyEcPointMultiplyOpData *pOpData, + CpaBoolean *pMultiplyStatus, + CpaFlatBuffer *pXk, + CpaFlatBuffer *pYk) +{ + return CPA_STATUS_UNSUPPORTED; +} + 
+CpaStatus +cpaCyEcPointVerify(const CpaInstanceHandle instanceHandle, + const CpaCyEcPointVerifyCbFunc pCb, + void *pCallbackTag, + const CpaCyEcPointVerifyOpData *pOpData, + CpaBoolean *pVerifyStatus) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyEcQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyEcStats64 *pEcStats) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyEcMontEdwdsPointMultiply( + const CpaInstanceHandle instanceHandle, + const CpaCyEcPointMultiplyCbFunc pCb, + void *pCallbackTag, + const CpaCyEcMontEdwdsPointMultiplyOpData *pOpData, + CpaBoolean *pMultiplyStatus, + CpaFlatBuffer *pXk, + CpaFlatBuffer *pYk) +{ + return CPA_STATUS_UNSUPPORTED; +} + +/* Prime */ +CpaStatus +cpaCyPrimeTest(const CpaInstanceHandle instanceHandle, + const CpaCyPrimeTestCbFunc pCb, + void *pCallbackTag, + const CpaCyPrimeTestOpData *pOpData, + CpaBoolean *pTestPassed) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyPrimeQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyPrimeStats64 *pPrimeStats) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyPrimeQueryStats(const CpaInstanceHandle instanceHandle, + CpaCyPrimeStats *pPrimeStats) +{ + return CPA_STATUS_UNSUPPORTED; +} + +/* RSA */ +CpaStatus +cpaCyRsaGenKey(const CpaInstanceHandle instanceHandle, + const CpaCyRsaKeyGenCbFunc pRsaKeyGenCb, + void *pCallbackTag, + const CpaCyRsaKeyGenOpData *pKeyGenOpData, + CpaCyRsaPrivateKey *pPrivateKey, + CpaCyRsaPublicKey *pPublicKey) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyRsaEncrypt(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pRsaEncryptCb, + void *pCallbackTag, + const CpaCyRsaEncryptOpData *pEncryptOpData, + CpaFlatBuffer *pOutputData) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyRsaDecrypt(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pRsaDecryptCb, + void *pCallbackTag, + const CpaCyRsaDecryptOpData *pDecryptOpData, + CpaFlatBuffer 
*pOutputData) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyRsaQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyRsaStats64 *pRsaStats) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyRsaQueryStats(const CpaInstanceHandle instanceHandle, + CpaCyRsaStats *pRsaStats) +{ + return CPA_STATUS_UNSUPPORTED; +} + +/* Large Number */ +CpaStatus +cpaCyLnModExp(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pLnModExpCb, + void *pCallbackTag, + const CpaCyLnModExpOpData *pLnModExpOpData, + CpaFlatBuffer *pResult) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyLnModInv(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pLnModInvCb, + void *pCallbackTag, + const CpaCyLnModInvOpData *pLnModInvOpData, + CpaFlatBuffer *pResult) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +cpaCyLnStatsQuery64(const CpaInstanceHandle instanceHandle, + CpaCyLnStats64 *pLnStats) +{ + return CPA_STATUS_UNSUPPORTED; +} + +/* Dynamic Instance */ +CpaStatus +icp_adf_putDynInstance(icp_accel_dev_t *accel_dev, + adf_service_type_t stype, + Cpa32U instance_id) +{ + return CPA_STATUS_FAIL; +} + +CpaStatus +icp_sal_CyPollAsymRing(CpaInstanceHandle instanceHandle_in, + Cpa32U response_quota) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +icp_sal_AsymGetInflightRequests(CpaInstanceHandle instanceHandle, + Cpa32U *maxInflightRequests, + Cpa32U *numInflightRequests) +{ + return CPA_STATUS_UNSUPPORTED; +} + +CpaStatus +icp_sal_AsymPerformOpNow(CpaInstanceHandle instanceHandle) +{ + return CPA_STATUS_UNSUPPORTED; +} Index: sys/dev/qat/qat_api/common/utils/lac_buffer_desc.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/utils/lac_buffer_desc.c @@ -0,0 +1,492 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * 
@file lac_buffer_desc.c Utility functions for setting buffer descriptors + * + * @ingroup LacBufferDesc + * + *****************************************************************************/ + +/* +******************************************************************************* +* Include header files +******************************************************************************* +*/ +#include "qat_utils.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +#include "icp_adf_init.h" +#include "lac_list.h" +#include "lac_sal_types.h" +#include "lac_buffer_desc.h" +#include "lac_mem.h" +#include "cpa_cy_common.h" + +/* +******************************************************************************* +* Define public/global function definitions +******************************************************************************* +*/ +/* Invalid physical address value */ +#define INVALID_PHYSICAL_ADDRESS 0 + +/* Indicates what type of buffer writes need to be performed */ +typedef enum lac_buff_write_op_e { + WRITE_NORMAL = 0, + WRITE_AND_GET_SIZE, + WRITE_AND_ALLOW_ZERO_BUFFER, +} lac_buff_write_op_t; + +/* This function implements the buffer description writes for the traditional + * APIs */ +static CpaStatus +LacBuffDesc_CommonBufferListDescWrite(const CpaBufferList *pUserBufferList, + Cpa64U *pBufListAlignedPhyAddr, + CpaBoolean isPhysicalAddress, + Cpa64U *totalDataLenInBytes, + sal_service_t *pService, + lac_buff_write_op_t operationType) +{ + Cpa32U numBuffers = 0; + icp_qat_addr_width_t bufListDescPhyAddr = 0; + icp_qat_addr_width_t bufListAlignedPhyAddr = 0; + CpaFlatBuffer *pCurrClientFlatBuffer = NULL; + icp_buffer_list_desc_t *pBufferListDesc = NULL; + icp_flat_buffer_desc_t *pCurrFlatBufDesc = NULL; + + if (WRITE_AND_GET_SIZE == operationType) { + *totalDataLenInBytes = 0; + } + + numBuffers = pUserBufferList->numBuffers; + pCurrClientFlatBuffer = pUserBufferList->pBuffers; + + /* + * Get the physical address of this descriptor - need to offset by the + * 
alignment restrictions on the buffer descriptors + */ + bufListDescPhyAddr = (icp_qat_addr_width_t)LAC_OS_VIRT_TO_PHYS_EXTERNAL( + (*pService), pUserBufferList->pPrivateMetaData); + + if (bufListDescPhyAddr == 0) { + QAT_UTILS_LOG( + "Unable to get the physical address of the metadata.\n"); + return CPA_STATUS_FAIL; + } + + bufListAlignedPhyAddr = + LAC_ALIGN_POW2_ROUNDUP(bufListDescPhyAddr, + ICP_DESCRIPTOR_ALIGNMENT_BYTES); + + pBufferListDesc = (icp_buffer_list_desc_t *)(LAC_ARCH_UINT)( + (LAC_ARCH_UINT)pUserBufferList->pPrivateMetaData + + ((LAC_ARCH_UINT)bufListAlignedPhyAddr - + (LAC_ARCH_UINT)bufListDescPhyAddr)); + + /* Go past the Buffer List descriptor to the list of buffer descriptors + */ + pCurrFlatBufDesc = + (icp_flat_buffer_desc_t *)((pBufferListDesc->phyBuffers)); + + pBufferListDesc->numBuffers = numBuffers; + + if (WRITE_AND_GET_SIZE != operationType) { + /* Defining zero buffers is useful for example if running zero + * length + * hash */ + if (0 == numBuffers) { + /* In the case where there are zero buffers within the + * BufList + * it is required by firmware that the number is set to + * 1 + * but the phyBuffer and dataLenInBytes are set to + * NULL.*/ + pBufferListDesc->numBuffers = 1; + pCurrFlatBufDesc->dataLenInBytes = 0; + pCurrFlatBufDesc->phyBuffer = 0; + } + } + + while (0 != numBuffers) { + pCurrFlatBufDesc->dataLenInBytes = + pCurrClientFlatBuffer->dataLenInBytes; + + if (WRITE_AND_GET_SIZE == operationType) { + /* Calculate the total data length in bytes */ + *totalDataLenInBytes += + pCurrClientFlatBuffer->dataLenInBytes; + } + + /* Check if providing a physical address in the function. 
If not, + * we need to convert it to a physical one */ + if (CPA_TRUE == isPhysicalAddress) { + pCurrFlatBufDesc->phyBuffer = + LAC_MEM_CAST_PTR_TO_UINT64( + (LAC_ARCH_UINT)(pCurrClientFlatBuffer->pData)); + } else { + pCurrFlatBufDesc->phyBuffer = + LAC_MEM_CAST_PTR_TO_UINT64( + LAC_OS_VIRT_TO_PHYS_EXTERNAL( + (*pService), pCurrClientFlatBuffer->pData)); + + if (WRITE_AND_ALLOW_ZERO_BUFFER != operationType) { + if (INVALID_PHYSICAL_ADDRESS == + pCurrFlatBufDesc->phyBuffer) { + QAT_UTILS_LOG( + "Unable to get the physical address of the client buffer.\n"); + return CPA_STATUS_FAIL; + } + } + } + + pCurrFlatBufDesc++; + pCurrClientFlatBuffer++; + + numBuffers--; + } + + *pBufListAlignedPhyAddr = bufListAlignedPhyAddr; + return CPA_STATUS_SUCCESS; +} + +/* This function implements the buffer description writes for the traditional + * APIs. Zero length buffers are allowed and should be used for CHA-CHA-POLY + * and GCM algorithms */ +CpaStatus +LacBuffDesc_BufferListDescWriteAndAllowZeroBuffer( + const CpaBufferList *pUserBufferList, + Cpa64U *pBufListAlignedPhyAddr, + CpaBoolean isPhysicalAddress, + sal_service_t *pService) +{ + return LacBuffDesc_CommonBufferListDescWrite( + pUserBufferList, + pBufListAlignedPhyAddr, + isPhysicalAddress, + NULL, + pService, + WRITE_AND_ALLOW_ZERO_BUFFER); +} + +/* This function implements the buffer description writes for the traditional + * APIs */ +CpaStatus +LacBuffDesc_BufferListDescWrite(const CpaBufferList *pUserBufferList, + Cpa64U *pBufListAlignedPhyAddr, + CpaBoolean isPhysicalAddress, + sal_service_t *pService) +{ + return LacBuffDesc_CommonBufferListDescWrite(pUserBufferList, + pBufListAlignedPhyAddr, + isPhysicalAddress, + NULL, + pService, + WRITE_NORMAL); +} + +/* This function does the same processing as LacBuffDesc_BufferListDescWrite + * but also calculates the total length in bytes of the buffer list. 
*/ +CpaStatus +LacBuffDesc_BufferListDescWriteAndGetSize(const CpaBufferList *pUserBufferList, + Cpa64U *pBufListAlignedPhyAddr, + CpaBoolean isPhysicalAddress, + Cpa64U *totalDataLenInBytes, + sal_service_t *pService) +{ + Cpa32U numBuffers = 0; + icp_qat_addr_width_t bufListDescPhyAddr = 0; + icp_qat_addr_width_t bufListAlignedPhyAddr = 0; + CpaFlatBuffer *pCurrClientFlatBuffer = NULL; + icp_buffer_list_desc_t *pBufferListDesc = NULL; + icp_flat_buffer_desc_t *pCurrFlatBufDesc = NULL; + *totalDataLenInBytes = 0; + + numBuffers = pUserBufferList->numBuffers; + pCurrClientFlatBuffer = pUserBufferList->pBuffers; + + /* + * Get the physical address of this descriptor - need to offset by the + * alignment restrictions on the buffer descriptors + */ + bufListDescPhyAddr = (icp_qat_addr_width_t)LAC_OS_VIRT_TO_PHYS_EXTERNAL( + (*pService), pUserBufferList->pPrivateMetaData); + + if (INVALID_PHYSICAL_ADDRESS == bufListDescPhyAddr) { + QAT_UTILS_LOG( + "Unable to get the physical address of the metadata.\n"); + return CPA_STATUS_FAIL; + } + + bufListAlignedPhyAddr = + LAC_ALIGN_POW2_ROUNDUP(bufListDescPhyAddr, + ICP_DESCRIPTOR_ALIGNMENT_BYTES); + + pBufferListDesc = (icp_buffer_list_desc_t *)(LAC_ARCH_UINT)( + (LAC_ARCH_UINT)pUserBufferList->pPrivateMetaData + + ((LAC_ARCH_UINT)bufListAlignedPhyAddr - + (LAC_ARCH_UINT)bufListDescPhyAddr)); + + /* Go past the Buffer List descriptor to the list of buffer descriptors + */ + pCurrFlatBufDesc = + (icp_flat_buffer_desc_t *)((pBufferListDesc->phyBuffers)); + + pBufferListDesc->numBuffers = numBuffers; + + while (0 != numBuffers) { + pCurrFlatBufDesc->dataLenInBytes = + pCurrClientFlatBuffer->dataLenInBytes; + + /* Calculate the total data length in bytes */ + *totalDataLenInBytes += pCurrClientFlatBuffer->dataLenInBytes; + + if (isPhysicalAddress == CPA_TRUE) { + pCurrFlatBufDesc->phyBuffer = + LAC_MEM_CAST_PTR_TO_UINT64( + (LAC_ARCH_UINT)(pCurrClientFlatBuffer->pData)); + } else { + pCurrFlatBufDesc->phyBuffer = + 
LAC_MEM_CAST_PTR_TO_UINT64( + LAC_OS_VIRT_TO_PHYS_EXTERNAL( + (*pService), pCurrClientFlatBuffer->pData)); + + if (pCurrFlatBufDesc->phyBuffer == 0) { + QAT_UTILS_LOG( + "Unable to get the physical address of the client buffer.\n"); + return CPA_STATUS_FAIL; + } + } + + pCurrFlatBufDesc++; + pCurrClientFlatBuffer++; + + numBuffers--; + } + + *pBufListAlignedPhyAddr = bufListAlignedPhyAddr; + return CPA_STATUS_SUCCESS; +} + +CpaStatus +LacBuffDesc_FlatBufferVerify(const CpaFlatBuffer *pUserFlatBuffer, + Cpa64U *pPktSize, + lac_aligment_shift_t alignmentShiftExpected) +{ + LAC_CHECK_NULL_PARAM(pUserFlatBuffer); + LAC_CHECK_NULL_PARAM(pUserFlatBuffer->pData); + + if (0 == pUserFlatBuffer->dataLenInBytes) { + QAT_UTILS_LOG("FlatBuffer empty\n"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Expected alignment */ + if (LAC_NO_ALIGNMENT_SHIFT != alignmentShiftExpected) { + if (!LAC_ADDRESS_ALIGNED(pUserFlatBuffer->pData, + alignmentShiftExpected)) { + QAT_UTILS_LOG( + "FlatBuffer not aligned correctly - expected alignment of %u bytes.\n", + 1 << alignmentShiftExpected); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* Update the total size of the packet. 
This function is called in a + * loop for an entire buffer list, so we need to increment the value */ + *pPktSize += pUserFlatBuffer->dataLenInBytes; + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +LacBuffDesc_FlatBufferVerifyNull(const CpaFlatBuffer *pUserFlatBuffer, + Cpa64U *pPktSize, + lac_aligment_shift_t alignmentShiftExpected) +{ + LAC_CHECK_NULL_PARAM(pUserFlatBuffer); + + if (0 != pUserFlatBuffer->dataLenInBytes) { + LAC_CHECK_NULL_PARAM(pUserFlatBuffer->pData); + } + + /* Expected alignment */ + if (LAC_NO_ALIGNMENT_SHIFT != alignmentShiftExpected) { + if (!LAC_ADDRESS_ALIGNED(pUserFlatBuffer->pData, + alignmentShiftExpected)) { + QAT_UTILS_LOG( + "FlatBuffer not aligned correctly - expected alignment of %u bytes.\n", + 1 << alignmentShiftExpected); + return CPA_STATUS_INVALID_PARAM; + } + } + + /* Update the total size of the packet. This function is called in a + * loop for an entire buffer list, so we need to increment the value */ + *pPktSize += pUserFlatBuffer->dataLenInBytes; + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +LacBuffDesc_BufferListVerify(const CpaBufferList *pUserBufferList, + Cpa64U *pPktSize, + lac_aligment_shift_t alignmentShiftExpected) +{ + CpaFlatBuffer *pCurrClientFlatBuffer = NULL; + Cpa32U numBuffers = 0; + CpaStatus status = CPA_STATUS_SUCCESS; + + LAC_CHECK_NULL_PARAM(pUserBufferList); + LAC_CHECK_NULL_PARAM(pUserBufferList->pBuffers); + LAC_CHECK_NULL_PARAM(pUserBufferList->pPrivateMetaData); + + numBuffers = pUserBufferList->numBuffers; + + if (0 == pUserBufferList->numBuffers) { + QAT_UTILS_LOG("Number of buffers is 0.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + pCurrClientFlatBuffer = pUserBufferList->pBuffers; + + *pPktSize = 0; + while (0 != numBuffers && status == CPA_STATUS_SUCCESS) { + status = LacBuffDesc_FlatBufferVerify(pCurrClientFlatBuffer, + pPktSize, + alignmentShiftExpected); + + pCurrClientFlatBuffer++; + numBuffers--; + } + return status; +} + +CpaStatus +LacBuffDesc_BufferListVerifyNull(const 
CpaBufferList *pUserBufferList, + Cpa64U *pPktSize, + lac_aligment_shift_t alignmentShiftExpected) +{ + CpaFlatBuffer *pCurrClientFlatBuffer = NULL; + Cpa32U numBuffers = 0; + CpaStatus status = CPA_STATUS_SUCCESS; + + LAC_CHECK_NULL_PARAM(pUserBufferList); + LAC_CHECK_NULL_PARAM(pUserBufferList->pBuffers); + LAC_CHECK_NULL_PARAM(pUserBufferList->pPrivateMetaData); + + numBuffers = pUserBufferList->numBuffers; + + if (0 == pUserBufferList->numBuffers) { + QAT_UTILS_LOG("Number of buffers is 0.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + pCurrClientFlatBuffer = pUserBufferList->pBuffers; + + *pPktSize = 0; + while (0 != numBuffers && status == CPA_STATUS_SUCCESS) { + status = + LacBuffDesc_FlatBufferVerifyNull(pCurrClientFlatBuffer, + pPktSize, + alignmentShiftExpected); + + pCurrClientFlatBuffer++; + numBuffers--; + } + return status; +} + +/** + ****************************************************************************** + * @ingroup LacBufferDesc + *****************************************************************************/ +void +LacBuffDesc_BufferListTotalSizeGet(const CpaBufferList *pUserBufferList, + Cpa64U *pPktSize) +{ + CpaFlatBuffer *pCurrClientFlatBuffer = NULL; + Cpa32U numBuffers = 0; + + pCurrClientFlatBuffer = pUserBufferList->pBuffers; + numBuffers = pUserBufferList->numBuffers; + + *pPktSize = 0; + while (0 != numBuffers) { + *pPktSize += pCurrClientFlatBuffer->dataLenInBytes; + pCurrClientFlatBuffer++; + numBuffers--; + } +} + +void +LacBuffDesc_BufferListZeroFromOffset(CpaBufferList *pBuffList, + Cpa32U offset, + Cpa32U lenToZero) +{ + Cpa32U zeroLen = 0, sizeLeftToZero = 0; + Cpa64U currentBufferSize = 0; + CpaFlatBuffer *pBuffer = NULL; + Cpa8U *pZero = NULL; + pBuffer = pBuffList->pBuffers; + + /* Take a copy of total length to zero. 
*/ + sizeLeftToZero = lenToZero; + + while (sizeLeftToZero > 0) { + currentBufferSize = pBuffer->dataLenInBytes; + /* check where to start zeroing */ + if (offset >= currentBufferSize) { + /* Need to get to next buffer and reduce + * offset size by data len of buffer */ + offset = offset - pBuffer->dataLenInBytes; + pBuffer++; + } else { + /* Start to Zero from this position */ + pZero = (Cpa8U *)pBuffer->pData + offset; + + /* Need to calculate the correct number of bytes to zero + * for this iteration and for this location. + */ + if (sizeLeftToZero >= pBuffer->dataLenInBytes) { + /* The size to zero is spanning buffers, zeroLen + * in + * this case is from pZero (position) to end of + * buffer. + */ + zeroLen = pBuffer->dataLenInBytes - offset; + } else { + /* zeroLen is set to sizeLeftToZero, then check + * if zeroLen and + * the offset is greater or equal to the size of + * the buffer, if + * yes, adjust the zeroLen to zero out the + * remainder of this + * buffer. + */ + zeroLen = sizeLeftToZero; + if ((zeroLen + offset) >= + pBuffer->dataLenInBytes) { + zeroLen = + pBuffer->dataLenInBytes - offset; + } + } /* end inner else */ + memset((void *)pZero, 0, zeroLen); + sizeLeftToZero = sizeLeftToZero - zeroLen; + /* offset is no longer required as any data left to zero + * is now + * at the start of the next buffer. set offset to zero + * and move on + * the buffer pointer to the next buffer. 
+ */ + offset = 0; + pBuffer++; + + } /* end outer else */ + + } /* end while */ +} Index: sys/dev/qat/qat_api/common/utils/lac_lock_free_stack.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/utils/lac_lock_free_stack.h @@ -0,0 +1,87 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef LAC_LOCK_FREE_STACK_H_1 +#define LAC_LOCK_FREE_STACK_H_1 +#include "lac_mem_pools.h" + +typedef union { + struct { + uint64_t ctr : 16; + uint64_t ptr : 48; + }; + uint64_t atomic; +} pointer_t; + +typedef struct { + volatile pointer_t top; +} lock_free_stack_t; + +static inline void * +PTR(const uintptr_t addr48) +{ +#ifdef __x86_64__ + const int64_t addr64 = addr48 << 16; + + /* Do arithmetic shift to restore kernel canonical address (if not NULL) + */ + return (void *)(addr64 >> 16); +#else + return (void *)(addr48); +#endif +} + +static inline lac_mem_blk_t * +pop(lock_free_stack_t *stack) +{ + pointer_t old_top; + pointer_t new_top; + lac_mem_blk_t *next; + + do { + old_top.atomic = stack->top.atomic; + next = PTR(old_top.ptr); + if (NULL == next) + return next; + + new_top.ptr = (uintptr_t)next->pNext; + new_top.ctr = old_top.ctr + 1; + } while (!__sync_bool_compare_and_swap(&stack->top.atomic, + old_top.atomic, + new_top.atomic)); + + return next; +} + +static inline void +push(lock_free_stack_t *stack, lac_mem_blk_t *val) +{ + pointer_t new_top; + pointer_t old_top; + + do { + old_top.atomic = stack->top.atomic; + val->pNext = PTR(old_top.ptr); + new_top.ptr = (uintptr_t)val; + new_top.ctr = old_top.ctr + 1; + } while (!__sync_bool_compare_and_swap(&stack->top.atomic, + old_top.atomic, + new_top.atomic)); +} + +static inline lock_free_stack_t +_init_stack(void) +{ + lock_free_stack_t stack = { { { 0 } } }; + return stack; +} + +static inline lac_mem_blk_t * +top(lock_free_stack_t *stack) +{ + pointer_t old_top = stack->top; + lac_mem_blk_t 
*next = PTR(old_top.ptr); + return next; +} + +#endif Index: sys/dev/qat/qat_api/common/utils/lac_mem.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/utils/lac_mem.c @@ -0,0 +1,118 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file lac_mem.c Implementation of Memory Functions + * + * @ingroup LacMem + * + *****************************************************************************/ + +/* +******************************************************************************* +* Include header files +******************************************************************************* +*/ +#include "qat_utils.h" +#include "cpa.h" + +#include "icp_accel_devices.h" +#include "icp_adf_init.h" +#include "icp_adf_transport.h" +#include "icp_adf_debug.h" +#include "icp_sal_iommu.h" + +#include "lac_mem.h" +#include "lac_mem_pools.h" +#include "lac_common.h" +#include "lac_list.h" +#include "icp_qat_fw_la.h" +#include "lac_sal_types.h" + +/* +******************************************************************************** +* Static Variables +******************************************************************************** +*/ + +#define MAX_BUFFER_SIZE (LAC_BITS_TO_BYTES(4096)) +/**< @ingroup LacMem + * Maximum size of the buffers used in the resize function */ + +/* +******************************************************************************* +* Define public/global function definitions +******************************************************************************* +*/ +/** + * @ingroup LacMem + */ +CpaStatus +icp_LacBufferRestore(Cpa8U *pUserBuffer, + Cpa32U userLen, + Cpa8U *pWorkingBuffer, + Cpa32U workingLen, + CpaBoolean copyBuf) +{ + Cpa32U padSize = 0; + + /* NULL is a valid value for working buffer as this function may be + * called to clean up in an 
error case where all the resize operations + * were not completed */ + if (NULL == pWorkingBuffer) { + return CPA_STATUS_SUCCESS; + } + + if (workingLen < userLen) { + QAT_UTILS_LOG("Invalid buffer sizes\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pUserBuffer != pWorkingBuffer) { + + if (CPA_TRUE == copyBuf) { + /* Copy from internal buffer to user buffer */ + padSize = workingLen - userLen; + memcpy(pUserBuffer, pWorkingBuffer + padSize, userLen); + } + + Lac_MemPoolEntryFree(pWorkingBuffer); + } + return CPA_STATUS_SUCCESS; +} + +/** + * @ingroup LacMem + */ +CpaPhysicalAddr +SalMem_virt2PhysExternal(void *pVirtAddr, void *pServiceGen) +{ + sal_service_t *pService = (sal_service_t *)pServiceGen; + + if (NULL != pService->virt2PhysClient) { + return pService->virt2PhysClient(pVirtAddr); + } else { + /* Use internal QAT Utils virt to phys */ + /* Ok for kernel space probably should not use for user */ + return LAC_OS_VIRT_TO_PHYS_INTERNAL(pVirtAddr); + } +} + +size_t +icp_sal_iommu_get_remap_size(size_t size) +{ + return size; +} + +CpaStatus +icp_sal_iommu_map(Cpa64U phaddr, Cpa64U iova, size_t size) +{ + return CPA_STATUS_SUCCESS; +} + +CpaStatus +icp_sal_iommu_unmap(Cpa64U iova, size_t size) +{ + return CPA_STATUS_SUCCESS; +} Index: sys/dev/qat/qat_api/common/utils/lac_mem_pools.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/utils/lac_mem_pools.c @@ -0,0 +1,430 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + *************************************************************************** + * @file lac_mem_pools.c + * + * @ingroup LacMemPool + * + * Memory Pool creation and mgmt function implementations + * + ***************************************************************************/ + +#include "cpa.h" +#include "qat_utils.h" +#include "icp_accel_devices.h" +#include "icp_adf_init.h" +#include "icp_adf_transport.h" 
+#include "icp_adf_debug.h" +#include "lac_lock_free_stack.h" +#include "lac_mem_pools.h" +#include "lac_mem.h" +#include "lac_common.h" +#include "cpa_dc.h" +#include "dc_session.h" +#include "dc_datapath.h" +#include "icp_qat_fw_comp.h" +#include "icp_buffer_desc.h" +#include "lac_sym.h" + +#define LAC_MEM_POOLS_NUM_SUPPORTED 32000 +/**< @ingroup LacMemPool + * Number of mem pools supported */ + +#define LAC_MEM_POOLS_NAME_SIZE 17 +/**< @ingroup LacMemPool + * 16 bytes plus '\\0' terminator */ + +/**< @ingroup LacMemPool + * This structure is used to manage each pool created using this utility + * feature. The client will maintain a pointer (identifier) to the created + * structure per pool. + */ +typedef struct lac_mem_pool_hdr_s { + lock_free_stack_t stack; + char poolName[LAC_MEM_POOLS_NAME_SIZE]; /*16 bytes of a pool name */ + /**< up to 16 bytes of a pool name */ + unsigned int numElementsInPool; + /**< number of elements in the Pool */ + unsigned int blkSizeInBytes; + /**< Block size in bytes */ + unsigned int blkAlignmentInBytes; + /**< block alignment in bytes */ + lac_mem_blk_t **trackBlks; + /* An array of mem block pointers to track the allocated entries in pool + */ + volatile size_t availBlks; + /* Number of blocks available for allocation in this pool */ +} lac_mem_pool_hdr_t; + +static lac_mem_pool_hdr_t *lac_mem_pools[LAC_MEM_POOLS_NUM_SUPPORTED] = { + NULL +}; +/**< @ingroup LacMemPool + * Array of pointers to the mem pool header structure + */ + +LAC_DECLARE_HIGHEST_BIT_OF(lac_mem_blk_t); +/**< @ingroup LacMemPool + * local constant for quickening computation of additional space allocated + * for holding lac_mem_blk_t container-structure + */ + +/** + ******************************************************************************* + * @ingroup LacMemPool + * This function cleans up a mem pool. 
+ ******************************************************************************/ +void Lac_MemPoolCleanUpInternal(lac_mem_pool_hdr_t *pPoolID); + +static inline Cpa32U +Lac_MemPoolGetElementRealSize(Cpa32U blkSizeInBytes, Cpa32U blkAlignmentInBytes) +{ + Cpa32U addSize = (blkAlignmentInBytes >= sizeof(lac_mem_blk_t) ? + blkAlignmentInBytes : + 1 << (highest_bit_of_lac_mem_blk_t + 1)); + return blkSizeInBytes + addSize; +} + +CpaStatus +Lac_MemPoolCreate(lac_memory_pool_id_t *pPoolID, + char *poolName, + unsigned int numElementsInPool, /*Number of elements*/ + unsigned int blkSizeInBytes, /*Block Size in bytes*/ + unsigned int blkAlignmentInBytes, /*Block alignment (bytes)*/ + CpaBoolean trackMemory, + Cpa32U node) +{ + unsigned int poolSearch = 0; + unsigned int counter = 0; + lac_mem_blk_t *pMemBlkCurrent = NULL; + + void *pMemBlk = NULL; + + if (pPoolID == NULL) { + QAT_UTILS_LOG("Invalid Pool ID param\n"); + return CPA_STATUS_INVALID_PARAM; /*Error*/ + } + + /* Find First available Pool return error otherwise */ + while (lac_mem_pools[poolSearch] != NULL) { + poolSearch++; + if (LAC_MEM_POOLS_NUM_SUPPORTED == poolSearch) { + QAT_UTILS_LOG( + "No more memory pools available for allocation.\n"); + return CPA_STATUS_FAIL; + } + } + + /* Allocate a Pool header */ + lac_mem_pools[poolSearch] = LAC_OS_MALLOC(sizeof(lac_mem_pool_hdr_t)); + if (NULL == lac_mem_pools[poolSearch]) { + QAT_UTILS_LOG( + "Unable to allocate memory for creation of the pool.\n"); + return CPA_STATUS_RESOURCE; /*Error*/ + } + memset(lac_mem_pools[poolSearch], 0, sizeof(lac_mem_pool_hdr_t)); + + /* Copy in Pool Name */ + if (poolName != NULL) { + snprintf(lac_mem_pools[poolSearch]->poolName, + LAC_MEM_POOLS_NAME_SIZE, + "%s", + poolName); + } else { + LAC_OS_FREE(lac_mem_pools[poolSearch]); + lac_mem_pools[poolSearch] = NULL; + QAT_UTILS_LOG("Invalid Pool Name pointer\n"); + return CPA_STATUS_INVALID_PARAM; /*Error*/ + } + + /* Allocate table for tracking memory blocks */ + if (CPA_TRUE == 
trackMemory) { + lac_mem_pools[poolSearch]->trackBlks = LAC_OS_MALLOC( + (sizeof(lac_mem_blk_t *) * numElementsInPool)); + if (NULL == lac_mem_pools[poolSearch]->trackBlks) { + LAC_OS_FREE(lac_mem_pools[poolSearch]); + lac_mem_pools[poolSearch] = NULL; + QAT_UTILS_LOG( + "Unable to allocate memory for tracking memory blocks.\n"); + return CPA_STATUS_RESOURCE; /*Error*/ + } + } else { + lac_mem_pools[poolSearch]->trackBlks = NULL; + } + + lac_mem_pools[poolSearch]->availBlks = 0; + lac_mem_pools[poolSearch]->stack = _init_stack(); + + /* Calculate alignment needed for allocation */ + for (counter = 0; counter < numElementsInPool; counter++) { + CpaPhysicalAddr physAddr = 0; + /* realSize is computed for allocation of blkSize bytes + + additional + capacity for lac_mem_blk_t structure storage due to some + OSes' + (BSD) limitations for memory alignment to be a power of 2; + sizeof(lac_mem_blk_t) is rounded up to the closest power + of 2 - + optimised towards the least CPU overhead but at additional + memory + cost + */ + Cpa32U realSize = + Lac_MemPoolGetElementRealSize(blkSizeInBytes, + blkAlignmentInBytes); + Cpa32U addSize = realSize - blkSizeInBytes; + + if (CPA_STATUS_SUCCESS != LAC_OS_CAMALLOC(&pMemBlk, + realSize, + blkAlignmentInBytes, + node)) { + Lac_MemPoolCleanUpInternal(lac_mem_pools[poolSearch]); + lac_mem_pools[poolSearch] = NULL; + QAT_UTILS_LOG( + "Unable to allocate contiguous chunk of memory.\n"); + return CPA_STATUS_RESOURCE; + } + + /* Calculate various offsets */ + physAddr = LAC_OS_VIRT_TO_PHYS_INTERNAL( + (void *)((LAC_ARCH_UINT)pMemBlk + addSize)); + + /* physAddr is now already aligned to the greater power of 2: + blkAlignmentInBytes or sizeof(lac_mem_blk_t) round up + We safely put the structure right before the blkSize + real data block + */ + pMemBlkCurrent = + (lac_mem_blk_t *)(((LAC_ARCH_UINT)(pMemBlk)) + addSize - + sizeof(lac_mem_blk_t)); + + pMemBlkCurrent->physDataPtr = physAddr; + pMemBlkCurrent->pMemAllocPtr = pMemBlk; + 
pMemBlkCurrent->pPoolID = lac_mem_pools[poolSearch]; + pMemBlkCurrent->isInUse = CPA_FALSE; + pMemBlkCurrent->pNext = NULL; + + push(&lac_mem_pools[poolSearch]->stack, pMemBlkCurrent); + + /* Store allocated memory pointer */ + if (lac_mem_pools[poolSearch]->trackBlks != NULL) { + (lac_mem_pools[poolSearch]->trackBlks[counter]) = + (lac_mem_blk_t *)pMemBlkCurrent; + } + __sync_add_and_fetch(&lac_mem_pools[poolSearch]->availBlks, 1); + (lac_mem_pools[poolSearch])->numElementsInPool = counter + 1; + } + + /* Set Pool details in the header */ + (lac_mem_pools[poolSearch])->blkSizeInBytes = blkSizeInBytes; + (lac_mem_pools[poolSearch])->blkAlignmentInBytes = blkAlignmentInBytes; + /* Set the Pool ID output parameter */ + *pPoolID = (LAC_ARCH_UINT)(lac_mem_pools[poolSearch]); + /* Success */ + return CPA_STATUS_SUCCESS; +} + +void * +Lac_MemPoolEntryAlloc(lac_memory_pool_id_t poolID) +{ + lac_mem_pool_hdr_t *pPoolID = (lac_mem_pool_hdr_t *)poolID; + lac_mem_blk_t *pMemBlkCurrent = NULL; + + /* Explicitly removing NULL PoolID check for speed */ + if (pPoolID == NULL) { + QAT_UTILS_LOG("Invalid Pool ID"); + return NULL; + } + + /* Remove block from pool */ + pMemBlkCurrent = pop(&pPoolID->stack); + if (NULL == pMemBlkCurrent) { + return (void *)CPA_STATUS_RETRY; + } + __sync_sub_and_fetch(&pPoolID->availBlks, 1); + pMemBlkCurrent->isInUse = CPA_TRUE; + return (void *)((LAC_ARCH_UINT)(pMemBlkCurrent) + + sizeof(lac_mem_blk_t)); +} + +void +Lac_MemPoolEntryFree(void *pEntry) +{ + lac_mem_blk_t *pMemBlk = NULL; + + /* Explicitly NULL pointer check */ + if (pEntry == NULL) { + QAT_UTILS_LOG("Memory Handle NULL"); + return; + } + + pMemBlk = + (lac_mem_blk_t *)((LAC_ARCH_UINT)pEntry - sizeof(lac_mem_blk_t)); + pMemBlk->isInUse = CPA_FALSE; + + push(&pMemBlk->pPoolID->stack, pMemBlk); + __sync_add_and_fetch(&pMemBlk->pPoolID->availBlks, 1); +} + +void +Lac_MemPoolDestroy(lac_memory_pool_id_t poolID) +{ + unsigned int poolSearch = 0; + lac_mem_pool_hdr_t *pPoolID = 
(lac_mem_pool_hdr_t *)poolID; + + if (pPoolID != NULL) { + /*Remove entry from table*/ + while (lac_mem_pools[poolSearch] != pPoolID) { + poolSearch++; + + if (LAC_MEM_POOLS_NUM_SUPPORTED == poolSearch) { + QAT_UTILS_LOG("Invalid Pool ID submitted.\n"); + return; + } + } + + lac_mem_pools[poolSearch] = NULL; /*Remove handle from pool*/ + + Lac_MemPoolCleanUpInternal(pPoolID); + } +} + +void +Lac_MemPoolCleanUpInternal(lac_mem_pool_hdr_t *pPoolID) +{ + lac_mem_blk_t *pCurrentBlk = NULL; + void *pFreePtr = NULL; + Cpa32U count = 0; + + if (pPoolID->trackBlks == NULL) { + pCurrentBlk = pop(&pPoolID->stack); + + while (pCurrentBlk != NULL) { + /* Free Data Blocks */ + pFreePtr = pCurrentBlk->pMemAllocPtr; + pCurrentBlk = pop(&pPoolID->stack); + LAC_OS_CAFREE(pFreePtr); + } + } else { + for (count = 0; count < pPoolID->numElementsInPool; count++) { + pFreePtr = (pPoolID->trackBlks[count])->pMemAllocPtr; + LAC_OS_CAFREE(pFreePtr); + } + LAC_OS_FREE(pPoolID->trackBlks); + } + LAC_OS_FREE(pPoolID); +} + +unsigned int +Lac_MemPoolAvailableEntries(lac_memory_pool_id_t poolID) +{ + lac_mem_pool_hdr_t *pPoolID = (lac_mem_pool_hdr_t *)poolID; + if (pPoolID == NULL) { + QAT_UTILS_LOG("Invalid Pool ID\n"); + return 0; + } + return pPoolID->availBlks; +} + +void +Lac_MemPoolStatsShow(void) +{ + unsigned int index = 0; + QAT_UTILS_LOG(SEPARATOR BORDER + " Memory Pools Stats\n" SEPARATOR); + + while (index < LAC_MEM_POOLS_NUM_SUPPORTED) { + if (lac_mem_pools[index] != NULL) { + QAT_UTILS_LOG( + BORDER " Pool Name: %s \n" BORDER + " No. Elements in Pool: %10u \n" BORDER + " Element Size in Bytes: %10u \n" BORDER + " Alignment in Bytes: %10u \n" BORDER + " No. 
Available Blocks: %10zu \n" SEPARATOR, + lac_mem_pools[index]->poolName, + lac_mem_pools[index]->numElementsInPool, + lac_mem_pools[index]->blkSizeInBytes, + lac_mem_pools[index]->blkAlignmentInBytes, + lac_mem_pools[index]->availBlks); + } + index++; + } +} + +static void +Lac_MemPoolInitSymCookies(lac_sym_cookie_t *pSymCookie) +{ + pSymCookie->keyContentDescPhyAddr = + LAC_OS_VIRT_TO_PHYS_INTERNAL(pSymCookie->u.keyCookie.contentDesc); + pSymCookie->keyHashStateBufferPhyAddr = LAC_OS_VIRT_TO_PHYS_INTERNAL( + pSymCookie->u.keyCookie.hashStateBuffer); + pSymCookie->keySslKeyInputPhyAddr = LAC_OS_VIRT_TO_PHYS_INTERNAL( + &(pSymCookie->u.keyCookie.u.sslKeyInput)); + pSymCookie->keyTlsKeyInputPhyAddr = LAC_OS_VIRT_TO_PHYS_INTERNAL( + &(pSymCookie->u.keyCookie.u.tlsKeyInput)); +} + +CpaStatus +Lac_MemPoolInitSymCookiesPhyAddr(lac_memory_pool_id_t poolID) +{ + lac_mem_pool_hdr_t *pPoolID = (lac_mem_pool_hdr_t *)poolID; + lac_sym_cookie_t *pSymCookie = NULL; + lac_mem_blk_t *pCurrentBlk = NULL; + + if (NULL == pPoolID) { + QAT_UTILS_LOG("Invalid Pool ID\n"); + return CPA_STATUS_FAIL; + } + + if (pPoolID->trackBlks == NULL) { + pCurrentBlk = top(&pPoolID->stack); + + while (pCurrentBlk != NULL) { + pSymCookie = + (lac_sym_cookie_t *)((LAC_ARCH_UINT)(pCurrentBlk) + + sizeof(lac_mem_blk_t)); + pCurrentBlk = pCurrentBlk->pNext; + Lac_MemPoolInitSymCookies(pSymCookie); + } + } else { + Cpa32U count = 0; + + for (count = 0; count < pPoolID->numElementsInPool; count++) { + pCurrentBlk = pPoolID->trackBlks[count]; + pSymCookie = + (lac_sym_cookie_t *)((LAC_ARCH_UINT)(pCurrentBlk) + + sizeof(lac_mem_blk_t)); + Lac_MemPoolInitSymCookies(pSymCookie); + } + } + return CPA_STATUS_SUCCESS; +} + +CpaStatus +Lac_MemPoolInitDcCookiePhyAddr(lac_memory_pool_id_t poolID) +{ + lac_mem_pool_hdr_t *pPoolID = (lac_mem_pool_hdr_t *)poolID; + lac_mem_blk_t *pCurrentBlk = NULL; + + if (NULL == pPoolID) { + QAT_UTILS_LOG("Invalid Pool ID\n"); + return CPA_STATUS_FAIL; + } + + if (NULL == 
pPoolID->trackBlks) { + pCurrentBlk = top(&pPoolID->stack); + + while (pCurrentBlk != NULL) { + pCurrentBlk = pCurrentBlk->pNext; + } + } else { + Cpa32U count = 0; + + for (count = 0; count < pPoolID->numElementsInPool; count++) { + pCurrentBlk = pPoolID->trackBlks[count]; + } + } + return CPA_STATUS_SUCCESS; +} Index: sys/dev/qat/qat_api/common/utils/lac_sync.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/utils/lac_sync.c @@ -0,0 +1,123 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file lac_sync.c Utility functions containing synchronous callback support + * functions + * + * @ingroup LacSync + * + *****************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "lac_sync.h" +#include "lac_common.h" + +/* +******************************************************************************* +* Define public/global function definitions +******************************************************************************* +*/ + +/** + ***************************************************************************** + * @ingroup LacSync + *****************************************************************************/ +void +LacSync_GenWakeupSyncCaller(void *pCallbackTag, CpaStatus status) +{ + lac_sync_op_data_t *pSc = (lac_sync_op_data_t *)pCallbackTag; + if (pSc != NULL) { + if (pSc->canceled) { + QAT_UTILS_LOG("Synchronous operation cancelled.\n"); + return; + } + pSc->status = status; + if (CPA_STATUS_SUCCESS != LAC_POST_SEMAPHORE(pSc->sid)) { + QAT_UTILS_LOG("Failed to post semaphore.\n"); + } + } +} + +/** + 
***************************************************************************** + * @ingroup LacSync + *****************************************************************************/ +void +LacSync_GenVerifyWakeupSyncCaller(void *pCallbackTag, + CpaStatus status, + CpaBoolean opResult) +{ + lac_sync_op_data_t *pSc = (lac_sync_op_data_t *)pCallbackTag; + if (pSc != NULL) { + if (pSc->canceled) { + QAT_UTILS_LOG("Synchronous operation cancelled.\n"); + return; + } + pSc->status = status; + pSc->opResult = opResult; + if (CPA_STATUS_SUCCESS != LAC_POST_SEMAPHORE(pSc->sid)) { + QAT_UTILS_LOG("Failed to post semaphore.\n"); + } + } +} + +/** + ***************************************************************************** + * @ingroup LacSync + *****************************************************************************/ +void +LacSync_GenVerifyCb(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaBoolean opResult) +{ + LacSync_GenVerifyWakeupSyncCaller(pCallbackTag, status, opResult); +} + +/** + ***************************************************************************** + * @ingroup LacSync + *****************************************************************************/ +void +LacSync_GenFlatBufCb(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaFlatBuffer *pOut) +{ + LacSync_GenWakeupSyncCaller(pCallbackTag, status); +} + +/** + ***************************************************************************** + * @ingroup LacSync + *****************************************************************************/ +void +LacSync_GenFlatBufVerifyCb(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaBoolean opResult, + CpaFlatBuffer *pOut) +{ + LacSync_GenVerifyWakeupSyncCaller(pCallbackTag, status, opResult); +} + +/** + ***************************************************************************** + * @ingroup LacSync + *****************************************************************************/ +void +LacSync_GenDualFlatBufVerifyCb(void 
*pCallbackTag, + CpaStatus status, + void *pOpdata, + CpaBoolean opResult, + CpaFlatBuffer *pOut0, + CpaFlatBuffer *pOut1) +{ + LacSync_GenVerifyWakeupSyncCaller(pCallbackTag, status, opResult); +} Index: sys/dev/qat/qat_api/common/utils/sal_service_state.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/utils/sal_service_state.c @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + *************************************************************************** + * @file sal_service_state.c Service state checks + * + * @ingroup SalServiceState + * + ***************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ + +#include "cpa.h" +#include "qat_utils.h" +#include "lac_list.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +#include "lac_sal_types.h" +#include "sal_service_state.h" + +CpaBoolean +Sal_ServiceIsRunning(CpaInstanceHandle instanceHandle) +{ + sal_service_t *pService = (sal_service_t *)instanceHandle; + + if (SAL_SERVICE_STATE_RUNNING == pService->state) { + return CPA_TRUE; + } + return CPA_FALSE; +} + +CpaBoolean +Sal_ServiceIsRestarting(CpaInstanceHandle instanceHandle) +{ + sal_service_t *pService = (sal_service_t *)instanceHandle; + + if (SAL_SERVICE_STATE_RESTARTING == pService->state) { + return CPA_TRUE; + } + return CPA_FALSE; +} Index: sys/dev/qat/qat_api/common/utils/sal_statistics.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/utils/sal_statistics.c @@ -0,0 +1,203 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + 
***************************************************************************** + * @file sal_statistics.c + * + * @defgroup SalStats Sal Statistics + * + * @ingroup SalStats + * + * @description + * This file contains implementation of statistic related functions + * + *****************************************************************************/ + +#include "cpa.h" +#include "lac_common.h" +#include "lac_mem.h" +#include "icp_adf_cfg.h" +#include "icp_accel_devices.h" +#include "sal_statistics.h" + +#include "icp_adf_debug.h" +#include "lac_sal_types.h" +#include "lac_sal.h" + +/** + ****************************************************************************** + * @ingroup SalStats + * Reads from the config file if the given statistic is enabled + * + * @description + * Reads from the config file if the given statistic is enabled + * + * @param[in] device Pointer to an acceleration device structure + * @param[in] statsName Name of the config value to read the value from + * @param[out] pIsEnabled Pointer to a variable where information if the + * given stat is enabled or disabled will be stored + * + * @retval CPA_STATUS_SUCCESS Operation successful + * @retval CPA_STATUS_INVALID_PARAM Invalid param provided + * @retval CPA_STATUS_FAIL Operation failed + * + ******************************************************************************/ +static CpaStatus +SalStatistics_GetStatEnabled(icp_accel_dev_t *device, + const char *statsName, + CpaBoolean *pIsEnabled) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + char param_value[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + + LAC_CHECK_NULL_PARAM(pIsEnabled); + LAC_CHECK_NULL_PARAM(statsName); + + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + statsName, + param_value); + + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get %s from configuration.\n", + statsName); + return status; + } + + if (0 == strncmp(param_value, + SAL_STATISTICS_STRING_OFF, + strlen(SAL_STATISTICS_STRING_OFF))) { + 
*pIsEnabled = CPA_FALSE; + } else { + *pIsEnabled = CPA_TRUE; + } + + return status; +} + +/* @ingroup SalStats */ +CpaStatus +SalStatistics_InitStatisticsCollection(icp_accel_dev_t *device) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_statistics_collection_t *pStatsCollection = NULL; + Cpa32U enabled_services = 0; + + LAC_CHECK_NULL_PARAM(device); + + pStatsCollection = LAC_OS_MALLOC(sizeof(sal_statistics_collection_t)); + if (NULL == pStatsCollection) { + QAT_UTILS_LOG("Failed to allocate memory for statistic.\n"); + return CPA_STATUS_RESOURCE; + } + device->pQatStats = pStatsCollection; + + status = SalStatistics_GetStatEnabled(device, + SAL_STATS_CFG_ENABLED, + &pStatsCollection->bStatsEnabled); + LAC_CHECK_STATUS(status); + + if (CPA_FALSE == pStatsCollection->bStatsEnabled) { + pStatsCollection->bDcStatsEnabled = CPA_FALSE; + pStatsCollection->bDhStatsEnabled = CPA_FALSE; + pStatsCollection->bDsaStatsEnabled = CPA_FALSE; + pStatsCollection->bEccStatsEnabled = CPA_FALSE; + pStatsCollection->bKeyGenStatsEnabled = CPA_FALSE; + pStatsCollection->bLnStatsEnabled = CPA_FALSE; + pStatsCollection->bPrimeStatsEnabled = CPA_FALSE; + pStatsCollection->bRsaStatsEnabled = CPA_FALSE; + pStatsCollection->bSymStatsEnabled = CPA_FALSE; + + return status; + } + + /* What services are enabled */ + status = SalCtrl_GetEnabledServices(device, &enabled_services); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to get enabled services.\n"); + return CPA_STATUS_FAIL; + } + + /* Check if the compression service is enabled */ + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_COMPRESSION)) { + status = SalStatistics_GetStatEnabled( + device, + SAL_STATS_CFG_DC, + &pStatsCollection->bDcStatsEnabled); + LAC_CHECK_STATUS(status); + } + /* Check if the asym service is enabled */ + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO_ASYM) || + SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO)) { + status = 
SalStatistics_GetStatEnabled( + device, + SAL_STATS_CFG_DH, + &pStatsCollection->bDhStatsEnabled); + LAC_CHECK_STATUS(status); + + status = SalStatistics_GetStatEnabled( + device, + SAL_STATS_CFG_DSA, + &pStatsCollection->bDsaStatsEnabled); + LAC_CHECK_STATUS(status); + + status = SalStatistics_GetStatEnabled( + device, + SAL_STATS_CFG_ECC, + &pStatsCollection->bEccStatsEnabled); + LAC_CHECK_STATUS(status); + + status = SalStatistics_GetStatEnabled( + device, + SAL_STATS_CFG_KEYGEN, + &pStatsCollection->bKeyGenStatsEnabled); + LAC_CHECK_STATUS(status); + + status = SalStatistics_GetStatEnabled( + device, + SAL_STATS_CFG_LN, + &pStatsCollection->bLnStatsEnabled); + LAC_CHECK_STATUS(status); + + status = SalStatistics_GetStatEnabled( + device, + SAL_STATS_CFG_PRIME, + &pStatsCollection->bPrimeStatsEnabled); + LAC_CHECK_STATUS(status); + + status = SalStatistics_GetStatEnabled( + device, + SAL_STATS_CFG_RSA, + &pStatsCollection->bRsaStatsEnabled); + LAC_CHECK_STATUS(status); + } + + /* Check if the sym service is enabled */ + if (SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO_SYM) || + SalCtrl_IsServiceEnabled(enabled_services, + SAL_SERVICE_TYPE_CRYPTO)) { + status = SalStatistics_GetStatEnabled( + device, + SAL_STATS_CFG_SYM, + &pStatsCollection->bSymStatsEnabled); + LAC_CHECK_STATUS(status); + } + return status; +}; + +/* @ingroup SalStats */ +CpaStatus +SalStatistics_CleanStatisticsCollection(icp_accel_dev_t *device) +{ + sal_statistics_collection_t *pStatsCollection = NULL; + LAC_CHECK_NULL_PARAM(device); + pStatsCollection = (sal_statistics_collection_t *)device->pQatStats; + LAC_OS_FREE(pStatsCollection); + device->pQatStats = NULL; + return CPA_STATUS_SUCCESS; +} Index: sys/dev/qat/qat_api/common/utils/sal_string_parse.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/utils/sal_string_parse.c @@ -0,0 +1,59 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 
2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file sal_string_parse.c + * + * @ingroup SalStringParse + * + * @description + * This file contains string parsing functions for both user space and kernel + * space + * + *****************************************************************************/ +#include "cpa.h" +#include "lac_mem.h" +#include "sal_string_parse.h" + +CpaStatus +Sal_StringParsing(char *string1, + Cpa32U instanceNumber, + char *string2, + char *result) +{ + char instNumString[SAL_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + Cpa32U instNumStringLen = 0; + + snprintf(instNumString, + SAL_CFG_MAX_VAL_LEN_IN_BYTES, + "%d", + instanceNumber); + instNumStringLen = strnlen(instNumString, SAL_CFG_MAX_VAL_LEN_IN_BYTES); + if ((strnlen(string1, SAL_CFG_MAX_VAL_LEN_IN_BYTES) + instNumStringLen + + strnlen(string2, SAL_CFG_MAX_VAL_LEN_IN_BYTES)) > + SAL_CFG_MAX_VAL_LEN_IN_BYTES) { + QAT_UTILS_LOG("Size of result too small.\n"); + return CPA_STATUS_FAIL; + } + + LAC_OS_BZERO(result, SAL_CFG_MAX_VAL_LEN_IN_BYTES); + snprintf(result, + SAL_CFG_MAX_VAL_LEN_IN_BYTES, + "%s%d%s", + string1, + instanceNumber, + string2); + + return CPA_STATUS_SUCCESS; +} + +Cpa64U +Sal_Strtoul(const char *cp, char **endp, unsigned int cfgBase) +{ + Cpa64U ulResult = 0; + + ulResult = (Cpa64U)simple_strtoull(cp, endp, cfgBase); + + return ulResult; +} Index: sys/dev/qat/qat_api/common/utils/sal_user_process.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/utils/sal_user_process.c @@ -0,0 +1,84 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file sal_user_process.c + * + * @ingroup SalUserProcess + * + * @description + * This file contains implementation of functions to set/get user 
process + * name + + *****************************************************************************/ + +#include "qat_utils.h" +#include "lac_common.h" +static char lacProcessName[LAC_USER_PROCESS_NAME_MAX_LEN + 1] = + LAC_KERNEL_PROCESS_NAME; + +/**< Process name used to obtain values from correct section of config file. */ + +/* + * @ingroup LacCommon + * @description + * This function sets the process name + * + * @context + * This function is called from module_init or from user space process + * initialisation function + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No + * + * @param[in] processName Process name to be set +*/ +CpaStatus +icpSetProcessName(const char *processName) +{ + LAC_CHECK_NULL_PARAM(processName); + + if (strnlen(processName, LAC_USER_PROCESS_NAME_MAX_LEN) == + LAC_USER_PROCESS_NAME_MAX_LEN) { + QAT_UTILS_LOG( + "Process name too long, maximum process name is %d\n", + LAC_USER_PROCESS_NAME_MAX_LEN); + return CPA_STATUS_FAIL; + } + + strncpy(lacProcessName, processName, LAC_USER_PROCESS_NAME_MAX_LEN); + lacProcessName[LAC_USER_PROCESS_NAME_MAX_LEN] = '\0'; + + return CPA_STATUS_SUCCESS; +} + +/* + * @ingroup LacCommon + * @description + * This function gets the process name + * + * @context + * This function is called from LAC context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * +*/ +char * +icpGetProcessName(void) +{ + return lacProcessName; +} Index: sys/dev/qat/qat_api/common/utils/sal_versions.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/common/utils/sal_versions.c @@ -0,0 +1,177 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file sal_versions.c + * + * @ingroup SalVersions + * + * @description + * This file 
contains implementation of functions used to obtain version + * information + * + *****************************************************************************/ + +#include "cpa.h" +#include "qat_utils.h" + +#include "icp_accel_devices.h" +#include "icp_adf_accel_mgr.h" +#include "icp_adf_cfg.h" + +#include "lac_common.h" + +#include "icp_sal_versions.h" + +#define ICP_SAL_VERSIONS_ALL_CAP_MASK 0xFFFFFFFF +/**< Mask used to get all devices from ADF */ + +/** +******************************************************************************* + * @ingroup SalVersions + * Fills in the version info structure + * @description + * This function obtains hardware and software information associated with + * a given device and fills in the version info structure + * + * @param[in] device Pointer to the device for which version information + * is to be obtained. + * @param[out] pVerInfo Pointer to a structure that will hold version + * information + * + * @context + * This function might sleep. It cannot be executed in a context that + * does not permit sleeping. 
+ * @assumptions + * The system has been started + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @return CPA_STATUS_SUCCESS Operation finished successfully + * @return CPA_STATUS_FAIL Operation failed + * + *****************************************************************************/ +static CpaStatus +SalVersions_FillVersionInfo(icp_accel_dev_t *device, + icp_sal_dev_version_info_t *pVerInfo) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + char param_value[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + Cpa32S strSize = 0; + + memset(pVerInfo, 0, sizeof(icp_sal_dev_version_info_t)); + pVerInfo->devId = device->accelId; + + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + ICP_CFG_HW_REV_ID_KEY, + param_value); + LAC_CHECK_STATUS(status); + + strSize = snprintf((char *)pVerInfo->hardwareVersion, + ICP_SAL_VERSIONS_HW_VERSION_SIZE, + "%s", + param_value); + LAC_CHECK_PARAM_RANGE(strSize, 1, ICP_SAL_VERSIONS_HW_VERSION_SIZE); + + memset(param_value, 0, ADF_CFG_MAX_VAL_LEN_IN_BYTES); + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + ICP_CFG_UOF_VER_KEY, + param_value); + LAC_CHECK_STATUS(status); + + strSize = snprintf((char *)pVerInfo->firmwareVersion, + ICP_SAL_VERSIONS_FW_VERSION_SIZE, + "%s", + param_value); + LAC_CHECK_PARAM_RANGE(strSize, 1, ICP_SAL_VERSIONS_FW_VERSION_SIZE); + + memset(param_value, 0, ADF_CFG_MAX_VAL_LEN_IN_BYTES); + status = icp_adf_cfgGetParamValue(device, + LAC_CFG_SECTION_GENERAL, + ICP_CFG_MMP_VER_KEY, + param_value); + LAC_CHECK_STATUS(status); + + strSize = snprintf((char *)pVerInfo->mmpVersion, + ICP_SAL_VERSIONS_MMP_VERSION_SIZE, + "%s", + param_value); + LAC_CHECK_PARAM_RANGE(strSize, 1, ICP_SAL_VERSIONS_MMP_VERSION_SIZE); + + snprintf((char *)pVerInfo->softwareVersion, + ICP_SAL_VERSIONS_SW_VERSION_SIZE, + "%d.%d.%d", + SAL_INFO2_DRIVER_SW_VERSION_MAJ_NUMBER, + SAL_INFO2_DRIVER_SW_VERSION_MIN_NUMBER, + SAL_INFO2_DRIVER_SW_VERSION_PATCH_NUMBER); + 
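The version-fill routine above copies each config string with snprintf and then range-checks the return value via LAC_CHECK_PARAM_RANGE. A minimal standalone sketch of that idiom, assuming the range check accepts lengths in [1, buffer size); `fill_version_string` and the buffer size are illustrative names, not part of the driver:

```c
#include <stdio.h>
#include <stddef.h>

/* Copy a config-supplied string into a fixed buffer and report whether
 * the result is non-empty and untruncated, mirroring the
 * snprintf + LAC_CHECK_PARAM_RANGE(strSize, 1, SIZE) idiom above. */
static int
fill_version_string(char *dst, size_t dst_sz, const char *src)
{
	int n = snprintf(dst, dst_sz, "%s", src);

	/* snprintf returns the length it *wanted* to write: 0 means an
	 * empty source string, and n >= dst_sz means truncation. */
	return (n >= 1 && (size_t)n < dst_sz);
}
```

The same check rejects both an empty config value and one too long for the fixed-size version field, which is why each snprintf in the routine above is followed by a range check rather than used bare.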
+ return status; +} + +CpaStatus +icp_sal_getDevVersionInfo(Cpa32U devId, icp_sal_dev_version_info_t *pVerInfo) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + Cpa16U numInstances = 0; + icp_accel_dev_t **pAccel_dev = NULL; + Cpa16U num_accel_dev = 0, index = 0; + icp_accel_dev_t *pDevice = NULL; + + LAC_CHECK_NULL_PARAM(pVerInfo); + + status = icp_amgr_getNumInstances(&numInstances); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Error while getting number of devices.\n"); + return CPA_STATUS_FAIL; + } + + if (devId >= ADF_MAX_DEVICES) { + QAT_UTILS_LOG("Invalid devId\n"); + return CPA_STATUS_INVALID_PARAM; + } + + pAccel_dev = + malloc(numInstances * sizeof(icp_accel_dev_t *), M_QAT, M_WAITOK); + + /* Get ADF to return all accel_devs */ + status = + icp_amgr_getAllAccelDevByCapabilities(ICP_SAL_VERSIONS_ALL_CAP_MASK, + pAccel_dev, + &num_accel_dev); + + if (CPA_STATUS_SUCCESS == status) { + for (index = 0; index < num_accel_dev; index++) { + pDevice = (icp_accel_dev_t *)pAccel_dev[index]; + + if (pDevice->accelId == devId) { + status = SalVersions_FillVersionInfo(pDevice, + pVerInfo); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG( + "Error while filling in version info.\n"); + } + break; + } + } + + if (index == num_accel_dev) { + QAT_UTILS_LOG("Device %d not found or not started.\n", + devId); + status = CPA_STATUS_FAIL; + } + } else { + QAT_UTILS_LOG("Error while getting devices.\n"); + } + + free(pAccel_dev, M_QAT); + return status; +} Index: sys/dev/qat/qat_api/device/dev_info.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/device/dev_info.c @@ -0,0 +1,135 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dev_info.c + * + * @defgroup Device + * + * @description + * This file contains implementation of functions for device level 
APIs + * + *****************************************************************************/ + +/* QAT-API includes */ +#include "cpa_dev.h" +#include "icp_accel_devices.h" +#include "lac_common.h" +#include "icp_adf_cfg.h" +#include "lac_sal_types.h" +#include "icp_adf_accel_mgr.h" +#include "sal_string_parse.h" +#include "lac_sal.h" + +CpaStatus +cpaGetNumDevices(Cpa16U *numDevices) +{ + LAC_CHECK_NULL_PARAM(numDevices); + + return icp_amgr_getNumInstances(numDevices); +} + +CpaStatus +cpaGetDeviceInfo(Cpa16U device, CpaDeviceInfo *deviceInfo) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + icp_accel_dev_t *pDevice = NULL; + Cpa16U numDevicesAvail = 0; + Cpa32U capabilitiesMask = 0; + Cpa32U enabledServices = 0; + + LAC_CHECK_NULL_PARAM(deviceInfo); + status = icp_amgr_getNumInstances(&numDevicesAvail); + /* Check if the application is not attempting to access a + * device that does not exist. + */ + if (0 == numDevicesAvail) { + QAT_UTILS_LOG("Failed to retrieve number of devices!\n"); + return CPA_STATUS_FAIL; + } + if (device >= numDevicesAvail) { + QAT_UTILS_LOG( + "Invalid device access! 
Number of devices available: %d.\n", + numDevicesAvail); + return CPA_STATUS_FAIL; + } + + /* Clear the entire capability structure before initialising it */ + memset(deviceInfo, 0x00, sizeof(CpaDeviceInfo)); + /* Bus/Device/Function should be 0xFF until initialised */ + deviceInfo->bdf = 0xffff; + + pDevice = icp_adf_getAccelDevByAccelId(device); + if (NULL == pDevice) { + QAT_UTILS_LOG("Failed to retrieve device.\n"); + return status; + } + + /* Device of interest is found, retrieve the information for it */ + deviceInfo->sku = pDevice->sku; + deviceInfo->deviceId = pDevice->pciDevId; + deviceInfo->bdf = icp_adf_get_busAddress(pDevice->accelId); + deviceInfo->numaNode = pDevice->pkg_id; + + if (DEVICE_DH895XCCVF == pDevice->deviceType || + DEVICE_C62XVF == pDevice->deviceType || + DEVICE_C3XXXVF == pDevice->deviceType || + DEVICE_C4XXXVF == pDevice->deviceType) { + deviceInfo->isVf = CPA_TRUE; + } + + status = SalCtrl_GetEnabledServices(pDevice, &enabledServices); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to retrieve enabled services!\n"); + return status; + } + + status = icp_amgr_getAccelDevCapabilities(pDevice, &capabilitiesMask); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG("Failed to retrieve accel capabilities mask!\n"); + return status; + } + + /* Determine if Compression service is enabled */ + if (enabledServices & SAL_SERVICE_TYPE_COMPRESSION) { + deviceInfo->dcEnabled = + (((capabilitiesMask & ICP_ACCEL_CAPABILITIES_COMPRESSION) != + 0) ? + CPA_TRUE : + CPA_FALSE); + } + + /* Determine if Crypto service is enabled */ + if (enabledServices & SAL_SERVICE_TYPE_CRYPTO) { + deviceInfo->cySymEnabled = + (((capabilitiesMask & + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC)) ? + CPA_TRUE : + CPA_FALSE); + deviceInfo->cyAsymEnabled = + (((capabilitiesMask & + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC) != 0) ? 
+ CPA_TRUE : + CPA_FALSE); + } + /* Determine if Crypto Sym service is enabled */ + if (enabledServices & SAL_SERVICE_TYPE_CRYPTO_SYM) { + deviceInfo->cySymEnabled = + (((capabilitiesMask & + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC)) ? + CPA_TRUE : + CPA_FALSE); + } + /* Determine if Crypto Asym service is enabled */ + if (enabledServices & SAL_SERVICE_TYPE_CRYPTO_ASYM) { + deviceInfo->cyAsymEnabled = + (((capabilitiesMask & + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC) != 0) ? + CPA_TRUE : + CPA_FALSE); + } + deviceInfo->deviceMemorySizeAvailable = pDevice->deviceMemAvail; + + return status; +} Index: sys/dev/qat/qat_api/firmware/include/icp_qat_fw.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/firmware/include/icp_qat_fw.h @@ -0,0 +1,1333 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file icp_qat_fw.h + * @defgroup icp_qat_fw_comn ICP QAT FW Common Processing Definitions + * @ingroup icp_qat_fw + * + * @description + * This file documents the common interfaces that the QAT FW running on + * the QAT AE exports. This common layer is used by a number of services + * to export content processing services. + * + *****************************************************************************/ + +#ifndef _ICP_QAT_FW_H_ +#define _ICP_QAT_FW_H_ + +/* +* ============================== +* General Notes on the Interface +*/ + +/* +* +* ============================== +* +* Introduction +* +* Data movement and slice chaining +* +* Endianness +* - Unless otherwise stated, all structures are defined in LITTLE ENDIAN +* MODE +* +* Alignment +* - In general all data structures provided to a request should be aligned +* on the 64 byte boundary so as to allow optimal memory transfers. 
At the +* minimum they must be aligned to the 8 byte boundary +* +* Sizes +* Quad words = 8 bytes +* +* Terminology +* +* ============================== +*/ + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "icp_qat_hw.h" + +/* Big assumptions that both bitpos and mask are constants */ +#define QAT_FIELD_SET(flags, val, bitpos, mask) \ + (flags) = (((flags) & (~((mask) << (bitpos)))) | \ + (((val) & (mask)) << (bitpos))) + +#define QAT_FIELD_GET(flags, bitpos, mask) (((flags) >> (bitpos)) & (mask)) +#define QAT_FLAG_SET(flags, val, bitpos) \ + ((flags) = (((flags) & (~(1 << (bitpos)))) | (((val)&1) << (bitpos)))) + +#define QAT_FLAG_CLEAR(flags, bitpos) (flags) = ((flags) & (~(1 << (bitpos)))) + +#define QAT_FLAG_GET(flags, bitpos) (((flags) >> (bitpos)) & 1) + +/**< @ingroup icp_qat_fw_comn + * Default request and response ring size in bytes */ +#define ICP_QAT_FW_REQ_DEFAULT_SZ 128 +#define ICP_QAT_FW_RESP_DEFAULT_SZ 32 + +#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8 +#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF + +/**< @ingroup icp_qat_fw_comn + * Common Request - Block sizes definitions in multiples of individual long + * words */ +#define ICP_QAT_FW_NUM_LONGWORDS_1 1 +#define ICP_QAT_FW_NUM_LONGWORDS_2 2 +#define ICP_QAT_FW_NUM_LONGWORDS_3 3 +#define ICP_QAT_FW_NUM_LONGWORDS_4 4 +#define ICP_QAT_FW_NUM_LONGWORDS_5 5 +#define ICP_QAT_FW_NUM_LONGWORDS_6 6 +#define ICP_QAT_FW_NUM_LONGWORDS_7 7 +#define ICP_QAT_FW_NUM_LONGWORDS_10 10 +#define ICP_QAT_FW_NUM_LONGWORDS_13 13 + +/**< @ingroup icp_qat_fw_comn + * Definition of the associated service Id for NULL service type. 
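The bit-field helper macros defined above can be exercised on their own. This sketch copies the macro definitions verbatim from the header and packs a 4-bit field plus a single flag bit; `pack_example` and the chosen bit positions are illustrative, not taken from the firmware interface:

```c
/* Bit-field helpers as defined in icp_qat_fw.h above; both bitpos and
 * mask are expected to be compile-time constants. */
#define QAT_FIELD_SET(flags, val, bitpos, mask) \
	(flags) = (((flags) & (~((mask) << (bitpos)))) | \
		   (((val) & (mask)) << (bitpos)))
#define QAT_FIELD_GET(flags, bitpos, mask) (((flags) >> (bitpos)) & (mask))
#define QAT_FLAG_SET(flags, val, bitpos) \
	((flags) = (((flags) & (~(1 << (bitpos)))) | (((val)&1) << (bitpos))))
#define QAT_FLAG_GET(flags, bitpos) (((flags) >> (bitpos)) & 1)

/* Write 0xA into bits 11:8 and set the lowest flag bit, leaving the
 * rest of the word untouched. */
static unsigned int
pack_example(unsigned int flags)
{
	QAT_FIELD_SET(flags, 0xA, 8, 0xF);
	QAT_FLAG_SET(flags, 1, 0);
	return flags;
}
```

Note that QAT_FIELD_SET masks the old field contents out before OR-ing the new value in, so repeated sets on the same field do not accumulate stale bits.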
+ * Note: the response is expected to use ICP_QAT_FW_COMN_RESP_SERV_CPM_FW */ +#define ICP_QAT_FW_NULL_REQ_SERV_ID 1 + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Definition of the firmware interface service users, for + * responses. + * @description + * Enumeration which is used to indicate the ids of the services + * for responses using the external firmware interfaces. + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_FW_COMN_RESP_SERV_NULL, /**< NULL service id type */ + ICP_QAT_FW_COMN_RESP_SERV_CPM_FW, /**< CPM FW Service ID */ + ICP_QAT_FW_COMN_RESP_SERV_DELIMITER /**< Delimiter service id type */ +} icp_qat_fw_comn_resp_serv_id_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Definition of the request types + * @description + * Enumeration which is used to indicate the ids of the request + * types used in each of the external firmware interfaces + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_FW_COMN_REQ_NULL = 0, /**< NULL request type */ + ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3, /**< CPM FW PKE Request */ + ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4, /**< CPM FW Lookaside Request */ + ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7, /**< CPM FW DMA Request */ + ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9, /**< CPM FW Compression Request */ + ICP_QAT_FW_COMN_REQ_DELIMITER /**< End delimiter */ + +} icp_qat_fw_comn_request_id_t; + +/* ========================================================================= */ +/* QAT FW REQUEST STRUCTURES */ +/* ========================================================================= */ + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Common request flags type + * + * @description + * Definition of the common 
request flags. + * + *****************************************************************************/ +typedef uint8_t icp_qat_fw_comn_flags; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Common request - Service specific flags type + * + * @description + * Definition of the common request service specific flags. + * + *****************************************************************************/ +typedef uint16_t icp_qat_fw_serv_specif_flags; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Common request - Extended service specific flags type + * + * @description + * Definition of the common request extended service specific flags. + * + *****************************************************************************/ +typedef uint8_t icp_qat_fw_ext_serv_specif_flags; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Definition of the common QAT FW request content descriptor field - + * points to the content descriptor parameters or itself contains service- + * specific data. Also specifies content descriptor parameter size. + * Contains reserved fields. + * @description + * Common section of the request used across all of the services exposed + * by the QAT FW. Each of the services inherit these common fields + * + *****************************************************************************/ +typedef union icp_qat_fw_comn_req_hdr_cd_pars_s { + /**< LWs 2-5 */ + struct { + uint64_t content_desc_addr; + /**< Address of the content descriptor */ + + uint16_t content_desc_resrvd1; + /**< Content descriptor reserved field */ + + uint8_t content_desc_params_sz; + /**< Size of the content descriptor parameters in quad words. + * These + * parameters describe the session setup configuration info for + * the + * slices that this request relies upon i.e. 
the configuration + * word and + * cipher key needed by the cipher slice if there is a request + * for + * cipher processing. */ + + uint8_t content_desc_hdr_resrvd2; + /**< Content descriptor reserved field */ + + uint32_t content_desc_resrvd3; + /**< Content descriptor reserved field */ + } s; + + struct { + uint32_t serv_specif_fields[ICP_QAT_FW_NUM_LONGWORDS_4]; + + } s1; + +} icp_qat_fw_comn_req_hdr_cd_pars_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Definition of the common QAT FW request middle block. + * @description + * Common section of the request used across all of the services exposed + * by the QAT FW. Each of the services inherit these common fields + * + *****************************************************************************/ +typedef struct icp_qat_fw_comn_req_mid_s { + /**< LWs 6-13 */ + uint64_t opaque_data; + /**< Opaque data passed unmodified from the request to response messages + * by + * firmware (fw) */ + + uint64_t src_data_addr; + /**< Generic definition of the source data supplied to the QAT AE. The + * common flags are used to further describe the attributes of this + * field */ + + uint64_t dest_data_addr; + /**< Generic definition of the destination data supplied to the QAT AE. + * The + * common flags are used to further describe the attributes of this + * field */ + + uint32_t src_length; + /** < Length of source flat buffer in case src buffer + * type is flat */ + + uint32_t dst_length; + /** < Length of destination flat buffer in case dst buffer + * type is flat */ + +} icp_qat_fw_comn_req_mid_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Definition of the common QAT FW request content descriptor control + * block. + * + * @description + * Service specific section of the request used across all of the services + * exposed by the QAT FW. 
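The LW annotations on the mid block can be sanity-checked at compile time. A small sketch that re-declares the layout locally (`req_mid_sketch` is a hypothetical name, not the driver's type) and asserts it spans LWs 6-13, assuming natural alignment leaves no padding:

```c
#include <stdint.h>

/* Local re-declaration of the request mid block defined above: opaque
 * data plus two 64-bit data addresses (LWs 6-11) and two 32-bit flat
 * buffer lengths (LWs 12-13). */
struct req_mid_sketch {
	uint64_t opaque_data;
	uint64_t src_data_addr;
	uint64_t dest_data_addr;
	uint32_t src_length;
	uint32_t dst_length;
};

/* 8 longwords x 4 bytes = 32 bytes; every field is naturally aligned,
 * so no compiler padding is expected. */
_Static_assert(sizeof(struct req_mid_sketch) == 8 * 4,
	       "mid block must cover LWs 6-13");
```

Keeping such an assert next to a message-format structure catches accidental field reordering or padding before the structure ever reaches hardware.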
Each of the services populates this block
+ * uniquely. Refer to the service-specific header structures e.g.
+ * 'icp_qat_fw_cipher_hdr_s' (for Cipher) etc.
+ *
+ *****************************************************************************/
+typedef struct icp_qat_fw_comn_req_cd_ctrl_s {
+ /**< LWs 27-31 */
+ uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5];
+
+} icp_qat_fw_comn_req_cd_ctrl_t;
+
+/**
+ *****************************************************************************
+ * @ingroup icp_qat_fw_comn
+ * Definition of the common QAT FW request header.
+ * @description
+ * Common section of the request used across all of the services exposed
+ * by the QAT FW. Each of the services inherits these common fields. The
+ * reserved field of 7 bits and the service command Id field are all
+ * service-specific fields, along with the service specific flags.
+ *
+ *****************************************************************************/
+typedef struct icp_qat_fw_comn_req_hdr_s {
+ /**< LW0 */
+ uint8_t resrvd1;
+ /**< reserved field */
+
+ uint8_t service_cmd_id;
+ /**< Service Command Id - this field is service-specific
+ * Please use service-specific command Id here e.g. Crypto Command Id
+ * or Compression Command Id etc. */
+
+ uint8_t service_type;
+ /**< Service type */
+
+ uint8_t hdr_flags;
+ /**< This represents a flags field for the Service Request.
+ * The most significant bit is the 'valid' flag and the only
+ * one used. All remaining bit positions are unused and
+ * are therefore reserved and need to be set to 0. */
+
+ /**< LW1 */
+ icp_qat_fw_serv_specif_flags serv_specif_flags;
+ /**< Common Request service-specific flags
+ * e.g. 
Symmetric Crypto Command Flags */ + + icp_qat_fw_comn_flags comn_req_flags; + /**< Common Request Flags consisting of + * - 6 reserved bits, + * - 1 Content Descriptor field type bit and + * - 1 Source/destination pointer type bit */ + + icp_qat_fw_ext_serv_specif_flags extended_serv_specif_flags; + /**< An extension of serv_specif_flags + */ +} icp_qat_fw_comn_req_hdr_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Definition of the common QAT FW request parameter field. + * + * @description + * Service specific section of the request used across all of the services + * exposed by the QAT FW. Each of the services populates this block + * uniquely. Refer to service-specific header structures e.g. + * 'icp_qat_fw_comn_req_cipher_rqpars_s' (for Cipher) etc. + * + *****************************************************************************/ +typedef struct icp_qat_fw_comn_req_rqpars_s { + /**< LWs 14-26 */ + uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13]; + +} icp_qat_fw_comn_req_rqpars_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Definition of the common request structure with service specific + * fields + * @description + * This is a definition of the full qat request structure used by all + * services. Each service is free to use the service fields in its own + * way. This struct is useful as a message passing argument before the + * service contained within the request is determined. 
+ * + *****************************************************************************/ +typedef struct icp_qat_fw_comn_req_s { + /**< LWs 0-1 */ + icp_qat_fw_comn_req_hdr_t comn_hdr; + /**< Common request header */ + + /**< LWs 2-5 */ + icp_qat_fw_comn_req_hdr_cd_pars_t cd_pars; + /**< Common Request content descriptor field which points either to a + * content descriptor + * parameter block or contains the service-specific data itself. */ + + /**< LWs 6-13 */ + icp_qat_fw_comn_req_mid_t comn_mid; + /**< Common request middle section */ + + /**< LWs 14-26 */ + icp_qat_fw_comn_req_rqpars_t serv_specif_rqpars; + /**< Common request service-specific parameter field */ + + /**< LWs 27-31 */ + icp_qat_fw_comn_req_cd_ctrl_t cd_ctrl; + /**< Common request content descriptor control block - + * this field is service-specific */ + +} icp_qat_fw_comn_req_t; + +/* ========================================================================= */ +/* QAT FW RESPONSE STRUCTURES */ +/* ========================================================================= */ + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Error code field + * + * @description + * Overloaded field with 8 bit common error field or two + * 8 bit compression error fields for compression and translator slices + * + *****************************************************************************/ +typedef union icp_qat_fw_comn_error_s { + struct { + uint8_t resrvd; + /**< 8 bit reserved field */ + + uint8_t comn_err_code; + /**< 8 bit common error code */ + + } s; + /**< Structure which is used for non-compression responses */ + + struct { + uint8_t xlat_err_code; + /**< 8 bit translator error field */ + + uint8_t cmp_err_code; + /**< 8 bit compression error field */ + + } s1; + /** Structure which is used for compression responses */ + +} icp_qat_fw_comn_error_t; + +/** + ***************************************************************************** + * 
@ingroup icp_qat_fw_comn
+ * Definition of the common QAT FW response header.
+ * @description
+ * This section of the response is common across all of the services
+ * that generate a firmware interface response
+ *
+ *****************************************************************************/
+typedef struct icp_qat_fw_comn_resp_hdr_s {
+ /**< LW0 */
+ uint8_t resrvd1;
+ /**< Reserved field - this field is service-specific -
+ * Note: The Response Destination Id has been removed
+ * from first QWord */
+
+ uint8_t service_id;
+ /**< Service Id returned by service block */
+
+ uint8_t response_type;
+ /**< Response type - copied from the request to
+ * the response message */
+
+ uint8_t hdr_flags;
+ /**< This represents a flags field for the Response.
+ * Bit<7> = 'valid' flag
+ * Bit<6> = 'CNV' flag indicating that CNV was executed
+ * on the current request
+ * Bit<5> = 'CNVNR' flag indicating that a recovery happened
+ * on the current request following a CNV error
+ * All remaining bits are unused and are therefore reserved.
+ * They must be set to 0.
+ */
+
+ /**< LW 1 */
+ icp_qat_fw_comn_error_t comn_error;
+ /**< This field is overloaded to allow for one 8 bit common error field
+ * or two 8 bit error fields from compression and translator */
+
+ uint8_t comn_status;
+ /**< Status field which specifies which slice(s) report an error */
+
+ uint8_t cmd_id;
+ /**< Command Id - passed from the request to the response message */
+
+} icp_qat_fw_comn_resp_hdr_t;
+
+/**
+ *****************************************************************************
+ * @ingroup icp_qat_fw_comn
+ * Definition of the common response structure with service specific
+ * fields
+ * @description
+ * This is a definition of the full qat response structure used by all
+ * services. 
+ *
+ *****************************************************************************/
+typedef struct icp_qat_fw_comn_resp_s {
+ /**< LWs 0-1 */
+ icp_qat_fw_comn_resp_hdr_t comn_hdr;
+ /**< Common header fields */
+
+ /**< LWs 2-3 */
+ uint64_t opaque_data;
+ /**< Opaque data passed from the request to the response message */
+
+ /**< LWs 4-7 */
+ uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+ /**< Reserved */
+
+} icp_qat_fw_comn_resp_t;
+
+/* ========================================================================= */
+/* QAT FW REQUEST/RESPONSE MACRO DEFINITIONS */
+/* ========================================================================= */
+
+/* Common QAT FW request header - structure of LW0
+ * + ===== + ---- + ----------- + ----------- + ----------- + ----------- +
+ * | Bit | 31 | 30 - 24 | 21 - 16 | 15 - 8 | 7 - 0 |
+ * + ===== + ---- + ----------- + ----------- + ----------- + ----------- +
+ * | Flags | V | Reserved | Serv Type | Serv Cmd Id | Reserved |
+ * + ===== + ---- + ----------- + ----------- + ----------- + ----------- +
+*/
+
+/**< @ingroup icp_qat_fw_comn
+ * Definition of the setting of the header's valid flag */
+#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1
+/**< @ingroup icp_qat_fw_comn
+ * Definition of the clearing of the header's valid flag */
+#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0
+
+/**< @ingroup icp_qat_fw_comn
+ * Macros defining the bit position and mask of the 'valid' flag, within the
+ * hdr_flags field of LW0 (service request and response) */
+#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7
+#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F
+
+/* Common QAT FW response header - structure of LW0
+ * + ===== + --- + --- + ----- + ----- + --------- + ----------- + ----- +
+ * | Bit | 31 | 30 | 29 | 28-24 | 21 - 16 | 15 - 8 | 7-0 |
+ * + ===== + --- + --- + ----- + ----- + --------- + ----------- + ----- +
+ * | Flags | V | CNV | CNVNR | Rsvd | Serv Type | Serv Cmd Id | Rsvd |
+ * + ===== + --- + --- + ----- + ----- + 
--------- + ----------- + ----- + */ +/**< @ingroup icp_qat_fw_comn + * Macros defining the bit position and mask of 'CNV' flag + * within the hdr_flags field of LW0 (service response only) */ +#define ICP_QAT_FW_COMN_CNV_FLAG_BITPOS 6 +#define ICP_QAT_FW_COMN_CNV_FLAG_MASK 0x1 + +/**< @ingroup icp_qat_fw_comn + * Macros defining the bit position and mask of CNVNR flag + * within the hdr_flags field of LW0 (service response only) */ +#define ICP_QAT_FW_COMN_CNVNR_FLAG_BITPOS 5 +#define ICP_QAT_FW_COMN_CNVNR_FLAG_MASK 0x1 + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Macro for extraction of Service Type Field + * + * @param icp_qat_fw_comn_req_hdr_t Structure 'icp_qat_fw_comn_req_hdr_t' + * to extract the Service Type Field + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \ + icp_qat_fw_comn_req_hdr_t.service_type + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Macro for setting of Service Type Field + * + * @param 'icp_qat_fw_comn_req_hdr_t' structure to set the Service + * Type Field + * @param val Value of the Service Type Field + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \ + icp_qat_fw_comn_req_hdr_t.service_type = val + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Macro for extraction of Service Command Id Field + * + * @param icp_qat_fw_comn_req_hdr_t Structure 'icp_qat_fw_comn_req_hdr_t' + * to extract the Service Command Id Field + * + *****************************************************************************/ +#define 
ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \ + icp_qat_fw_comn_req_hdr_t.service_cmd_id + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Macro for setting of Service Command Id Field + * + * @param 'icp_qat_fw_comn_req_hdr_t' structure to set the + * Service Command Id Field + * @param val Value of the Service Command Id Field + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \ + icp_qat_fw_comn_req_hdr_t.service_cmd_id = val + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Extract the valid flag from the request or response's header flags. + * + * @param hdr_t Request or Response 'hdr_t' structure to extract the valid bit + * from the 'hdr_flags' field. + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \ + ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Extract the CNVNR flag from the header flags in the response only. + * + * @param hdr_t Response 'hdr_t' structure to extract the CNVNR bit + * from the 'hdr_flags' field. + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_HDR_CNVNR_FLAG_GET(hdr_flags) \ + QAT_FIELD_GET(hdr_flags, \ + ICP_QAT_FW_COMN_CNVNR_FLAG_BITPOS, \ + ICP_QAT_FW_COMN_CNVNR_FLAG_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Extract the CNV flag from the header flags in the response only. 
+ *
+ * @param hdr_t Response 'hdr_t' structure to extract the CNV bit
+ * from the 'hdr_flags' field.
+ *
+ *****************************************************************************/
+#define ICP_QAT_FW_COMN_HDR_CNV_FLAG_GET(hdr_flags) \
+ QAT_FIELD_GET(hdr_flags, \
+ ICP_QAT_FW_COMN_CNV_FLAG_BITPOS, \
+ ICP_QAT_FW_COMN_CNV_FLAG_MASK)
+
+/**
+ ******************************************************************************
+ * @ingroup icp_qat_fw_comn
+ *
+ * @description
+ * Set the valid bit in the request's header flags.
+ *
+ * @param hdr_t Request or Response 'hdr_t' structure to set the valid bit
+ * @param val Value of the valid bit flag.
+ *
+ *****************************************************************************/
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \
+ ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val)
+
+/**
+ ******************************************************************************
+ * @ingroup icp_qat_fw_comn
+ *
+ * @description
+ * Common macro to extract the valid flag from the header flags field
+ * within the header structure (request or response).
+ *
+ * @param hdr_t Structure (request or response) to extract the
+ * valid bit from the 'hdr_flags' field.
+ *
+ *****************************************************************************/
+#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \
+ QAT_FIELD_GET(hdr_flags, \
+ ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+ ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+/**
+ ******************************************************************************
+ * @ingroup icp_qat_fw_comn
+ *
+ * @description
+ * Common macro to extract the remaining reserved flags from the header
+ * flags field within the header structure (request or response).
+ *
+ * @param hdr_t Structure (request or response) to extract the
+ * remaining bits from the 'hdr_flags' field (excluding the
+ * valid flag).
+ * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \ + (hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Common macro to set the valid bit in the header flags field within + * the header structure (request or response). + * + * @param hdr_t Structure (request or response) containing the header + * flags field, to allow the valid bit to be set. + * @param val Value of the valid bit flag. + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \ + QAT_FIELD_SET((hdr_t.hdr_flags), \ + (val), \ + ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \ + ICP_QAT_FW_COMN_VALID_FLAG_MASK) + +/** +****************************************************************************** +* @ingroup icp_qat_fw_comn +* +* @description +* Macro that must be used when building the common header flags. +* Note that all bits reserved field bits 0-6 (LW0) need to be forced to 0. +* +* @param ptr Value of the valid flag +*****************************************************************************/ + +#define ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \ + (((valid)&ICP_QAT_FW_COMN_VALID_FLAG_MASK) \ + << ICP_QAT_FW_COMN_VALID_FLAG_BITPOS) + +/* + * < @ingroup icp_qat_fw_comn + * Common Request Flags Definition + * The bit offsets below are within the flags field. These are NOT relative to + * the memory word. Unused fields e.g. reserved bits, must be zeroed. 
+ *
+ * + ===== + ------ + --- + --- + --- + --- + --- + --- + --- + --- +
+ * | Bits [15:8] | 15 | 14 | 13 | 12 | 11 | 10 | 9 | 8 |
+ * + ===== + ------ + --- + --- + --- + --- + --- + --- + --- + --- +
+ * | Flags[15:8] | Rsv | Rsv | Rsv | Rsv | Rsv | Rsv | Rsv | Rsv |
+ * + ===== + ------ + --- + --- + --- + --- + --- + --- + --- + --- +
+ * | Bits [7:0] | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
+ * + ===== + ------ + --- + --- + --- + --- + --- + --- + --- + --- +
+ * | Flags [7:0] | Rsv | Rsv | Rsv | Rsv | Rsv | BnP | Cdt | Ptr |
+ * + ===== + ------ + --- + --- + --- + --- + --- + --- + --- + --- +
+ */
+
+#define QAT_COMN_PTR_TYPE_BITPOS 0
+/**< @ingroup icp_qat_fw_comn
+ * Common Request Flags - Starting bit position indicating
+ * Src&Dst Buffer Pointer type */
+
+#define QAT_COMN_PTR_TYPE_MASK 0x1
+/**< @ingroup icp_qat_fw_comn
+ * Common Request Flags - One bit mask used to determine
+ * Src&Dst Buffer Pointer type */
+
+#define QAT_COMN_CD_FLD_TYPE_BITPOS 1
+/**< @ingroup icp_qat_fw_comn
+ * Common Request Flags - Starting bit position indicating
+ * CD Field type */
+
+#define QAT_COMN_CD_FLD_TYPE_MASK 0x1
+/**< @ingroup icp_qat_fw_comn
+ * Common Request Flags - One bit mask used to determine
+ * CD Field type */
+
+#define QAT_COMN_BNP_ENABLED_BITPOS 2
+/**< @ingroup icp_qat_fw_comn
+ * Common Request Flags - Starting bit position indicating
+ * the source buffer contains a batch of requests. If this
+ * bit is set, the source buffer is a Batch And Pack OpData List
+ * and the Ptr Type Bit only applies to the Destination buffer. */
+
+#define QAT_COMN_BNP_ENABLED_MASK 0x1
+/**< @ingroup icp_qat_fw_comn
+ * Batch And Pack Enabled Flag Mask - One bit mask used to determine whether
+ * the source buffer is in Batch and Pack OpData Link List Mode. 
*/
+
+/* ========================================================================= */
+/* Pointer Type Flag definitions */
+/* ========================================================================= */
+#define QAT_COMN_PTR_TYPE_FLAT 0x0
+/**< @ingroup icp_qat_fw_comn
+ * Constant value indicating Src&Dst Buffer Pointer type is flat
+ * If Batch and Pack mode is enabled, only applies to Destination buffer.*/
+
+#define QAT_COMN_PTR_TYPE_SGL 0x1
+/**< @ingroup icp_qat_fw_comn
+ * Constant value indicating Src&Dst Buffer Pointer type is SGL type
+ * If Batch and Pack mode is enabled, only applies to Destination buffer.*/
+
+#define QAT_COMN_PTR_TYPE_BATCH 0x2
+/**< @ingroup icp_qat_fw_comn
+ * Constant value indicating Src is a batch request
+ * and Dst Buffer Pointer type is SGL type */
+
+/* ========================================================================= */
+/* CD Field Flag definitions */
+/* ========================================================================= */
+#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
+/**< @ingroup icp_qat_fw_comn
+ * Constant value indicating CD Field contains 64-bit address */
+
+#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
+/**< @ingroup icp_qat_fw_comn
+ * Constant value indicating CD Field contains 16 bytes of setup data */
+
+/* ========================================================================= */
+/* Batch And Pack Enable/Disable Definitions */
+/* ========================================================================= */
+#define QAT_COMN_BNP_ENABLED 0x1
+/**< @ingroup icp_qat_fw_comn
+ * Constant value indicating Source buffer will point to Batch And Pack OpData
+ * List */
+
+#define QAT_COMN_BNP_DISABLED 0x0
+/**< @ingroup icp_qat_fw_comn
+ * Constant value indicating Source buffer will not point to Batch And Pack
+ * OpData List */
+
+/**
+******************************************************************************
+* @ingroup icp_qat_fw_comn
+*
+* @description
+* Macro that must be used when building the 
common request flags (for all +* requests but comp BnP). +* Note that all bits reserved field bits 2-15 (LW1) need to be forced to 0. +* +* @param ptr Value of the pointer type flag +* @param cdt Value of the cd field type flag +*****************************************************************************/ +#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \ + ((((cdt)&QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) | \ + (((ptr)&QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS)) + +/** +****************************************************************************** +* @ingroup icp_qat_fw_comn +* +* @description +* Macro that must be used when building the common request flags for comp +* BnP service. +* Note that all bits reserved field bits 3-15 (LW1) need to be forced to 0. +* +* @param ptr Value of the pointer type flag +* @param cdt Value of the cd field type flag +* @param bnp Value of the bnp enabled flag +*****************************************************************************/ +#define ICP_QAT_FW_COMN_FLAGS_BUILD_BNP(cdt, ptr, bnp) \ + ((((cdt)&QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) | \ + (((ptr)&QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS) | \ + (((bnp)&QAT_COMN_BNP_ENABLED_MASK) << QAT_COMN_BNP_ENABLED_BITPOS)) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Macro for extraction of the pointer type bit from the common flags + * + * @param flags Flags to extract the pointer type bit from + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \ + QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Macro for extraction of the cd field type bit from the common flags + * + * @param flags Flags 
to extract the cd field type bit from
+ *
+ *****************************************************************************/
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \
+ QAT_FIELD_GET(flags, \
+ QAT_COMN_CD_FLD_TYPE_BITPOS, \
+ QAT_COMN_CD_FLD_TYPE_MASK)
+
+/**
+ ******************************************************************************
+ * @ingroup icp_qat_fw_comn
+ *
+ * @description
+ * Macro for extraction of the bnp field type bit from the common flags
+ *
+ * @param flags Flags to extract the bnp field type bit from
+ *
+ *****************************************************************************/
+#define ICP_QAT_FW_COMN_BNP_ENABLED_GET(flags) \
+ QAT_FIELD_GET(flags, \
+ QAT_COMN_BNP_ENABLED_BITPOS, \
+ QAT_COMN_BNP_ENABLED_MASK)
+
+/**
+ ******************************************************************************
+ * @ingroup icp_qat_fw_comn
+ *
+ * @description
+ * Macro for setting the pointer type bit in the common flags
+ *
+ * @param flags Flags in which Pointer Type bit will be set
+ * @param val Value of the bit to be set in flags
+ *
+ *****************************************************************************/
+#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \
+ QAT_FIELD_SET(flags, \
+ val, \
+ QAT_COMN_PTR_TYPE_BITPOS, \
+ QAT_COMN_PTR_TYPE_MASK)
+
+/**
+ ******************************************************************************
+ * @ingroup icp_qat_fw_comn
+ *
+ * @description
+ * Macro for setting the cd field type bit in the common flags
+ *
+ * @param flags Flags in which Cd Field Type bit will be set
+ * @param val Value of the bit to be set in flags
+ *
+ *****************************************************************************/
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \
+ QAT_FIELD_SET(flags, \
+ val, \
+ QAT_COMN_CD_FLD_TYPE_BITPOS, \
+ QAT_COMN_CD_FLD_TYPE_MASK)
+
+/**
+ ******************************************************************************
+ * @ingroup icp_qat_fw_comn
+ *
+ * 
@description
+ * Macro for setting the bnp field type bit in the common flags
+ *
+ * @param flags Flags in which Bnp Field Type bit will be set
+ * @param val Value of the bit to be set in flags
+ *
+ *****************************************************************************/
+#define ICP_QAT_FW_COMN_BNP_ENABLE_SET(flags, val) \
+ QAT_FIELD_SET(flags, \
+ val, \
+ QAT_COMN_BNP_ENABLED_BITPOS, \
+ QAT_COMN_BNP_ENABLED_MASK)
+
+/**
+ ******************************************************************************
+ * @ingroup icp_qat_fw_comn
+ *
+ * @description
+ * Macros using the bit position and mask to set/extract the next
+ * and current id nibbles within the next_curr_id field of the
+ * content descriptor header block. Note that these are defined
+ * in the common header file, as they are used by compression, cipher
+ * and authentication.
+ *
+ * @param cd_ctrl_hdr_t Content descriptor control block header pointer.
+ * @param val Value of the field being set.
+ *
+ *****************************************************************************/
+#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4
+#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0
+#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0
+#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F
+
+#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \
+ ((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) >> \
+ (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+ ((cd_ctrl_hdr_t)->next_curr_id) = \
+ ((((cd_ctrl_hdr_t)->next_curr_id) & \
+ ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+ (((val) << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) & \
+ ICP_QAT_FW_COMN_NEXT_ID_MASK))
+
+#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \
+ (((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+ ((cd_ctrl_hdr_t)->next_curr_id) = \
+ ((((cd_ctrl_hdr_t)->next_curr_id) & \
+ ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+ ((val)&ICP_QAT_FW_COMN_CURR_ID_MASK))
+
+/* + * < @ingroup icp_qat_fw_comn + * Common Status Field Definition The bit offsets below are within the COMMON + * RESPONSE status field, assumed to be 8 bits wide. In the case of the PKE + * response (which follows the CPM 1.5 message format), the status field is 16 + * bits wide. + * The status flags are contained within the most significant byte and align + * with the diagram below. Please therefore refer to the service-specific PKE + * header file for the appropriate macro definition to extract the PKE status + * flag from the PKE response, which assumes that a word is passed to the + * macro. + * + ===== + ------ + --- + --- + ---- + ---- + -------- + ---- + ---------- + + * | Bit | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | + * + ===== + ------ + --- + --- + ---- + ---- + -------- + ---- + ---------- + + * | Flags | Crypto | Pke | Cmp | Xlat | EOLB | UnSupReq | Rsvd | XltWaApply | + * + ===== + ------ + --- + --- + ---- + ---- + -------- + ---- + ---------- + + * Note: + * For the service specific status bit definitions refer to service header files + * Eg. Crypto Status bit refers to Symmetric Crypto, Key Generation, and NRBG + * Requests' Status. Unused bits e.g. reserved bits need to have been forced to + * 0. 
+ */
+
+#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7
+/**< @ingroup icp_qat_fw_comn
+ * Starting bit position indicating Response for Crypto service Flag */
+
+#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1
+/**< @ingroup icp_qat_fw_comn
+ * One bit mask used to determine Crypto status mask */
+
+#define QAT_COMN_RESP_PKE_STATUS_BITPOS 6
+/**< @ingroup icp_qat_fw_comn
+ * Starting bit position indicating Response for PKE service Flag */
+
+#define QAT_COMN_RESP_PKE_STATUS_MASK 0x1
+/**< @ingroup icp_qat_fw_comn
+ * One bit mask used to determine PKE status mask */
+
+#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5
+/**< @ingroup icp_qat_fw_comn
+ * Starting bit position indicating Response for Compression service Flag */
+
+#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1
+/**< @ingroup icp_qat_fw_comn
+ * One bit mask used to determine Compression status mask */
+
+#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4
+/**< @ingroup icp_qat_fw_comn
+ * Starting bit position indicating Response for Xlat service Flag */
+
+#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1
+/**< @ingroup icp_qat_fw_comn
+ * One bit mask used to determine Translator status mask */
+
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3
+/**< @ingroup icp_qat_fw_comn
+ * Starting bit position indicating the last block in a deflate stream for
+ * the compression service Flag */
+
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
+/**< @ingroup icp_qat_fw_comn
+ * One bit mask used to determine the last block in a deflate stream
+ * status mask */
+
+#define QAT_COMN_RESP_UNSUPPORTED_REQUEST_BITPOS 2
+/**< @ingroup icp_qat_fw_comn
+ * Starting bit position indicating an unsupported service request Flag */
+
+#define QAT_COMN_RESP_UNSUPPORTED_REQUEST_MASK 0x1
+/**< @ingroup icp_qat_fw_comn
+ * One bit mask used to determine the unsupported service request status mask */
+
+#define QAT_COMN_RESP_XLT_INV_APPLIED_BITPOS 0
+/**< @ingroup icp_qat_fw_comn
+ * Bit position indicating that firmware detected an invalid 
translation during
+ * dynamic compression and took measures to overcome this
+ *
+ */
+
+#define QAT_COMN_RESP_XLT_INV_APPLIED_MASK 0x1
+/**< @ingroup icp_qat_fw_comn
+ * One bit mask */
+
+/**
+ ******************************************************************************
+ * @description
+ * Macro that must be used when building the status
+ * for the common response
+ *
+ * @param crypto Value of the Crypto Service status flag
+ * @param pke Value of the PKE Service status flag
+ * @param comp Value of the Compression Service Status flag
+ * @param xlat Value of the Xlator Status flag
+ * @param eolb Value of the Compression End of Last Block Status flag
+ * @param unsupp Value of the Unsupported Request flag
+ * @param xlt_inv Value of the Invalid Translation flag
+ *****************************************************************************/
+#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD( \
+ crypto, pke, comp, xlat, eolb, unsupp, xlt_inv) \
+ ((((crypto)&QAT_COMN_RESP_CRYPTO_STATUS_MASK) \
+ << QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \
+ (((pke)&QAT_COMN_RESP_PKE_STATUS_MASK) \
+ << QAT_COMN_RESP_PKE_STATUS_BITPOS) | \
+ (((xlt_inv)&QAT_COMN_RESP_XLT_INV_APPLIED_MASK) \
+ << QAT_COMN_RESP_XLT_INV_APPLIED_BITPOS) | \
+ (((comp)&QAT_COMN_RESP_CMP_STATUS_MASK) \
+ << QAT_COMN_RESP_CMP_STATUS_BITPOS) | \
+ (((xlat)&QAT_COMN_RESP_XLAT_STATUS_MASK) \
+ << QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \
+ (((eolb)&QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) \
+ << QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS) | \
+ (((unsupp)&QAT_COMN_RESP_UNSUPPORTED_REQUEST_MASK) \
+ << QAT_COMN_RESP_UNSUPPORTED_REQUEST_BITPOS))
+
+/* ========================================================================= */
+/* GETTERS */
+/* ========================================================================= */
+/**
+ ******************************************************************************
+ * @ingroup icp_qat_fw_comn
+ *
+ * @description
+ * Macro for extraction of the Crypto bit from the status
+ *
+ * @param status
+ * Status to extract the status 
bit from + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \ + QAT_COMN_RESP_CRYPTO_STATUS_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Macro for extraction of the PKE bit from the status + * + * @param status + * Status to extract the status bit from + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_RESP_PKE_STAT_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_PKE_STATUS_BITPOS, \ + QAT_COMN_RESP_PKE_STATUS_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Macro for extraction of the Compression bit from the status + * + * @param status + * Status to extract the status bit from + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_CMP_STATUS_BITPOS, \ + QAT_COMN_RESP_CMP_STATUS_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Macro for extraction of the Translator bit from the status + * + * @param status + * Status to extract the status bit from + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_XLAT_STATUS_BITPOS, \ + QAT_COMN_RESP_XLAT_STATUS_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Macro for extraction of the Translation Invalid bit + * from the status + * + * @param status + * Status to 
extract the status bit from + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_RESP_XLT_INV_APPLIED_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_XLT_INV_APPLIED_BITPOS, \ + QAT_COMN_RESP_XLT_INV_APPLIED_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Macro for extraction of the end of compression block bit from the + * status + * + * @param status + * Status to extract the status bit from + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \ + QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comn + * + * @description + * Macro for extraction of the Unsupported request from the status + * + * @param status + * Status to extract the status bit from + * + *****************************************************************************/ +#define ICP_QAT_FW_COMN_RESP_UNSUPPORTED_REQUEST_STAT_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_UNSUPPORTED_REQUEST_BITPOS, \ + QAT_COMN_RESP_UNSUPPORTED_REQUEST_MASK) + +/* ========================================================================= */ +/* Status Flag definitions */ +/* ========================================================================= */ + +#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0 +/**< @ingroup icp_qat_fw_comn + * Definition of successful processing of a request */ + +#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1 +/**< @ingroup icp_qat_fw_comn + * Definition of erroneous processing of a request */ + +#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0 +/**< @ingroup icp_qat_fw_comn + * Final Deflate block of a compression request not completed */ + +#define 
ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1 +/**< @ingroup icp_qat_fw_comn + * Final Deflate block of a compression request completed */ + +#define ERR_CODE_NO_ERROR 0 +/**< Error Code constant value for no error */ + +#define ERR_CODE_INVALID_BLOCK_TYPE -1 +/* Invalid block type (type == 3)*/ + +#define ERR_CODE_NO_MATCH_ONES_COMP -2 +/* Stored block length does not match one's complement */ + +#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3 +/* Too many length or distance codes */ + +#define ERR_CODE_INCOMPLETE_LEN -4 +/* Code lengths codes incomplete */ + +#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5 +/* Repeat lengths with no first length */ + +#define ERR_CODE_RPT_GT_SPEC_LEN -6 +/* Repeat more than specified lengths */ + +#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7 +/* Invalid lit/len code lengths */ + +#define ERR_CODE_INV_DIS_CODE_LEN -8 +/* Invalid distance code lengths */ + +#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9 +/* Invalid lit/len or distance code in fixed/dynamic block */ + +#define ERR_CODE_DIS_TOO_FAR_BACK -10 +/* Distance too far back in fixed or dynamic block */ + +/* Common Error code definitions */ +#define ERR_CODE_OVERFLOW_ERROR -11 +/**< Error Code constant value for overflow error */ + +#define ERR_CODE_SOFT_ERROR -12 +/**< Error Code constant value for soft error */ + +#define ERR_CODE_FATAL_ERROR -13 +/**< Error Code constant value for hard/fatal error */ + +#define ERR_CODE_COMP_OUTPUT_CORRUPTION -14 +/**< Error Code constant for compression output corruption */ + +#define ERR_CODE_HW_INCOMPLETE_FILE -15 +/**< Error Code constant value for incomplete file hardware error */ + +#define ERR_CODE_SSM_ERROR -16 +/**< Error Code constant value for error detected by SSM e.g. slice hang */ + +#define ERR_CODE_ENDPOINT_ERROR -17 +/**< Error Code constant value for error detected by PCIe Endpoint, e.g. 
push + * data error */ + +#define ERR_CODE_CNV_ERROR -18 +/**< Error Code constant value for cnv failure */ + +#define ERR_CODE_EMPTY_DYM_BLOCK -19 +/**< Error Code constant value for submission of empty dynamic stored block to + * slice */ + +#define ERR_CODE_EXCEED_MAX_REQ_TIME -24 +/**< Error Code constant for exceeding max request time */ + +#define ERR_CODE_KPT_CRYPTO_SERVICE_FAIL_INVALID_HANDLE -20 +/**< Error Code constant for invalid handle in kpt crypto service */ + +#define ERR_CODE_KPT_CRYPTO_SERVICE_FAIL_HMAC_FAILED -21 +/**< Error Code constant for failed hmac in kpt crypto service */ + +#define ERR_CODE_KPT_CRYPTO_SERVICE_FAIL_INVALID_WRAPPING_ALGO -22 +/**< Error Code constant for invalid wrapping algo in kpt crypto service */ + +#define ERR_CODE_KPT_DRNG_SEED_NOT_LOAD -23 +/**< Error Code constant for drng seed not loaded in kpt ecdsa signrs + * service */ + +#define ERR_CODE_MISC_ERROR -50 +/**< Error Code constant for error detected but the source + * of error is not recognized */ + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Slice types for building of the processing chain within the content + * descriptor + * + * @description + * Enumeration used to indicate the ids of the slice types through which + * data will pass.
+ * + * A logical slice is not a hardware slice but is a software FSM + * performing the actions of a slice + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_FW_SLICE_NULL = 0, /**< NULL slice type */ + ICP_QAT_FW_SLICE_CIPHER = 1, /**< CIPHER slice type */ + ICP_QAT_FW_SLICE_AUTH = 2, /**< AUTH slice type */ + ICP_QAT_FW_SLICE_DRAM_RD = 3, /**< DRAM_RD Logical slice type */ + ICP_QAT_FW_SLICE_DRAM_WR = 4, /**< DRAM_WR Logical slice type */ + ICP_QAT_FW_SLICE_COMP = 5, /**< Compression slice type */ + ICP_QAT_FW_SLICE_XLAT = 6, /**< Translator slice type */ + ICP_QAT_FW_SLICE_DELIMITER /**< End delimiter */ + +} icp_qat_fw_slice_t; + +#endif /* _ICP_QAT_FW_H_ */ Index: sys/dev/qat/qat_api/firmware/include/icp_qat_fw_comp.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/firmware/include/icp_qat_fw_comp.h @@ -0,0 +1,1029 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file icp_qat_fw_comp.h + * @defgroup icp_qat_fw_comp ICP QAT FW Compression Service + * Interface Definitions + * @ingroup icp_qat_fw + * @description + * This file documents structs used to provide the interface to the + * Compression QAT FW service + * + *****************************************************************************/ + +#ifndef _ICP_QAT_FW_COMP_H_ +#define _ICP_QAT_FW_COMP_H_ + +/* +****************************************************************************** +* Include local header files +****************************************************************************** +*/ +#include "icp_qat_fw.h" + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comp + * Definition of the Compression command types + * @description + * Enumeration which is used to indicate 
the ids of functions + * that are exposed by the Compression QAT FW service + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_FW_COMP_CMD_STATIC = 0, + /*!< Static Compress Request */ + + ICP_QAT_FW_COMP_CMD_DYNAMIC = 1, + /*!< Dynamic Compress Request */ + + ICP_QAT_FW_COMP_CMD_DECOMPRESS = 2, + /*!< Decompress Request */ + + ICP_QAT_FW_COMP_CMD_DELIMITER + /**< Delimiter type */ + +} icp_qat_fw_comp_cmd_id_t; + +/* + * REQUEST FLAGS IN COMMON COMPRESSION + * In common message it is named as SERVICE SPECIFIC FLAGS. + * + * + ===== + ------ + ------ + --- + ----- + ----- + ----- + -- + ---- + --- + + * | Bit | 15 - 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | + * + ===== + ------ + ----- + --- + ----- + ----- + ----- + -- + ---- + --- + + * | Flags | Rsvd | Dis. |Resvd| Dis. | Enh. |Auto |Sess| Rsvd | Rsvd| + * | | Bits | secure | =0 | Type0 | ASB |Select |Type| = 0 | = 0 | + * | | = 0 |RAM use | | Header | |Best | | | | + * | | |as intmd| | | | | | | | + * | | | buf | | | | | | | | + * + ===== + ------ + ----- + --- + ------ + ----- + ----- + -- + ---- + --- + + */ + +/** Flag usage */ + +#define ICP_QAT_FW_COMP_STATELESS_SESSION 0 +/**< @ingroup icp_qat_fw_comp + * Flag representing that session is stateless */ + +#define ICP_QAT_FW_COMP_STATEFUL_SESSION 1 +/**< @ingroup icp_qat_fw_comp + * Flag representing that session is stateful */ + +#define ICP_QAT_FW_COMP_NOT_AUTO_SELECT_BEST 0 +/**< @ingroup icp_qat_fw_comp + * Flag representing that autoselectbest is NOT used */ + +#define ICP_QAT_FW_COMP_AUTO_SELECT_BEST 1 +/**< @ingroup icp_qat_fw_comp + * Flag representing that autoselectbest is used */ + +#define ICP_QAT_FW_COMP_NOT_ENH_AUTO_SELECT_BEST 0 +/**< @ingroup icp_qat_fw_comp + * Flag representing that enhanced autoselectbest is NOT used */ + +#define ICP_QAT_FW_COMP_ENH_AUTO_SELECT_BEST 1 +/**< @ingroup icp_qat_fw_comp + * Flag representing that enhanced autoselectbest is used */ + +#define 
ICP_QAT_FW_COMP_NOT_DISABLE_TYPE0_ENH_AUTO_SELECT_BEST 0 +/**< @ingroup icp_qat_fw_comp + * Flag representing that type0 header data return is NOT disabled when + * enhanced autoselectbest is used */ + +#define ICP_QAT_FW_COMP_DISABLE_TYPE0_ENH_AUTO_SELECT_BEST 1 +/**< @ingroup icp_qat_fw_comp + * Flag representing that type0 header data return IS disabled when + * enhanced autoselectbest is used */ + +#define ICP_QAT_FW_COMP_DISABLE_SECURE_RAM_USED_AS_INTMD_BUF 1 +/**< @ingroup icp_qat_fw_comp + * Flag representing that use of secure RAM as + * an intermediate buffer is DISABLED. */ + +#define ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF 0 +/**< @ingroup icp_qat_fw_comp + * Flag representing that use of secure RAM as + * an intermediate buffer is ENABLED. */ + +/** Flag mask & bit position */ + +#define ICP_QAT_FW_COMP_SESSION_TYPE_BITPOS 2 +/**< @ingroup icp_qat_fw_comp + * Starting bit position for the session type */ + +#define ICP_QAT_FW_COMP_SESSION_TYPE_MASK 0x1 +/**< @ingroup icp_qat_fw_comp + * One bit mask used to determine the session type */ + +#define ICP_QAT_FW_COMP_AUTO_SELECT_BEST_BITPOS 3 +/**< @ingroup icp_qat_fw_comp + * Starting bit position for auto select best */ + +#define ICP_QAT_FW_COMP_AUTO_SELECT_BEST_MASK 0x1 +/**< @ingroup icp_qat_fw_comp + * One bit mask for auto select best */ + +#define ICP_QAT_FW_COMP_ENHANCED_AUTO_SELECT_BEST_BITPOS 4 +/**< @ingroup icp_qat_fw_comp + * Starting bit position for enhanced auto select best */ + +#define ICP_QAT_FW_COMP_ENHANCED_AUTO_SELECT_BEST_MASK 0x1 +/**< @ingroup icp_qat_fw_comp + * One bit mask for enhanced auto select best */ + +#define ICP_QAT_FW_COMP_RET_DISABLE_TYPE0_HEADER_DATA_BITPOS 5 +/**< @ingroup icp_qat_fw_comp + * Starting bit position for disabling type zero header write back + when Enhanced autoselect best is enabled. If set firmware does + not return type0 stored block header, only copies src to dest.
+ (if best output is Type0) */ + +#define ICP_QAT_FW_COMP_RET_DISABLE_TYPE0_HEADER_DATA_MASK 0x1 +/**< @ingroup icp_qat_fw_comp + * One bit mask for disabling type zero header write back */ + +#define ICP_QAT_FW_COMP_DISABLE_SECURE_RAM_AS_INTMD_BUF_BITPOS 7 +/**< @ingroup icp_qat_fw_comp + * Starting bit position for flag used to disable secure ram from + * being used as an intermediate buffer. */ + +#define ICP_QAT_FW_COMP_DISABLE_SECURE_RAM_AS_INTMD_BUF_MASK 0x1 +/**< @ingroup icp_qat_fw_comp + * One bit mask for disable secure ram for use as an intermediate + buffer. */ + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * + * @description + * Macro used for the generation of the command flags for Compression Request. + * This should always be used for the generation of the flags. No direct sets or + * masks should be performed on the flags data + * + * @param sesstype Session Type + * @param autoselect AutoSelectBest + * @param enhanced_asb Enhanced AutoSelectBest + * @param ret_uncomp RetUnCompressed + * @param secure_ram Secure Ram usage + * + *********************************************************************************/ +#define ICP_QAT_FW_COMP_FLAGS_BUILD( \ + sesstype, autoselect, enhanced_asb, ret_uncomp, secure_ram) \ + (((sesstype & ICP_QAT_FW_COMP_SESSION_TYPE_MASK) \ + << ICP_QAT_FW_COMP_SESSION_TYPE_BITPOS) | \ + ((autoselect & ICP_QAT_FW_COMP_AUTO_SELECT_BEST_MASK) \ + << ICP_QAT_FW_COMP_AUTO_SELECT_BEST_BITPOS) | \ + ((enhanced_asb & ICP_QAT_FW_COMP_ENHANCED_AUTO_SELECT_BEST_MASK) \ + << ICP_QAT_FW_COMP_ENHANCED_AUTO_SELECT_BEST_BITPOS) | \ + ((ret_uncomp & ICP_QAT_FW_COMP_RET_DISABLE_TYPE0_HEADER_DATA_MASK) \ + << ICP_QAT_FW_COMP_RET_DISABLE_TYPE0_HEADER_DATA_BITPOS) | \ + ((secure_ram & ICP_QAT_FW_COMP_DISABLE_SECURE_RAM_AS_INTMD_BUF_MASK) \ + << ICP_QAT_FW_COMP_DISABLE_SECURE_RAM_AS_INTMD_BUF_BITPOS)) + +/** + ****************************************************************************** + * @ingroup
icp_qat_fw_comp + * + * @description + * Macro for extraction of the session type bit + * + * @param flags Flags to extract the session type bit from + * + ******************************************************************************/ +#define ICP_QAT_FW_COMP_SESSION_TYPE_GET(flags) \ + QAT_FIELD_GET(flags, \ + ICP_QAT_FW_COMP_SESSION_TYPE_BITPOS, \ + ICP_QAT_FW_COMP_SESSION_TYPE_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * + * @description + * Macro for extraction of the autoSelectBest bit + * + * @param flags Flags to extract the autoSelectBest bit from + * + ******************************************************************************/ +#define ICP_QAT_FW_COMP_AUTO_SELECT_BEST_GET(flags) \ + QAT_FIELD_GET(flags, \ + ICP_QAT_FW_COMP_AUTO_SELECT_BEST_BITPOS, \ + ICP_QAT_FW_COMP_AUTO_SELECT_BEST_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * + * @description + * Macro for extraction of the enhanced asb bit + * + * @param flags Flags to extract the enhanced asb bit from + * + ******************************************************************************/ +#define ICP_QAT_FW_COMP_EN_ASB_GET(flags) \ + QAT_FIELD_GET(flags, \ + ICP_QAT_FW_COMP_ENHANCED_AUTO_SELECT_BEST_BITPOS, \ + ICP_QAT_FW_COMP_ENHANCED_AUTO_SELECT_BEST_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * + * @description + * Macro for extraction of the RetUncomp bit + * + * @param flags Flags to extract the Ret Uncomp bit from + * + ******************************************************************************/ +#define ICP_QAT_FW_COMP_RET_UNCOMP_GET(flags) \ + QAT_FIELD_GET(flags, \ + ICP_QAT_FW_COMP_RET_DISABLE_TYPE0_HEADER_DATA_BITPOS, \ + ICP_QAT_FW_COMP_RET_DISABLE_TYPE0_HEADER_DATA_MASK) + +/** + 
****************************************************************************** + * @ingroup icp_qat_fw_comp + * + * @description + * Macro for extraction of the Secure Ram usage bit + * + * @param flags Flags to extract the Secure Ram usage from + * + ******************************************************************************/ +#define ICP_QAT_FW_COMP_SECURE_RAM_USE_GET(flags) \ + QAT_FIELD_GET(flags, \ + ICP_QAT_FW_COMP_DISABLE_SECURE_RAM_AS_INTMD_BUF_BITPOS, \ + ICP_QAT_FW_COMP_DISABLE_SECURE_RAM_AS_INTMD_BUF_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * Definition of the compression header cd pars block + * @description + * Definition of the compression processing cd pars block. + * The structure is a service-specific implementation of the common + * structure. + ******************************************************************************/ +typedef union icp_qat_fw_comp_req_hdr_cd_pars_s { + /**< LWs 2-5 */ + struct { + uint64_t content_desc_addr; + /**< Address of the content descriptor */ + + uint16_t content_desc_resrvd1; + /**< Content descriptor reserved field */ + + uint8_t content_desc_params_sz; + /**< Size of the content descriptor parameters in quad words. + * These + * parameters describe the session setup configuration info for + * the + * slices that this request relies upon i.e. the configuration + * word and + * cipher key needed by the cipher slice if there is a request + * for + * cipher + * processing. 
*/ + + uint8_t content_desc_hdr_resrvd2; + /**< Content descriptor reserved field */ + + uint32_t content_desc_resrvd3; + /**< Content descriptor reserved field */ + } s; + + struct { + uint32_t comp_slice_cfg_word[ICP_QAT_FW_NUM_LONGWORDS_2]; + /* Compression Slice Config Word */ + + uint32_t content_desc_resrvd4; + /**< Content descriptor reserved field */ + } sl; + +} icp_qat_fw_comp_req_hdr_cd_pars_t; + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * Definition of the compression request parameters block + * @description + * Definition of the compression processing request parameters block. + * The structure below forms part of the Compression + Translation + * Parameters block spanning LWs 14-21, thus differing from the common + * base Parameters block structure. Unused fields must be set to 0. + * + ******************************************************************************/ +typedef struct icp_qat_fw_comp_req_params_s { + /**< LW 14 */ + uint32_t comp_len; + /**< Size of input to process in bytes Note: Only EOP requests can be + * odd + * for decompression. IA must set LSB to zero for odd sized intermediate + * inputs */ + + /**< LW 15 */ + uint32_t out_buffer_sz; + /**< Size of output buffer in bytes */ + + /**< LW 16 */ + union { + struct { + /** LW 16 */ + uint32_t initial_crc32; + /**< CRC for processed bytes (input byte count) */ + + /** LW 17 */ + uint32_t initial_adler; + /**< Adler for processed bytes (input byte count) */ + } legacy; + + /** LW 16-17 */ + uint64_t crc_data_addr; + /**< CRC data structure pointer */ + } crc; + + /** LW 18 */ + uint32_t req_par_flags; + + /** LW 19 */ + uint32_t rsrvd; + +} icp_qat_fw_comp_req_params_t; + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * + * @description + * Macro used for the generation of the request parameter flags. 
+ * This should always be used for the generation of the flags. No direct sets or + * masks should be performed on the flags data + * + * @param sop SOP Flag, 0 restore, 1 don't restore + * @param eop EOP Flag, 0 restore, 1 don't restore + * @param bfinal Set bfinal in this block or not + * @param cnv Whether internal CNV check is to be performed + * * ICP_QAT_FW_COMP_NO_CNV + * * ICP_QAT_FW_COMP_CNV + * @param cnvnr Whether internal CNV recovery is to be performed + * * ICP_QAT_FW_COMP_NO_CNV_RECOVERY + * * ICP_QAT_FW_COMP_CNV_RECOVERY + * @param crc CRC Mode Flag - 0 legacy, 1 crc data struct + * + *****************************************************************************/ +#define ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD( \ + sop, eop, bfinal, cnv, cnvnr, crc) \ + (((sop & ICP_QAT_FW_COMP_SOP_MASK) << ICP_QAT_FW_COMP_SOP_BITPOS) | \ + ((eop & ICP_QAT_FW_COMP_EOP_MASK) << ICP_QAT_FW_COMP_EOP_BITPOS) | \ + ((bfinal & ICP_QAT_FW_COMP_BFINAL_MASK) \ + << ICP_QAT_FW_COMP_BFINAL_BITPOS) | \ + ((cnv & ICP_QAT_FW_COMP_CNV_MASK) << ICP_QAT_FW_COMP_CNV_BITPOS) | \ + ((cnvnr & ICP_QAT_FW_COMP_CNV_RECOVERY_MASK) \ + << ICP_QAT_FW_COMP_CNV_RECOVERY_BITPOS) | \ + ((crc & ICP_QAT_FW_COMP_CRC_MODE_MASK) \ + << ICP_QAT_FW_COMP_CRC_MODE_BITPOS)) + +/* + * REQUEST FLAGS IN REQUEST PARAMETERS COMPRESSION + * + * + ===== + ----- + --- +-----+-------+ --- + ---------+ --- + ---- + --- + + * --- + + * | Bit | 31-20 | 19 | 18 | 17 | 16 | 15 - 7 | 6 | 5-2 | 1 | 0 + * | + * + ===== + ----- + --- +-----+-------+ --- + ---------+ --- | ---- + --- + + * --- + + * | Flags | Resvd | CRC |Resvd| CNVNR | CNV |Resvd Bits|BFin |Resvd | EOP | + * SOP | + * | | =0 | Mode| =0 | | | =0 | | =0 | | | + * | | | | | | | | | | | | + * + ===== + ----- + --- +-----+-------+ --- + ---------+ --- | ---- + --- + + * --- + + */ + +#define ICP_QAT_FW_COMP_NOT_SOP 0 +/**< @ingroup icp_qat_fw_comp + * Flag representing that a request is NOT Start of Packet */ + +#define ICP_QAT_FW_COMP_SOP 1 +/**< @ingroup 
icp_qat_fw_comp + * Flag representing that a request IS Start of Packet */ + +#define ICP_QAT_FW_COMP_NOT_EOP 0 +/**< @ingroup icp_qat_fw_comp + * Flag representing that a request is NOT End of Packet */ + +#define ICP_QAT_FW_COMP_EOP 1 +/**< @ingroup icp_qat_fw_comp + * Flag representing that a request IS End of Packet */ + +#define ICP_QAT_FW_COMP_NOT_BFINAL 0 +/**< @ingroup icp_qat_fw_comp + * Flag used to indicate to firmware that this is NOT the last block */ + +#define ICP_QAT_FW_COMP_BFINAL 1 +/**< @ingroup icp_qat_fw_comp + * Flag used to indicate to firmware that this IS the last block */ + +#define ICP_QAT_FW_COMP_NO_CNV 0 +/**< @ingroup icp_qat_fw_comp + * Flag indicating that NO cnv check is to be performed on the request */ + +#define ICP_QAT_FW_COMP_CNV 1 +/**< @ingroup icp_qat_fw_comp + * Flag indicating that a cnv check IS to be performed on the request */ + +#define ICP_QAT_FW_COMP_NO_CNV_RECOVERY 0 +/**< @ingroup icp_qat_fw_comp + * Flag indicating that NO cnv recovery is to be performed on the request */ + +#define ICP_QAT_FW_COMP_CNV_RECOVERY 1 +/**< @ingroup icp_qat_fw_comp + * Flag indicating that a cnv recovery is to be performed on the request */ + +#define ICP_QAT_FW_COMP_CRC_MODE_LEGACY 0 +/**< @ingroup icp_qat_fw_comp + * Flag indicating use of the legacy CRC mode */ + +#define ICP_QAT_FW_COMP_CRC_MODE_E2E 1 +/**< @ingroup icp_qat_fw_comp + * Flag indicating use of the external CRC data struct */ + +#define ICP_QAT_FW_COMP_SOP_BITPOS 0 +/**< @ingroup icp_qat_fw_comp + * Starting bit position for SOP */ + +#define ICP_QAT_FW_COMP_SOP_MASK 0x1 +/**< @ingroup icp_qat_fw_comp + * One bit mask used to determine SOP */ + +#define ICP_QAT_FW_COMP_EOP_BITPOS 1 +/**< @ingroup icp_qat_fw_comp + * Starting bit position for EOP */ + +#define ICP_QAT_FW_COMP_EOP_MASK 0x1 +/**< @ingroup icp_qat_fw_comp + * One bit mask used to determine EOP */ + +#define ICP_QAT_FW_COMP_BFINAL_MASK 0x1 +/**< @ingroup icp_qat_fw_comp + * One bit mask for the
bfinal bit */ + +#define ICP_QAT_FW_COMP_BFINAL_BITPOS 6 +/**< @ingroup icp_qat_fw_comp + * Starting bit position for the bfinal bit */ + +#define ICP_QAT_FW_COMP_CNV_MASK 0x1 +/**< @ingroup icp_qat_fw_comp + * One bit mask for the CNV bit */ + +#define ICP_QAT_FW_COMP_CNV_BITPOS 16 +/**< @ingroup icp_qat_fw_comp + * Starting bit position for the CNV bit */ + +#define ICP_QAT_FW_COMP_CNV_RECOVERY_MASK 0x1 +/**< @ingroup icp_qat_fw_comp + * One bit mask for the CNV Recovery bit */ + +#define ICP_QAT_FW_COMP_CNV_RECOVERY_BITPOS 17 +/**< @ingroup icp_qat_fw_comp + * Starting bit position for the CNV Recovery bit */ + +#define ICP_QAT_FW_COMP_CRC_MODE_BITPOS 19 +/**< @ingroup icp_qat_fw_comp + * Starting bit position for CRC mode */ + +#define ICP_QAT_FW_COMP_CRC_MODE_MASK 0x1 +/**< @ingroup icp_qat_fw_comp + * One bit mask used to determine CRC mode */ + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * + * @description + * Macro for extraction of the SOP bit + * + * @param flags Flags to extract the SOP bit from + * + ******************************************************************************/ +#define ICP_QAT_FW_COMP_SOP_GET(flags) \ + QAT_FIELD_GET(flags, \ + ICP_QAT_FW_COMP_SOP_BITPOS, \ + ICP_QAT_FW_COMP_SOP_MASK) + +/** +****************************************************************************** +* @ingroup icp_qat_fw_comp +* +* @description +* Macro for extraction of the EOP bit +* +* @param flags Flags to extract the EOP bit from +* +*****************************************************************************/ +#define ICP_QAT_FW_COMP_EOP_GET(flags) \ + QAT_FIELD_GET(flags, \ + ICP_QAT_FW_COMP_EOP_BITPOS, \ + ICP_QAT_FW_COMP_EOP_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * + * @description + * Macro for extraction of the bfinal bit + * + * @param flags Flags to extract the bfinal bit from + * + 
******************************************************************************/ +#define ICP_QAT_FW_COMP_BFINAL_GET(flags) \ + QAT_FIELD_GET(flags, \ + ICP_QAT_FW_COMP_BFINAL_BITPOS, \ + ICP_QAT_FW_COMP_BFINAL_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * + * @description + * Macro for extraction of the CNV bit + * + * @param flags Flag set containing the CNV flag + * + *****************************************************************************/ +#define ICP_QAT_FW_COMP_CNV_GET(flags) \ + QAT_FIELD_GET(flags, \ + ICP_QAT_FW_COMP_CNV_BITPOS, \ + ICP_QAT_FW_COMP_CNV_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * + * @description + * Macro for extraction of the crc mode bit + * + * @param flags Flags to extract the crc mode bit from + * + ******************************************************************************/ +#define ICP_QAT_FW_COMP_CRC_MODE_GET(flags) \ + QAT_FIELD_GET(flags, \ + ICP_QAT_FW_COMP_CRC_MODE_BITPOS, \ + ICP_QAT_FW_COMP_CRC_MODE_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * Definition of the translator request parameters block + * @description + * Definition of the translator processing request parameters block + * The structure below forms part of the Compression + Translation + * Parameters block spanning LWs 20-21, thus differing from the common + * base Parameters block structure. Unused fields must be set to 0. + * + ******************************************************************************/ +typedef struct icp_qat_fw_xlt_req_params_s { + /**< LWs 20-21 */ + uint64_t inter_buff_ptr; + /**< This field specifies the physical address of an intermediate + * buffer SGL array. The array contains a pair of 64-bit + * intermediate buffer pointers to SGL buffer descriptors, one pair + * per CPM. 
Please refer to the CPM1.6 Firmware Interface HLD + * specification for more details. */ +} icp_qat_fw_xlt_req_params_t; + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * Compression header of the content descriptor block + * @description + * Definition of the service-specific compression control block header + * structure. The compression parameters are defined per algorithm + * and are located in the icp_qat_hw.h file. This compression + * cd block spans LWs 24-29, forming part of the compression + translation + * cd block, thus differing from the common base content descriptor + * structure. + * + ******************************************************************************/ +typedef struct icp_qat_fw_comp_cd_hdr_s { + /**< LW 24 */ + uint16_t ram_bank_flags; + /**< Flags to show which ram banks to access */ + + uint8_t comp_cfg_offset; + /**< Quad word offset from the content descriptor parameters address to + * the + * parameters for the compression processing */ + + uint8_t next_curr_id; + /**< This field combines the next and current id (each four bits) - + * the next id is the most significant nibble. + * Next Id: Set to the next slice to pass the compressed data through. 
+ * Set to ICP_QAT_FW_SLICE_DRAM_WR if the data is not to go through + * anymore slices after compression + * Current Id: Initialised with the compression slice type */ + + /**< LW 25 */ + uint32_t resrvd; + + /**< LWs 26-27 */ + uint64_t comp_state_addr; + /**< Pointer to compression state */ + + /**< LWs 28-29 */ + uint64_t ram_banks_addr; + /**< Pointer to banks */ + +} icp_qat_fw_comp_cd_hdr_t; + +#define COMP_CPR_INITIAL_CRC 0 +#define COMP_CPR_INITIAL_ADLER 1 + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * Translator content descriptor header block + * @description + * Definition of the structure used to describe the translation processing + * to perform on data. The translator parameters are defined per algorithm + * and are located in the icp_qat_hw.h file. This translation cd block + * spans LWs 30-31, forming part of the compression + translation cd block, + * thus differing from the common base content descriptor structure. + * + ******************************************************************************/ +typedef struct icp_qat_fw_xlt_cd_hdr_s { + /**< LW 30 */ + uint16_t resrvd1; + /**< Reserved field and assumed set to 0 */ + + uint8_t resrvd2; + /**< Reserved field and assumed set to 0 */ + + uint8_t next_curr_id; + /**< This field combines the next and current id (each four bits) - + * the next id is the most significant nibble. + * Next Id: Set to the next slice to pass the translated data through. 
+ * Set to ICP_QAT_FW_SLICE_DRAM_WR if the data is not to go through + * any more slices after compression + * Current Id: Initialised with the translation slice type */ + + /**< LW 31 */ + uint32_t resrvd3; + /**< Reserved and should be set to zero, needed for quadword alignment + */ +} icp_qat_fw_xlt_cd_hdr_t; + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * Definition of the common Compression QAT FW request + * @description + * This is a definition of the full request structure for + * compression and translation. + * + ******************************************************************************/ +typedef struct icp_qat_fw_comp_req_s { + /**< LWs 0-1 */ + icp_qat_fw_comn_req_hdr_t comn_hdr; + /**< Common request header - for Service Command Id, + * use service-specific Compression Command Id. + * Service Specific Flags - use Compression Command Flags */ + + /**< LWs 2-5 */ + icp_qat_fw_comp_req_hdr_cd_pars_t cd_pars; + /**< Compression service-specific content descriptor field which points + * either to a content descriptor parameter block or contains the + * compression slice config word. */ + + /**< LWs 6-13 */ + icp_qat_fw_comn_req_mid_t comn_mid; + /**< Common request middle section */ + + /**< LWs 14-19 */ + icp_qat_fw_comp_req_params_t comp_pars; + /**< Compression request Parameters block */ + + /**< LWs 20-21 */ + union { + icp_qat_fw_xlt_req_params_t xlt_pars; + /**< Translation request Parameters block */ + + uint32_t resrvd1[ICP_QAT_FW_NUM_LONGWORDS_2]; + /**< Reserved if not used for translation */ + } u1; + + /**< LWs 22-23 */ + union { + uint32_t resrvd2[ICP_QAT_FW_NUM_LONGWORDS_2]; + /**< Reserved - not used if Batch and Pack is disabled.*/ + + uint64_t bnp_res_table_addr; + /**< A generic pointer to the unbounded list of + * icp_qat_fw_resp_comp_pars_t members. This pointer is only + * used when the Batch and Pack is enabled. 
*/ + } u3; + + /**< LWs 24-29 */ + icp_qat_fw_comp_cd_hdr_t comp_cd_ctrl; + /**< Compression request content descriptor control + * block header */ + + /**< LWs 30-31 */ + union { + icp_qat_fw_xlt_cd_hdr_t xlt_cd_ctrl; + /**< Translation request content descriptor + * control block header */ + + uint32_t resrvd3[ICP_QAT_FW_NUM_LONGWORDS_2]; + /**< Reserved if not used for translation */ + } u2; + +} icp_qat_fw_comp_req_t; + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_comp + * Definition of the compression QAT FW response descriptor + * parameters + * @description + * This part of the response is specific to the compression response. + * + ******************************************************************************/ +typedef struct icp_qat_fw_resp_comp_pars_s { + /**< LW 4 */ + uint32_t input_byte_counter; + /**< Input byte counter */ + + /**< LW 5 */ + uint32_t output_byte_counter; + /**< Output byte counter */ + + /** LW 6-7 */ + union { + struct { + /** LW 6 */ + uint32_t curr_crc32; + /**< Current CRC32 */ + + /** LW 7 */ + uint32_t curr_adler_32; + /**< Current Adler32 */ + } legacy; + + uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_2]; + /**< Reserved if not in legacy mode */ + } crc; + +} icp_qat_fw_resp_comp_pars_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comp + * Definition of a single result metadata structure inside Batch and Pack + * results table array. It describes the output of a single job in the + * batch and pack jobs. + * Total number of entries in BNP Out table shall be equal to total + * number of requests in the 'batch'. + * @description + * This structure is specific to the compression output.
+ * + *****************************************************************************/ +typedef struct icp_qat_fw_comp_bnp_out_tbl_entry_s { + /**< LWs 0-3 */ + icp_qat_fw_resp_comp_pars_t comp_out_pars; + /**< Common output params (checksums and byte counts) */ + + /**< LW 4 */ + icp_qat_fw_comn_error_t comn_error; + /**< This field is overloaded to allow for one 8 bit common error field + * or two 8 bit error fields from compression and translator */ + + uint8_t comn_status; + /**< Status field which specifies which slice(s) report an error */ + + uint8_t reserved0; + /**< Reserved, shall be set to zero */ + + uint32_t reserved1; + /**< Reserved, shall be set to zero, + added for aligning entries to quadword boundary */ +} icp_qat_fw_comp_bnp_out_tbl_entry_t; + +/** +***************************************************************************** +* @ingroup icp_qat_fw_comp +* Supported modes for skipping regions of input or output buffers. +* +* @description +* This enumeration lists the supported modes for skipping regions of +* input or output buffers. +* +*****************************************************************************/ +typedef enum icp_qat_fw_comp_bnp_skip_mode_s { + ICP_QAT_FW_SKIP_DISABLED = 0, + /**< Skip mode is disabled */ + ICP_QAT_FW_SKIP_AT_START = 1, + /**< Skip region is at the start of the buffer. */ + ICP_QAT_FW_SKIP_AT_END = 2, + /**< Skip region is at the end of the buffer. */ + ICP_QAT_FW_SKIP_STRIDE = 3 + /**< Skip region occurs at regular intervals within the buffer. + The stride length specifies the number of bytes between each + skip region. */ +} icp_qat_fw_comp_bnp_skip_mode_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Flags describing the skip and compression job behaviour. Refer to flag + * definitions on skip mode and reset/flush types. + * Note: compression behaviour flags are ignored for destination skip info.
+ * @description + * Definition of the common request flags. + * + *****************************************************************************/ +typedef uint8_t icp_qat_fw_comp_bnp_flags_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_comn + * Skip Region Data. + * @description + * This structure contains data relating to configuring skip region + * behaviour. A skip region is a region of an input buffer that + * should be omitted from processing or a region that should be inserted + * into the output buffer. + * + *****************************************************************************/ +typedef struct icp_qat_fw_comp_bnp_skip_info_s { + /**< LW 0 */ + uint16_t skip_length; + /**next_curr_id_cipher) & \ + ICP_QAT_FW_COMN_NEXT_ID_MASK) >> \ + (ICP_QAT_FW_COMN_NEXT_ID_BITPOS)) + +#define ICP_QAT_FW_CIPHER_NEXT_ID_SET(cd_ctrl_hdr_t, val) \ + (cd_ctrl_hdr_t)->next_curr_id_cipher = \ + ((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \ + ICP_QAT_FW_COMN_CURR_ID_MASK) | \ + ((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) & \ + ICP_QAT_FW_COMN_NEXT_ID_MASK)) + +#define ICP_QAT_FW_CIPHER_CURR_ID_GET(cd_ctrl_hdr_t) \ + (((cd_ctrl_hdr_t)->next_curr_id_cipher) & ICP_QAT_FW_COMN_CURR_ID_MASK) + +#define ICP_QAT_FW_CIPHER_CURR_ID_SET(cd_ctrl_hdr_t, val) \ + (cd_ctrl_hdr_t)->next_curr_id_cipher = \ + ((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \ + ICP_QAT_FW_COMN_NEXT_ID_MASK) | \ + ((val)&ICP_QAT_FW_COMN_CURR_ID_MASK)) + +/** Authentication fields within Cipher + Authentication structure */ +#define ICP_QAT_FW_AUTH_NEXT_ID_GET(cd_ctrl_hdr_t) \ + ((((cd_ctrl_hdr_t)->next_curr_id_auth) & \ + ICP_QAT_FW_COMN_NEXT_ID_MASK) >> \ + (ICP_QAT_FW_COMN_NEXT_ID_BITPOS)) + +#define ICP_QAT_FW_AUTH_NEXT_ID_SET(cd_ctrl_hdr_t, val) \ + (cd_ctrl_hdr_t)->next_curr_id_auth = \ + ((((cd_ctrl_hdr_t)->next_curr_id_auth) & \ + ICP_QAT_FW_COMN_CURR_ID_MASK) | \ + ((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) & \ + ICP_QAT_FW_COMN_NEXT_ID_MASK)) + 
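The NEXT_ID/CURR_ID accessors above can be exercised in isolation. The sketch below is a minimal stand-alone model, assuming the usual QAT packing of two 4-bit slice ids into one byte (next id in the high nibble, current id in the low nibble); the mask/bit-position values and the trimmed-down `cd_ctrl_hdr` struct are stand-ins for the real ICP_QAT_FW_COMN_* constants and content-descriptor control header defined elsewhere in icp_qat_fw.h, not the driver's actual definitions.

```c
#include <stdint.h>

/* Assumed values: the real ICP_QAT_FW_COMN_* constants are defined
 * outside this excerpt; two 4-bit slice ids packed into one byte is
 * the convention the GET/SET macros above imply. */
#define NEXT_ID_MASK   0xF0
#define NEXT_ID_BITPOS 4
#define CURR_ID_MASK   0x0F

/* Hypothetical stand-in for the cipher content-descriptor control header. */
struct cd_ctrl_hdr {
	uint8_t next_curr_id_cipher;
};

/* Function equivalents of the macro logic. */
static uint8_t next_id_get(const struct cd_ctrl_hdr *h)
{
	return (h->next_curr_id_cipher & NEXT_ID_MASK) >> NEXT_ID_BITPOS;
}

static void next_id_set(struct cd_ctrl_hdr *h, uint8_t val)
{
	/* Preserve the current id in the low nibble, overwrite the next id. */
	h->next_curr_id_cipher =
	    (h->next_curr_id_cipher & CURR_ID_MASK) |
	    ((uint8_t)(val << NEXT_ID_BITPOS) & NEXT_ID_MASK);
}

static uint8_t curr_id_get(const struct cd_ctrl_hdr *h)
{
	return h->next_curr_id_cipher & CURR_ID_MASK;
}

static void curr_id_set(struct cd_ctrl_hdr *h, uint8_t val)
{
	/* Preserve the next id in the high nibble, overwrite the current id. */
	h->next_curr_id_cipher =
	    (h->next_curr_id_cipher & NEXT_ID_MASK) | (val & CURR_ID_MASK);
}
```

Setting a current id of 0x3 and a next id of 0x9 yields the packed byte 0x93, and each GET recovers its nibble unchanged; this is why the SET macros mask with the *other* field's mask before OR-ing the new value in.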
+#define ICP_QAT_FW_AUTH_CURR_ID_GET(cd_ctrl_hdr_t) \ + (((cd_ctrl_hdr_t)->next_curr_id_auth) & ICP_QAT_FW_COMN_CURR_ID_MASK) + +#define ICP_QAT_FW_AUTH_CURR_ID_SET(cd_ctrl_hdr_t, val) \ + (cd_ctrl_hdr_t)->next_curr_id_auth = \ + ((((cd_ctrl_hdr_t)->next_curr_id_auth) & \ + ICP_QAT_FW_COMN_NEXT_ID_MASK) | \ + ((val)&ICP_QAT_FW_COMN_CURR_ID_MASK)) + +/* Definitions of the bits in the test_status_info of the TRNG_TEST response. + * The values returned by the Lookaside service are given below + * The Test result and Test Fail Count values are only valid if the Test + * Results Valid (Tv) is set. + * + * TRNG Test Status Info + * + ===== + ------------------------------------------------ + --- + --- + + * | Bit | 31 - 2 | 1 | 0 | + * + ===== + ------------------------------------------------ + --- + --- + + * | Flags | RESERVED = 0 | Tv | Ts | + * + ===== + ------------------------------------------------------------ + + */ +/****************************************************************************** + * @ingroup icp_qat_fw_la + * Definition of the Lookaside TRNG Test Status Information received as + * a part of icp_qat_fw_la_trng_test_result_t + * + *****************************************************************************/ +#define QAT_FW_LA_TRNG_TEST_STATUS_TS_BITPOS 0 +/**< @ingroup icp_qat_fw_la + * TRNG Test Result t_status field bit pos definition.*/ + +#define QAT_FW_LA_TRNG_TEST_STATUS_TS_MASK 0x1 +/**< @ingroup icp_qat_fw_la + * TRNG Test Result t_status field mask definition.*/ + +#define QAT_FW_LA_TRNG_TEST_STATUS_TV_BITPOS 1 +/**< @ingroup icp_qat_fw_la + * TRNG Test Result test results valid field bit pos definition.*/ + +#define QAT_FW_LA_TRNG_TEST_STATUS_TV_MASK 0x1 +/**< @ingroup icp_qat_fw_la + * TRNG Test Result test results valid field mask definition.*/ + +/****************************************************************************** + * @ingroup icp_qat_fw_la + * Definition of the Lookaside TRNG test_status values. 
+ * + * + *****************************************************************************/ +#define QAT_FW_LA_TRNG_TEST_STATUS_TV_VALID 1 +/**< @ingroup icp_qat_fw_la + * TRNG TEST Response Test Results Valid Value.*/ + +#define QAT_FW_LA_TRNG_TEST_STATUS_TV_NOT_VALID 0 +/**< @ingroup icp_qat_fw_la + * TRNG TEST Response Test Results are NOT Valid Value.*/ + +#define QAT_FW_LA_TRNG_TEST_STATUS_TS_NO_FAILS 1 +/**< @ingroup icp_qat_fw_la + * Value for TRNG Test status tests have NO FAILs Value.*/ + +#define QAT_FW_LA_TRNG_TEST_STATUS_TS_HAS_FAILS 0 +/**< @ingroup icp_qat_fw_la + * Value for TRNG Test status tests have one or more FAILS Value.*/ + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_la + * + * @description + * Macro for extraction of the Test Status Field returned in the response + * to TRNG TEST command. + * + * @param test_status 8 bit test_status value to extract the status bit + * + *****************************************************************************/ +#define ICP_QAT_FW_LA_TRNG_TEST_STATUS_TS_FLD_GET(test_status) \ + QAT_FIELD_GET(test_status, \ + QAT_FW_LA_TRNG_TEST_STATUS_TS_BITPOS, \ + QAT_FW_LA_TRNG_TEST_STATUS_TS_MASK) +/** + ****************************************************************************** + * @ingroup icp_qat_fw_la + * + * @description + * Macro for extraction of the Test Results Valid Field returned in the + * response to TRNG TEST command. 
+ * + * @param test_status 8 bit test_status value to extract the Tests + * Results valid bit + * + *****************************************************************************/ +#define ICP_QAT_FW_LA_TRNG_TEST_STATUS_TV_FLD_GET(test_status) \ + QAT_FIELD_GET(test_status, \ + QAT_FW_LA_TRNG_TEST_STATUS_TV_BITPOS, \ + QAT_FW_LA_TRNG_TEST_STATUS_TV_MASK) + +/* + ****************************************************************************** + * MGF Max supported input parameters + ****************************************************************************** + */ +#define ICP_QAT_FW_LA_MGF_SEED_LEN_MAX 255 +/**< @ingroup icp_qat_fw_la + * Maximum seed length for MGF1 request in bytes + * Typical values may be 48, 64, 128 bytes (or any).*/ + +#define ICP_QAT_FW_LA_MGF_MASK_LEN_MAX 65528 +/**< @ingroup icp_qat_fw_la + * Maximum mask length for MGF1 request in bytes + * Typical values may be 8 (64-bit), 16 (128-bit). MUST be quad word multiple */ + +/* + ****************************************************************************** + * SSL Max supported input parameters + ****************************************************************************** + */ +#define ICP_QAT_FW_LA_SSL_SECRET_LEN_MAX 512 +/**< @ingroup icp_qat_fw_la + * Maximum secret length for SSL3 Key Gen request (bytes) */ + +#define ICP_QAT_FW_LA_SSL_ITERATES_LEN_MAX 16 +/**< @ingroup icp_qat_fw_la + * Maximum iterations for SSL3 Key Gen request (integer) */ + +#define ICP_QAT_FW_LA_SSL_LABEL_LEN_MAX 136 +/**< @ingroup icp_qat_fw_la + * Maximum label length for SSL3 Key Gen request (bytes) */ + +#define ICP_QAT_FW_LA_SSL_SEED_LEN_MAX 64 +/**< @ingroup icp_qat_fw_la + * Maximum seed length for SSL3 Key Gen request (bytes) */ + +#define ICP_QAT_FW_LA_SSL_OUTPUT_LEN_MAX 248 +/**< @ingroup icp_qat_fw_la + * Maximum output length for SSL3 Key Gen request (bytes) */ + +/* + ****************************************************************************** + * TLS Max supported input parameters + 
****************************************************************************** + */ +#define ICP_QAT_FW_LA_TLS_SECRET_LEN_MAX 128 +/**< @ingroup icp_qat_fw_la + * Maximum secret length for TLS Key Gen request (bytes) */ + +#define ICP_QAT_FW_LA_TLS_V1_1_SECRET_LEN_MAX 128 +/**< @ingroup icp_qat_fw_la + * Maximum secret length for TLS Key Gen request (bytes) */ + +#define ICP_QAT_FW_LA_TLS_V1_2_SECRET_LEN_MAX 64 +/**< @ingroup icp_qat_fw_la + * Maximum secret length for TLS Key Gen request (bytes) */ + +#define ICP_QAT_FW_LA_TLS_LABEL_LEN_MAX 255 +/**< @ingroup icp_qat_fw_la + * Maximum label length for TLS Key Gen request (bytes) */ + +#define ICP_QAT_FW_LA_TLS_SEED_LEN_MAX 64 +/**< @ingroup icp_qat_fw_la + * Maximum seed length for TLS Key Gen request (bytes) */ + +#define ICP_QAT_FW_LA_TLS_OUTPUT_LEN_MAX 248 +/**< @ingroup icp_qat_fw_la + * Maximum output length for TLS Key Gen request (bytes) */ + +/* + ****************************************************************************** + * HKDF input parameters + ****************************************************************************** + */ + +#define QAT_FW_HKDF_LABEL_BUFFER_SZ 78 +#define QAT_FW_HKDF_LABEL_LEN_SZ 1 +#define QAT_FW_HKDF_LABEL_FLAGS_SZ 1 + +#define QAT_FW_HKDF_LABEL_STRUCT_SZ \ + (QAT_FW_HKDF_LABEL_BUFFER_SZ + QAT_FW_HKDF_LABEL_LEN_SZ + \ + QAT_FW_HKDF_LABEL_FLAGS_SZ) + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_la + * + * @description + * Wraps an RFC 8446 HkdfLabel with metadata for use in HKDF Expand-Label + * operations. 
+ * + *****************************************************************************/ +struct icp_qat_fw_hkdf_label { + uint8_t label[QAT_FW_HKDF_LABEL_BUFFER_SZ]; + /**< Buffer containing an HkdfLabel as specified in RFC 8446 */ + + uint8_t label_length; + /**< The size of the HkdfLabel */ + + union { + uint8_t label_flags; + /**< For first-level labels: each bit in [0..3] will trigger a + * child + * Expand-Label operation on the corresponding sublabel. Bits + * [4..7] + * are reserved. + */ + + uint8_t sublabel_flags; + /**< For sublabels the following flags are defined: + * - QAT_FW_HKDF_INNER_SUBLABEL_12_BYTE_OKM_BITPOS + * - QAT_FW_HKDF_INNER_SUBLABEL_16_BYTE_OKM_BITPOS + * - QAT_FW_HKDF_INNER_SUBLABEL_32_BYTE_OKM_BITPOS + */ + } u; +}; + +#define ICP_QAT_FW_LA_HKDF_SECRET_LEN_MAX 64 +/**< Maximum secret length for HKDF request (bytes) */ + +#define ICP_QAT_FW_LA_HKDF_IKM_LEN_MAX 64 +/**< Maximum IKM length for HKDF request (bytes) */ + +#define QAT_FW_HKDF_MAX_LABELS 4 +/**< Maximum number of label structures allowed in the labels buffer */ + +#define QAT_FW_HKDF_MAX_SUBLABELS 4 +/**< Maximum number of label structures allowed in the sublabels buffer */ + +/* + ****************************************************************************** + * HKDF inner sublabel flags + ****************************************************************************** + */ + +#define QAT_FW_HKDF_INNER_SUBLABEL_12_BYTE_OKM_BITPOS 0 +/**< Limit sublabel expand output to 12 bytes -- used with the "iv" sublabel */ + +#define QAT_FW_HKDF_INNER_SUBLABEL_16_BYTE_OKM_BITPOS 1 +/**< Limit sublabel expand output to 16 bytes -- used with SHA-256 "key" */ + +#define QAT_FW_HKDF_INNER_SUBLABEL_32_BYTE_OKM_BITPOS 2 +/**< Limit sublabel expand output to 32 bytes -- used with SHA-384 "key" */ + +#endif /* _ICP_QAT_FW_LA_H_ */ Index: sys/dev/qat/qat_api/firmware/include/icp_qat_fw_mmp.h =================================================================== --- /dev/null +++ 
sys/dev/qat/qat_api/firmware/include/icp_qat_fw_mmp.h @@ -0,0 +1,5926 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + * @file icp_qat_fw_mmp.h + * @defgroup icp_qat_fw_mmp ICP QAT FW MMP Processing Definitions + * @ingroup icp_qat_fw + * $Revision: 0.1 $ + * @brief + * This file documents the external interfaces that the QAT FW running + * on the QAT Acceleration Engine provides to clients wanting to + * accelerate asymmetric crypto applications + */ + +#ifndef __ICP_QAT_FW_MMP__ +#define __ICP_QAT_FW_MMP__ + +/************************************************************************** + * Include local header files + ************************************************************************** + */ + +#include "icp_qat_fw.h" + +/************************************************************************** + * Local constants + ************************************************************************** + */ +#define ICP_QAT_FW_PKE_INPUT_COUNT_MAX 7 +/**< @ingroup icp_qat_fw_pke + * Maximum number of input parameters in all PKE requests */ +#define ICP_QAT_FW_PKE_OUTPUT_COUNT_MAX 5 +/**< @ingroup icp_qat_fw_pke + * Maximum number of output parameters in all PKE requests */ + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Initialisation sequence , + * to be used when icp_qat_fw_pke_request_s::functionalityId is #PKE_INIT. + */ +typedef struct icp_qat_fw_mmp_init_input_s { + uint64_t z; /**< zeroed quadword (1 qwords)*/ +} icp_qat_fw_mmp_init_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Diffie-Hellman Modular exponentiation base 2 for + * 768-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DH_G2_768.
+ */ +typedef struct icp_qat_fw_mmp_dh_g2_768_input_s { + uint64_t e; /**< exponent > 0 and < 2^768 (12 qwords)*/ + uint64_t m; /**< modulus ≥ 2^767 and < 2^768 (12 qwords)*/ +} icp_qat_fw_mmp_dh_g2_768_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Diffie-Hellman Modular exponentiation for 768-bit + * numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DH_768. + */ +typedef struct icp_qat_fw_mmp_dh_768_input_s { + uint64_t g; /**< base ≥ 0 and < 2^768 (12 qwords)*/ + uint64_t e; /**< exponent > 0 and < 2^768 (12 qwords)*/ + uint64_t m; /**< modulus ≥ 2^767 and < 2^768 (12 qwords)*/ +} icp_qat_fw_mmp_dh_768_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Diffie-Hellman Modular exponentiation base 2 for + * 1024-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DH_G2_1024. + */ +typedef struct icp_qat_fw_mmp_dh_g2_1024_input_s { + uint64_t e; /**< exponent > 0 and < 2^1024 (16 qwords)*/ + uint64_t m; /**< modulus ≥ 2^1023 and < 2^1024 (16 qwords)*/ +} icp_qat_fw_mmp_dh_g2_1024_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Diffie-Hellman Modular exponentiation for + * 1024-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DH_1024. + */ +typedef struct icp_qat_fw_mmp_dh_1024_input_s { + uint64_t g; /**< base ≥ 0 and < 2^1024 (16 qwords)*/ + uint64_t e; /**< exponent > 0 and < 2^1024 (16 qwords)*/ + uint64_t m; /**< modulus ≥ 2^1023 and < 2^1024 (16 qwords)*/ +} icp_qat_fw_mmp_dh_1024_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Diffie-Hellman Modular exponentiation base 2 for + * 1536-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DH_G2_1536. 
+ */ +typedef struct icp_qat_fw_mmp_dh_g2_1536_input_s { + uint64_t e; /**< exponent > 0 and < 2^1536 (24 qwords)*/ + uint64_t m; /**< modulus ≥ 2^1535 and < 2^1536 (24 qwords)*/ +} icp_qat_fw_mmp_dh_g2_1536_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Diffie-Hellman Modular exponentiation for + * 1536-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DH_1536. + */ +typedef struct icp_qat_fw_mmp_dh_1536_input_s { + uint64_t g; /**< base ≥ 0 and < 2^1536 (24 qwords)*/ + uint64_t e; /**< exponent > 0 and < 2^1536 (24 qwords)*/ + uint64_t m; /**< modulus ≥ 2^1535 and < 2^1536 (24 qwords)*/ +} icp_qat_fw_mmp_dh_1536_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Diffie-Hellman Modular exponentiation base 2 for + * 2048-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DH_G2_2048. + */ +typedef struct icp_qat_fw_mmp_dh_g2_2048_input_s { + uint64_t e; /**< exponent > 0 and < 2^2048 (32 qwords)*/ + uint64_t m; /**< modulus ≥ 2^2047 and < 2^2048 (32 qwords)*/ +} icp_qat_fw_mmp_dh_g2_2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Diffie-Hellman Modular exponentiation for + * 2048-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DH_2048. + */ +typedef struct icp_qat_fw_mmp_dh_2048_input_s { + uint64_t g; /**< base ≥ 0 and < 2^2048 (32 qwords)*/ + uint64_t e; /**< exponent > 0 and < 2^2048 (32 qwords)*/ + uint64_t m; /**< modulus ≥ 2^2047 and < 2^2048 (32 qwords)*/ +} icp_qat_fw_mmp_dh_2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Diffie-Hellman Modular exponentiation base 2 for + * 3072-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DH_G2_3072. 
+ */ +typedef struct icp_qat_fw_mmp_dh_g2_3072_input_s { + uint64_t e; /**< exponent > 0 and < 2^3072 (48 qwords)*/ + uint64_t m; /**< modulus ≥ 2^3071 and < 2^3072 (48 qwords)*/ +} icp_qat_fw_mmp_dh_g2_3072_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Diffie-Hellman Modular exponentiation for + * 3072-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DH_3072. + */ +typedef struct icp_qat_fw_mmp_dh_3072_input_s { + uint64_t g; /**< base ≥ 0 and < 2^3072 (48 qwords)*/ + uint64_t e; /**< exponent > 0 and < 2^3072 (48 qwords)*/ + uint64_t m; /**< modulus ≥ 2^3071 and < 2^3072 (48 qwords)*/ +} icp_qat_fw_mmp_dh_3072_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Diffie-Hellman Modular exponentiation base 2 for + * 4096-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DH_G2_4096. + */ +typedef struct icp_qat_fw_mmp_dh_g2_4096_input_s { + uint64_t e; /**< exponent > 0 and < 2^4096 (64 qwords)*/ + uint64_t m; /**< modulus ≥ 2^4095 and < 2^4096 (64 qwords)*/ +} icp_qat_fw_mmp_dh_g2_4096_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Diffie-Hellman Modular exponentiation for + * 4096-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DH_4096. + */ +typedef struct icp_qat_fw_mmp_dh_4096_input_s { + uint64_t g; /**< base ≥ 0 and < 2^4096 (64 qwords)*/ + uint64_t e; /**< exponent > 0 and < 2^4096 (64 qwords)*/ + uint64_t m; /**< modulus ≥ 2^4095 and < 2^4096 (64 qwords)*/ +} icp_qat_fw_mmp_dh_4096_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 512 key generation first form , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_KP1_512. 
+ */ +typedef struct icp_qat_fw_mmp_rsa_kp1_512_input_s { + uint64_t + p; /**< RSA parameter, prime, 2 < p < 2^256 (4 qwords)*/ + uint64_t + q; /**< RSA parameter, prime, 2 < q < 2^256 (4 qwords)*/ + uint64_t e; /**< RSA public key, must be odd, ≥ 3 and ≤ (p*q)-1, + with GCD(e, p-1, q-1) = 1 (8 qwords)*/ +} icp_qat_fw_mmp_rsa_kp1_512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 512 key generation second form , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_KP2_512. + */ +typedef struct icp_qat_fw_mmp_rsa_kp2_512_input_s { + uint64_t + p; /**< RSA parameter, prime, 2^255 < p < 2^256 (4 + qwords)*/ + uint64_t + q; /**< RSA parameter, prime, 2^255 < q < 2^256 (4 + qwords)*/ + uint64_t e; /**< RSA public key, must be odd, ≥ 3 and ≤ (p*q)-1, + with GCD(e, p-1, q-1) = 1 (8 qwords)*/ +} icp_qat_fw_mmp_rsa_kp2_512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 512 Encryption , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_EP_512. + */ +typedef struct icp_qat_fw_mmp_rsa_ep_512_input_s { + uint64_t m; /**< message representative, < n (8 qwords)*/ + uint64_t e; /**< RSA public key, ≥ 3 and ≤ n-1 (8 qwords)*/ + uint64_t n; /**< RSA key, > 0 and < 2^512 (8 qwords)*/ +} icp_qat_fw_mmp_rsa_ep_512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 512 Decryption , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_DP1_512.
+ */ +typedef struct icp_qat_fw_mmp_rsa_dp1_512_input_s { + uint64_t c; /**< cipher text representative, < n (8 qwords)*/ + uint64_t d; /**< RSA private key (RSADP first form) (8 qwords)*/ + uint64_t n; /**< RSA key > 0 and < 2^512 (8 qwords)*/ +} icp_qat_fw_mmp_rsa_dp1_512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 512 Decryption with CRT , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_DP2_512. + */ +typedef struct icp_qat_fw_mmp_rsa_dp2_512_input_s { + uint64_t c; /**< cipher text representative, < (p*q) (8 qwords)*/ + uint64_t + p; /**< RSA parameter, prime, 2^255 < p < 2^256 (4 + qwords)*/ + uint64_t + q; /**< RSA parameter, prime, 2^255 < q < 2^256 (4 + qwords)*/ + uint64_t dp; /**< RSA private key, 0 < dp < p-1 (4 qwords)*/ + uint64_t dq; /**< RSA private key 0 < dq < q-1 (4 qwords)*/ + uint64_t qinv; /**< RSA private key 0 < qInv < p (4 qwords)*/ +} icp_qat_fw_mmp_rsa_dp2_512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 1024 key generation first form , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_KP1_1024. + */ +typedef struct icp_qat_fw_mmp_rsa_kp1_1024_input_s { + uint64_t + p; /**< RSA parameter, prime, 2 < p < 2^512 (8 qwords)*/ + uint64_t + q; /**< RSA parameter, prime, 2 < q < 2^512 (8 qwords)*/ + uint64_t e; /**< RSA public key, must be odd, ≥ 3 and ≤ (p*q)-1, + with GCD(e, p-1, q-1) = 1 (16 qwords)*/ +} icp_qat_fw_mmp_rsa_kp1_1024_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 1024 key generation second form , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_KP2_1024.
+ */ +typedef struct icp_qat_fw_mmp_rsa_kp2_1024_input_s { + uint64_t + p; /**< RSA parameter, prime, 2^511 < p < 2^512 (8 + qwords)*/ + uint64_t + q; /**< RSA parameter, prime, 2^511 < q < 2^512 (8 + qwords)*/ + uint64_t e; /**< RSA public key, must be odd, ≥ 3 and ≤ (p*q)-1, + with GCD(e, p-1, q-1) = 1 (16 qwords)*/ +} icp_qat_fw_mmp_rsa_kp2_1024_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 1024 Encryption , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_EP_1024. + */ +typedef struct icp_qat_fw_mmp_rsa_ep_1024_input_s { + uint64_t m; /**< message representative, < n (16 qwords)*/ + uint64_t e; /**< RSA public key, ≥ 3 and ≤ n-1 (16 qwords)*/ + uint64_t n; /**< RSA key, > 0 and < 2^1024 (16 qwords)*/ +} icp_qat_fw_mmp_rsa_ep_1024_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 1024 Decryption , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_DP1_1024. + */ +typedef struct icp_qat_fw_mmp_rsa_dp1_1024_input_s { + uint64_t c; /**< cipher text representative, < n (16 qwords)*/ + uint64_t d; /**< RSA private key (RSADP first form) (16 qwords)*/ + uint64_t n; /**< RSA key > 0 and < 2^1024 (16 qwords)*/ +} icp_qat_fw_mmp_rsa_dp1_1024_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 1024 Decryption with CRT , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_DP2_1024.
+ */ +typedef struct icp_qat_fw_mmp_rsa_dp2_1024_input_s { + uint64_t c; /**< cipher text representative, < (p*q) (16 qwords)*/ + uint64_t + p; /**< RSA parameter, prime, 2^511 < p < 2^512 (8 + qwords)*/ + uint64_t + q; /**< RSA parameter, prime, 2^511 < q < 2^512 (8 + qwords)*/ + uint64_t dp; /**< RSA private key, 0 < dp < p-1 (8 qwords)*/ + uint64_t dq; /**< RSA private key 0 < dq < q-1 (8 qwords)*/ + uint64_t qinv; /**< RSA private key 0 < qInv < p (8 qwords)*/ +} icp_qat_fw_mmp_rsa_dp2_1024_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 1536 key generation first form , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_KP1_1536. + */ +typedef struct icp_qat_fw_mmp_rsa_kp1_1536_input_s { + uint64_t p; /**< RSA parameter, prime, 2 < p < 2^768 (12 + qwords)*/ + uint64_t q; /**< RSA parameter, prime, 2 < q < 2^768 (12 + qwords)*/ + uint64_t e; /**< RSA public key, must be odd, ≥ 3 and ≤ (p*q)-1, + with GCD(e, p-1, q-1) = 1 (24 qwords)*/ +} icp_qat_fw_mmp_rsa_kp1_1536_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 1536 key generation second form , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_KP2_1536.
+ */ +typedef struct icp_qat_fw_mmp_rsa_kp2_1536_input_s { + uint64_t + p; /**< RSA parameter, prime, 2^767 < p < 2^768 (12 + qwords)*/ + uint64_t + q; /**< RSA parameter, prime, 2^767 < q < 2^768 (12 + qwords)*/ + uint64_t e; /**< RSA public key, must be odd, ≥ 3 and ≤ (p*q)-1, + with GCD(e, p-1, q-1) = 1 (24 qwords)*/ +} icp_qat_fw_mmp_rsa_kp2_1536_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 1536 Encryption , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_EP_1536. + */ +typedef struct icp_qat_fw_mmp_rsa_ep_1536_input_s { + uint64_t m; /**< message representative, < n (24 qwords)*/ + uint64_t e; /**< RSA public key, ≥ 3 and ≤ (p*q)-1 (24 qwords)*/ + uint64_t n; /**< RSA key > 0 and < 2^1536 (24 qwords)*/ +} icp_qat_fw_mmp_rsa_ep_1536_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 1536 Decryption , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_DP1_1536. + */ +typedef struct icp_qat_fw_mmp_rsa_dp1_1536_input_s { + uint64_t c; /**< cipher text representative, < n (24 qwords)*/ + uint64_t d; /**< RSA private key (24 qwords)*/ + uint64_t n; /**< RSA key, > 0 and < 2^1536 (24 qwords)*/ +} icp_qat_fw_mmp_rsa_dp1_1536_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 1536 Decryption with CRT , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_DP2_1536. + */ +typedef struct icp_qat_fw_mmp_rsa_dp2_1536_input_s { + uint64_t c; /**< cipher text representative, < (p*q) (24 qwords)*/ + uint64_t + p; /**< RSA parameter, prime, 2^767 < p < 2^768 (12 + qwords)*/ + uint64_t + q; /**< RSA parameter, prime, 2^767 < q < 2^768 (12 + qwords)*/ + uint64_t dp; /**< RSA private key, 0 < dp < p-1 (12 qwords)*/ + uint64_t dq; /**< RSA private key, 0 < dq < q-1 (12 qwords)*/ + uint64_t qinv; /**< RSA private key, 0 < qInv < p (12 qwords)*/ +} icp_qat_fw_mmp_rsa_dp2_1536_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 2048 key generation first form , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_KP1_2048.
+ */ +typedef struct icp_qat_fw_mmp_rsa_kp1_2048_input_s { + uint64_t p; /**< RSA parameter, prime, 2 < p < 2^1024 (16 + qwords)*/ + uint64_t q; /**< RSA parameter, prime, 2 < q < 2^1024 (16 + qwords)*/ + uint64_t e; /**< RSA public key, must be odd, ≥ 3 and ≤ (p*q)-1, + with GCD(e, p-1, q-1) = 1 (32 qwords)*/ +} icp_qat_fw_mmp_rsa_kp1_2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 2048 key generation second form , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_KP2_2048. + */ +typedef struct icp_qat_fw_mmp_rsa_kp2_2048_input_s { + uint64_t p; /**< RSA parameter, prime, 2^1023 < p < 2^1024 + (16 qwords)*/ + uint64_t q; /**< RSA parameter, prime, 2^1023 < q < 2^1024 + (16 qwords)*/ + uint64_t e; /**< RSA public key, must be odd, ≥ 3 and ≤ (p*q)-1, + with GCD(e, p-1, q-1) = 1 (32 qwords)*/ +} icp_qat_fw_mmp_rsa_kp2_2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 2048 Encryption , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_EP_2048. + */ +typedef struct icp_qat_fw_mmp_rsa_ep_2048_input_s { + uint64_t m; /**< message representative, < n (32 qwords)*/ + uint64_t e; /**< RSA public key, ≥ 3 and ≤ n-1 (32 qwords)*/ + uint64_t n; /**< RSA key > 0 and < 2^2048 (32 qwords)*/ +} icp_qat_fw_mmp_rsa_ep_2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 2048 Decryption , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_DP1_2048.
+ */ +typedef struct icp_qat_fw_mmp_rsa_dp1_2048_input_s { + uint64_t c; /**< cipher text representative, < n (32 qwords)*/ + uint64_t d; /**< RSA private key (32 qwords)*/ + uint64_t n; /**< RSA key > 0 and < 2^2048 (32 qwords)*/ +} icp_qat_fw_mmp_rsa_dp1_2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 2048 Decryption with CRT , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_DP2_2048. + */ +typedef struct icp_qat_fw_mmp_rsa_dp2_2048_input_s { + uint64_t c; /**< cipher text representative, < (p*q) (32 qwords)*/ + uint64_t p; /**< RSA parameter, prime, 2^1023 < p < 2^1024 + (16 qwords)*/ + uint64_t q; /**< RSA parameter, prime, 2^1023 < q < 2^1024 + (16 qwords)*/ + uint64_t dp; /**< RSA private key, 0 < dp < p-1 (16 qwords)*/ + uint64_t dq; /**< RSA private key, 0 < dq < q-1 (16 qwords)*/ + uint64_t qinv; /**< RSA private key, 0 < qInv < p (16 qwords)*/ +} icp_qat_fw_mmp_rsa_dp2_2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 3072 key generation first form , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_KP1_3072. + */ +typedef struct icp_qat_fw_mmp_rsa_kp1_3072_input_s { + uint64_t p; /**< RSA parameter, prime, 2 < p < 2^1536 (24 + qwords)*/ + uint64_t q; /**< RSA parameter, prime, 2 < q < 2^1536 (24 + qwords)*/ + uint64_t e; /**< RSA public key, must be odd, ≥ 3 and ≤ (p*q)-1, + with GCD(e, p-1, q-1) = 1 (48 qwords)*/ +} icp_qat_fw_mmp_rsa_kp1_3072_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 3072 key generation second form , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_KP2_3072.
+ */ +typedef struct icp_qat_fw_mmp_rsa_kp2_3072_input_s { + uint64_t p; /**< RSA parameter, prime, 2^1535 < p < 2^1536 + (24 qwords)*/ + uint64_t q; /**< RSA parameter, prime, 2^1535 < q < 2^1536 + (24 qwords)*/ + uint64_t e; /**< RSA public key, must be odd, ≥ 3 and ≤ (p*q)-1, + with GCD(e, p-1, q-1) = 1 (48 qwords)*/ +} icp_qat_fw_mmp_rsa_kp2_3072_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 3072 Encryption , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_EP_3072. + */ +typedef struct icp_qat_fw_mmp_rsa_ep_3072_input_s { + uint64_t m; /**< message representative, < n (48 qwords)*/ + uint64_t e; /**< RSA public key, ≥ 3 and ≤ n-1 (48 qwords)*/ + uint64_t n; /**< RSA key > 0 and < 2^3072 (48 qwords)*/ +} icp_qat_fw_mmp_rsa_ep_3072_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 3072 Decryption , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_DP1_3072. + */ +typedef struct icp_qat_fw_mmp_rsa_dp1_3072_input_s { + uint64_t c; /**< cipher text representative, < n (48 qwords)*/ + uint64_t d; /**< RSA private key (48 qwords)*/ + uint64_t n; /**< RSA key > 0 and < 2^3072 (48 qwords)*/ +} icp_qat_fw_mmp_rsa_dp1_3072_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 3072 Decryption with CRT , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_DP2_3072.
+ */ +typedef struct icp_qat_fw_mmp_rsa_dp2_3072_input_s { + uint64_t c; /**< cipher text representative, < (p*q) (48 qwords)*/ + uint64_t p; /**< RSA parameter, prime,  2^1535 < p < 2^1536 + (24 qwords)*/ + uint64_t q; /**< RSA parameter, prime,  2^1535 < q < 2^1536 + (24 qwords)*/ + uint64_t dp; /**< RSA private key, 0 < dp < p-1 (24 qwords)*/ + uint64_t dq; /**< RSA private key, 0 < dq < q-1 (24 qwords)*/ + uint64_t qinv; /**< RSA private key, 0 < qInv < p (24 qwords)*/ +} icp_qat_fw_mmp_rsa_dp2_3072_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 4096 key generation first form , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_KP1_4096. + */ +typedef struct icp_qat_fw_mmp_rsa_kp1_4096_input_s { + uint64_t p; /**< RSA parameter, prime,  2 < p < 2^2048 (32 + qwords)*/ + uint64_t q; /**< RSA parameter, prime,  2 < q < 2^2048 (32 + qwords)*/ + uint64_t e; /**< RSA public key, must be odd, ≥ 3 and ≤ (p*q)-1, +  with GCD(e, p-1, q-1) = 1 (64 qwords)*/ +} icp_qat_fw_mmp_rsa_kp1_4096_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 4096 key generation second form , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_KP2_4096. + */ +typedef struct icp_qat_fw_mmp_rsa_kp2_4096_input_s { + uint64_t p; /**< RSA parameter, prime,  2^2047 < p < 2^2048 + (32 qwords)*/ + uint64_t q; /**< RSA parameter, prime,  2^2047 < q < 2^2048 + (32 qwords)*/ + uint64_t e; /**< RSA public key, must be odd, ≥ 3 and ≤ (p*q)-1, +  with GCD(e, p-1, q-1) = 1 (64 qwords)*/ +} icp_qat_fw_mmp_rsa_kp2_4096_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 4096 Encryption , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_EP_4096. 
+ */ +typedef struct icp_qat_fw_mmp_rsa_ep_4096_input_s { + uint64_t m; /**< message representative, < n (64 qwords)*/ + uint64_t e; /**< RSA public key, ≥ 3 and ≤ n-1 (64 qwords)*/ + uint64_t n; /**< RSA key, > 0 and < 2^4096 (64 qwords)*/ +} icp_qat_fw_mmp_rsa_ep_4096_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 4096 Decryption , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_DP1_4096. + */ +typedef struct icp_qat_fw_mmp_rsa_dp1_4096_input_s { + uint64_t c; /**< cipher text representative, < n (64 qwords)*/ + uint64_t d; /**< RSA private key (64 qwords)*/ + uint64_t n; /**< RSA key, > 0 and < 2^4096 (64 qwords)*/ +} icp_qat_fw_mmp_rsa_dp1_4096_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for RSA 4096 Decryption with CRT , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_RSA_DP2_4096. + */ +typedef struct icp_qat_fw_mmp_rsa_dp2_4096_input_s { + uint64_t c; /**< cipher text representative, < (p*q) (64 qwords)*/ + uint64_t p; /**< RSA parameter, prime,  2^2047 < p < 2^2048 + (32 qwords)*/ + uint64_t q; /**< RSA parameter, prime,  2^2047 < q < 2^2048 + (32 qwords)*/ + uint64_t dp; /**< RSA private key, 0 < dp < p-1 (32 qwords)*/ + uint64_t dq; /**< RSA private key, 0 < dq < q-1 (32 qwords)*/ + uint64_t qinv; /**< RSA private key, 0 < qInv < p (32 qwords)*/ +} icp_qat_fw_mmp_rsa_dp2_4096_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for GCD primality test for 192-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_GCD_PT_192. 
+ */ +typedef struct icp_qat_fw_mmp_gcd_pt_192_input_s { + uint64_t m; /**< prime candidate > 1 and < 2^192 (3 qwords)*/ +} icp_qat_fw_mmp_gcd_pt_192_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for GCD primality test for 256-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_GCD_PT_256. + */ +typedef struct icp_qat_fw_mmp_gcd_pt_256_input_s { + uint64_t m; /**< prime candidate > 1 and < 2^256 (4 qwords)*/ +} icp_qat_fw_mmp_gcd_pt_256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for GCD primality test for 384-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_GCD_PT_384. + */ +typedef struct icp_qat_fw_mmp_gcd_pt_384_input_s { + uint64_t m; /**< prime candidate > 1 and < 2^384 (6 qwords)*/ +} icp_qat_fw_mmp_gcd_pt_384_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for GCD primality test for 512-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_GCD_PT_512. + */ +typedef struct icp_qat_fw_mmp_gcd_pt_512_input_s { + uint64_t m; /**< prime candidate > 1 and < 2^512 (8 qwords)*/ +} icp_qat_fw_mmp_gcd_pt_512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for GCD primality test for 768-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_GCD_PT_768. + */ +typedef struct icp_qat_fw_mmp_gcd_pt_768_input_s { + uint64_t m; /**< prime candidate > 1 and < 2^768 (12 qwords)*/ +} icp_qat_fw_mmp_gcd_pt_768_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for GCD primality test for 1024-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_GCD_PT_1024. 
+ */
+typedef struct icp_qat_fw_mmp_gcd_pt_1024_input_s {
+ uint64_t m; /**< prime candidate > 1 and < 2^1024 (16 qwords)*/
+} icp_qat_fw_mmp_gcd_pt_1024_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for GCD primality test for 1536-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_GCD_PT_1536.
+ */
+typedef struct icp_qat_fw_mmp_gcd_pt_1536_input_s {
+ uint64_t m; /**< prime candidate > 1 and < 2^1536 (24 qwords)*/
+} icp_qat_fw_mmp_gcd_pt_1536_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for GCD primality test for 2048-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_GCD_PT_2048.
+ */
+typedef struct icp_qat_fw_mmp_gcd_pt_2048_input_s {
+ uint64_t m; /**< prime candidate > 1 and < 2^2048 (32 qwords)*/
+} icp_qat_fw_mmp_gcd_pt_2048_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for GCD primality test for 3072-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_GCD_PT_3072.
+ */
+typedef struct icp_qat_fw_mmp_gcd_pt_3072_input_s {
+ uint64_t m; /**< prime candidate > 1 and < 2^3072 (48 qwords)*/
+} icp_qat_fw_mmp_gcd_pt_3072_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for GCD primality test for 4096-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_GCD_PT_4096.
+ */
+typedef struct icp_qat_fw_mmp_gcd_pt_4096_input_s {
+ uint64_t m; /**< prime candidate > 1 and < 2^4096 (64 qwords)*/
+} icp_qat_fw_mmp_gcd_pt_4096_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Fermat primality test for 160-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_FERMAT_PT_160.
+ */
+typedef struct icp_qat_fw_mmp_fermat_pt_160_input_s {
+ uint64_t m; /**< prime candidate, 2^159 < m < 2^160 (3 qwords)*/
+} icp_qat_fw_mmp_fermat_pt_160_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Fermat primality test for 512-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_FERMAT_PT_512.
+ */
+typedef struct icp_qat_fw_mmp_fermat_pt_512_input_s {
+ uint64_t m; /**< prime candidate, 2^511 < m < 2^512 (8 qwords)*/
+} icp_qat_fw_mmp_fermat_pt_512_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Fermat primality test for numbers up to 512 bits ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_FERMAT_PT_L512.
+ */
+typedef struct icp_qat_fw_mmp_fermat_pt_l512_input_s {
+ uint64_t m; /**< prime candidate, 5 < m < 2^512 (8 qwords)*/
+} icp_qat_fw_mmp_fermat_pt_l512_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Fermat primality test for 768-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_FERMAT_PT_768.
+ */
+typedef struct icp_qat_fw_mmp_fermat_pt_768_input_s {
+ uint64_t m; /**< prime candidate, 2^767 < m < 2^768 (12 qwords)*/
+} icp_qat_fw_mmp_fermat_pt_768_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Fermat primality test for 1024-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_FERMAT_PT_1024.
+ */
+typedef struct icp_qat_fw_mmp_fermat_pt_1024_input_s {
+ uint64_t
+ m; /**< prime candidate, 2^1023 < m < 2^1024 (16 qwords)*/
+} icp_qat_fw_mmp_fermat_pt_1024_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Fermat primality test for 1536-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_FERMAT_PT_1536.
+ */ +typedef struct icp_qat_fw_mmp_fermat_pt_1536_input_s { + uint64_t + m; /**< prime candidate, 2^1535 < m < 2^1536 (24 qwords)*/ +} icp_qat_fw_mmp_fermat_pt_1536_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Fermat primality test for 2048-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_FERMAT_PT_2048. + */ +typedef struct icp_qat_fw_mmp_fermat_pt_2048_input_s { + uint64_t + m; /**< prime candidate, 2^2047 < m < 2^2048 (32 qwords)*/ +} icp_qat_fw_mmp_fermat_pt_2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Fermat primality test for 3072-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_FERMAT_PT_3072. + */ +typedef struct icp_qat_fw_mmp_fermat_pt_3072_input_s { + uint64_t + m; /**< prime candidate, 2^3071 < m < 2^3072 (48 qwords)*/ +} icp_qat_fw_mmp_fermat_pt_3072_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Fermat primality test for 4096-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_FERMAT_PT_4096. + */ +typedef struct icp_qat_fw_mmp_fermat_pt_4096_input_s { + uint64_t + m; /**< prime candidate, 2^4095 < m < 2^4096 (64 qwords)*/ +} icp_qat_fw_mmp_fermat_pt_4096_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Miller-Rabin primality test for 160-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_MR_PT_160. + */ +typedef struct icp_qat_fw_mmp_mr_pt_160_input_s { + uint64_t x; /**< randomness > 1 and < m-1 (3 qwords)*/ + uint64_t m; /**< prime candidate > 2^159 and < 2^160 (3 qwords)*/ +} icp_qat_fw_mmp_mr_pt_160_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Miller-Rabin primality test for 512-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_MR_PT_512. 
+ */
+typedef struct icp_qat_fw_mmp_mr_pt_512_input_s {
+ uint64_t x; /**< randomness > 1 and < m-1 (8 qwords)*/
+ uint64_t m; /**< prime candidate > 2^511 and < 2^512 (8 qwords)*/
+} icp_qat_fw_mmp_mr_pt_512_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Miller-Rabin primality test for 768-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_MR_PT_768.
+ */
+typedef struct icp_qat_fw_mmp_mr_pt_768_input_s {
+ uint64_t x; /**< randomness > 1 and < m-1 (12 qwords)*/
+ uint64_t m; /**< prime candidate > 2^767 and < 2^768 (12 qwords)*/
+} icp_qat_fw_mmp_mr_pt_768_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Miller-Rabin primality test for 1024-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_MR_PT_1024.
+ */
+typedef struct icp_qat_fw_mmp_mr_pt_1024_input_s {
+ uint64_t x; /**< randomness > 1 and < m-1 (16 qwords)*/
+ uint64_t
+ m; /**< prime candidate > 2^1023 and < 2^1024 (16 qwords)*/
+} icp_qat_fw_mmp_mr_pt_1024_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Miller-Rabin primality test for 1536-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_MR_PT_1536.
+ */
+typedef struct icp_qat_fw_mmp_mr_pt_1536_input_s {
+ uint64_t x; /**< randomness > 1 and < m-1 (24 qwords)*/
+ uint64_t
+ m; /**< prime candidate > 2^1535 and < 2^1536 (24 qwords)*/
+} icp_qat_fw_mmp_mr_pt_1536_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Miller-Rabin primality test for 2048-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_MR_PT_2048.
+ */
+typedef struct icp_qat_fw_mmp_mr_pt_2048_input_s {
+ uint64_t x; /**< randomness > 1 and < m-1 (32 qwords)*/
+ uint64_t
+ m; /**< prime candidate > 2^2047 and < 2^2048 (32 qwords)*/
+} icp_qat_fw_mmp_mr_pt_2048_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Miller-Rabin primality test for 3072-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_MR_PT_3072.
+ */
+typedef struct icp_qat_fw_mmp_mr_pt_3072_input_s {
+ uint64_t x; /**< randomness > 1 and < m-1 (48 qwords)*/
+ uint64_t
+ m; /**< prime candidate > 2^3071 and < 2^3072 (48 qwords)*/
+} icp_qat_fw_mmp_mr_pt_3072_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Miller-Rabin primality test for 4096-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_MR_PT_4096.
+ */
+typedef struct icp_qat_fw_mmp_mr_pt_4096_input_s {
+ uint64_t x; /**< randomness > 1 and < m-1 (64 qwords)*/
+ uint64_t
+ m; /**< prime candidate > 2^4095 and < 2^4096 (64 qwords)*/
+} icp_qat_fw_mmp_mr_pt_4096_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Miller-Rabin primality test for numbers up to 512 bits ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_MR_PT_L512.
+ */
+typedef struct icp_qat_fw_mmp_mr_pt_l512_input_s {
+ uint64_t x; /**< randomness > 1 and < m-1 (8 qwords)*/
+ uint64_t m; /**< prime candidate > 1 and < 2^512 (8 qwords)*/
+} icp_qat_fw_mmp_mr_pt_l512_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Lucas primality test for 160-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_LUCAS_PT_160.
+ */ +typedef struct icp_qat_fw_mmp_lucas_pt_160_input_s { + uint64_t + m; /**< odd prime candidate > 2^159 and < 2^160 (3 qwords)*/ +} icp_qat_fw_mmp_lucas_pt_160_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Lucas primality test for 512-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_LUCAS_PT_512. + */ +typedef struct icp_qat_fw_mmp_lucas_pt_512_input_s { + uint64_t + m; /**< odd prime candidate > 2^511 and < 2^512 (8 qwords)*/ +} icp_qat_fw_mmp_lucas_pt_512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Lucas primality test for 768-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_LUCAS_PT_768. + */ +typedef struct icp_qat_fw_mmp_lucas_pt_768_input_s { + uint64_t + m; /**< odd prime candidate > 2^767 and < 2^768 (12 qwords)*/ +} icp_qat_fw_mmp_lucas_pt_768_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Lucas primality test for 1024-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_LUCAS_PT_1024. + */ +typedef struct icp_qat_fw_mmp_lucas_pt_1024_input_s { + uint64_t m; /**< odd prime candidate > 2^1023 and < 2^1024 (16 + qwords)*/ +} icp_qat_fw_mmp_lucas_pt_1024_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Lucas primality test for 1536-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_LUCAS_PT_1536. + */ +typedef struct icp_qat_fw_mmp_lucas_pt_1536_input_s { + uint64_t m; /**< odd prime candidate > 2^1535 and < 2^1536 (24 + qwords)*/ +} icp_qat_fw_mmp_lucas_pt_1536_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Lucas primality test for 2048-bit numbers , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_LUCAS_PT_2048. 
+ */
+typedef struct icp_qat_fw_mmp_lucas_pt_2048_input_s {
+ uint64_t m; /**< odd prime candidate > 2^2047 and < 2^2048 (32
+ qwords)*/
+} icp_qat_fw_mmp_lucas_pt_2048_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Lucas primality test for 3072-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_LUCAS_PT_3072.
+ */
+typedef struct icp_qat_fw_mmp_lucas_pt_3072_input_s {
+ uint64_t m; /**< odd prime candidate > 2^3071 and < 2^3072 (48
+ qwords)*/
+} icp_qat_fw_mmp_lucas_pt_3072_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Lucas primality test for 4096-bit numbers ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_LUCAS_PT_4096.
+ */
+typedef struct icp_qat_fw_mmp_lucas_pt_4096_input_s {
+ uint64_t m; /**< odd prime candidate > 2^4095 and < 2^4096 (64
+ qwords)*/
+} icp_qat_fw_mmp_lucas_pt_4096_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Lucas primality test for numbers up to 512 bits ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #PKE_LUCAS_PT_L512.
+ */
+typedef struct icp_qat_fw_mmp_lucas_pt_l512_input_s {
+ uint64_t m; /**< odd prime candidate > 5 and < 2^512 (8 qwords)*/
+} icp_qat_fw_mmp_lucas_pt_l512_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Modular exponentiation for numbers less than
+ * 512-bits ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #MATHS_MODEXP_L512.
+ */ +typedef struct icp_qat_fw_maths_modexp_l512_input_s { + uint64_t g; /**< base ≥ 0 and < 2^512 (8 qwords)*/ + uint64_t e; /**< exponent ≥ 0 and < 2^512 (8 qwords)*/ + uint64_t m; /**< modulus > 0 and < 2^512 (8 qwords)*/ +} icp_qat_fw_maths_modexp_l512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular exponentiation for numbers less than + * 1024-bit , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODEXP_L1024. + */ +typedef struct icp_qat_fw_maths_modexp_l1024_input_s { + uint64_t g; /**< base ≥ 0 and < 2^1024 (16 qwords)*/ + uint64_t e; /**< exponent ≥ 0 and < 2^1024 (16 qwords)*/ + uint64_t m; /**< modulus > 0 and < 2^1024 (16 qwords)*/ +} icp_qat_fw_maths_modexp_l1024_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular exponentiation for numbers less than + * 1536-bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODEXP_L1536. + */ +typedef struct icp_qat_fw_maths_modexp_l1536_input_s { + uint64_t g; /**< base ≥ 0 and < 2^1536 (24 qwords)*/ + uint64_t e; /**< exponent ≥ 0 and < 2^1536 (24 qwords)*/ + uint64_t m; /**< modulus > 0 and < 2^1536 (24 qwords)*/ +} icp_qat_fw_maths_modexp_l1536_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular exponentiation for numbers less than + * 2048-bit , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODEXP_L2048. + */ +typedef struct icp_qat_fw_maths_modexp_l2048_input_s { + uint64_t g; /**< base ≥ 0 and < 2^2048 (32 qwords)*/ + uint64_t e; /**< exponent ≥ 0 and < 2^2048 (32 qwords)*/ + uint64_t m; /**< modulus > 0 and < 2^2048 (32 qwords)*/ +} icp_qat_fw_maths_modexp_l2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular exponentiation for numbers less than + * 2560-bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODEXP_L2560. 
+ */ +typedef struct icp_qat_fw_maths_modexp_l2560_input_s { + uint64_t g; /**< base ≥ 0 and < 2^2560 (40 qwords)*/ + uint64_t e; /**< exponent ≥ 0 and < 2^2560 (40 qwords)*/ + uint64_t m; /**< modulus > 0 and < 2^2560 (40 qwords)*/ +} icp_qat_fw_maths_modexp_l2560_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular exponentiation for numbers less than + * 3072-bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODEXP_L3072. + */ +typedef struct icp_qat_fw_maths_modexp_l3072_input_s { + uint64_t g; /**< base ≥ 0 and < 2^3072 (48 qwords)*/ + uint64_t e; /**< exponent ≥ 0 and < 2^3072 (48 qwords)*/ + uint64_t m; /**< modulus > 0 and < 2^3072 (48 qwords)*/ +} icp_qat_fw_maths_modexp_l3072_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular exponentiation for numbers less than + * 3584-bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODEXP_L3584. + */ +typedef struct icp_qat_fw_maths_modexp_l3584_input_s { + uint64_t g; /**< base ≥ 0 and < 2^3584 (56 qwords)*/ + uint64_t e; /**< exponent ≥ 0 and < 2^3584 (56 qwords)*/ + uint64_t m; /**< modulus > 0 and < 2^3584 (56 qwords)*/ +} icp_qat_fw_maths_modexp_l3584_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular exponentiation for numbers less than + * 4096-bit , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODEXP_L4096. 
+ */ +typedef struct icp_qat_fw_maths_modexp_l4096_input_s { + uint64_t g; /**< base ≥ 0 and < 2^4096 (64 qwords)*/ + uint64_t e; /**< exponent ≥ 0 and < 2^4096 (64 qwords)*/ + uint64_t m; /**< modulus > 0 and < 2^4096 (64 qwords)*/ +} icp_qat_fw_maths_modexp_l4096_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 128 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_ODD_L128. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l128_input_s { + uint64_t a; /**< number > 0 and < 2^128 (2 qwords)*/ + uint64_t + b; /**< odd modulus > 0 and < 2^128, coprime to a (2 qwords)*/ +} icp_qat_fw_maths_modinv_odd_l128_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 192 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_ODD_L192. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l192_input_s { + uint64_t a; /**< number > 0 and < 2^192 (3 qwords)*/ + uint64_t + b; /**< odd modulus > 0 and < 2^192, coprime to a (3 qwords)*/ +} icp_qat_fw_maths_modinv_odd_l192_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 256 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_ODD_L256. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l256_input_s { + uint64_t a; /**< number > 0 and < 2^256 (4 qwords)*/ + uint64_t + b; /**< odd modulus > 0 and < 2^256, coprime to a (4 qwords)*/ +} icp_qat_fw_maths_modinv_odd_l256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 384 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_ODD_L384. 
+ */
+typedef struct icp_qat_fw_maths_modinv_odd_l384_input_s {
+ uint64_t a; /**< number > 0 and < 2^384 (6 qwords)*/
+ uint64_t
+ b; /**< odd modulus > 0 and < 2^384, coprime to a (6 qwords)*/
+} icp_qat_fw_maths_modinv_odd_l384_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Modular multiplicative inverse for numbers less
+ * than 512 bits ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #MATHS_MODINV_ODD_L512.
+ */
+typedef struct icp_qat_fw_maths_modinv_odd_l512_input_s {
+ uint64_t a; /**< number > 0 and < 2^512 (8 qwords)*/
+ uint64_t
+ b; /**< odd modulus > 0 and < 2^512, coprime to a (8 qwords)*/
+} icp_qat_fw_maths_modinv_odd_l512_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Modular multiplicative inverse for numbers less
+ * than 768 bits ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #MATHS_MODINV_ODD_L768.
+ */
+typedef struct icp_qat_fw_maths_modinv_odd_l768_input_s {
+ uint64_t a; /**< number > 0 and < 2^768 (12 qwords)*/
+ uint64_t b; /**< odd modulus > 0 and < 2^768, coprime to a (12
+ qwords)*/
+} icp_qat_fw_maths_modinv_odd_l768_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Modular multiplicative inverse for numbers less
+ * than 1024 bits ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #MATHS_MODINV_ODD_L1024.
+ */
+typedef struct icp_qat_fw_maths_modinv_odd_l1024_input_s {
+ uint64_t a; /**< number > 0 and < 2^1024 (16 qwords)*/
+ uint64_t b; /**< odd modulus > 0 and < 2^1024, coprime to a (16
+ qwords)*/
+} icp_qat_fw_maths_modinv_odd_l1024_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ * Input parameter list for Modular multiplicative inverse for numbers less
+ * than 1536 bits ,
+ * to be used when icp_qat_fw_pke_request_s::functionalityId is
+ * #MATHS_MODINV_ODD_L1536.
+ */ +typedef struct icp_qat_fw_maths_modinv_odd_l1536_input_s { + uint64_t a; /**< number > 0 and < 2^1536 (24 qwords)*/ + uint64_t b; /**< odd modulus > 0 and < 2^1536, coprime to a (24 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l1536_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 2048 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_ODD_L2048. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l2048_input_s { + uint64_t a; /**< number > 0 and < 2^2048 (32 qwords)*/ + uint64_t b; /**< odd modulus > 0 and < 2^2048, coprime to a (32 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 3072 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_ODD_L3072. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l3072_input_s { + uint64_t a; /**< number > 0 and < 2^3072 (48 qwords)*/ + uint64_t b; /**< odd modulus > 0 and < 2^3072, coprime to a (48 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l3072_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 4096 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_ODD_L4096. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l4096_input_s { + uint64_t a; /**< number > 0 and < 2^4096 (64 qwords)*/ + uint64_t b; /**< odd modulus > 0 and < 2^4096, coprime to a (64 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l4096_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 128 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_EVEN_L128. 
+ */ +typedef struct icp_qat_fw_maths_modinv_even_l128_input_s { + uint64_t a; /**< odd number > 0 and < 2^128 (2 qwords)*/ + uint64_t + b; /**< even modulus > 0 and < 2^128, coprime with a (2 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l128_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 192 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_EVEN_L192. + */ +typedef struct icp_qat_fw_maths_modinv_even_l192_input_s { + uint64_t a; /**< odd number > 0 and < 2^192 (3 qwords)*/ + uint64_t + b; /**< even modulus > 0 and < 2^192, coprime with a (3 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l192_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 256 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_EVEN_L256. + */ +typedef struct icp_qat_fw_maths_modinv_even_l256_input_s { + uint64_t a; /**< odd number > 0 and < 2^256 (4 qwords)*/ + uint64_t + b; /**< even modulus > 0 and < 2^256, coprime with a (4 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 384 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_EVEN_L384. + */ +typedef struct icp_qat_fw_maths_modinv_even_l384_input_s { + uint64_t a; /**< odd number > 0 and < 2^384 (6 qwords)*/ + uint64_t + b; /**< even modulus > 0 and < 2^384, coprime with a (6 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l384_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 512 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_EVEN_L512. 
+ */ +typedef struct icp_qat_fw_maths_modinv_even_l512_input_s { + uint64_t a; /**< odd number > 0 and < 2^512 (8 qwords)*/ + uint64_t + b; /**< even modulus > 0 and < 2^512, coprime with a (8 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 768 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_EVEN_L768. + */ +typedef struct icp_qat_fw_maths_modinv_even_l768_input_s { + uint64_t a; /**< odd number > 0 and < 2^768 (12 qwords)*/ + uint64_t b; /**< even modulus > 0 and < 2^768, coprime with a + (12 qwords)*/ +} icp_qat_fw_maths_modinv_even_l768_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 1024 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_EVEN_L1024. + */ +typedef struct icp_qat_fw_maths_modinv_even_l1024_input_s { + uint64_t a; /**< odd number > 0 and < 2^1024 (16 qwords)*/ + uint64_t b; /**< even modulus > 0 and < 2^1024, coprime with a + (16 qwords)*/ +} icp_qat_fw_maths_modinv_even_l1024_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 1536 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_EVEN_L1536. + */ +typedef struct icp_qat_fw_maths_modinv_even_l1536_input_s { + uint64_t a; /**< odd number > 0 and < 2^1536 (24 qwords)*/ + uint64_t b; /**< even modulus > 0 and < 2^1536, coprime with a + (24 qwords)*/ +} icp_qat_fw_maths_modinv_even_l1536_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 2048 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_EVEN_L2048. 
+ */ +typedef struct icp_qat_fw_maths_modinv_even_l2048_input_s { + uint64_t a; /**< odd number > 0 and < 2^2048 (32 qwords)*/ + uint64_t b; /**< even modulus > 0 and < 2^2048, coprime with a + (32 qwords)*/ +} icp_qat_fw_maths_modinv_even_l2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 3072 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_EVEN_L3072. + */ +typedef struct icp_qat_fw_maths_modinv_even_l3072_input_s { + uint64_t a; /**< odd number > 0 and < 2^3072 (48 qwords)*/ + uint64_t b; /**< even modulus > 0 and < 2^3072, coprime with a + (48 qwords)*/ +} icp_qat_fw_maths_modinv_even_l3072_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for Modular multiplicative inverse for numbers less + * than 4096 bits , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_MODINV_EVEN_L4096. + */ +typedef struct icp_qat_fw_maths_modinv_even_l4096_input_s { + uint64_t a; /**< odd number > 0 and < 2^4096 (64 qwords)*/ + uint64_t b; /**< even modulus > 0 and < 2^4096, coprime with a + (64 qwords)*/ +} icp_qat_fw_maths_modinv_even_l4096_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA parameter generation P , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_GEN_P_1024_160. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_p_1024_160_input_s { + uint64_t x; /**< DSA 1024-bit randomness (16 qwords)*/ + uint64_t q; /**< DSA 160-bit parameter (3 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_p_1024_160_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA key generation G , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_GEN_G_1024. 
+ */ +typedef struct icp_qat_fw_mmp_dsa_gen_g_1024_input_s { + uint64_t p; /**< DSA 1024-bit parameter (16 qwords)*/ + uint64_t q; /**< DSA 160-bit parameter (3 qwords)*/ + uint64_t h; /**< DSA 1024-bit parameter (16 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_g_1024_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA key generation Y , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_GEN_Y_1024. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_y_1024_input_s { + uint64_t p; /**< DSA 1024-bit parameter (16 qwords)*/ + uint64_t g; /**< DSA parameter (16 qwords)*/ + uint64_t + x; /**< randomly generated DSA parameter (160 bits), (3 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_y_1024_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Sign R , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_SIGN_R_1024_160. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_1024_160_input_s { + uint64_t k; /**< randomly generated DSA parameter (3 qwords)*/ + uint64_t p; /**< DSA parameter, (16 qwords)*/ + uint64_t q; /**< DSA parameter (3 qwords)*/ + uint64_t g; /**< DSA parameter (16 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_1024_160_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Sign S , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_SIGN_S_160. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_s_160_input_s { + uint64_t m; /**< digest message to be signed (3 qwords)*/ + uint64_t k; /**< randomly generated DSA parameter (3 qwords)*/ + uint64_t q; /**< DSA parameter (3 qwords)*/ + uint64_t r; /**< DSA parameter (3 qwords)*/ + uint64_t x; /**< randomly generated DSA parameter (3 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_s_160_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Sign R S , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_SIGN_R_S_1024_160. 
+ */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_s_1024_160_input_s { + uint64_t m; /**< digest of the message to be signed (3 qwords)*/ + uint64_t k; /**< randomly generated DSA parameter (3 qwords)*/ + uint64_t p; /**< DSA parameter (16 qwords)*/ + uint64_t q; /**< DSA parameter (3 qwords)*/ + uint64_t g; /**< DSA parameter (16 qwords)*/ + uint64_t x; /**< randomly generated DSA parameter (3 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_s_1024_160_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Verify , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_VERIFY_1024_160. + */ +typedef struct icp_qat_fw_mmp_dsa_verify_1024_160_input_s { + uint64_t r; /**< DSA 160-bits signature (3 qwords)*/ + uint64_t s; /**< DSA 160-bits signature (3 qwords)*/ + uint64_t m; /**< digest of the message (3 qwords)*/ + uint64_t p; /**< DSA parameter (16 qwords)*/ + uint64_t q; /**< DSA parameter (3 qwords)*/ + uint64_t g; /**< DSA parameter (16 qwords)*/ + uint64_t y; /**< DSA parameter (16 qwords)*/ +} icp_qat_fw_mmp_dsa_verify_1024_160_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA parameter generation P , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_GEN_P_2048_224. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_p_2048_224_input_s { + uint64_t x; /**< DSA 2048-bit randomness (32 qwords)*/ + uint64_t q; /**< DSA 224-bit parameter (4 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_p_2048_224_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA key generation Y , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_GEN_Y_2048. 
+ */ +typedef struct icp_qat_fw_mmp_dsa_gen_y_2048_input_s { + uint64_t p; /**< DSA 2048-bit parameter (32 qwords)*/ + uint64_t g; /**< DSA parameter (32 qwords)*/ + uint64_t x; /**< randomly generated DSA parameter (224/256 bits), (4 + qwords)*/ +} icp_qat_fw_mmp_dsa_gen_y_2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Sign R , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_SIGN_R_2048_224. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_2048_224_input_s { + uint64_t k; /**< randomly generated DSA parameter (4 qwords)*/ + uint64_t p; /**< DSA parameter, (32 qwords)*/ + uint64_t q; /**< DSA parameter (4 qwords)*/ + uint64_t g; /**< DSA parameter (32 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_2048_224_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Sign S , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_SIGN_S_224. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_s_224_input_s { + uint64_t m; /**< digest message to be signed (4 qwords)*/ + uint64_t k; /**< randomly generated DSA parameter (4 qwords)*/ + uint64_t q; /**< DSA parameter (4 qwords)*/ + uint64_t r; /**< DSA parameter (4 qwords)*/ + uint64_t x; /**< randomly generated DSA parameter (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_s_224_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Sign R S , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_SIGN_R_S_2048_224. 
+ */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_s_2048_224_input_s { + uint64_t m; /**< digest of the message to be signed (4 qwords)*/ + uint64_t k; /**< randomly generated DSA parameter (4 qwords)*/ + uint64_t p; /**< DSA parameter (32 qwords)*/ + uint64_t q; /**< DSA parameter (4 qwords)*/ + uint64_t g; /**< DSA parameter (32 qwords)*/ + uint64_t x; /**< randomly generated DSA parameter (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_s_2048_224_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Verify , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_VERIFY_2048_224. + */ +typedef struct icp_qat_fw_mmp_dsa_verify_2048_224_input_s { + uint64_t r; /**< DSA 224-bits signature (4 qwords)*/ + uint64_t s; /**< DSA 224-bits signature (4 qwords)*/ + uint64_t m; /**< digest of the message (4 qwords)*/ + uint64_t p; /**< DSA parameter (32 qwords)*/ + uint64_t q; /**< DSA parameter (4 qwords)*/ + uint64_t g; /**< DSA parameter (32 qwords)*/ + uint64_t y; /**< DSA parameter (32 qwords)*/ +} icp_qat_fw_mmp_dsa_verify_2048_224_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA parameter generation P , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_GEN_P_2048_256. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_p_2048_256_input_s { + uint64_t x; /**< DSA 2048-bit randomness (32 qwords)*/ + uint64_t q; /**< DSA 256-bit parameter (4 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_p_2048_256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA key generation G , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_GEN_G_2048. 
+ */ +typedef struct icp_qat_fw_mmp_dsa_gen_g_2048_input_s { + uint64_t p; /**< DSA 2048-bit parameter (32 qwords)*/ + uint64_t q; /**< DSA 256-bit parameter (4 qwords)*/ + uint64_t h; /**< DSA 2048-bit parameter (32 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_g_2048_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Sign R , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_SIGN_R_2048_256. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_2048_256_input_s { + uint64_t k; /**< randomly generated DSA parameter (4 qwords)*/ + uint64_t p; /**< DSA parameter, (32 qwords)*/ + uint64_t q; /**< DSA parameter (4 qwords)*/ + uint64_t g; /**< DSA parameter (32 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_2048_256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Sign S , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_SIGN_S_256. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_s_256_input_s { + uint64_t m; /**< digest message to be signed (4 qwords)*/ + uint64_t k; /**< randomly generated DSA parameter (4 qwords)*/ + uint64_t q; /**< DSA parameter (4 qwords)*/ + uint64_t r; /**< DSA parameter (4 qwords)*/ + uint64_t x; /**< randomly generated DSA parameter (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_s_256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Sign R S , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_SIGN_R_S_2048_256. 
+ */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_s_2048_256_input_s { + uint64_t m; /**< digest of the message to be signed (4 qwords)*/ + uint64_t k; /**< randomly generated DSA parameter (4 qwords)*/ + uint64_t p; /**< DSA parameter (32 qwords)*/ + uint64_t q; /**< DSA parameter (4 qwords)*/ + uint64_t g; /**< DSA parameter (32 qwords)*/ + uint64_t x; /**< randomly generated DSA parameter (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_s_2048_256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Verify , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_VERIFY_2048_256. + */ +typedef struct icp_qat_fw_mmp_dsa_verify_2048_256_input_s { + uint64_t r; /**< DSA 256-bits signature (4 qwords)*/ + uint64_t s; /**< DSA 256-bits signature (4 qwords)*/ + uint64_t m; /**< digest of the message (4 qwords)*/ + uint64_t p; /**< DSA parameter (32 qwords)*/ + uint64_t q; /**< DSA parameter (4 qwords)*/ + uint64_t g; /**< DSA parameter (32 qwords)*/ + uint64_t y; /**< DSA parameter (32 qwords)*/ +} icp_qat_fw_mmp_dsa_verify_2048_256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA parameter generation P , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_GEN_P_3072_256. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_p_3072_256_input_s { + uint64_t x; /**< DSA 3072-bit randomness (48 qwords)*/ + uint64_t q; /**< DSA 256-bit parameter (4 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_p_3072_256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA key generation G , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_GEN_G_3072. 
+ */
+typedef struct icp_qat_fw_mmp_dsa_gen_g_3072_input_s {
+	uint64_t p; /**< DSA 3072-bit parameter (48 qwords)*/
+	uint64_t q; /**< DSA 256-bit parameter (4 qwords)*/
+	uint64_t h; /**< DSA 3072-bit parameter (48 qwords)*/
+} icp_qat_fw_mmp_dsa_gen_g_3072_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Input parameter list for DSA key generation Y ,
+ *    to be used when icp_qat_fw_pke_request_s::functionalityId is
+ *    #PKE_DSA_GEN_Y_3072.
+ */
+typedef struct icp_qat_fw_mmp_dsa_gen_y_3072_input_s {
+	uint64_t p; /**< DSA 3072-bit parameter (48 qwords)*/
+	uint64_t g; /**< DSA parameter (48 qwords)*/
+	uint64_t
+	    x; /**< randomly generated DSA parameter (256 bits), (4 qwords)*/
+} icp_qat_fw_mmp_dsa_gen_y_3072_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Input parameter list for DSA Sign R ,
+ *    to be used when icp_qat_fw_pke_request_s::functionalityId is
+ *    #PKE_DSA_SIGN_R_3072_256.
+ */
+typedef struct icp_qat_fw_mmp_dsa_sign_r_3072_256_input_s {
+	uint64_t k; /**< randomly generated DSA parameter (4 qwords)*/
+	uint64_t p; /**< DSA parameter, (48 qwords)*/
+	uint64_t q; /**< DSA parameter (4 qwords)*/
+	uint64_t g; /**< DSA parameter (48 qwords)*/
+} icp_qat_fw_mmp_dsa_sign_r_3072_256_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Input parameter list for DSA Sign R S ,
+ *    to be used when icp_qat_fw_pke_request_s::functionalityId is
+ *    #PKE_DSA_SIGN_R_S_3072_256.
+ */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_s_3072_256_input_s { + uint64_t m; /**< digest of the message to be signed (4 qwords)*/ + uint64_t k; /**< randomly generated DSA parameter (4 qwords)*/ + uint64_t p; /**< DSA parameter (48 qwords)*/ + uint64_t q; /**< DSA parameter (4 qwords)*/ + uint64_t g; /**< DSA parameter (48 qwords)*/ + uint64_t x; /**< randomly generated DSA parameter (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_s_3072_256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for DSA Verify , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_DSA_VERIFY_3072_256. + */ +typedef struct icp_qat_fw_mmp_dsa_verify_3072_256_input_s { + uint64_t r; /**< DSA 256-bits signature (4 qwords)*/ + uint64_t s; /**< DSA 256-bits signature (4 qwords)*/ + uint64_t m; /**< digest of the message (4 qwords)*/ + uint64_t p; /**< DSA parameter (48 qwords)*/ + uint64_t q; /**< DSA parameter (4 qwords)*/ + uint64_t g; /**< DSA parameter (48 qwords)*/ + uint64_t y; /**< DSA parameter (48 qwords)*/ +} icp_qat_fw_mmp_dsa_verify_3072_256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA Sign RS for curves B/K-163 and B/K-233 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_RS_GF2_L256. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l256_input_s { + uint64_t in; /**< concatenated input parameters (G, n, q, a, b, k, e, d) + (36 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA Sign R for curves B/K-163 and B/K-233 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_R_GF2_L256. 
+ */
+typedef struct icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_input_s {
+	uint64_t
+	    xg; /**< x coordinate of base point G of B/K-163 or B/K-233 (4
+		   qwords)*/
+	uint64_t
+	    yg; /**< y coordinate of base point G of B/K-163 or B/K-233 (4
+		   qwords)*/
+	uint64_t
+	    n; /**< order of the base point of B/K-163 or B/K-233 (4 qwords)*/
+	uint64_t q; /**< field polynomial of B/K-163 or B/K-233 (4 qwords)*/
+	uint64_t
+	    a; /**< a equation coefficient of B/K-163 or B/K-233 (4 qwords)*/
+	uint64_t
+	    b; /**< b equation coefficient of B/K-163 or B/K-233 (4 qwords)*/
+	uint64_t k; /**< random value > 0 and < n (4 qwords)*/
+} icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Input parameter list for ECDSA Sign S for curves with n < 2^256 ,
+ *    to be used when icp_qat_fw_pke_request_s::functionalityId is
+ *    #PKE_ECDSA_SIGN_S_GF2_L256.
+ */
+typedef struct icp_qat_fw_mmp_ecdsa_sign_s_gf2_l256_input_s {
+	uint64_t e; /**< hash of message (0 < e < 2^256) (4 qwords)*/
+	uint64_t d; /**< private key (>0 and < n) (4 qwords)*/
+	uint64_t
+	    r; /**< ECDSA r signature value (>0 and < n) (4 qwords)*/
+	uint64_t k; /**< random value > 0 and < n (4 qwords)*/
+	uint64_t n; /**< order of the base point G (2 < n < 2^256) (4
+		       qwords)*/
+} icp_qat_fw_mmp_ecdsa_sign_s_gf2_l256_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Input parameter list for ECDSA Verify for curves B/K-163 and B/K-233 ,
+ *    to be used when icp_qat_fw_pke_request_s::functionalityId is
+ *    #PKE_ECDSA_VERIFY_GF2_L256.
+ */
+typedef struct icp_qat_fw_mmp_ecdsa_verify_gf2_l256_input_s {
+	uint64_t
+	    in; /**< concatenated curve parameter (e,s,r,n,G,Q,a,b,q) (44
+		   qwords)*/
+} icp_qat_fw_mmp_ecdsa_verify_gf2_l256_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Input parameter list for ECDSA Sign RS ,
+ *    to be used when icp_qat_fw_pke_request_s::functionalityId is
+ *    #PKE_ECDSA_SIGN_RS_GF2_L512.
+ */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l512_input_s { + uint64_t in; /**< concatenated input parameters (G, n, q, a, b, k, e, d) + (72 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GF2 Sign R , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_R_GF2_L512. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_input_s { + uint64_t xg; /**< x coordinate of verified base point (> 0 and + degree(x(G)) < degree(q)) (8 qwords)*/ + uint64_t yg; /**< y coordinate of verified base point (> 0 and + degree(y(G)) < degree(q)) (8 qwords)*/ + uint64_t n; /**< order of the base point G, which must be prime and a + divisor of #E and < 2^512) (8 qwords)*/ + uint64_t + q; /**< field polynomial of degree > 2 and < 512 (8 qwords)*/ + uint64_t a; /**< a equation coefficient (degree(a) < degree(q)) (8 + qwords)*/ + uint64_t b; /**< b equation coefficient (degree(b) < degree(q)) (8 + qwords)*/ + uint64_t k; /**< random value > 0 and < n (8 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GF2 Sign S , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_S_GF2_L512. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_s_gf2_l512_input_s { + uint64_t e; /**< hash of message (0 < e < 2^512) (8 qwords)*/ + uint64_t d; /**< private key (>0 and < n) (8 qwords)*/ + uint64_t + r; /**< ECDSA r signature value (>0 and < n) (8 qwords)*/ + uint64_t k; /**< random value > 0 and < n (8 qwords)*/ + uint64_t n; /**< order of the base point G, which must be prime and a + divisor of #E and < 2^512) (8 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_s_gf2_l512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GF2 Verify , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_VERIFY_GF2_L512. 
+ */ +typedef struct icp_qat_fw_mmp_ecdsa_verify_gf2_l512_input_s { + uint64_t + in; /**< concatenated curve parameters (e, s, r, n, xG, yG, xQ, yQ, + a, b, q) (88 qwords)*/ +} icp_qat_fw_mmp_ecdsa_verify_gf2_l512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GF2 Sign RS for curves B-571/K-571 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_RS_GF2_571. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_rs_gf2_571_input_s { + uint64_t + in; /**< concatenated input parameters (x(G), y(G), n, q, a, b, k, + e, d) (81 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_rs_gf2_571_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GF2 Sign S for curves with deg(q) < 576 + * , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_S_GF2_571. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_s_gf2_571_input_s { + uint64_t e; /**< hash of message < 2^576 (9 qwords)*/ + uint64_t d; /**< private key (> 0 and < n) (9 qwords)*/ + uint64_t + r; /**< ECDSA r signature value (> 0 and < n) (9 qwords)*/ + uint64_t k; /**< random value (> 0 and < n) (9 qwords)*/ + uint64_t + n; /**< order of the base point of the curve (n < 2^576) (9 + qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_s_gf2_571_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GF2 Sign R for degree 571 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_R_GF2_571. 
+ */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_input_s { + uint64_t xg; /**< x coordinate of verified base point belonging to + B/K-571 (9 qwords)*/ + uint64_t yg; /**< y coordinate of verified base point belonging to + B/K-571 (9 qwords)*/ + uint64_t n; /**< order of the base point G (9 qwords)*/ + uint64_t q; /**< irreducible field polynomial of B/K-571 (9 qwords)*/ + uint64_t a; /**< a coefficient of curve B/K-571 (degree(a) < + degree(q)) (9 qwords)*/ + uint64_t b; /**< b coefficient of curve B/K-571 (degree(b) < + degree(q)) (9 qwords)*/ + uint64_t k; /**< random value > 0 and < n (9 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GF2 Verify for degree 571 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_VERIFY_GF2_571. + */ +typedef struct icp_qat_fw_mmp_ecdsa_verify_gf2_571_input_s { + uint64_t in; /**< concatenated input (e, s, r, n, G, Q, a, b, q) (99 + qwords)*/ +} icp_qat_fw_mmp_ecdsa_verify_gf2_571_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for MATHS GF2 Point Multiplication , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_POINT_MULTIPLICATION_GF2_L256. 
+ */ +typedef struct icp_qat_fw_maths_point_multiplication_gf2_l256_input_s { + uint64_t k; /**< scalar multiplier > 0 and < 2^256 (4 qwords)*/ + uint64_t xg; /**< x coordinate of curve point (degree(xG) < 256) (4 + qwords)*/ + uint64_t yg; /**< y coordinate of curve point (degree(yG) < 256) (4 + qwords)*/ + uint64_t + a; /**< a equation coefficient of B/K-163 or B/K-233 (4 qwords)*/ + uint64_t + b; /**< b equation coefficient of B/K-163 or B/K-233 (4 qwords)*/ + uint64_t q; /**< field polynomial of B/K-163 or B/K-233 (4 qwords)*/ + uint64_t h; /**< cofactor of B/K-163 or B/K-233 (4 qwords)*/ +} icp_qat_fw_maths_point_multiplication_gf2_l256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for MATHS GF2 Point Verification , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_POINT_VERIFY_GF2_L256. + */ +typedef struct icp_qat_fw_maths_point_verify_gf2_l256_input_s { + uint64_t xq; /**< x coordinate of input point (4 qwords)*/ + uint64_t yq; /**< y coordinate of input point (4 qwords)*/ + uint64_t + q; /**< field polynomial of curve, degree(q) < 256 (4 qwords)*/ + uint64_t + a; /**< a equation coefficient of curve, degree(a) < 256 (4 + qwords)*/ + uint64_t + b; /**< b equation coefficient of curve, degree(b) < 256 (4 + qwords)*/ +} icp_qat_fw_maths_point_verify_gf2_l256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for MATHS GF2 Point Multiplication , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_POINT_MULTIPLICATION_GF2_L512. 
+ */
+typedef struct icp_qat_fw_maths_point_multiplication_gf2_l512_input_s {
+	uint64_t k; /**< scalar multiplier > 0 and < 2^512 (8 qwords)*/
+	uint64_t xg; /**< x coordinate of curve point (degree(xG) < 512) (8
+			qwords)*/
+	uint64_t yg; /**< y coordinate of curve point (degree(yG) < 512) (8
+			qwords)*/
+	uint64_t
+	    a; /**< a equation coefficient (degree(a) < 512) (8 qwords)*/
+	uint64_t
+	    b; /**< b equation coefficient (degree(b) < 512) (8 qwords)*/
+	uint64_t
+	    q; /**< field polynomial of degree > 2 and < 512 (8 qwords)*/
+	uint64_t h; /**< cofactor (< 2^512) (8 qwords)*/
+} icp_qat_fw_maths_point_multiplication_gf2_l512_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Input parameter list for MATHS GF2 Point Verification ,
+ *    to be used when icp_qat_fw_pke_request_s::functionalityId is
+ *    #MATHS_POINT_VERIFY_GF2_L512.
+ */
+typedef struct icp_qat_fw_maths_point_verify_gf2_l512_input_s {
+	uint64_t xq; /**< x coordinate of input point (8 qwords)*/
+	uint64_t yq; /**< y coordinate of input point (8 qwords)*/
+	uint64_t
+	    q; /**< field polynomial of degree > 2 and < 512 (8 qwords)*/
+	uint64_t
+	    a; /**< a equation coefficient (degree(a) < 512) (8 qwords)*/
+	uint64_t
+	    b; /**< b equation coefficient (degree(b) < 512) (8 qwords)*/
+} icp_qat_fw_maths_point_verify_gf2_l512_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Input parameter list for ECC GF2 Point Multiplication for curves
+ *    B-571/K-571 ,
+ *    to be used when icp_qat_fw_pke_request_s::functionalityId is
+ *    #MATHS_POINT_MULTIPLICATION_GF2_571.
+ */
+typedef struct icp_qat_fw_maths_point_multiplication_gf2_571_input_s {
+	uint64_t k; /**< scalar value > 0 and < 2^576 (9 qwords)*/
+	uint64_t xg; /**< x coordinate of curve point (degree(xG) <
+			degree(q)) (9 qwords)*/
+	uint64_t yg; /**< y coordinate of curve point (degree(yG) <
+			degree(q)) (9 qwords)*/
+	uint64_t a; /**< a equation coefficient for B/K-571 (9 qwords)*/
+	uint64_t b; /**< b equation coefficient for B/K-571 (9 qwords)*/
+	uint64_t q; /**< field polynomial of B/K-571 (9 qwords)*/
+	uint64_t h; /**< cofactor for B/K-571 (1 qwords)*/
+} icp_qat_fw_maths_point_multiplication_gf2_571_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Input parameter list for ECC GF2 Point Verification for degree 571 ,
+ *    to be used when icp_qat_fw_pke_request_s::functionalityId is
+ *    #MATHS_POINT_VERIFY_GF2_571.
+ */
+typedef struct icp_qat_fw_maths_point_verify_gf2_571_input_s {
+	uint64_t xq; /**< x coordinate of candidate public key (9 qwords)*/
+	uint64_t yq; /**< y coordinate of candidate public key (9 qwords)*/
+	uint64_t q; /**< field polynomial of B/K-571 (9 qwords)*/
+	uint64_t a; /**< a equation coefficient of B/K-571 (9 qwords)*/
+	uint64_t b; /**< b equation coefficient of B/K-571 (9 qwords)*/
+} icp_qat_fw_maths_point_verify_gf2_571_input_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Input parameter list for ECDSA GFP Sign R ,
+ *    to be used when icp_qat_fw_pke_request_s::functionalityId is
+ *    #PKE_ECDSA_SIGN_R_GFP_L256.
+ */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_input_s { + uint64_t xg; /**< x coordinate of base point G, (4 qwords)*/ + uint64_t yg; /**< y coordinate of base point G, (4 qwords)*/ + uint64_t n; /**< order of the base point G, which shall be prime (4 + qwords)*/ + uint64_t q; /**< modulus (4 qwords)*/ + uint64_t a; /**< a equation coefficient (4 qwords)*/ + uint64_t b; /**< b equation coefficient (4 qwords)*/ + uint64_t k; /**< random value (4 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GFP Sign S , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_S_GFP_L256. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_s_gfp_l256_input_s { + uint64_t e; /**< digest of the message to be signed (4 qwords)*/ + uint64_t d; /**< private key (4 qwords)*/ + uint64_t r; /**< DSA r signature value (4 qwords)*/ + uint64_t k; /**< random value (4 qwords)*/ + uint64_t n; /**< order of the base point G, which shall be prime (4 + qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_s_gfp_l256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GFP Sign RS , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_RS_GFP_L256. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l256_input_s { + uint64_t + in; /**< {xG, yG, n, q, a, b, k, e, d} concatenated (36 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GFP Verify , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_VERIFY_GFP_L256. 
+ */ +typedef struct icp_qat_fw_mmp_ecdsa_verify_gfp_l256_input_s { + uint64_t in; /**< in = {e, s, r, n, xG, yG, xQ, yQ, a, b ,q} + concatenated (44 qwords)*/ +} icp_qat_fw_mmp_ecdsa_verify_gfp_l256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GFP Sign R , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_R_GFP_L512. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_input_s { + uint64_t xg; /**< x coordinate of base point G, (8 qwords)*/ + uint64_t yg; /**< y coordinate of base point G, (8 qwords)*/ + uint64_t n; /**< order of the base point G, which shall be prime (8 + qwords)*/ + uint64_t q; /**< modulus (8 qwords)*/ + uint64_t a; /**< a equation coefficient (8 qwords)*/ + uint64_t b; /**< b equation coefficient (8 qwords)*/ + uint64_t k; /**< random value (8 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GFP Sign S , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_S_GFP_L512. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_s_gfp_l512_input_s { + uint64_t e; /**< digest of the message to be signed (8 qwords)*/ + uint64_t d; /**< private key (8 qwords)*/ + uint64_t r; /**< DSA r signature value (8 qwords)*/ + uint64_t k; /**< random value (8 qwords)*/ + uint64_t n; /**< order of the base point G, which shall be prime (8 + qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_s_gfp_l512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GFP Sign RS , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_RS_GFP_L512. 
+ */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l512_input_s { + uint64_t + in; /**< {xG, yG, n, q, a, b, k, e, d} concatenated (72 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GFP Verify , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_VERIFY_GFP_L512. + */ +typedef struct icp_qat_fw_mmp_ecdsa_verify_gfp_l512_input_s { + uint64_t in; /**< in = {e, s, r, n, xG, yG, xQ, yQ, a, b ,q} + concatenated (88 qwords)*/ +} icp_qat_fw_mmp_ecdsa_verify_gfp_l512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GFP Sign R , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_R_GFP_521. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_input_s { + uint64_t xg; /**< x coordinate of base point G, (9 qwords)*/ + uint64_t yg; /**< y coordinate of base point G, (9 qwords)*/ + uint64_t n; /**< order of the base point G, which shall be prime (9 + qwords)*/ + uint64_t q; /**< modulus (9 qwords)*/ + uint64_t a; /**< a equation coefficient (9 qwords)*/ + uint64_t b; /**< b equation coefficient (9 qwords)*/ + uint64_t k; /**< random value (9 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GFP Sign S , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_S_GFP_521. 
+ */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_s_gfp_521_input_s { + uint64_t e; /**< digest of the message to be signed (9 qwords)*/ + uint64_t d; /**< private key (9 qwords)*/ + uint64_t r; /**< DSA r signature value (9 qwords)*/ + uint64_t k; /**< random value (9 qwords)*/ + uint64_t n; /**< order of the base point G, which shall be prime (9 + qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_s_gfp_521_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GFP Sign RS , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_SIGN_RS_GFP_521. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_rs_gfp_521_input_s { + uint64_t + in; /**< {xG, yG, n, q, a, b, k, e, d} concatenated (81 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_rs_gfp_521_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECDSA GFP Verify , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #PKE_ECDSA_VERIFY_GFP_521. + */ +typedef struct icp_qat_fw_mmp_ecdsa_verify_gfp_521_input_s { + uint64_t in; /**< in = {e, s, r, n, xG, yG, xQ, yQ, a, b ,q} + concatenated (99 qwords)*/ +} icp_qat_fw_mmp_ecdsa_verify_gfp_521_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC GFP Point Multiplication , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_POINT_MULTIPLICATION_GFP_L256. 
+ */ +typedef struct icp_qat_fw_maths_point_multiplication_gfp_l256_input_s { + uint64_t k; /**< scalar multiplier (4 qwords)*/ + uint64_t xg; /**< x coordinate of curve point (4 qwords)*/ + uint64_t yg; /**< y coordinate of curve point (4 qwords)*/ + uint64_t a; /**< a equation coefficient (4 qwords)*/ + uint64_t b; /**< b equation coefficient (4 qwords)*/ + uint64_t q; /**< modulus (4 qwords)*/ + uint64_t h; /**< cofactor (4 qwords)*/ +} icp_qat_fw_maths_point_multiplication_gfp_l256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC GFP Partial Point Verification , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_POINT_VERIFY_GFP_L256. + */ +typedef struct icp_qat_fw_maths_point_verify_gfp_l256_input_s { + uint64_t xq; /**< x coordinate of candidate point (4 qwords)*/ + uint64_t yq; /**< y coordinate of candidate point (4 qwords)*/ + uint64_t q; /**< modulus (4 qwords)*/ + uint64_t a; /**< a equation coefficient (4 qwords)*/ + uint64_t b; /**< b equation coefficient (4 qwords)*/ +} icp_qat_fw_maths_point_verify_gfp_l256_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC GFP Point Multiplication , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_POINT_MULTIPLICATION_GFP_L512. 
+ */ +typedef struct icp_qat_fw_maths_point_multiplication_gfp_l512_input_s { + uint64_t k; /**< scalar multiplier (8 qwords)*/ + uint64_t xg; /**< x coordinate of curve point (8 qwords)*/ + uint64_t yg; /**< y coordinate of curve point (8 qwords)*/ + uint64_t a; /**< a equation coefficient (8 qwords)*/ + uint64_t b; /**< b equation coefficient (8 qwords)*/ + uint64_t q; /**< modulus (8 qwords)*/ + uint64_t h; /**< cofactor (8 qwords)*/ +} icp_qat_fw_maths_point_multiplication_gfp_l512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC GFP Partial Point Verification , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_POINT_VERIFY_GFP_L512. + */ +typedef struct icp_qat_fw_maths_point_verify_gfp_l512_input_s { + uint64_t xq; /**< x coordinate of candidate point (8 qwords)*/ + uint64_t yq; /**< y coordinate of candidate point (8 qwords)*/ + uint64_t q; /**< modulus (8 qwords)*/ + uint64_t a; /**< a equation coefficient (8 qwords)*/ + uint64_t b; /**< b equation coefficient (8 qwords)*/ +} icp_qat_fw_maths_point_verify_gfp_l512_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC GFP Point Multiplication , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_POINT_MULTIPLICATION_GFP_521. 
+ */ +typedef struct icp_qat_fw_maths_point_multiplication_gfp_521_input_s { + uint64_t k; /**< scalar multiplier (9 qwords)*/ + uint64_t xg; /**< x coordinate of curve point (9 qwords)*/ + uint64_t yg; /**< y coordinate of curve point (9 qwords)*/ + uint64_t a; /**< a equation coefficient (9 qwords)*/ + uint64_t b; /**< b equation coefficient (9 qwords)*/ + uint64_t q; /**< modulus (9 qwords)*/ + uint64_t h; /**< cofactor (1 qwords)*/ +} icp_qat_fw_maths_point_multiplication_gfp_521_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC GFP Partial Point Verification , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #MATHS_POINT_VERIFY_GFP_521. + */ +typedef struct icp_qat_fw_maths_point_verify_gfp_521_input_s { + uint64_t xq; /**< x coordinate of candidate point (9 qwords)*/ + uint64_t yq; /**< y coordinate of candidate point (9 qwords)*/ + uint64_t q; /**< modulus (9 qwords)*/ + uint64_t a; /**< a equation coefficient (9 qwords)*/ + uint64_t b; /**< b equation coefficient (9 qwords)*/ +} icp_qat_fw_maths_point_verify_gfp_521_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC curve25519 Variable Point Multiplication + * [k]P(x), as specified in RFC7748 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #POINT_MULTIPLICATION_C25519. + */ +typedef struct icp_qat_fw_point_multiplication_c25519_input_s { + uint64_t xp; /**< xP = Montgomery affine coordinate X of point P (4 + qwords)*/ + uint64_t k; /**< k = scalar (4 qwords)*/ +} icp_qat_fw_point_multiplication_c25519_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC curve25519 Generator Point Multiplication + * [k]G(x), as specified in RFC7748 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #GENERATOR_MULTIPLICATION_C25519. 
+ */ +typedef struct icp_qat_fw_generator_multiplication_c25519_input_s { + uint64_t k; /**< k = scalar (4 qwords)*/ +} icp_qat_fw_generator_multiplication_c25519_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC edwards25519 Variable Point Multiplication + * [k]P, as specified in RFC8032 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #POINT_MULTIPLICATION_ED25519. + */ +typedef struct icp_qat_fw_point_multiplication_ed25519_input_s { + uint64_t xp; /**< xP = Twisted Edwards affine coordinate X of point P + (4 qwords)*/ + uint64_t yp; /**< yP = Twisted Edwards affine coordinate Y of point P + (4 qwords)*/ + uint64_t k; /**< k = scalar (4 qwords)*/ +} icp_qat_fw_point_multiplication_ed25519_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC edwards25519 Generator Point Multiplication + * [k]G, as specified in RFC8032 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #GENERATOR_MULTIPLICATION_ED25519. + */ +typedef struct icp_qat_fw_generator_multiplication_ed25519_input_s { + uint64_t k; /**< k = scalar (4 qwords)*/ +} icp_qat_fw_generator_multiplication_ed25519_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC curve448 Variable Point Multiplication + * [k]P(x), as specified in RFC7748 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #POINT_MULTIPLICATION_C448. + */ +typedef struct icp_qat_fw_point_multiplication_c448_input_s { + uint64_t xp; /**< xP = Montgomery affine coordinate X of point P (8 + qwords)*/ + uint64_t k; /**< k = scalar (8 qwords)*/ +} icp_qat_fw_point_multiplication_c448_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC curve448 Generator Point Multiplication + * [k]G(x), as specified in RFC7748 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #GENERATOR_MULTIPLICATION_C448. 
+ */ +typedef struct icp_qat_fw_generator_multiplication_c448_input_s { + uint64_t k; /**< k = scalar (8 qwords)*/ +} icp_qat_fw_generator_multiplication_c448_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC edwards448 Variable Point Multiplication + * [k]P, as specified in RFC8032 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #POINT_MULTIPLICATION_ED448. + */ +typedef struct icp_qat_fw_point_multiplication_ed448_input_s { + uint64_t + xp; /**< xP = Edwards affine coordinate X of point P (8 qwords)*/ + uint64_t + yp; /**< yP = Edwards affine coordinate Y of point P (8 qwords)*/ + uint64_t k; /**< k = scalar (8 qwords)*/ +} icp_qat_fw_point_multiplication_ed448_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Input parameter list for ECC edwards448 Generator Point Multiplication + * [k]G, as specified in RFC8032 , + * to be used when icp_qat_fw_pke_request_s::functionalityId is + * #GENERATOR_MULTIPLICATION_ED448. + */ +typedef struct icp_qat_fw_generator_multiplication_ed448_input_s { + uint64_t k; /**< k = scalar (8 qwords)*/ +} icp_qat_fw_generator_multiplication_ed448_input_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * MMP input parameters + */ +typedef union icp_qat_fw_mmp_input_param_u { + /** Generic parameter structure : All members of this wrapper structure + * are pointers to large integers. 
+ */ + uint64_t flat_array[ICP_QAT_FW_PKE_INPUT_COUNT_MAX]; + + /** Initialisation sequence */ + icp_qat_fw_mmp_init_input_t mmp_init; + + /** Diffie-Hellman Modular exponentiation base 2 for 768-bit numbers */ + icp_qat_fw_mmp_dh_g2_768_input_t mmp_dh_g2_768; + + /** Diffie-Hellman Modular exponentiation for 768-bit numbers */ + icp_qat_fw_mmp_dh_768_input_t mmp_dh_768; + + /** Diffie-Hellman Modular exponentiation base 2 for 1024-bit numbers */ + icp_qat_fw_mmp_dh_g2_1024_input_t mmp_dh_g2_1024; + + /** Diffie-Hellman Modular exponentiation for 1024-bit numbers */ + icp_qat_fw_mmp_dh_1024_input_t mmp_dh_1024; + + /** Diffie-Hellman Modular exponentiation base 2 for 1536-bit numbers */ + icp_qat_fw_mmp_dh_g2_1536_input_t mmp_dh_g2_1536; + + /** Diffie-Hellman Modular exponentiation for 1536-bit numbers */ + icp_qat_fw_mmp_dh_1536_input_t mmp_dh_1536; + + /** Diffie-Hellman Modular exponentiation base 2 for 2048-bit numbers */ + icp_qat_fw_mmp_dh_g2_2048_input_t mmp_dh_g2_2048; + + /** Diffie-Hellman Modular exponentiation for 2048-bit numbers */ + icp_qat_fw_mmp_dh_2048_input_t mmp_dh_2048; + + /** Diffie-Hellman Modular exponentiation base 2 for 3072-bit numbers */ + icp_qat_fw_mmp_dh_g2_3072_input_t mmp_dh_g2_3072; + + /** Diffie-Hellman Modular exponentiation for 3072-bit numbers */ + icp_qat_fw_mmp_dh_3072_input_t mmp_dh_3072; + + /** Diffie-Hellman Modular exponentiation base 2 for 4096-bit numbers */ + icp_qat_fw_mmp_dh_g2_4096_input_t mmp_dh_g2_4096; + + /** Diffie-Hellman Modular exponentiation for 4096-bit numbers */ + icp_qat_fw_mmp_dh_4096_input_t mmp_dh_4096; + + /** RSA 512 key generation first form */ + icp_qat_fw_mmp_rsa_kp1_512_input_t mmp_rsa_kp1_512; + + /** RSA 512 key generation second form */ + icp_qat_fw_mmp_rsa_kp2_512_input_t mmp_rsa_kp2_512; + + /** RSA 512 Encryption */ + icp_qat_fw_mmp_rsa_ep_512_input_t mmp_rsa_ep_512; + + /** RSA 512 Decryption */ + icp_qat_fw_mmp_rsa_dp1_512_input_t mmp_rsa_dp1_512; + + /** RSA 512 Decryption with 
CRT */ + icp_qat_fw_mmp_rsa_dp2_512_input_t mmp_rsa_dp2_512; + + /** RSA 1024 key generation first form */ + icp_qat_fw_mmp_rsa_kp1_1024_input_t mmp_rsa_kp1_1024; + + /** RSA 1024 key generation second form */ + icp_qat_fw_mmp_rsa_kp2_1024_input_t mmp_rsa_kp2_1024; + + /** RSA 1024 Encryption */ + icp_qat_fw_mmp_rsa_ep_1024_input_t mmp_rsa_ep_1024; + + /** RSA 1024 Decryption */ + icp_qat_fw_mmp_rsa_dp1_1024_input_t mmp_rsa_dp1_1024; + + /** RSA 1024 Decryption with CRT */ + icp_qat_fw_mmp_rsa_dp2_1024_input_t mmp_rsa_dp2_1024; + + /** RSA 1536 key generation first form */ + icp_qat_fw_mmp_rsa_kp1_1536_input_t mmp_rsa_kp1_1536; + + /** RSA 1536 key generation second form */ + icp_qat_fw_mmp_rsa_kp2_1536_input_t mmp_rsa_kp2_1536; + + /** RSA 1536 Encryption */ + icp_qat_fw_mmp_rsa_ep_1536_input_t mmp_rsa_ep_1536; + + /** RSA 1536 Decryption */ + icp_qat_fw_mmp_rsa_dp1_1536_input_t mmp_rsa_dp1_1536; + + /** RSA 1536 Decryption with CRT */ + icp_qat_fw_mmp_rsa_dp2_1536_input_t mmp_rsa_dp2_1536; + + /** RSA 2048 key generation first form */ + icp_qat_fw_mmp_rsa_kp1_2048_input_t mmp_rsa_kp1_2048; + + /** RSA 2048 key generation second form */ + icp_qat_fw_mmp_rsa_kp2_2048_input_t mmp_rsa_kp2_2048; + + /** RSA 2048 Encryption */ + icp_qat_fw_mmp_rsa_ep_2048_input_t mmp_rsa_ep_2048; + + /** RSA 2048 Decryption */ + icp_qat_fw_mmp_rsa_dp1_2048_input_t mmp_rsa_dp1_2048; + + /** RSA 2048 Decryption with CRT */ + icp_qat_fw_mmp_rsa_dp2_2048_input_t mmp_rsa_dp2_2048; + + /** RSA 3072 key generation first form */ + icp_qat_fw_mmp_rsa_kp1_3072_input_t mmp_rsa_kp1_3072; + + /** RSA 3072 key generation second form */ + icp_qat_fw_mmp_rsa_kp2_3072_input_t mmp_rsa_kp2_3072; + + /** RSA 3072 Encryption */ + icp_qat_fw_mmp_rsa_ep_3072_input_t mmp_rsa_ep_3072; + + /** RSA 3072 Decryption */ + icp_qat_fw_mmp_rsa_dp1_3072_input_t mmp_rsa_dp1_3072; + + /** RSA 3072 Decryption with CRT */ + icp_qat_fw_mmp_rsa_dp2_3072_input_t mmp_rsa_dp2_3072; + + /** RSA 4096 key generation first form */ 
+ icp_qat_fw_mmp_rsa_kp1_4096_input_t mmp_rsa_kp1_4096; + + /** RSA 4096 key generation second form */ + icp_qat_fw_mmp_rsa_kp2_4096_input_t mmp_rsa_kp2_4096; + + /** RSA 4096 Encryption */ + icp_qat_fw_mmp_rsa_ep_4096_input_t mmp_rsa_ep_4096; + + /** RSA 4096 Decryption */ + icp_qat_fw_mmp_rsa_dp1_4096_input_t mmp_rsa_dp1_4096; + + /** RSA 4096 Decryption with CRT */ + icp_qat_fw_mmp_rsa_dp2_4096_input_t mmp_rsa_dp2_4096; + + /** GCD primality test for 192-bit numbers */ + icp_qat_fw_mmp_gcd_pt_192_input_t mmp_gcd_pt_192; + + /** GCD primality test for 256-bit numbers */ + icp_qat_fw_mmp_gcd_pt_256_input_t mmp_gcd_pt_256; + + /** GCD primality test for 384-bit numbers */ + icp_qat_fw_mmp_gcd_pt_384_input_t mmp_gcd_pt_384; + + /** GCD primality test for 512-bit numbers */ + icp_qat_fw_mmp_gcd_pt_512_input_t mmp_gcd_pt_512; + + /** GCD primality test for 768-bit numbers */ + icp_qat_fw_mmp_gcd_pt_768_input_t mmp_gcd_pt_768; + + /** GCD primality test for 1024-bit numbers */ + icp_qat_fw_mmp_gcd_pt_1024_input_t mmp_gcd_pt_1024; + + /** GCD primality test for 1536-bit numbers */ + icp_qat_fw_mmp_gcd_pt_1536_input_t mmp_gcd_pt_1536; + + /** GCD primality test for 2048-bit numbers */ + icp_qat_fw_mmp_gcd_pt_2048_input_t mmp_gcd_pt_2048; + + /** GCD primality test for 3072-bit numbers */ + icp_qat_fw_mmp_gcd_pt_3072_input_t mmp_gcd_pt_3072; + + /** GCD primality test for 4096-bit numbers */ + icp_qat_fw_mmp_gcd_pt_4096_input_t mmp_gcd_pt_4096; + + /** Fermat primality test for 160-bit numbers */ + icp_qat_fw_mmp_fermat_pt_160_input_t mmp_fermat_pt_160; + + /** Fermat primality test for 512-bit numbers */ + icp_qat_fw_mmp_fermat_pt_512_input_t mmp_fermat_pt_512; + + /** Fermat primality test for ≤ 512-bit numbers */ + icp_qat_fw_mmp_fermat_pt_l512_input_t mmp_fermat_pt_l512; + + /** Fermat primality test for 768-bit numbers */ + icp_qat_fw_mmp_fermat_pt_768_input_t mmp_fermat_pt_768; + + /** Fermat primality test for 1024-bit numbers */ + 
icp_qat_fw_mmp_fermat_pt_1024_input_t mmp_fermat_pt_1024; + + /** Fermat primality test for 1536-bit numbers */ + icp_qat_fw_mmp_fermat_pt_1536_input_t mmp_fermat_pt_1536; + + /** Fermat primality test for 2048-bit numbers */ + icp_qat_fw_mmp_fermat_pt_2048_input_t mmp_fermat_pt_2048; + + /** Fermat primality test for 3072-bit numbers */ + icp_qat_fw_mmp_fermat_pt_3072_input_t mmp_fermat_pt_3072; + + /** Fermat primality test for 4096-bit numbers */ + icp_qat_fw_mmp_fermat_pt_4096_input_t mmp_fermat_pt_4096; + + /** Miller-Rabin primality test for 160-bit numbers */ + icp_qat_fw_mmp_mr_pt_160_input_t mmp_mr_pt_160; + + /** Miller-Rabin primality test for 512-bit numbers */ + icp_qat_fw_mmp_mr_pt_512_input_t mmp_mr_pt_512; + + /** Miller-Rabin primality test for 768-bit numbers */ + icp_qat_fw_mmp_mr_pt_768_input_t mmp_mr_pt_768; + + /** Miller-Rabin primality test for 1024-bit numbers */ + icp_qat_fw_mmp_mr_pt_1024_input_t mmp_mr_pt_1024; + + /** Miller-Rabin primality test for 1536-bit numbers */ + icp_qat_fw_mmp_mr_pt_1536_input_t mmp_mr_pt_1536; + + /** Miller-Rabin primality test for 2048-bit numbers */ + icp_qat_fw_mmp_mr_pt_2048_input_t mmp_mr_pt_2048; + + /** Miller-Rabin primality test for 3072-bit numbers */ + icp_qat_fw_mmp_mr_pt_3072_input_t mmp_mr_pt_3072; + + /** Miller-Rabin primality test for 4096-bit numbers */ + icp_qat_fw_mmp_mr_pt_4096_input_t mmp_mr_pt_4096; + + /** Miller-Rabin primality test for numbers less than 512 bits */ + icp_qat_fw_mmp_mr_pt_l512_input_t mmp_mr_pt_l512; + + /** Lucas primality test for 160-bit numbers */ + icp_qat_fw_mmp_lucas_pt_160_input_t mmp_lucas_pt_160; + + /** Lucas primality test for 512-bit numbers */ + icp_qat_fw_mmp_lucas_pt_512_input_t mmp_lucas_pt_512; + + /** Lucas primality test for 768-bit numbers */ + icp_qat_fw_mmp_lucas_pt_768_input_t mmp_lucas_pt_768; + + /** Lucas primality test for 1024-bit numbers */ + icp_qat_fw_mmp_lucas_pt_1024_input_t mmp_lucas_pt_1024; + + /** Lucas primality test for 1536-bit numbers */ 
+ icp_qat_fw_mmp_lucas_pt_1536_input_t mmp_lucas_pt_1536; + + /** Lucas primality test for 2048-bit numbers */ + icp_qat_fw_mmp_lucas_pt_2048_input_t mmp_lucas_pt_2048; + + /** Lucas primality test for 3072-bit numbers */ + icp_qat_fw_mmp_lucas_pt_3072_input_t mmp_lucas_pt_3072; + + /** Lucas primality test for 4096-bit numbers */ + icp_qat_fw_mmp_lucas_pt_4096_input_t mmp_lucas_pt_4096; + + /** Lucas primality test for numbers less than 512 bits */ + icp_qat_fw_mmp_lucas_pt_l512_input_t mmp_lucas_pt_l512; + + /** Modular exponentiation for numbers less than 512-bits */ + icp_qat_fw_maths_modexp_l512_input_t maths_modexp_l512; + + /** Modular exponentiation for numbers less than 1024-bits */ + icp_qat_fw_maths_modexp_l1024_input_t maths_modexp_l1024; + + /** Modular exponentiation for numbers less than 1536-bits */ + icp_qat_fw_maths_modexp_l1536_input_t maths_modexp_l1536; + + /** Modular exponentiation for numbers less than 2048-bits */ + icp_qat_fw_maths_modexp_l2048_input_t maths_modexp_l2048; + + /** Modular exponentiation for numbers less than 2560-bits */ + icp_qat_fw_maths_modexp_l2560_input_t maths_modexp_l2560; + + /** Modular exponentiation for numbers less than 3072-bits */ + icp_qat_fw_maths_modexp_l3072_input_t maths_modexp_l3072; + + /** Modular exponentiation for numbers less than 3584-bits */ + icp_qat_fw_maths_modexp_l3584_input_t maths_modexp_l3584; + + /** Modular exponentiation for numbers less than 4096-bits */ + icp_qat_fw_maths_modexp_l4096_input_t maths_modexp_l4096; + + /** Modular multiplicative inverse for numbers less than 128 bits */ + icp_qat_fw_maths_modinv_odd_l128_input_t maths_modinv_odd_l128; + + /** Modular multiplicative inverse for numbers less than 192 bits */ + icp_qat_fw_maths_modinv_odd_l192_input_t maths_modinv_odd_l192; + + /** Modular multiplicative inverse for numbers less than 256 bits */ + icp_qat_fw_maths_modinv_odd_l256_input_t maths_modinv_odd_l256; + + /** Modular multiplicative inverse for numbers less than 384 bits */ + 
icp_qat_fw_maths_modinv_odd_l384_input_t maths_modinv_odd_l384; + + /** Modular multiplicative inverse for numbers less than 512 bits */ + icp_qat_fw_maths_modinv_odd_l512_input_t maths_modinv_odd_l512; + + /** Modular multiplicative inverse for numbers less than 768 bits */ + icp_qat_fw_maths_modinv_odd_l768_input_t maths_modinv_odd_l768; + + /** Modular multiplicative inverse for numbers less than 1024 bits */ + icp_qat_fw_maths_modinv_odd_l1024_input_t maths_modinv_odd_l1024; + + /** Modular multiplicative inverse for numbers less than 1536 bits */ + icp_qat_fw_maths_modinv_odd_l1536_input_t maths_modinv_odd_l1536; + + /** Modular multiplicative inverse for numbers less than 2048 bits */ + icp_qat_fw_maths_modinv_odd_l2048_input_t maths_modinv_odd_l2048; + + /** Modular multiplicative inverse for numbers less than 3072 bits */ + icp_qat_fw_maths_modinv_odd_l3072_input_t maths_modinv_odd_l3072; + + /** Modular multiplicative inverse for numbers less than 4096 bits */ + icp_qat_fw_maths_modinv_odd_l4096_input_t maths_modinv_odd_l4096; + + /** Modular multiplicative inverse for numbers less than 128 bits */ + icp_qat_fw_maths_modinv_even_l128_input_t maths_modinv_even_l128; + + /** Modular multiplicative inverse for numbers less than 192 bits */ + icp_qat_fw_maths_modinv_even_l192_input_t maths_modinv_even_l192; + + /** Modular multiplicative inverse for numbers less than 256 bits */ + icp_qat_fw_maths_modinv_even_l256_input_t maths_modinv_even_l256; + + /** Modular multiplicative inverse for numbers less than 384 bits */ + icp_qat_fw_maths_modinv_even_l384_input_t maths_modinv_even_l384; + + /** Modular multiplicative inverse for numbers less than 512 bits */ + icp_qat_fw_maths_modinv_even_l512_input_t maths_modinv_even_l512; + + /** Modular multiplicative inverse for numbers less than 768 bits */ + icp_qat_fw_maths_modinv_even_l768_input_t maths_modinv_even_l768; + + /** Modular multiplicative inverse for numbers less than 1024 bits */ + 
icp_qat_fw_maths_modinv_even_l1024_input_t maths_modinv_even_l1024; + + /** Modular multiplicative inverse for numbers less than 1536 bits */ + icp_qat_fw_maths_modinv_even_l1536_input_t maths_modinv_even_l1536; + + /** Modular multiplicative inverse for numbers less than 2048 bits */ + icp_qat_fw_maths_modinv_even_l2048_input_t maths_modinv_even_l2048; + + /** Modular multiplicative inverse for numbers less than 3072 bits */ + icp_qat_fw_maths_modinv_even_l3072_input_t maths_modinv_even_l3072; + + /** Modular multiplicative inverse for numbers less than 4096 bits */ + icp_qat_fw_maths_modinv_even_l4096_input_t maths_modinv_even_l4096; + + /** DSA parameter generation P */ + icp_qat_fw_mmp_dsa_gen_p_1024_160_input_t mmp_dsa_gen_p_1024_160; + + /** DSA key generation G */ + icp_qat_fw_mmp_dsa_gen_g_1024_input_t mmp_dsa_gen_g_1024; + + /** DSA key generation Y */ + icp_qat_fw_mmp_dsa_gen_y_1024_input_t mmp_dsa_gen_y_1024; + + /** DSA Sign R */ + icp_qat_fw_mmp_dsa_sign_r_1024_160_input_t mmp_dsa_sign_r_1024_160; + + /** DSA Sign S */ + icp_qat_fw_mmp_dsa_sign_s_160_input_t mmp_dsa_sign_s_160; + + /** DSA Sign R S */ + icp_qat_fw_mmp_dsa_sign_r_s_1024_160_input_t mmp_dsa_sign_r_s_1024_160; + + /** DSA Verify */ + icp_qat_fw_mmp_dsa_verify_1024_160_input_t mmp_dsa_verify_1024_160; + + /** DSA parameter generation P */ + icp_qat_fw_mmp_dsa_gen_p_2048_224_input_t mmp_dsa_gen_p_2048_224; + + /** DSA key generation Y */ + icp_qat_fw_mmp_dsa_gen_y_2048_input_t mmp_dsa_gen_y_2048; + + /** DSA Sign R */ + icp_qat_fw_mmp_dsa_sign_r_2048_224_input_t mmp_dsa_sign_r_2048_224; + + /** DSA Sign S */ + icp_qat_fw_mmp_dsa_sign_s_224_input_t mmp_dsa_sign_s_224; + + /** DSA Sign R S */ + icp_qat_fw_mmp_dsa_sign_r_s_2048_224_input_t mmp_dsa_sign_r_s_2048_224; + + /** DSA Verify */ + icp_qat_fw_mmp_dsa_verify_2048_224_input_t mmp_dsa_verify_2048_224; + + /** DSA parameter generation P */ + icp_qat_fw_mmp_dsa_gen_p_2048_256_input_t mmp_dsa_gen_p_2048_256; + + /** DSA key generation G */ + 
icp_qat_fw_mmp_dsa_gen_g_2048_input_t mmp_dsa_gen_g_2048; + + /** DSA Sign R */ + icp_qat_fw_mmp_dsa_sign_r_2048_256_input_t mmp_dsa_sign_r_2048_256; + + /** DSA Sign S */ + icp_qat_fw_mmp_dsa_sign_s_256_input_t mmp_dsa_sign_s_256; + + /** DSA Sign R S */ + icp_qat_fw_mmp_dsa_sign_r_s_2048_256_input_t mmp_dsa_sign_r_s_2048_256; + + /** DSA Verify */ + icp_qat_fw_mmp_dsa_verify_2048_256_input_t mmp_dsa_verify_2048_256; + + /** DSA parameter generation P */ + icp_qat_fw_mmp_dsa_gen_p_3072_256_input_t mmp_dsa_gen_p_3072_256; + + /** DSA key generation G */ + icp_qat_fw_mmp_dsa_gen_g_3072_input_t mmp_dsa_gen_g_3072; + + /** DSA key generation Y */ + icp_qat_fw_mmp_dsa_gen_y_3072_input_t mmp_dsa_gen_y_3072; + + /** DSA Sign R */ + icp_qat_fw_mmp_dsa_sign_r_3072_256_input_t mmp_dsa_sign_r_3072_256; + + /** DSA Sign R S */ + icp_qat_fw_mmp_dsa_sign_r_s_3072_256_input_t mmp_dsa_sign_r_s_3072_256; + + /** DSA Verify */ + icp_qat_fw_mmp_dsa_verify_3072_256_input_t mmp_dsa_verify_3072_256; + + /** ECDSA Sign RS for curves B/K-163 and B/K-233 */ + icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l256_input_t + mmp_ecdsa_sign_rs_gf2_l256; + + /** ECDSA Sign R for curves B/K-163 and B/K-233 */ + icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_input_t mmp_ecdsa_sign_r_gf2_l256; + + /** ECDSA Sign S for curves with n < 2^256 */ + icp_qat_fw_mmp_ecdsa_sign_s_gf2_l256_input_t mmp_ecdsa_sign_s_gf2_l256; + + /** ECDSA Verify for curves B/K-163 and B/K-233 */ + icp_qat_fw_mmp_ecdsa_verify_gf2_l256_input_t mmp_ecdsa_verify_gf2_l256; + + /** ECDSA Sign RS */ + icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l512_input_t + mmp_ecdsa_sign_rs_gf2_l512; + + /** ECDSA GF2 Sign R */ + icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_input_t mmp_ecdsa_sign_r_gf2_l512; + + /** ECDSA GF2 Sign S */ + icp_qat_fw_mmp_ecdsa_sign_s_gf2_l512_input_t mmp_ecdsa_sign_s_gf2_l512; + + /** ECDSA GF2 Verify */ + icp_qat_fw_mmp_ecdsa_verify_gf2_l512_input_t mmp_ecdsa_verify_gf2_l512; + + /** ECDSA GF2 Sign RS for curves B-571/K-571 */ + 
icp_qat_fw_mmp_ecdsa_sign_rs_gf2_571_input_t mmp_ecdsa_sign_rs_gf2_571; + + /** ECDSA GF2 Sign S for curves with deg(q) < 576 */ + icp_qat_fw_mmp_ecdsa_sign_s_gf2_571_input_t mmp_ecdsa_sign_s_gf2_571; + + /** ECDSA GF2 Sign R for degree 571 */ + icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_input_t mmp_ecdsa_sign_r_gf2_571; + + /** ECDSA GF2 Verify for degree 571 */ + icp_qat_fw_mmp_ecdsa_verify_gf2_571_input_t mmp_ecdsa_verify_gf2_571; + + /** MATHS GF2 Point Multiplication */ + icp_qat_fw_maths_point_multiplication_gf2_l256_input_t + maths_point_multiplication_gf2_l256; + + /** MATHS GF2 Point Verification */ + icp_qat_fw_maths_point_verify_gf2_l256_input_t + maths_point_verify_gf2_l256; + + /** MATHS GF2 Point Multiplication */ + icp_qat_fw_maths_point_multiplication_gf2_l512_input_t + maths_point_multiplication_gf2_l512; + + /** MATHS GF2 Point Verification */ + icp_qat_fw_maths_point_verify_gf2_l512_input_t + maths_point_verify_gf2_l512; + + /** ECC GF2 Point Multiplication for curves B-571/K-571 */ + icp_qat_fw_maths_point_multiplication_gf2_571_input_t + maths_point_multiplication_gf2_571; + + /** ECC GF2 Point Verification for degree 571 */ + icp_qat_fw_maths_point_verify_gf2_571_input_t + maths_point_verify_gf2_571; + + /** ECDSA GFP Sign R */ + icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_input_t mmp_ecdsa_sign_r_gfp_l256; + + /** ECDSA GFP Sign S */ + icp_qat_fw_mmp_ecdsa_sign_s_gfp_l256_input_t mmp_ecdsa_sign_s_gfp_l256; + + /** ECDSA GFP Sign RS */ + icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l256_input_t + mmp_ecdsa_sign_rs_gfp_l256; + + /** ECDSA GFP Verify */ + icp_qat_fw_mmp_ecdsa_verify_gfp_l256_input_t mmp_ecdsa_verify_gfp_l256; + + /** ECDSA GFP Sign R */ + icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_input_t mmp_ecdsa_sign_r_gfp_l512; + + /** ECDSA GFP Sign S */ + icp_qat_fw_mmp_ecdsa_sign_s_gfp_l512_input_t mmp_ecdsa_sign_s_gfp_l512; + + /** ECDSA GFP Sign RS */ + icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l512_input_t + mmp_ecdsa_sign_rs_gfp_l512; + + /** ECDSA GFP Verify */ + 
icp_qat_fw_mmp_ecdsa_verify_gfp_l512_input_t mmp_ecdsa_verify_gfp_l512; + + /** ECDSA GFP Sign R */ + icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_input_t mmp_ecdsa_sign_r_gfp_521; + + /** ECDSA GFP Sign S */ + icp_qat_fw_mmp_ecdsa_sign_s_gfp_521_input_t mmp_ecdsa_sign_s_gfp_521; + + /** ECDSA GFP Sign RS */ + icp_qat_fw_mmp_ecdsa_sign_rs_gfp_521_input_t mmp_ecdsa_sign_rs_gfp_521; + + /** ECDSA GFP Verify */ + icp_qat_fw_mmp_ecdsa_verify_gfp_521_input_t mmp_ecdsa_verify_gfp_521; + + /** ECC GFP Point Multiplication */ + icp_qat_fw_maths_point_multiplication_gfp_l256_input_t + maths_point_multiplication_gfp_l256; + + /** ECC GFP Partial Point Verification */ + icp_qat_fw_maths_point_verify_gfp_l256_input_t + maths_point_verify_gfp_l256; + + /** ECC GFP Point Multiplication */ + icp_qat_fw_maths_point_multiplication_gfp_l512_input_t + maths_point_multiplication_gfp_l512; + + /** ECC GFP Partial Point Verification */ + icp_qat_fw_maths_point_verify_gfp_l512_input_t + maths_point_verify_gfp_l512; + + /** ECC GFP Point Multiplication */ + icp_qat_fw_maths_point_multiplication_gfp_521_input_t + maths_point_multiplication_gfp_521; + + /** ECC GFP Partial Point Verification */ + icp_qat_fw_maths_point_verify_gfp_521_input_t + maths_point_verify_gfp_521; + + /** ECC curve25519 Variable Point Multiplication [k]P(x), as specified + * in RFC7748 */ + icp_qat_fw_point_multiplication_c25519_input_t + point_multiplication_c25519; + + /** ECC curve25519 Generator Point Multiplication [k]G(x), as specified + * in RFC7748 */ + icp_qat_fw_generator_multiplication_c25519_input_t + generator_multiplication_c25519; + + /** ECC edwards25519 Variable Point Multiplication [k]P, as specified in + * RFC8032 */ + icp_qat_fw_point_multiplication_ed25519_input_t + point_multiplication_ed25519; + + /** ECC edwards25519 Generator Point Multiplication [k]G, as specified + * in RFC8032 */ + icp_qat_fw_generator_multiplication_ed25519_input_t + generator_multiplication_ed25519; + + /** ECC curve448 Variable Point 
Multiplication [k]P(x), as specified in + * RFC7748 */ + icp_qat_fw_point_multiplication_c448_input_t point_multiplication_c448; + + /** ECC curve448 Generator Point Multiplication [k]G(x), as specified in + * RFC7748 */ + icp_qat_fw_generator_multiplication_c448_input_t + generator_multiplication_c448; + + /** ECC edwards448 Variable Point Multiplication [k]P, as specified in + * RFC8032 */ + icp_qat_fw_point_multiplication_ed448_input_t + point_multiplication_ed448; + + /** ECC edwards448 Generator Point Multiplication [k]G, as specified in + * RFC8032 */ + icp_qat_fw_generator_multiplication_ed448_input_t + generator_multiplication_ed448; +} icp_qat_fw_mmp_input_param_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Initialisation sequence , + * to be used when icp_qat_fw_pke_response_s::functionalityId is #PKE_INIT. + */ +typedef struct icp_qat_fw_mmp_init_output_s { + uint64_t zz; /**< 1'd quadword (1 qwords)*/ +} icp_qat_fw_mmp_init_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Diffie-Hellman Modular exponentiation base 2 for + * 768-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DH_G2_768. + */ +typedef struct icp_qat_fw_mmp_dh_g2_768_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (12 + qwords)*/ +} icp_qat_fw_mmp_dh_g2_768_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Diffie-Hellman Modular exponentiation for + * 768-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DH_768. 
+ */ +typedef struct icp_qat_fw_mmp_dh_768_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (12 + qwords)*/ +} icp_qat_fw_mmp_dh_768_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Diffie-Hellman Modular exponentiation base 2 for + * 1024-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DH_G2_1024. + */ +typedef struct icp_qat_fw_mmp_dh_g2_1024_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (16 + qwords)*/ +} icp_qat_fw_mmp_dh_g2_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Diffie-Hellman Modular exponentiation for + * 1024-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DH_1024. + */ +typedef struct icp_qat_fw_mmp_dh_1024_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (16 + qwords)*/ +} icp_qat_fw_mmp_dh_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Diffie-Hellman Modular exponentiation base 2 for + * 1536-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DH_G2_1536. + */ +typedef struct icp_qat_fw_mmp_dh_g2_1536_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (24 + qwords)*/ +} icp_qat_fw_mmp_dh_g2_1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Diffie-Hellman Modular exponentiation for + * 1536-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DH_1536. 
+ */ +typedef struct icp_qat_fw_mmp_dh_1536_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (24 + qwords)*/ +} icp_qat_fw_mmp_dh_1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Diffie-Hellman Modular exponentiation base 2 for + * 2048-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DH_G2_2048. + */ +typedef struct icp_qat_fw_mmp_dh_g2_2048_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (32 + qwords)*/ +} icp_qat_fw_mmp_dh_g2_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Diffie-Hellman Modular exponentiation for + * 2048-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DH_2048. + */ +typedef struct icp_qat_fw_mmp_dh_2048_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (32 + qwords)*/ +} icp_qat_fw_mmp_dh_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Diffie-Hellman Modular exponentiation base 2 for + * 3072-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DH_G2_3072. + */ +typedef struct icp_qat_fw_mmp_dh_g2_3072_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (48 + qwords)*/ +} icp_qat_fw_mmp_dh_g2_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Diffie-Hellman Modular exponentiation for + * 3072-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DH_3072. 
+ */ +typedef struct icp_qat_fw_mmp_dh_3072_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (48 + qwords)*/ +} icp_qat_fw_mmp_dh_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Diffie-Hellman Modular exponentiation base 2 for + * 4096-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DH_G2_4096. + */ +typedef struct icp_qat_fw_mmp_dh_g2_4096_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (64 + qwords)*/ +} icp_qat_fw_mmp_dh_g2_4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Diffie-Hellman Modular exponentiation for + * 4096-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DH_4096. + */ +typedef struct icp_qat_fw_mmp_dh_4096_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (64 + qwords)*/ +} icp_qat_fw_mmp_dh_4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 512 key generation first form , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_KP1_512. + */ +typedef struct icp_qat_fw_mmp_rsa_kp1_512_output_s { + uint64_t n; /**< RSA key (8 qwords)*/ + uint64_t d; /**< RSA private key (first form) (8 qwords)*/ +} icp_qat_fw_mmp_rsa_kp1_512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 512 key generation second form , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_KP2_512. 
+ */ +typedef struct icp_qat_fw_mmp_rsa_kp2_512_output_s { + uint64_t n; /**< RSA key (8 qwords)*/ + uint64_t d; /**< RSA private key (second form) (8 qwords)*/ + uint64_t dp; /**< RSA private key (second form) (4 qwords)*/ + uint64_t dq; /**< RSA private key (second form) (4 qwords)*/ + uint64_t qinv; /**< RSA private key (second form) (4 qwords)*/ +} icp_qat_fw_mmp_rsa_kp2_512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 512 Encryption , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_EP_512. + */ +typedef struct icp_qat_fw_mmp_rsa_ep_512_output_s { + uint64_t c; /**< cipher text representative, < n (8 qwords)*/ +} icp_qat_fw_mmp_rsa_ep_512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 512 Decryption , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_DP1_512. + */ +typedef struct icp_qat_fw_mmp_rsa_dp1_512_output_s { + uint64_t m; /**< message representative, < n (8 qwords)*/ +} icp_qat_fw_mmp_rsa_dp1_512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 512 Decryption with CRT , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_DP2_512. + */ +typedef struct icp_qat_fw_mmp_rsa_dp2_512_output_s { + uint64_t m; /**< message representative, < (p*q) (8 qwords)*/ +} icp_qat_fw_mmp_rsa_dp2_512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 1024 key generation first form , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_KP1_1024. 
+ */ +typedef struct icp_qat_fw_mmp_rsa_kp1_1024_output_s { + uint64_t n; /**< RSA key (16 qwords)*/ + uint64_t d; /**< RSA private key (first form) (16 qwords)*/ +} icp_qat_fw_mmp_rsa_kp1_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 1024 key generation second form , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_KP2_1024. + */ +typedef struct icp_qat_fw_mmp_rsa_kp2_1024_output_s { + uint64_t n; /**< RSA key (16 qwords)*/ + uint64_t d; /**< RSA private key (second form) (16 qwords)*/ + uint64_t dp; /**< RSA private key (second form) (8 qwords)*/ + uint64_t dq; /**< RSA private key (second form) (8 qwords)*/ + uint64_t qinv; /**< RSA private key (second form) (8 qwords)*/ +} icp_qat_fw_mmp_rsa_kp2_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 1024 Encryption , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_EP_1024. + */ +typedef struct icp_qat_fw_mmp_rsa_ep_1024_output_s { + uint64_t c; /**< cipher text representative, < n (16 qwords)*/ +} icp_qat_fw_mmp_rsa_ep_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 1024 Decryption , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_DP1_1024. + */ +typedef struct icp_qat_fw_mmp_rsa_dp1_1024_output_s { + uint64_t m; /**< message representative, < n (16 qwords)*/ +} icp_qat_fw_mmp_rsa_dp1_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 1024 Decryption with CRT , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_DP2_1024. 
+ */ +typedef struct icp_qat_fw_mmp_rsa_dp2_1024_output_s { + uint64_t m; /**< message representative, < (p*q) (16 qwords)*/ +} icp_qat_fw_mmp_rsa_dp2_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 1536 key generation first form , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_KP1_1536. + */ +typedef struct icp_qat_fw_mmp_rsa_kp1_1536_output_s { + uint64_t n; /**< RSA key (24 qwords)*/ + uint64_t d; /**< RSA private key (24 qwords)*/ +} icp_qat_fw_mmp_rsa_kp1_1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 1536 key generation second form , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_KP2_1536. + */ +typedef struct icp_qat_fw_mmp_rsa_kp2_1536_output_s { + uint64_t n; /**< RSA key (24 qwords)*/ + uint64_t d; /**< RSA private key (24 qwords)*/ + uint64_t dp; /**< RSA private key (12 qwords)*/ + uint64_t dq; /**< RSA private key (12 qwords)*/ + uint64_t qinv; /**< RSA private key (12 qwords)*/ +} icp_qat_fw_mmp_rsa_kp2_1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 1536 Encryption , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_EP_1536. + */ +typedef struct icp_qat_fw_mmp_rsa_ep_1536_output_s { + uint64_t c; /**< cipher text representative, < n (24 qwords)*/ +} icp_qat_fw_mmp_rsa_ep_1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 1536 Decryption , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_DP1_1536. + */ +typedef struct icp_qat_fw_mmp_rsa_dp1_1536_output_s { + uint64_t m; /**< message representative, < n (24 qwords)*/ +} icp_qat_fw_mmp_rsa_dp1_1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 1536 Decryption with CRT , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_DP2_1536. 
+ */ +typedef struct icp_qat_fw_mmp_rsa_dp2_1536_output_s { + uint64_t m; /**< message representative, < (p*q) (24 qwords)*/ +} icp_qat_fw_mmp_rsa_dp2_1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 2048 key generation first form , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_KP1_2048. + */ +typedef struct icp_qat_fw_mmp_rsa_kp1_2048_output_s { + uint64_t n; /**< RSA key (32 qwords)*/ + uint64_t d; /**< RSA private key (32 qwords)*/ +} icp_qat_fw_mmp_rsa_kp1_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 2048 key generation second form , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_KP2_2048. + */ +typedef struct icp_qat_fw_mmp_rsa_kp2_2048_output_s { + uint64_t n; /**< RSA key (32 qwords)*/ + uint64_t d; /**< RSA private key (32 qwords)*/ + uint64_t dp; /**< RSA private key (16 qwords)*/ + uint64_t dq; /**< RSA private key (16 qwords)*/ + uint64_t qinv; /**< RSA private key (16 qwords)*/ +} icp_qat_fw_mmp_rsa_kp2_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 2048 Encryption , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_EP_2048. + */ +typedef struct icp_qat_fw_mmp_rsa_ep_2048_output_s { + uint64_t c; /**< cipher text representative, < n (32 qwords)*/ +} icp_qat_fw_mmp_rsa_ep_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 2048 Decryption , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_DP1_2048. + */ +typedef struct icp_qat_fw_mmp_rsa_dp1_2048_output_s { + uint64_t m; /**< message representative, < n (32 qwords)*/ +} icp_qat_fw_mmp_rsa_dp1_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 2048 Decryption with CRT , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_DP2_2048. 
+ */ +typedef struct icp_qat_fw_mmp_rsa_dp2_2048_output_s { + uint64_t m; /**< message representative, < (p*q) (32 qwords)*/ +} icp_qat_fw_mmp_rsa_dp2_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 3072 key generation first form , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_KP1_3072. + */ +typedef struct icp_qat_fw_mmp_rsa_kp1_3072_output_s { + uint64_t n; /**< RSA key (48 qwords)*/ + uint64_t d; /**< RSA private key (48 qwords)*/ +} icp_qat_fw_mmp_rsa_kp1_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 3072 key generation second form , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_KP2_3072. + */ +typedef struct icp_qat_fw_mmp_rsa_kp2_3072_output_s { + uint64_t n; /**< RSA key (48 qwords)*/ + uint64_t d; /**< RSA private key (48 qwords)*/ + uint64_t dp; /**< RSA private key (24 qwords)*/ + uint64_t dq; /**< RSA private key (24 qwords)*/ + uint64_t qinv; /**< RSA private key (24 qwords)*/ +} icp_qat_fw_mmp_rsa_kp2_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 3072 Encryption , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_EP_3072. + */ +typedef struct icp_qat_fw_mmp_rsa_ep_3072_output_s { + uint64_t c; /**< cipher text representative, < n (48 qwords)*/ +} icp_qat_fw_mmp_rsa_ep_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 3072 Decryption , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_DP1_3072. + */ +typedef struct icp_qat_fw_mmp_rsa_dp1_3072_output_s { + uint64_t m; /**< message representative, < n (48 qwords)*/ +} icp_qat_fw_mmp_rsa_dp1_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 3072 Decryption with CRT , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_DP2_3072. 
+ */ +typedef struct icp_qat_fw_mmp_rsa_dp2_3072_output_s { + uint64_t m; /**< message representative, < (p*q) (48 qwords)*/ +} icp_qat_fw_mmp_rsa_dp2_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 4096 key generation first form , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_KP1_4096. + */ +typedef struct icp_qat_fw_mmp_rsa_kp1_4096_output_s { + uint64_t n; /**< RSA key (64 qwords)*/ + uint64_t d; /**< RSA private key (64 qwords)*/ +} icp_qat_fw_mmp_rsa_kp1_4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 4096 key generation second form , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_KP2_4096. + */ +typedef struct icp_qat_fw_mmp_rsa_kp2_4096_output_s { + uint64_t n; /**< RSA key (64 qwords)*/ + uint64_t d; /**< RSA private key (64 qwords)*/ + uint64_t dp; /**< RSA private key (32 qwords)*/ + uint64_t dq; /**< RSA private key (32 qwords)*/ + uint64_t qinv; /**< RSA private key (32 qwords)*/ +} icp_qat_fw_mmp_rsa_kp2_4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 4096 Encryption , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_EP_4096. + */ +typedef struct icp_qat_fw_mmp_rsa_ep_4096_output_s { + uint64_t c; /**< cipher text representative, < n (64 qwords)*/ +} icp_qat_fw_mmp_rsa_ep_4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 4096 Decryption , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_DP1_4096. + */ +typedef struct icp_qat_fw_mmp_rsa_dp1_4096_output_s { + uint64_t m; /**< message representative, < n (64 qwords)*/ +} icp_qat_fw_mmp_rsa_dp1_4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for RSA 4096 Decryption with CRT , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_RSA_DP2_4096. 
+ */ +typedef struct icp_qat_fw_mmp_rsa_dp2_4096_output_s { + uint64_t m; /**< message representative, < (p*q) (64 qwords)*/ +} icp_qat_fw_mmp_rsa_dp2_4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for GCD primality test for 192-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_GCD_PT_192. + */ +typedef struct icp_qat_fw_mmp_gcd_pt_192_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_gcd_pt_192_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for GCD primality test for 256-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_GCD_PT_256. + */ +typedef struct icp_qat_fw_mmp_gcd_pt_256_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_gcd_pt_256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for GCD primality test for 384-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_GCD_PT_384. + */ +typedef struct icp_qat_fw_mmp_gcd_pt_384_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_gcd_pt_384_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for GCD primality test for 512-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_GCD_PT_512. + */ +typedef struct icp_qat_fw_mmp_gcd_pt_512_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_gcd_pt_512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for GCD primality test for 768-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_GCD_PT_768. 
+ */ +typedef struct icp_qat_fw_mmp_gcd_pt_768_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_gcd_pt_768_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for GCD primality test for 1024-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_GCD_PT_1024. + */ +typedef struct icp_qat_fw_mmp_gcd_pt_1024_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_gcd_pt_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for GCD primality test for 1536-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_GCD_PT_1536. + */ +typedef struct icp_qat_fw_mmp_gcd_pt_1536_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_gcd_pt_1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for GCD primality test for 2048-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_GCD_PT_2048. + */ +typedef struct icp_qat_fw_mmp_gcd_pt_2048_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_gcd_pt_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for GCD primality test for 3072-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_GCD_PT_3072. + */ +typedef struct icp_qat_fw_mmp_gcd_pt_3072_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_gcd_pt_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for GCD primality test for 4096-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_GCD_PT_4096. 
+ */ +typedef struct icp_qat_fw_mmp_gcd_pt_4096_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_gcd_pt_4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Fermat primality test for 160-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_FERMAT_PT_160. + */ +typedef struct icp_qat_fw_mmp_fermat_pt_160_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_fermat_pt_160_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Fermat primality test for 512-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_FERMAT_PT_512. + */ +typedef struct icp_qat_fw_mmp_fermat_pt_512_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_fermat_pt_512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Fermat primality test for ≤ 512-bit numbers + * , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_FERMAT_PT_L512. + */ +typedef struct icp_qat_fw_mmp_fermat_pt_l512_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_fermat_pt_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Fermat primality test for 768-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_FERMAT_PT_768. + */ +typedef struct icp_qat_fw_mmp_fermat_pt_768_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_fermat_pt_768_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Fermat primality test for 1024-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_FERMAT_PT_1024. 
+ */ +typedef struct icp_qat_fw_mmp_fermat_pt_1024_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_fermat_pt_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Fermat primality test for 1536-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_FERMAT_PT_1536. + */ +typedef struct icp_qat_fw_mmp_fermat_pt_1536_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_fermat_pt_1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Fermat primality test for 2048-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_FERMAT_PT_2048. + */ +typedef struct icp_qat_fw_mmp_fermat_pt_2048_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_fermat_pt_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Fermat primality test for 3072-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_FERMAT_PT_3072. + */ +typedef struct icp_qat_fw_mmp_fermat_pt_3072_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_fermat_pt_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Fermat primality test for 4096-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_FERMAT_PT_4096. + */ +typedef struct icp_qat_fw_mmp_fermat_pt_4096_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_fermat_pt_4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Miller-Rabin primality test for 160-bit numbers + * , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_MR_PT_160. 
+ */ +typedef struct icp_qat_fw_mmp_mr_pt_160_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_mr_pt_160_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Miller-Rabin primality test for 512-bit numbers + * , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_MR_PT_512. + */ +typedef struct icp_qat_fw_mmp_mr_pt_512_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_mr_pt_512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Miller-Rabin primality test for 768-bit numbers + * , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_MR_PT_768. + */ +typedef struct icp_qat_fw_mmp_mr_pt_768_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_mr_pt_768_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Miller-Rabin primality test for 1024-bit numbers + * , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_MR_PT_1024. + */ +typedef struct icp_qat_fw_mmp_mr_pt_1024_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_mr_pt_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Miller-Rabin primality test for 1536-bit numbers + * , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_MR_PT_1536. + */ +typedef struct icp_qat_fw_mmp_mr_pt_1536_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_mr_pt_1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Miller-Rabin primality test for 2048-bit numbers + * , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_MR_PT_2048. 
+ */ +typedef struct icp_qat_fw_mmp_mr_pt_2048_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_mr_pt_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Miller-Rabin primality test for 3072-bit numbers + * , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_MR_PT_3072. + */ +typedef struct icp_qat_fw_mmp_mr_pt_3072_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_mr_pt_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Miller-Rabin primality test for 4096-bit numbers + * , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_MR_PT_4096. + */ +typedef struct icp_qat_fw_mmp_mr_pt_4096_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_mr_pt_4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Miller-Rabin primality test for ≤ 512-bit numbers + * , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_MR_PT_L512. + */ +typedef struct icp_qat_fw_mmp_mr_pt_l512_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_mr_pt_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Lucas primality test for 160-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_LUCAS_PT_160. + */ +typedef struct icp_qat_fw_mmp_lucas_pt_160_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_lucas_pt_160_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Lucas primality test for 512-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_LUCAS_PT_512. 
+ */ +typedef struct icp_qat_fw_mmp_lucas_pt_512_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_lucas_pt_512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Lucas primality test for 768-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_LUCAS_PT_768. + */ +typedef struct icp_qat_fw_mmp_lucas_pt_768_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_lucas_pt_768_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Lucas primality test for 1024-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_LUCAS_PT_1024. + */ +typedef struct icp_qat_fw_mmp_lucas_pt_1024_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_lucas_pt_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Lucas primality test for 1536-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_LUCAS_PT_1536. + */ +typedef struct icp_qat_fw_mmp_lucas_pt_1536_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_lucas_pt_1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Lucas primality test for 2048-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_LUCAS_PT_2048. + */ +typedef struct icp_qat_fw_mmp_lucas_pt_2048_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_lucas_pt_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Lucas primality test for 3072-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_LUCAS_PT_3072. 
+ */ +typedef struct icp_qat_fw_mmp_lucas_pt_3072_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_lucas_pt_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Lucas primality test for 4096-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_LUCAS_PT_4096. + */ +typedef struct icp_qat_fw_mmp_lucas_pt_4096_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_lucas_pt_4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Lucas primality test for ≤ 512-bit numbers , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_LUCAS_PT_L512. + */ +typedef struct icp_qat_fw_mmp_lucas_pt_l512_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_lucas_pt_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular exponentiation for numbers less than + * 512-bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODEXP_L512. + */ +typedef struct icp_qat_fw_maths_modexp_l512_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (8 + qwords)*/ +} icp_qat_fw_maths_modexp_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular exponentiation for numbers less than + * 1024-bit , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODEXP_L1024. + */ +typedef struct icp_qat_fw_maths_modexp_l1024_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (16 + qwords)*/ +} icp_qat_fw_maths_modexp_l1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular exponentiation for numbers less than + * 1536-bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODEXP_L1536. 
+ */ +typedef struct icp_qat_fw_maths_modexp_l1536_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (24 + qwords)*/ +} icp_qat_fw_maths_modexp_l1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular exponentiation for numbers less than + * 2048-bit , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODEXP_L2048. + */ +typedef struct icp_qat_fw_maths_modexp_l2048_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (32 + qwords)*/ +} icp_qat_fw_maths_modexp_l2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular exponentiation for numbers less than + * 2560-bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODEXP_L2560. + */ +typedef struct icp_qat_fw_maths_modexp_l2560_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (40 + qwords)*/ +} icp_qat_fw_maths_modexp_l2560_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular exponentiation for numbers less than + * 3072-bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODEXP_L3072. + */ +typedef struct icp_qat_fw_maths_modexp_l3072_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (48 + qwords)*/ +} icp_qat_fw_maths_modexp_l3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular exponentiation for numbers less than + * 3584-bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODEXP_L3584. 
+ */ +typedef struct icp_qat_fw_maths_modexp_l3584_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (56 + qwords)*/ +} icp_qat_fw_maths_modexp_l3584_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular exponentiation for numbers less than + * 4096-bit , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODEXP_L4096. + */ +typedef struct icp_qat_fw_maths_modexp_l4096_output_s { + uint64_t r; /**< modular exponentiation result ≥ 0 and < m (64 + qwords)*/ +} icp_qat_fw_maths_modexp_l4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 128 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_ODD_L128. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l128_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (2 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l128_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 192 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_ODD_L192. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l192_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (3 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l192_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 256 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_ODD_L256. 
+ */ +typedef struct icp_qat_fw_maths_modinv_odd_l256_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (4 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 384 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_ODD_L384. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l384_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (6 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l384_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 512 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_ODD_L512. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l512_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (8 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 768 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_ODD_L768. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l768_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (12 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l768_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 1024 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_ODD_L1024. 
+ */ +typedef struct icp_qat_fw_maths_modinv_odd_l1024_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (16 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 1536 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_ODD_L1536. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l1536_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (24 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 2048 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_ODD_L2048. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l2048_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (32 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 3072 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_ODD_L3072. + */ +typedef struct icp_qat_fw_maths_modinv_odd_l3072_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (48 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 4096 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_ODD_L4096. 
+ */ +typedef struct icp_qat_fw_maths_modinv_odd_l4096_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (64 + qwords)*/ +} icp_qat_fw_maths_modinv_odd_l4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 128 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_EVEN_L128. + */ +typedef struct icp_qat_fw_maths_modinv_even_l128_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (2 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l128_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 192 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_EVEN_L192. + */ +typedef struct icp_qat_fw_maths_modinv_even_l192_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (3 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l192_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 256 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_EVEN_L256. + */ +typedef struct icp_qat_fw_maths_modinv_even_l256_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (4 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 384 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_EVEN_L384. 
+ */ +typedef struct icp_qat_fw_maths_modinv_even_l384_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (6 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l384_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 512 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_EVEN_L512. + */ +typedef struct icp_qat_fw_maths_modinv_even_l512_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (8 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 768 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_EVEN_L768. + */ +typedef struct icp_qat_fw_maths_modinv_even_l768_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (12 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l768_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 1024 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_EVEN_L1024. + */ +typedef struct icp_qat_fw_maths_modinv_even_l1024_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (16 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 1536 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_EVEN_L1536. 
+ */ +typedef struct icp_qat_fw_maths_modinv_even_l1536_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (24 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l1536_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 2048 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_EVEN_L2048. + */ +typedef struct icp_qat_fw_maths_modinv_even_l2048_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (32 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 3072 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_EVEN_L3072. + */ +typedef struct icp_qat_fw_maths_modinv_even_l3072_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (48 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for Modular multiplicative inverse for numbers less + * than 4096 bits , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_MODINV_EVEN_L4096. + */ +typedef struct icp_qat_fw_maths_modinv_even_l4096_output_s { + uint64_t + c; /**< modular multiplicative inverse of a, > 0 and < b (64 + qwords)*/ +} icp_qat_fw_maths_modinv_even_l4096_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA parameter generation P , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_GEN_P_1024_160. 
+ */ +typedef struct icp_qat_fw_mmp_dsa_gen_p_1024_160_output_s { + uint64_t p; /**< candidate for DSA parameter p (16 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_p_1024_160_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA key generation G , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_GEN_G_1024. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_g_1024_output_s { + uint64_t g; /**< DSA parameter (16 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_g_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA key generation Y , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_GEN_Y_1024. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_y_1024_output_s { + uint64_t y; /**< DSA parameter (16 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_y_1024_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Sign R , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_SIGN_R_1024_160. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_1024_160_output_s { + uint64_t r; /**< DSA 160-bits signature (3 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_1024_160_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Sign S , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_SIGN_S_160. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_s_160_output_s { + uint64_t s; /**< s DSA 160-bits signature (3 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_s_160_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Sign R S , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_SIGN_R_S_1024_160. 
+ */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_s_1024_160_output_s { + uint64_t r; /**< DSA 160-bits signature (3 qwords)*/ + uint64_t s; /**< DSA 160-bits signature (3 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_s_1024_160_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Verify , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_VERIFY_1024_160. + */ +typedef struct icp_qat_fw_mmp_dsa_verify_1024_160_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_dsa_verify_1024_160_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA parameter generation P , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_GEN_P_2048_224. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_p_2048_224_output_s { + uint64_t p; /**< candidate for DSA parameter p (32 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_p_2048_224_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA key generation Y , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_GEN_Y_2048. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_y_2048_output_s { + uint64_t y; /**< DSA parameter (32 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_y_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Sign R , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_SIGN_R_2048_224. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_2048_224_output_s { + uint64_t r; /**< DSA 224-bits signature (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_2048_224_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Sign S , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_SIGN_S_224. 
+ */ +typedef struct icp_qat_fw_mmp_dsa_sign_s_224_output_s { + uint64_t s; /**< s DSA 224-bits signature (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_s_224_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Sign R S , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_SIGN_R_S_2048_224. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_s_2048_224_output_s { + uint64_t r; /**< DSA 224-bits signature (4 qwords)*/ + uint64_t s; /**< DSA 224-bits signature (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_s_2048_224_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Verify , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_VERIFY_2048_224. + */ +typedef struct icp_qat_fw_mmp_dsa_verify_2048_224_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_dsa_verify_2048_224_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA parameter generation P , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_GEN_P_2048_256. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_p_2048_256_output_s { + uint64_t p; /**< candidate for DSA parameter p (32 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_p_2048_256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA key generation G , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_GEN_G_2048. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_g_2048_output_s { + uint64_t g; /**< DSA parameter (32 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_g_2048_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Sign R , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_SIGN_R_2048_256. 
+ */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_2048_256_output_s { + uint64_t r; /**< DSA 256-bits signature (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_2048_256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Sign S , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_SIGN_S_256. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_s_256_output_s { + uint64_t s; /**< s DSA 256-bits signature (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_s_256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Sign R S , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_SIGN_R_S_2048_256. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_s_2048_256_output_s { + uint64_t r; /**< DSA 256-bits signature (4 qwords)*/ + uint64_t s; /**< DSA 256-bits signature (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_s_2048_256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Verify , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_VERIFY_2048_256. + */ +typedef struct icp_qat_fw_mmp_dsa_verify_2048_256_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_dsa_verify_2048_256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA parameter generation P , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_GEN_P_3072_256. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_p_3072_256_output_s { + uint64_t p; /**< candidate for DSA parameter p (48 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_p_3072_256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA key generation G , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_GEN_G_3072. 
+ */ +typedef struct icp_qat_fw_mmp_dsa_gen_g_3072_output_s { + uint64_t g; /**< DSA parameter (48 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_g_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA key generation Y , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_GEN_Y_3072. + */ +typedef struct icp_qat_fw_mmp_dsa_gen_y_3072_output_s { + uint64_t y; /**< DSA parameter (48 qwords)*/ +} icp_qat_fw_mmp_dsa_gen_y_3072_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Sign R , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_SIGN_R_3072_256. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_3072_256_output_s { + uint64_t r; /**< DSA 256-bits signature (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_3072_256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Sign R S , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_SIGN_R_S_3072_256. + */ +typedef struct icp_qat_fw_mmp_dsa_sign_r_s_3072_256_output_s { + uint64_t r; /**< DSA 256-bits signature (4 qwords)*/ + uint64_t s; /**< DSA 256-bits signature (4 qwords)*/ +} icp_qat_fw_mmp_dsa_sign_r_s_3072_256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for DSA Verify , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_DSA_VERIFY_3072_256. + */ +typedef struct icp_qat_fw_mmp_dsa_verify_3072_256_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_dsa_verify_3072_256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA Sign RS for curves B/K-163 and B/K-233 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_RS_GF2_L256. 
+ */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l256_output_s { + uint64_t r; /**< ECDSA signature r > 0 and < n (4 qwords)*/ + uint64_t s; /**< ECDSA signature s > 0 and < n (4 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA Sign R for curves B/K-163 and B/K-233 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_R_GF2_L256. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_output_s { + uint64_t r; /**< ECDSA signature r > 0 and < n (4 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA Sign S for curves with n < 2^256 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_S_GF2_L256. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_s_gf2_l256_output_s { + uint64_t s; /**< ECDSA signature s > 0 and < n (4 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_s_gf2_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA Verify for curves B/K-163 and B/K-233 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_VERIFY_GF2_L256. + */ +typedef struct icp_qat_fw_mmp_ecdsa_verify_gf2_l256_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_ecdsa_verify_gf2_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA Sign RS , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_RS_GF2_L512. 
+ */
+typedef struct icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l512_output_s {
+	uint64_t r; /**< ECDSA signature r > 0 and < n (8 qwords)*/
+	uint64_t s; /**< ECDSA signature s > 0 and < n (8 qwords)*/
+} icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l512_output_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Output parameter list for ECDSA GF2 Sign R ,
+ *      to be used when icp_qat_fw_pke_response_s::functionalityId is
+ * #PKE_ECDSA_SIGN_R_GF2_L512.
+ */
+typedef struct icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_output_s {
+	uint64_t r; /**< ECDSA signature r > 0 and < n (8 qwords)*/
+} icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_output_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Output parameter list for ECDSA GF2 Sign S ,
+ *      to be used when icp_qat_fw_pke_response_s::functionalityId is
+ * #PKE_ECDSA_SIGN_S_GF2_L512.
+ */
+typedef struct icp_qat_fw_mmp_ecdsa_sign_s_gf2_l512_output_s {
+	uint64_t s; /**< ECDSA signature s > 0 and < n (8 qwords)*/
+} icp_qat_fw_mmp_ecdsa_sign_s_gf2_l512_output_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Output parameter list for ECDSA GF2 Verify ,
+ *      to be used when icp_qat_fw_pke_response_s::functionalityId is
+ * #PKE_ECDSA_VERIFY_GF2_L512.
+ */
+typedef struct icp_qat_fw_mmp_ecdsa_verify_gf2_l512_output_s {
+	/* no output parameters */
+} icp_qat_fw_mmp_ecdsa_verify_gf2_l512_output_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Output parameter list for ECDSA GF2 Sign RS for curves B-571/K-571 ,
+ *      to be used when icp_qat_fw_pke_response_s::functionalityId is
+ * #PKE_ECDSA_SIGN_RS_GF2_571.
+ */
+typedef struct icp_qat_fw_mmp_ecdsa_sign_rs_gf2_571_output_s {
+	uint64_t r; /**< ECDSA signature r > 0 and < n (9 qwords)*/
+	uint64_t s; /**< ECDSA signature s > 0 and < n (9 qwords)*/
+} icp_qat_fw_mmp_ecdsa_sign_rs_gf2_571_output_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Output parameter list for ECDSA GF2 Sign S for curves with deg(q) < 576
+ * ,
+ *      to be used when icp_qat_fw_pke_response_s::functionalityId is
+ * #PKE_ECDSA_SIGN_S_GF2_571.
+ */
+typedef struct icp_qat_fw_mmp_ecdsa_sign_s_gf2_571_output_s {
+	uint64_t s; /**< ECDSA signature s > 0 and < n (9 qwords)*/
+} icp_qat_fw_mmp_ecdsa_sign_s_gf2_571_output_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Output parameter list for ECDSA GF2 Sign R for degree 571 ,
+ *      to be used when icp_qat_fw_pke_response_s::functionalityId is
+ * #PKE_ECDSA_SIGN_R_GF2_571.
+ */
+typedef struct icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_output_s {
+	uint64_t r; /**< ECDSA signature r > 0 and < n (9 qwords)*/
+} icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_output_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Output parameter list for ECDSA GF2 Verify for degree 571 ,
+ *      to be used when icp_qat_fw_pke_response_s::functionalityId is
+ * #PKE_ECDSA_VERIFY_GF2_571.
+ */
+typedef struct icp_qat_fw_mmp_ecdsa_verify_gf2_571_output_s {
+	/* no output parameters */
+} icp_qat_fw_mmp_ecdsa_verify_gf2_571_output_t;
+
+/**
+ * @ingroup icp_qat_fw_mmp
+ * @brief
+ *    Output parameter list for MATHS GF2 Point Multiplication ,
+ *      to be used when icp_qat_fw_pke_response_s::functionalityId is
+ * #MATHS_POINT_MULTIPLICATION_GF2_L256.
+ */ +typedef struct icp_qat_fw_maths_point_multiplication_gf2_l256_output_s { + uint64_t xk; /**< x coordinate of resultant point (< degree(q)) (4 + qwords)*/ + uint64_t yk; /**< y coordinate of resultant point (< degree(q)) (4 + qwords)*/ +} icp_qat_fw_maths_point_multiplication_gf2_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for MATHS GF2 Point Verification , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_POINT_VERIFY_GF2_L256. + */ +typedef struct icp_qat_fw_maths_point_verify_gf2_l256_output_s { + /* no output parameters */ +} icp_qat_fw_maths_point_verify_gf2_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for MATHS GF2 Point Multiplication , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_POINT_MULTIPLICATION_GF2_L512. + */ +typedef struct icp_qat_fw_maths_point_multiplication_gf2_l512_output_s { + uint64_t xk; /**< x coordinate of resultant point (< q) (8 qwords)*/ + uint64_t yk; /**< y coordinate of resultant point (< q) (8 qwords)*/ +} icp_qat_fw_maths_point_multiplication_gf2_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for MATHS GF2 Point Verification , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_POINT_VERIFY_GF2_L512. + */ +typedef struct icp_qat_fw_maths_point_verify_gf2_l512_output_s { + /* no output parameters */ +} icp_qat_fw_maths_point_verify_gf2_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC GF2 Point Multiplication for curves + * B-571/K-571 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_POINT_MULTIPLICATION_GF2_571. 
+ */ +typedef struct icp_qat_fw_maths_point_multiplication_gf2_571_output_s { + uint64_t xk; /**< x coordinate of resultant point (degree < + degree(q)) (9 qwords)*/ + uint64_t yk; /**< y coordinate of resultant point (degree < + degree(q)) (9 qwords)*/ +} icp_qat_fw_maths_point_multiplication_gf2_571_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC GF2 Point Verification for degree 571 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_POINT_VERIFY_GF2_571. + */ +typedef struct icp_qat_fw_maths_point_verify_gf2_571_output_s { + /* no output parameters */ +} icp_qat_fw_maths_point_verify_gf2_571_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA GFP Sign R , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_R_GFP_L256. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_output_s { + uint64_t r; /**< ECDSA signature (4 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA GFP Sign S , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_S_GFP_L256. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_s_gfp_l256_output_s { + uint64_t s; /**< ECDSA signature s (4 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_s_gfp_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA GFP Sign RS , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_RS_GFP_L256. 
+ */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l256_output_s { + uint64_t r; /**< ECDSA signature r (4 qwords)*/ + uint64_t s; /**< ECDSA signature s (4 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA GFP Verify , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_VERIFY_GFP_L256. + */ +typedef struct icp_qat_fw_mmp_ecdsa_verify_gfp_l256_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_ecdsa_verify_gfp_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA GFP Sign R , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_R_GFP_L512. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_output_s { + uint64_t r; /**< ECDSA signature (8 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA GFP Sign S , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_S_GFP_L512. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_s_gfp_l512_output_s { + uint64_t s; /**< ECDSA signature s (8 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_s_gfp_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA GFP Sign RS , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_RS_GFP_L512. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l512_output_s { + uint64_t r; /**< ECDSA signature r (8 qwords)*/ + uint64_t s; /**< ECDSA signature s (8 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA GFP Verify , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_VERIFY_GFP_L512. 
+ */ +typedef struct icp_qat_fw_mmp_ecdsa_verify_gfp_l512_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_ecdsa_verify_gfp_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA GFP Sign R , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_R_GFP_521. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_output_s { + uint64_t r; /**< ECDSA signature (9 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA GFP Sign S , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_S_GFP_521. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_s_gfp_521_output_s { + uint64_t s; /**< ECDSA signature s (9 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_s_gfp_521_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA GFP Sign RS , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_SIGN_RS_GFP_521. + */ +typedef struct icp_qat_fw_mmp_ecdsa_sign_rs_gfp_521_output_s { + uint64_t r; /**< ECDSA signature r (9 qwords)*/ + uint64_t s; /**< ECDSA signature s (9 qwords)*/ +} icp_qat_fw_mmp_ecdsa_sign_rs_gfp_521_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECDSA GFP Verify , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #PKE_ECDSA_VERIFY_GFP_521. + */ +typedef struct icp_qat_fw_mmp_ecdsa_verify_gfp_521_output_s { + /* no output parameters */ +} icp_qat_fw_mmp_ecdsa_verify_gfp_521_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC GFP Point Multiplication , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_POINT_MULTIPLICATION_GFP_L256. 
+ */ +typedef struct icp_qat_fw_maths_point_multiplication_gfp_l256_output_s { + uint64_t xk; /**< x coordinate of resultant EC point (4 qwords)*/ + uint64_t yk; /**< y coordinate of resultant EC point (4 qwords)*/ +} icp_qat_fw_maths_point_multiplication_gfp_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC GFP Partial Point Verification , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_POINT_VERIFY_GFP_L256. + */ +typedef struct icp_qat_fw_maths_point_verify_gfp_l256_output_s { + /* no output parameters */ +} icp_qat_fw_maths_point_verify_gfp_l256_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC GFP Point Multiplication , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_POINT_MULTIPLICATION_GFP_L512. + */ +typedef struct icp_qat_fw_maths_point_multiplication_gfp_l512_output_s { + uint64_t xk; /**< x coordinate of resultant EC point (8 qwords)*/ + uint64_t yk; /**< y coordinate of resultant EC point (8 qwords)*/ +} icp_qat_fw_maths_point_multiplication_gfp_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC GFP Partial Point , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_POINT_VERIFY_GFP_L512. + */ +typedef struct icp_qat_fw_maths_point_verify_gfp_l512_output_s { + /* no output parameters */ +} icp_qat_fw_maths_point_verify_gfp_l512_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC GFP Point Multiplication , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_POINT_MULTIPLICATION_GFP_521. 
+ */ +typedef struct icp_qat_fw_maths_point_multiplication_gfp_521_output_s { + uint64_t xk; /**< x coordinate of resultant EC point (9 qwords)*/ + uint64_t yk; /**< y coordinate of resultant EC point (9 qwords)*/ +} icp_qat_fw_maths_point_multiplication_gfp_521_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC GFP Partial Point Verification , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #MATHS_POINT_VERIFY_GFP_521. + */ +typedef struct icp_qat_fw_maths_point_verify_gfp_521_output_s { + /* no output parameters */ +} icp_qat_fw_maths_point_verify_gfp_521_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC curve25519 Variable Point Multiplication + * [k]P(x), as specified in RFC7748 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #POINT_MULTIPLICATION_C25519. + */ +typedef struct icp_qat_fw_point_multiplication_c25519_output_s { + uint64_t + xr; /**< xR = Montgomery affine coordinate X of point [k]P (4 + qwords)*/ +} icp_qat_fw_point_multiplication_c25519_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC curve25519 Generator Point Multiplication + * [k]G(x), as specified in RFC7748 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #GENERATOR_MULTIPLICATION_C25519. + */ +typedef struct icp_qat_fw_generator_multiplication_c25519_output_s { + uint64_t + xr; /**< xR = Montgomery affine coordinate X of point [k]G (4 + qwords)*/ +} icp_qat_fw_generator_multiplication_c25519_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC edwards25519 Variable Point Multiplication + * [k]P, as specified in RFC8032 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #POINT_MULTIPLICATION_ED25519. 
+ */ +typedef struct icp_qat_fw_point_multiplication_ed25519_output_s { + uint64_t + xr; /**< xR = Twisted Edwards affine coordinate X of point [k]P (4 + qwords)*/ + uint64_t + yr; /**< yR = Twisted Edwards affine coordinate Y of point [k]P (4 + qwords)*/ +} icp_qat_fw_point_multiplication_ed25519_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC edwards25519 Generator Point Multiplication + * [k]G, as specified in RFC8032 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #GENERATOR_MULTIPLICATION_ED25519. + */ +typedef struct icp_qat_fw_generator_multiplication_ed25519_output_s { + uint64_t + xr; /**< xR = Twisted Edwards affine coordinate X of point [k]G (4 + qwords)*/ + uint64_t + yr; /**< yR = Twisted Edwards affine coordinate Y of point [k]G (4 + qwords)*/ +} icp_qat_fw_generator_multiplication_ed25519_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC curve448 Variable Point Multiplication + * [k]P(x), as specified in RFC7748 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #POINT_MULTIPLICATION_C448. + */ +typedef struct icp_qat_fw_point_multiplication_c448_output_s { + uint64_t + xr; /**< xR = Montgomery affine coordinate X of point [k]P (8 + qwords)*/ +} icp_qat_fw_point_multiplication_c448_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC curve448 Generator Point Multiplication + * [k]G(x), as specified in RFC7748 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #GENERATOR_MULTIPLICATION_C448. 
+ */ +typedef struct icp_qat_fw_generator_multiplication_c448_output_s { + uint64_t + xr; /**< xR = Montgomery affine coordinate X of point [k]G (8 + qwords)*/ +} icp_qat_fw_generator_multiplication_c448_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC edwards448 Variable Point Multiplication + * [k]P, as specified in RFC8032 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #POINT_MULTIPLICATION_ED448. + */ +typedef struct icp_qat_fw_point_multiplication_ed448_output_s { + uint64_t xr; /**< xR = Edwards affine coordinate X of point [k]P (8 + qwords)*/ + uint64_t yr; /**< yR = Edwards affine coordinate Y of point [k]P (8 + qwords)*/ +} icp_qat_fw_point_multiplication_ed448_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * Output parameter list for ECC edwards448 Generator Point Multiplication + * [k]P, as specified in RFC8032 , + * to be used when icp_qat_fw_pke_response_s::functionalityId is + * #GENERATOR_MULTIPLICATION_ED448. + */ +typedef struct icp_qat_fw_generator_multiplication_ed448_output_s { + uint64_t xr; /**< xR = Edwards affine coordinate X of point [k]G (8 + qwords)*/ + uint64_t yr; /**< yR = Edwards affine coordinate Y of point [k]G (8 + qwords)*/ +} icp_qat_fw_generator_multiplication_ed448_output_t; + +/** + * @ingroup icp_qat_fw_mmp + * @brief + * MMP output parameters + */ +typedef union icp_qat_fw_mmp_output_param_u { + /** Generic parameter structure : All members of this wrapper structure + * are pointers to large integers. 
+ */
+	uint64_t flat_array[ICP_QAT_FW_PKE_OUTPUT_COUNT_MAX];
+
+	/** Initialisation sequence */
+	icp_qat_fw_mmp_init_output_t mmp_init;
+
+	/** Diffie-Hellman Modular exponentiation base 2 for 768-bit numbers */
+	icp_qat_fw_mmp_dh_g2_768_output_t mmp_dh_g2_768;
+
+	/** Diffie-Hellman Modular exponentiation for 768-bit numbers */
+	icp_qat_fw_mmp_dh_768_output_t mmp_dh_768;
+
+	/** Diffie-Hellman Modular exponentiation base 2 for 1024-bit numbers */
+	icp_qat_fw_mmp_dh_g2_1024_output_t mmp_dh_g2_1024;
+
+	/** Diffie-Hellman Modular exponentiation for 1024-bit numbers */
+	icp_qat_fw_mmp_dh_1024_output_t mmp_dh_1024;
+
+	/** Diffie-Hellman Modular exponentiation base 2 for 1536-bit numbers */
+	icp_qat_fw_mmp_dh_g2_1536_output_t mmp_dh_g2_1536;
+
+	/** Diffie-Hellman Modular exponentiation for 1536-bit numbers */
+	icp_qat_fw_mmp_dh_1536_output_t mmp_dh_1536;
+
+	/** Diffie-Hellman Modular exponentiation base 2 for 2048-bit numbers */
+	icp_qat_fw_mmp_dh_g2_2048_output_t mmp_dh_g2_2048;
+
+	/** Diffie-Hellman Modular exponentiation for 2048-bit numbers */
+	icp_qat_fw_mmp_dh_2048_output_t mmp_dh_2048;
+
+	/** Diffie-Hellman Modular exponentiation base 2 for 3072-bit numbers */
+	icp_qat_fw_mmp_dh_g2_3072_output_t mmp_dh_g2_3072;
+
+	/** Diffie-Hellman Modular exponentiation for 3072-bit numbers */
+	icp_qat_fw_mmp_dh_3072_output_t mmp_dh_3072;
+
+	/** Diffie-Hellman Modular exponentiation base 2 for 4096-bit numbers */
+	icp_qat_fw_mmp_dh_g2_4096_output_t mmp_dh_g2_4096;
+
+	/** Diffie-Hellman Modular exponentiation for 4096-bit numbers */
+	icp_qat_fw_mmp_dh_4096_output_t mmp_dh_4096;
+
+	/** RSA 512 key generation first form */
+	icp_qat_fw_mmp_rsa_kp1_512_output_t mmp_rsa_kp1_512;
+
+	/** RSA 512 key generation second form */
+	icp_qat_fw_mmp_rsa_kp2_512_output_t mmp_rsa_kp2_512;
+
+	/** RSA 512 Encryption */
+	icp_qat_fw_mmp_rsa_ep_512_output_t mmp_rsa_ep_512;
+
+	/** RSA 512 Decryption */
+	icp_qat_fw_mmp_rsa_dp1_512_output_t mmp_rsa_dp1_512;
+
+	/** RSA 512
Decryption with CRT */ + icp_qat_fw_mmp_rsa_dp2_512_output_t mmp_rsa_dp2_512; + + /** RSA 1024 key generation first form */ + icp_qat_fw_mmp_rsa_kp1_1024_output_t mmp_rsa_kp1_1024; + + /** RSA 1024 key generation second form */ + icp_qat_fw_mmp_rsa_kp2_1024_output_t mmp_rsa_kp2_1024; + + /** RSA 1024 Encryption */ + icp_qat_fw_mmp_rsa_ep_1024_output_t mmp_rsa_ep_1024; + + /** RSA 1024 Decryption */ + icp_qat_fw_mmp_rsa_dp1_1024_output_t mmp_rsa_dp1_1024; + + /** RSA 1024 Decryption with CRT */ + icp_qat_fw_mmp_rsa_dp2_1024_output_t mmp_rsa_dp2_1024; + + /** RSA 1536 key generation first form */ + icp_qat_fw_mmp_rsa_kp1_1536_output_t mmp_rsa_kp1_1536; + + /** RSA 1536 key generation second form */ + icp_qat_fw_mmp_rsa_kp2_1536_output_t mmp_rsa_kp2_1536; + + /** RSA 1536 Encryption */ + icp_qat_fw_mmp_rsa_ep_1536_output_t mmp_rsa_ep_1536; + + /** RSA 1536 Decryption */ + icp_qat_fw_mmp_rsa_dp1_1536_output_t mmp_rsa_dp1_1536; + + /** RSA 1536 Decryption with CRT */ + icp_qat_fw_mmp_rsa_dp2_1536_output_t mmp_rsa_dp2_1536; + + /** RSA 2048 key generation first form */ + icp_qat_fw_mmp_rsa_kp1_2048_output_t mmp_rsa_kp1_2048; + + /** RSA 2048 key generation second form */ + icp_qat_fw_mmp_rsa_kp2_2048_output_t mmp_rsa_kp2_2048; + + /** RSA 2048 Encryption */ + icp_qat_fw_mmp_rsa_ep_2048_output_t mmp_rsa_ep_2048; + + /** RSA 2048 Decryption */ + icp_qat_fw_mmp_rsa_dp1_2048_output_t mmp_rsa_dp1_2048; + + /** RSA 2048 Decryption with CRT */ + icp_qat_fw_mmp_rsa_dp2_2048_output_t mmp_rsa_dp2_2048; + + /** RSA 3072 key generation first form */ + icp_qat_fw_mmp_rsa_kp1_3072_output_t mmp_rsa_kp1_3072; + + /** RSA 3072 key generation second form */ + icp_qat_fw_mmp_rsa_kp2_3072_output_t mmp_rsa_kp2_3072; + + /** RSA 3072 Encryption */ + icp_qat_fw_mmp_rsa_ep_3072_output_t mmp_rsa_ep_3072; + + /** RSA 3072 Decryption */ + icp_qat_fw_mmp_rsa_dp1_3072_output_t mmp_rsa_dp1_3072; + + /** RSA 3072 Decryption with CRT */ + icp_qat_fw_mmp_rsa_dp2_3072_output_t mmp_rsa_dp2_3072; + + /** 
RSA 4096 key generation first form */ + icp_qat_fw_mmp_rsa_kp1_4096_output_t mmp_rsa_kp1_4096; + + /** RSA 4096 key generation second form */ + icp_qat_fw_mmp_rsa_kp2_4096_output_t mmp_rsa_kp2_4096; + + /** RSA 4096 Encryption */ + icp_qat_fw_mmp_rsa_ep_4096_output_t mmp_rsa_ep_4096; + + /** RSA 4096 Decryption */ + icp_qat_fw_mmp_rsa_dp1_4096_output_t mmp_rsa_dp1_4096; + + /** RSA 4096 Decryption with CRT */ + icp_qat_fw_mmp_rsa_dp2_4096_output_t mmp_rsa_dp2_4096; + + /** GCD primality test for 192-bit numbers */ + icp_qat_fw_mmp_gcd_pt_192_output_t mmp_gcd_pt_192; + + /** GCD primality test for 256-bit numbers */ + icp_qat_fw_mmp_gcd_pt_256_output_t mmp_gcd_pt_256; + + /** GCD primality test for 384-bit numbers */ + icp_qat_fw_mmp_gcd_pt_384_output_t mmp_gcd_pt_384; + + /** GCD primality test for 512-bit numbers */ + icp_qat_fw_mmp_gcd_pt_512_output_t mmp_gcd_pt_512; + + /** GCD primality test for 768-bit numbers */ + icp_qat_fw_mmp_gcd_pt_768_output_t mmp_gcd_pt_768; + + /** GCD primality test for 1024-bit numbers */ + icp_qat_fw_mmp_gcd_pt_1024_output_t mmp_gcd_pt_1024; + + /** GCD primality test for 1536-bit numbers */ + icp_qat_fw_mmp_gcd_pt_1536_output_t mmp_gcd_pt_1536; + + /** GCD primality test for 2048-bit numbers */ + icp_qat_fw_mmp_gcd_pt_2048_output_t mmp_gcd_pt_2048; + + /** GCD primality test for 3072-bit numbers */ + icp_qat_fw_mmp_gcd_pt_3072_output_t mmp_gcd_pt_3072; + + /** GCD primality test for 4096-bit numbers */ + icp_qat_fw_mmp_gcd_pt_4096_output_t mmp_gcd_pt_4096; + + /** Fermat primality test for 160-bit numbers */ + icp_qat_fw_mmp_fermat_pt_160_output_t mmp_fermat_pt_160; + + /** Fermat primality test for 512-bit numbers */ + icp_qat_fw_mmp_fermat_pt_512_output_t mmp_fermat_pt_512; + + /** Fermat primality test for numbers less than 512 bits */ + icp_qat_fw_mmp_fermat_pt_l512_output_t mmp_fermat_pt_l512; + + /** Fermat primality test for 768-bit numbers */ + icp_qat_fw_mmp_fermat_pt_768_output_t mmp_fermat_pt_768; + + /** Fermat primality test
for 1024-bit numbers */ + icp_qat_fw_mmp_fermat_pt_1024_output_t mmp_fermat_pt_1024; + + /** Fermat primality test for 1536-bit numbers */ + icp_qat_fw_mmp_fermat_pt_1536_output_t mmp_fermat_pt_1536; + + /** Fermat primality test for 2048-bit numbers */ + icp_qat_fw_mmp_fermat_pt_2048_output_t mmp_fermat_pt_2048; + + /** Fermat primality test for 3072-bit numbers */ + icp_qat_fw_mmp_fermat_pt_3072_output_t mmp_fermat_pt_3072; + + /** Fermat primality test for 4096-bit numbers */ + icp_qat_fw_mmp_fermat_pt_4096_output_t mmp_fermat_pt_4096; + + /** Miller-Rabin primality test for 160-bit numbers */ + icp_qat_fw_mmp_mr_pt_160_output_t mmp_mr_pt_160; + + /** Miller-Rabin primality test for 512-bit numbers */ + icp_qat_fw_mmp_mr_pt_512_output_t mmp_mr_pt_512; + + /** Miller-Rabin primality test for 768-bit numbers */ + icp_qat_fw_mmp_mr_pt_768_output_t mmp_mr_pt_768; + + /** Miller-Rabin primality test for 1024-bit numbers */ + icp_qat_fw_mmp_mr_pt_1024_output_t mmp_mr_pt_1024; + + /** Miller-Rabin primality test for 1536-bit numbers */ + icp_qat_fw_mmp_mr_pt_1536_output_t mmp_mr_pt_1536; + + /** Miller-Rabin primality test for 2048-bit numbers */ + icp_qat_fw_mmp_mr_pt_2048_output_t mmp_mr_pt_2048; + + /** Miller-Rabin primality test for 3072-bit numbers */ + icp_qat_fw_mmp_mr_pt_3072_output_t mmp_mr_pt_3072; + + /** Miller-Rabin primality test for 4096-bit numbers */ + icp_qat_fw_mmp_mr_pt_4096_output_t mmp_mr_pt_4096; + + /** Miller-Rabin primality test for 512-bit numbers */ + icp_qat_fw_mmp_mr_pt_l512_output_t mmp_mr_pt_l512; + + /** Lucas primality test for 160-bit numbers */ + icp_qat_fw_mmp_lucas_pt_160_output_t mmp_lucas_pt_160; + + /** Lucas primality test for 512-bit numbers */ + icp_qat_fw_mmp_lucas_pt_512_output_t mmp_lucas_pt_512; + + /** Lucas primality test for 768-bit numbers */ + icp_qat_fw_mmp_lucas_pt_768_output_t mmp_lucas_pt_768; + + /** Lucas primality test for 1024-bit numbers */ + icp_qat_fw_mmp_lucas_pt_1024_output_t mmp_lucas_pt_1024; + + /** 
Lucas primality test for 1536-bit numbers */ + icp_qat_fw_mmp_lucas_pt_1536_output_t mmp_lucas_pt_1536; + + /** Lucas primality test for 2048-bit numbers */ + icp_qat_fw_mmp_lucas_pt_2048_output_t mmp_lucas_pt_2048; + + /** Lucas primality test for 3072-bit numbers */ + icp_qat_fw_mmp_lucas_pt_3072_output_t mmp_lucas_pt_3072; + + /** Lucas primality test for 4096-bit numbers */ + icp_qat_fw_mmp_lucas_pt_4096_output_t mmp_lucas_pt_4096; + + /** Lucas primality test for L512-bit numbers */ + icp_qat_fw_mmp_lucas_pt_l512_output_t mmp_lucas_pt_l512; + + /** Modular exponentiation for numbers less than 512-bits */ + icp_qat_fw_maths_modexp_l512_output_t maths_modexp_l512; + + /** Modular exponentiation for numbers less than 1024-bit */ + icp_qat_fw_maths_modexp_l1024_output_t maths_modexp_l1024; + + /** Modular exponentiation for numbers less than 1536-bits */ + icp_qat_fw_maths_modexp_l1536_output_t maths_modexp_l1536; + + /** Modular exponentiation for numbers less than 2048-bit */ + icp_qat_fw_maths_modexp_l2048_output_t maths_modexp_l2048; + + /** Modular exponentiation for numbers less than 2560-bits */ + icp_qat_fw_maths_modexp_l2560_output_t maths_modexp_l2560; + + /** Modular exponentiation for numbers less than 3072-bits */ + icp_qat_fw_maths_modexp_l3072_output_t maths_modexp_l3072; + + /** Modular exponentiation for numbers less than 3584-bits */ + icp_qat_fw_maths_modexp_l3584_output_t maths_modexp_l3584; + + /** Modular exponentiation for numbers less than 4096-bit */ + icp_qat_fw_maths_modexp_l4096_output_t maths_modexp_l4096; + + /** Modular multiplicative inverse for numbers less than 128 bits */ + icp_qat_fw_maths_modinv_odd_l128_output_t maths_modinv_odd_l128; + + /** Modular multiplicative inverse for numbers less than 192 bits */ + icp_qat_fw_maths_modinv_odd_l192_output_t maths_modinv_odd_l192; + + /** Modular multiplicative inverse for numbers less than 256 bits */ + icp_qat_fw_maths_modinv_odd_l256_output_t maths_modinv_odd_l256; + + /** Modular 
multiplicative inverse for numbers less than 384 bits */ + icp_qat_fw_maths_modinv_odd_l384_output_t maths_modinv_odd_l384; + + /** Modular multiplicative inverse for numbers less than 512 bits */ + icp_qat_fw_maths_modinv_odd_l512_output_t maths_modinv_odd_l512; + + /** Modular multiplicative inverse for numbers less than 768 bits */ + icp_qat_fw_maths_modinv_odd_l768_output_t maths_modinv_odd_l768; + + /** Modular multiplicative inverse for numbers less than 1024 bits */ + icp_qat_fw_maths_modinv_odd_l1024_output_t maths_modinv_odd_l1024; + + /** Modular multiplicative inverse for numbers less than 1536 bits */ + icp_qat_fw_maths_modinv_odd_l1536_output_t maths_modinv_odd_l1536; + + /** Modular multiplicative inverse for numbers less than 2048 bits */ + icp_qat_fw_maths_modinv_odd_l2048_output_t maths_modinv_odd_l2048; + + /** Modular multiplicative inverse for numbers less than 3072 bits */ + icp_qat_fw_maths_modinv_odd_l3072_output_t maths_modinv_odd_l3072; + + /** Modular multiplicative inverse for numbers less than 4096 bits */ + icp_qat_fw_maths_modinv_odd_l4096_output_t maths_modinv_odd_l4096; + + /** Modular multiplicative inverse for numbers less than 128 bits */ + icp_qat_fw_maths_modinv_even_l128_output_t maths_modinv_even_l128; + + /** Modular multiplicative inverse for numbers less than 192 bits */ + icp_qat_fw_maths_modinv_even_l192_output_t maths_modinv_even_l192; + + /** Modular multiplicative inverse for numbers less than 256 bits */ + icp_qat_fw_maths_modinv_even_l256_output_t maths_modinv_even_l256; + + /** Modular multiplicative inverse for numbers less than 384 bits */ + icp_qat_fw_maths_modinv_even_l384_output_t maths_modinv_even_l384; + + /** Modular multiplicative inverse for numbers less than 512 bits */ + icp_qat_fw_maths_modinv_even_l512_output_t maths_modinv_even_l512; + + /** Modular multiplicative inverse for numbers less than 768 bits */ + icp_qat_fw_maths_modinv_even_l768_output_t maths_modinv_even_l768; + + /** Modular 
multiplicative inverse for numbers less than 1024 bits */ + icp_qat_fw_maths_modinv_even_l1024_output_t maths_modinv_even_l1024; + + /** Modular multiplicative inverse for numbers less than 1536 bits */ + icp_qat_fw_maths_modinv_even_l1536_output_t maths_modinv_even_l1536; + + /** Modular multiplicative inverse for numbers less than 2048 bits */ + icp_qat_fw_maths_modinv_even_l2048_output_t maths_modinv_even_l2048; + + /** Modular multiplicative inverse for numbers less than 3072 bits */ + icp_qat_fw_maths_modinv_even_l3072_output_t maths_modinv_even_l3072; + + /** Modular multiplicative inverse for numbers less than 4096 bits */ + icp_qat_fw_maths_modinv_even_l4096_output_t maths_modinv_even_l4096; + + /** DSA parameter generation P */ + icp_qat_fw_mmp_dsa_gen_p_1024_160_output_t mmp_dsa_gen_p_1024_160; + + /** DSA key generation G */ + icp_qat_fw_mmp_dsa_gen_g_1024_output_t mmp_dsa_gen_g_1024; + + /** DSA key generation Y */ + icp_qat_fw_mmp_dsa_gen_y_1024_output_t mmp_dsa_gen_y_1024; + + /** DSA Sign R */ + icp_qat_fw_mmp_dsa_sign_r_1024_160_output_t mmp_dsa_sign_r_1024_160; + + /** DSA Sign S */ + icp_qat_fw_mmp_dsa_sign_s_160_output_t mmp_dsa_sign_s_160; + + /** DSA Sign R S */ + icp_qat_fw_mmp_dsa_sign_r_s_1024_160_output_t mmp_dsa_sign_r_s_1024_160; + + /** DSA Verify */ + icp_qat_fw_mmp_dsa_verify_1024_160_output_t mmp_dsa_verify_1024_160; + + /** DSA parameter generation P */ + icp_qat_fw_mmp_dsa_gen_p_2048_224_output_t mmp_dsa_gen_p_2048_224; + + /** DSA key generation Y */ + icp_qat_fw_mmp_dsa_gen_y_2048_output_t mmp_dsa_gen_y_2048; + + /** DSA Sign R */ + icp_qat_fw_mmp_dsa_sign_r_2048_224_output_t mmp_dsa_sign_r_2048_224; + + /** DSA Sign S */ + icp_qat_fw_mmp_dsa_sign_s_224_output_t mmp_dsa_sign_s_224; + + /** DSA Sign R S */ + icp_qat_fw_mmp_dsa_sign_r_s_2048_224_output_t mmp_dsa_sign_r_s_2048_224; + + /** DSA Verify */ + icp_qat_fw_mmp_dsa_verify_2048_224_output_t mmp_dsa_verify_2048_224; + + /** DSA parameter generation P */ + 
icp_qat_fw_mmp_dsa_gen_p_2048_256_output_t mmp_dsa_gen_p_2048_256; + + /** DSA key generation G */ + icp_qat_fw_mmp_dsa_gen_g_2048_output_t mmp_dsa_gen_g_2048; + + /** DSA Sign R */ + icp_qat_fw_mmp_dsa_sign_r_2048_256_output_t mmp_dsa_sign_r_2048_256; + + /** DSA Sign S */ + icp_qat_fw_mmp_dsa_sign_s_256_output_t mmp_dsa_sign_s_256; + + /** DSA Sign R S */ + icp_qat_fw_mmp_dsa_sign_r_s_2048_256_output_t mmp_dsa_sign_r_s_2048_256; + + /** DSA Verify */ + icp_qat_fw_mmp_dsa_verify_2048_256_output_t mmp_dsa_verify_2048_256; + + /** DSA parameter generation P */ + icp_qat_fw_mmp_dsa_gen_p_3072_256_output_t mmp_dsa_gen_p_3072_256; + + /** DSA key generation G */ + icp_qat_fw_mmp_dsa_gen_g_3072_output_t mmp_dsa_gen_g_3072; + + /** DSA key generation Y */ + icp_qat_fw_mmp_dsa_gen_y_3072_output_t mmp_dsa_gen_y_3072; + + /** DSA Sign R */ + icp_qat_fw_mmp_dsa_sign_r_3072_256_output_t mmp_dsa_sign_r_3072_256; + + /** DSA Sign R S */ + icp_qat_fw_mmp_dsa_sign_r_s_3072_256_output_t mmp_dsa_sign_r_s_3072_256; + + /** DSA Verify */ + icp_qat_fw_mmp_dsa_verify_3072_256_output_t mmp_dsa_verify_3072_256; + + /** ECDSA Sign RS for curves B/K-163 and B/K-233 */ + icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l256_output_t + mmp_ecdsa_sign_rs_gf2_l256; + + /** ECDSA Sign R for curves B/K-163 and B/K-233 */ + icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_output_t mmp_ecdsa_sign_r_gf2_l256; + + /** ECDSA Sign S for curves with n < 2^256 */ + icp_qat_fw_mmp_ecdsa_sign_s_gf2_l256_output_t mmp_ecdsa_sign_s_gf2_l256; + + /** ECDSA Verify for curves B/K-163 and B/K-233 */ + icp_qat_fw_mmp_ecdsa_verify_gf2_l256_output_t mmp_ecdsa_verify_gf2_l256; + + /** ECDSA Sign RS */ + icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l512_output_t + mmp_ecdsa_sign_rs_gf2_l512; + + /** ECDSA GF2 Sign R */ + icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_output_t mmp_ecdsa_sign_r_gf2_l512; + + /** ECDSA GF2 Sign S */ + icp_qat_fw_mmp_ecdsa_sign_s_gf2_l512_output_t mmp_ecdsa_sign_s_gf2_l512; + + /** ECDSA GF2 Verify */ + 
icp_qat_fw_mmp_ecdsa_verify_gf2_l512_output_t mmp_ecdsa_verify_gf2_l512; + + /** ECDSA GF2 Sign RS for curves B-571/K-571 */ + icp_qat_fw_mmp_ecdsa_sign_rs_gf2_571_output_t mmp_ecdsa_sign_rs_gf2_571; + + /** ECDSA GF2 Sign S for curves with deg(q) < 576 */ + icp_qat_fw_mmp_ecdsa_sign_s_gf2_571_output_t mmp_ecdsa_sign_s_gf2_571; + + /** ECDSA GF2 Sign R for degree 571 */ + icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_output_t mmp_ecdsa_sign_r_gf2_571; + + /** ECDSA GF2 Verify for degree 571 */ + icp_qat_fw_mmp_ecdsa_verify_gf2_571_output_t mmp_ecdsa_verify_gf2_571; + + /** MATHS GF2 Point Multiplication */ + icp_qat_fw_maths_point_multiplication_gf2_l256_output_t + maths_point_multiplication_gf2_l256; + + /** MATHS GF2 Point Verification */ + icp_qat_fw_maths_point_verify_gf2_l256_output_t + maths_point_verify_gf2_l256; + + /** MATHS GF2 Point Multiplication */ + icp_qat_fw_maths_point_multiplication_gf2_l512_output_t + maths_point_multiplication_gf2_l512; + + /** MATHS GF2 Point Verification */ + icp_qat_fw_maths_point_verify_gf2_l512_output_t + maths_point_verify_gf2_l512; + + /** ECC GF2 Point Multiplication for curves B-571/K-571 */ + icp_qat_fw_maths_point_multiplication_gf2_571_output_t + maths_point_multiplication_gf2_571; + + /** ECC GF2 Point Verification for degree 571 */ + icp_qat_fw_maths_point_verify_gf2_571_output_t + maths_point_verify_gf2_571; + + /** ECDSA GFP Sign R */ + icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_output_t mmp_ecdsa_sign_r_gfp_l256; + + /** ECDSA GFP Sign S */ + icp_qat_fw_mmp_ecdsa_sign_s_gfp_l256_output_t mmp_ecdsa_sign_s_gfp_l256; + + /** ECDSA GFP Sign RS */ + icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l256_output_t + mmp_ecdsa_sign_rs_gfp_l256; + + /** ECDSA GFP Verify */ + icp_qat_fw_mmp_ecdsa_verify_gfp_l256_output_t mmp_ecdsa_verify_gfp_l256; + + /** ECDSA GFP Sign R */ + icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_output_t mmp_ecdsa_sign_r_gfp_l512; + + /** ECDSA GFP Sign S */ + icp_qat_fw_mmp_ecdsa_sign_s_gfp_l512_output_t mmp_ecdsa_sign_s_gfp_l512; + + 
/** ECDSA GFP Sign RS */ + icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l512_output_t + mmp_ecdsa_sign_rs_gfp_l512; + + /** ECDSA GFP Verify */ + icp_qat_fw_mmp_ecdsa_verify_gfp_l512_output_t mmp_ecdsa_verify_gfp_l512; + + /** ECDSA GFP Sign R */ + icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_output_t mmp_ecdsa_sign_r_gfp_521; + + /** ECDSA GFP Sign S */ + icp_qat_fw_mmp_ecdsa_sign_s_gfp_521_output_t mmp_ecdsa_sign_s_gfp_521; + + /** ECDSA GFP Sign RS */ + icp_qat_fw_mmp_ecdsa_sign_rs_gfp_521_output_t mmp_ecdsa_sign_rs_gfp_521; + + /** ECDSA GFP Verify */ + icp_qat_fw_mmp_ecdsa_verify_gfp_521_output_t mmp_ecdsa_verify_gfp_521; + + /** ECC GFP Point Multiplication */ + icp_qat_fw_maths_point_multiplication_gfp_l256_output_t + maths_point_multiplication_gfp_l256; + + /** ECC GFP Partial Point Verification */ + icp_qat_fw_maths_point_verify_gfp_l256_output_t + maths_point_verify_gfp_l256; + + /** ECC GFP Point Multiplication */ + icp_qat_fw_maths_point_multiplication_gfp_l512_output_t + maths_point_multiplication_gfp_l512; + + /** ECC GFP Partial Point */ + icp_qat_fw_maths_point_verify_gfp_l512_output_t + maths_point_verify_gfp_l512; + + /** ECC GFP Point Multiplication */ + icp_qat_fw_maths_point_multiplication_gfp_521_output_t + maths_point_multiplication_gfp_521; + + /** ECC GFP Partial Point Verification */ + icp_qat_fw_maths_point_verify_gfp_521_output_t + maths_point_verify_gfp_521; + + /** ECC curve25519 Variable Point Multiplication [k]P(x), as specified + * in RFC7748 */ + icp_qat_fw_point_multiplication_c25519_output_t + point_multiplication_c25519; + + /** ECC curve25519 Generator Point Multiplication [k]G(x), as specified + * in RFC7748 */ + icp_qat_fw_generator_multiplication_c25519_output_t + generator_multiplication_c25519; + + /** ECC edwards25519 Variable Point Multiplication [k]P, as specified in + * RFC8032 */ + icp_qat_fw_point_multiplication_ed25519_output_t + point_multiplication_ed25519; + + /** ECC edwards25519 Generator Point Multiplication [k]G, as specified + * 
in RFC8032 */ + icp_qat_fw_generator_multiplication_ed25519_output_t + generator_multiplication_ed25519; + + /** ECC curve448 Variable Point Multiplication [k]P(x), as specified in + * RFC7748 */ + icp_qat_fw_point_multiplication_c448_output_t point_multiplication_c448; + + /** ECC curve448 Generator Point Multiplication [k]G(x), as specified in + * RFC7748 */ + icp_qat_fw_generator_multiplication_c448_output_t + generator_multiplication_c448; + + /** ECC edwards448 Variable Point Multiplication [k]P, as specified in + * RFC8032 */ + icp_qat_fw_point_multiplication_ed448_output_t + point_multiplication_ed448; + + /** ECC edwards448 Generator Point Multiplication [k]G, as specified in + * RFC8032 */ + icp_qat_fw_generator_multiplication_ed448_output_t + generator_multiplication_ed448; +} icp_qat_fw_mmp_output_param_t; + +#endif /* __ICP_QAT_FW_MMP__ */ + +/* --- (Automatically generated (build v. 2.7), do not modify manually) --- */ + +/* --- end of file --- */ Index: sys/dev/qat/qat_api/firmware/include/icp_qat_fw_mmp_ids.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/firmware/include/icp_qat_fw_mmp_ids.h @@ -0,0 +1,1555 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + * @file icp_qat_fw_mmp_ids.h + * @ingroup icp_qat_fw_mmp + * $Revision: 0.1 $ + * @brief + * This file documents the external interfaces that the QAT FW running + * on the QAT Acceleration Engine provides to clients wanting to + * accelerate crypto asymmetric applications + */ + +#ifndef __ICP_QAT_FW_MMP_IDS__ +#define __ICP_QAT_FW_MMP_IDS__ + +#define PKE_INIT 0x09061798 +/**< Functionality ID for Initialisation sequence + * @li 1 input parameters : @link icp_qat_fw_mmp_init_input_s::z z @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_init_output_s::zz zz @endlink + */ +#define PKE_DH_G2_768 0x1c0b1a10 +/**< Functionality ID for Diffie-Hellman Modular
exponentiation base 2 for + * 768-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_dh_g2_768_input_s::e e @endlink + * @link icp_qat_fw_mmp_dh_g2_768_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dh_g2_768_output_s::r r + * @endlink + */ +#define PKE_DH_768 0x210c1a1b +/**< Functionality ID for Diffie-Hellman Modular exponentiation for 768-bit + * numbers + * @li 3 input parameters : @link icp_qat_fw_mmp_dh_768_input_s::g g @endlink + * @link icp_qat_fw_mmp_dh_768_input_s::e e @endlink @link + * icp_qat_fw_mmp_dh_768_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dh_768_output_s::r r @endlink + */ +#define PKE_DH_G2_1024 0x220b1a27 +/**< Functionality ID for Diffie-Hellman Modular exponentiation base 2 for + * 1024-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_dh_g2_1024_input_s::e e + * @endlink @link icp_qat_fw_mmp_dh_g2_1024_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dh_g2_1024_output_s::r r + * @endlink + */ +#define PKE_DH_1024 0x290c1a32 +/**< Functionality ID for Diffie-Hellman Modular exponentiation for 1024-bit + * numbers + * @li 3 input parameters : @link icp_qat_fw_mmp_dh_1024_input_s::g g @endlink + * @link icp_qat_fw_mmp_dh_1024_input_s::e e @endlink @link + * icp_qat_fw_mmp_dh_1024_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dh_1024_output_s::r r @endlink + */ +#define PKE_DH_G2_1536 0x2e0b1a3e +/**< Functionality ID for Diffie-Hellman Modular exponentiation base 2 for + * 1536-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_dh_g2_1536_input_s::e e + * @endlink @link icp_qat_fw_mmp_dh_g2_1536_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dh_g2_1536_output_s::r r + * @endlink + */ +#define PKE_DH_1536 0x390c1a49 +/**< Functionality ID for Diffie-Hellman Modular exponentiation for 1536-bit + * numbers + * @li 3 input parameters : @link icp_qat_fw_mmp_dh_1536_input_s::g g 
@endlink + * @link icp_qat_fw_mmp_dh_1536_input_s::e e @endlink @link + * icp_qat_fw_mmp_dh_1536_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dh_1536_output_s::r r @endlink + */ +#define PKE_DH_G2_2048 0x3e0b1a55 +/**< Functionality ID for Diffie-Hellman Modular exponentiation base 2 for + * 2048-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_dh_g2_2048_input_s::e e + * @endlink @link icp_qat_fw_mmp_dh_g2_2048_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dh_g2_2048_output_s::r r + * @endlink + */ +#define PKE_DH_2048 0x4d0c1a60 +/**< Functionality ID for Diffie-Hellman Modular exponentiation for 2048-bit + * numbers + * @li 3 input parameters : @link icp_qat_fw_mmp_dh_2048_input_s::g g @endlink + * @link icp_qat_fw_mmp_dh_2048_input_s::e e @endlink @link + * icp_qat_fw_mmp_dh_2048_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dh_2048_output_s::r r @endlink + */ +#define PKE_DH_G2_3072 0x3a0b1a6c +/**< Functionality ID for Diffie-Hellman Modular exponentiation base 2 for + * 3072-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_dh_g2_3072_input_s::e e + * @endlink @link icp_qat_fw_mmp_dh_g2_3072_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dh_g2_3072_output_s::r r + * @endlink + */ +#define PKE_DH_3072 0x510c1a77 +/**< Functionality ID for Diffie-Hellman Modular exponentiation for 3072-bit + * numbers + * @li 3 input parameters : @link icp_qat_fw_mmp_dh_3072_input_s::g g @endlink + * @link icp_qat_fw_mmp_dh_3072_input_s::e e @endlink @link + * icp_qat_fw_mmp_dh_3072_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dh_3072_output_s::r r @endlink + */ +#define PKE_DH_G2_4096 0x4a0b1a83 +/**< Functionality ID for Diffie-Hellman Modular exponentiation base 2 for + * 4096-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_dh_g2_4096_input_s::e e + * @endlink @link icp_qat_fw_mmp_dh_g2_4096_input_s::m 
m @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dh_g2_4096_output_s::r r + * @endlink + */ +#define PKE_DH_4096 0x690c1a8e +/**< Functionality ID for Diffie-Hellman Modular exponentiation for 4096-bit + * numbers + * @li 3 input parameters : @link icp_qat_fw_mmp_dh_4096_input_s::g g @endlink + * @link icp_qat_fw_mmp_dh_4096_input_s::e e @endlink @link + * icp_qat_fw_mmp_dh_4096_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dh_4096_output_s::r r @endlink + */ +#define PKE_RSA_KP1_512 0x191d1a9a +/**< Functionality ID for RSA 512 key generation first form + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_kp1_512_input_s::p p + * @endlink @link icp_qat_fw_mmp_rsa_kp1_512_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_kp1_512_input_s::e e @endlink + * @li 2 output parameters : @link icp_qat_fw_mmp_rsa_kp1_512_output_s::n n + * @endlink @link icp_qat_fw_mmp_rsa_kp1_512_output_s::d d @endlink + */ +#define PKE_RSA_KP2_512 0x19401acc +/**< Functionality ID for RSA 512 key generation second form + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_kp2_512_input_s::p p + * @endlink @link icp_qat_fw_mmp_rsa_kp2_512_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_kp2_512_input_s::e e @endlink + * @li 5 output parameters : @link icp_qat_fw_mmp_rsa_kp2_512_output_s::n n + * @endlink @link icp_qat_fw_mmp_rsa_kp2_512_output_s::d d @endlink @link + * icp_qat_fw_mmp_rsa_kp2_512_output_s::dp dp @endlink @link + * icp_qat_fw_mmp_rsa_kp2_512_output_s::dq dq @endlink @link + * icp_qat_fw_mmp_rsa_kp2_512_output_s::qinv qinv @endlink + */ +#define PKE_RSA_EP_512 0x1c161b21 +/**< Functionality ID for RSA 512 Encryption + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_ep_512_input_s::m m + * @endlink @link icp_qat_fw_mmp_rsa_ep_512_input_s::e e @endlink @link + * icp_qat_fw_mmp_rsa_ep_512_input_s::n n @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_ep_512_output_s::c c + * @endlink + */ +#define PKE_RSA_DP1_512 
0x1c161b3c +/**< Functionality ID for RSA 512 Decryption + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_dp1_512_input_s::c c + * @endlink @link icp_qat_fw_mmp_rsa_dp1_512_input_s::d d @endlink @link + * icp_qat_fw_mmp_rsa_dp1_512_input_s::n n @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_dp1_512_output_s::m m + * @endlink + */ +#define PKE_RSA_DP2_512 0x1c131b57 +/**< Functionality ID for RSA 512 Decryption with CRT + * @li 6 input parameters : @link icp_qat_fw_mmp_rsa_dp2_512_input_s::c c + * @endlink @link icp_qat_fw_mmp_rsa_dp2_512_input_s::p p @endlink @link + * icp_qat_fw_mmp_rsa_dp2_512_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_dp2_512_input_s::dp dp @endlink @link + * icp_qat_fw_mmp_rsa_dp2_512_input_s::dq dq @endlink @link + * icp_qat_fw_mmp_rsa_dp2_512_input_s::qinv qinv @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_dp2_512_output_s::m m + * @endlink + */ +#define PKE_RSA_KP1_1024 0x36181b71 +/**< Functionality ID for RSA 1024 key generation first form + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_kp1_1024_input_s::p p + * @endlink @link icp_qat_fw_mmp_rsa_kp1_1024_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_kp1_1024_input_s::e e @endlink + * @li 2 output parameters : @link icp_qat_fw_mmp_rsa_kp1_1024_output_s::n n + * @endlink @link icp_qat_fw_mmp_rsa_kp1_1024_output_s::d d @endlink + */ +#define PKE_RSA_KP2_1024 0x40451b9e +/**< Functionality ID for RSA 1024 key generation second form + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_kp2_1024_input_s::p p + * @endlink @link icp_qat_fw_mmp_rsa_kp2_1024_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_kp2_1024_input_s::e e @endlink + * @li 5 output parameters : @link icp_qat_fw_mmp_rsa_kp2_1024_output_s::n n + * @endlink @link icp_qat_fw_mmp_rsa_kp2_1024_output_s::d d @endlink @link + * icp_qat_fw_mmp_rsa_kp2_1024_output_s::dp dp @endlink @link + * icp_qat_fw_mmp_rsa_kp2_1024_output_s::dq dq @endlink @link + *
icp_qat_fw_mmp_rsa_kp2_1024_output_s::qinv qinv @endlink + */ +#define PKE_RSA_EP_1024 0x35111bf7 +/**< Functionality ID for RSA 1024 Encryption + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_ep_1024_input_s::m m + * @endlink @link icp_qat_fw_mmp_rsa_ep_1024_input_s::e e @endlink @link + * icp_qat_fw_mmp_rsa_ep_1024_input_s::n n @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_ep_1024_output_s::c c + * @endlink + */ +#define PKE_RSA_DP1_1024 0x35111c12 +/**< Functionality ID for RSA 1024 Decryption + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_dp1_1024_input_s::c c + * @endlink @link icp_qat_fw_mmp_rsa_dp1_1024_input_s::d d @endlink @link + * icp_qat_fw_mmp_rsa_dp1_1024_input_s::n n @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_dp1_1024_output_s::m m + * @endlink + */ +#define PKE_RSA_DP2_1024 0x26131c2d +/**< Functionality ID for RSA 1024 Decryption with CRT + * @li 6 input parameters : @link icp_qat_fw_mmp_rsa_dp2_1024_input_s::c c + * @endlink @link icp_qat_fw_mmp_rsa_dp2_1024_input_s::p p @endlink @link + * icp_qat_fw_mmp_rsa_dp2_1024_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_dp2_1024_input_s::dp dp @endlink @link + * icp_qat_fw_mmp_rsa_dp2_1024_input_s::dq dq @endlink @link + * icp_qat_fw_mmp_rsa_dp2_1024_input_s::qinv qinv @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_dp2_1024_output_s::m m + * @endlink + */ +#define PKE_RSA_KP1_1536 0x531d1c46 +/**< Functionality ID for RSA 1536 key generation first form + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_kp1_1536_input_s::p p + * @endlink @link icp_qat_fw_mmp_rsa_kp1_1536_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_kp1_1536_input_s::e e @endlink + * @li 2 output parameters : @link icp_qat_fw_mmp_rsa_kp1_1536_output_s::n n + * @endlink @link icp_qat_fw_mmp_rsa_kp1_1536_output_s::d d @endlink + */ +#define PKE_RSA_KP2_1536 0x32391c78 +/**< Functionality ID for RSA 1536 key generation second form + * @li 3 input parameters : 
@link icp_qat_fw_mmp_rsa_kp2_1536_input_s::p p + * @endlink @link icp_qat_fw_mmp_rsa_kp2_1536_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_kp2_1536_input_s::e e @endlink + * @li 5 output parameters : @link icp_qat_fw_mmp_rsa_kp2_1536_output_s::n n + * @endlink @link icp_qat_fw_mmp_rsa_kp2_1536_output_s::d d @endlink @link + * icp_qat_fw_mmp_rsa_kp2_1536_output_s::dp dp @endlink @link + * icp_qat_fw_mmp_rsa_kp2_1536_output_s::dq dq @endlink @link + * icp_qat_fw_mmp_rsa_kp2_1536_output_s::qinv qinv @endlink + */ +#define PKE_RSA_EP_1536 0x4d111cdc +/**< Functionality ID for RSA 1536 Encryption + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_ep_1536_input_s::m m + * @endlink @link icp_qat_fw_mmp_rsa_ep_1536_input_s::e e @endlink @link + * icp_qat_fw_mmp_rsa_ep_1536_input_s::n n @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_ep_1536_output_s::c c + * @endlink + */ +#define PKE_RSA_DP1_1536 0x4d111cf7 +/**< Functionality ID for RSA 1536 Decryption + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_dp1_1536_input_s::c c + * @endlink @link icp_qat_fw_mmp_rsa_dp1_1536_input_s::d d @endlink @link + * icp_qat_fw_mmp_rsa_dp1_1536_input_s::n n @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_dp1_1536_output_s::m m + * @endlink + */ +#define PKE_RSA_DP2_1536 0x45111d12 +/**< Functionality ID for RSA 1536 Decryption with CRT + * @li 6 input parameters : @link icp_qat_fw_mmp_rsa_dp2_1536_input_s::c c + * @endlink @link icp_qat_fw_mmp_rsa_dp2_1536_input_s::p p @endlink @link + * icp_qat_fw_mmp_rsa_dp2_1536_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_dp2_1536_input_s::dp dp @endlink @link + * icp_qat_fw_mmp_rsa_dp2_1536_input_s::dq dq @endlink @link + * icp_qat_fw_mmp_rsa_dp2_1536_input_s::qinv qinv @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_dp2_1536_output_s::m m + * @endlink + */ +#define PKE_RSA_KP1_2048 0x72181d2e +/**< Functionality ID for RSA 2048 key generation first form + * @li 3 input parameters : 
@link icp_qat_fw_mmp_rsa_kp1_2048_input_s::p p + * @endlink @link icp_qat_fw_mmp_rsa_kp1_2048_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_kp1_2048_input_s::e e @endlink + * @li 2 output parameters : @link icp_qat_fw_mmp_rsa_kp1_2048_output_s::n n + * @endlink @link icp_qat_fw_mmp_rsa_kp1_2048_output_s::d d @endlink + */ +#define PKE_RSA_KP2_2048 0x42341d5b +/**< Functionality ID for RSA 2048 key generation second form + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_kp2_2048_input_s::p p + * @endlink @link icp_qat_fw_mmp_rsa_kp2_2048_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_kp2_2048_input_s::e e @endlink + * @li 5 output parameters : @link icp_qat_fw_mmp_rsa_kp2_2048_output_s::n n + * @endlink @link icp_qat_fw_mmp_rsa_kp2_2048_output_s::d d @endlink @link + * icp_qat_fw_mmp_rsa_kp2_2048_output_s::dp dp @endlink @link + * icp_qat_fw_mmp_rsa_kp2_2048_output_s::dq dq @endlink @link + * icp_qat_fw_mmp_rsa_kp2_2048_output_s::qinv qinv @endlink + */ +#define PKE_RSA_EP_2048 0x6e111dba +/**< Functionality ID for RSA 2048 Encryption + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_ep_2048_input_s::m m + * @endlink @link icp_qat_fw_mmp_rsa_ep_2048_input_s::e e @endlink @link + * icp_qat_fw_mmp_rsa_ep_2048_input_s::n n @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_ep_2048_output_s::c c + * @endlink + */ +#define PKE_RSA_DP1_2048 0x6e111dda +/**< Functionality ID for RSA 2048 Decryption + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_dp1_2048_input_s::c c + * @endlink @link icp_qat_fw_mmp_rsa_dp1_2048_input_s::d d @endlink @link + * icp_qat_fw_mmp_rsa_dp1_2048_input_s::n n @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_dp1_2048_output_s::m m + * @endlink + */ +#define PKE_RSA_DP2_2048 0x59121dfa +/**< Functionality ID for RSA 2048 Decryption with CRT + * @li 6 input parameters : @link icp_qat_fw_mmp_rsa_dp2_2048_input_s::c c + * @endlink @link icp_qat_fw_mmp_rsa_dp2_2048_input_s::p p @endlink @link + * 
icp_qat_fw_mmp_rsa_dp2_2048_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_dp2_2048_input_s::dp dp @endlink @link + * icp_qat_fw_mmp_rsa_dp2_2048_input_s::dq dq @endlink @link + * icp_qat_fw_mmp_rsa_dp2_2048_input_s::qinv qinv @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_dp2_2048_output_s::m m + * @endlink + */ +#define PKE_RSA_KP1_3072 0x60191e16 +/**< Functionality ID for RSA 3072 key generation first form + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_kp1_3072_input_s::p p + * @endlink @link icp_qat_fw_mmp_rsa_kp1_3072_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_kp1_3072_input_s::e e @endlink + * @li 2 output parameters : @link icp_qat_fw_mmp_rsa_kp1_3072_output_s::n n + * @endlink @link icp_qat_fw_mmp_rsa_kp1_3072_output_s::d d @endlink + */ +#define PKE_RSA_KP2_3072 0x68331e45 +/**< Functionality ID for RSA 3072 key generation second form + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_kp2_3072_input_s::p p + * @endlink @link icp_qat_fw_mmp_rsa_kp2_3072_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_kp2_3072_input_s::e e @endlink + * @li 5 output parameters : @link icp_qat_fw_mmp_rsa_kp2_3072_output_s::n n + * @endlink @link icp_qat_fw_mmp_rsa_kp2_3072_output_s::d d @endlink @link + * icp_qat_fw_mmp_rsa_kp2_3072_output_s::dp dp @endlink @link + * icp_qat_fw_mmp_rsa_kp2_3072_output_s::dq dq @endlink @link + * icp_qat_fw_mmp_rsa_kp2_3072_output_s::qinv qinv @endlink + */ +#define PKE_RSA_EP_3072 0x7d111ea3 +/**< Functionality ID for RSA 3072 Encryption + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_ep_3072_input_s::m m + * @endlink @link icp_qat_fw_mmp_rsa_ep_3072_input_s::e e @endlink @link + * icp_qat_fw_mmp_rsa_ep_3072_input_s::n n @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_ep_3072_output_s::c c + * @endlink + */ +#define PKE_RSA_DP1_3072 0x7d111ebe +/**< Functionality ID for RSA 3072 Decryption + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_dp1_3072_input_s::c c + * @endlink 
@link icp_qat_fw_mmp_rsa_dp1_3072_input_s::d d @endlink @link + * icp_qat_fw_mmp_rsa_dp1_3072_input_s::n n @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_dp1_3072_output_s::m m + * @endlink + */ +#define PKE_RSA_DP2_3072 0x81121ed9 +/**< Functionality ID for RSA 3072 Decryption with CRT + * @li 6 input parameters : @link icp_qat_fw_mmp_rsa_dp2_3072_input_s::c c + * @endlink @link icp_qat_fw_mmp_rsa_dp2_3072_input_s::p p @endlink @link + * icp_qat_fw_mmp_rsa_dp2_3072_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_dp2_3072_input_s::dp dp @endlink @link + * icp_qat_fw_mmp_rsa_dp2_3072_input_s::dq dq @endlink @link + * icp_qat_fw_mmp_rsa_dp2_3072_input_s::qinv qinv @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_dp2_3072_output_s::m m + * @endlink + */ +#define PKE_RSA_KP1_4096 0x7d1f1ef6 +/**< Functionality ID for RSA 4096 key generation first form + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_kp1_4096_input_s::p p + * @endlink @link icp_qat_fw_mmp_rsa_kp1_4096_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_kp1_4096_input_s::e e @endlink + * @li 2 output parameters : @link icp_qat_fw_mmp_rsa_kp1_4096_output_s::n n + * @endlink @link icp_qat_fw_mmp_rsa_kp1_4096_output_s::d d @endlink + */ +#define PKE_RSA_KP2_4096 0x91251f27 +/**< Functionality ID for RSA 4096 key generation second form + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_kp2_4096_input_s::p p + * @endlink @link icp_qat_fw_mmp_rsa_kp2_4096_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_kp2_4096_input_s::e e @endlink + * @li 5 output parameters : @link icp_qat_fw_mmp_rsa_kp2_4096_output_s::n n + * @endlink @link icp_qat_fw_mmp_rsa_kp2_4096_output_s::d d @endlink @link + * icp_qat_fw_mmp_rsa_kp2_4096_output_s::dp dp @endlink @link + * icp_qat_fw_mmp_rsa_kp2_4096_output_s::dq dq @endlink @link + * icp_qat_fw_mmp_rsa_kp2_4096_output_s::qinv qinv @endlink + */ +#define PKE_RSA_EP_4096 0xa5101f7e +/**< Functionality ID for RSA 4096 Encryption + * @li 3 
input parameters : @link icp_qat_fw_mmp_rsa_ep_4096_input_s::m m + * @endlink @link icp_qat_fw_mmp_rsa_ep_4096_input_s::e e @endlink @link + * icp_qat_fw_mmp_rsa_ep_4096_input_s::n n @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_ep_4096_output_s::c c + * @endlink + */ +#define PKE_RSA_DP1_4096 0xa5101f98 +/**< Functionality ID for RSA 4096 Decryption + * @li 3 input parameters : @link icp_qat_fw_mmp_rsa_dp1_4096_input_s::c c + * @endlink @link icp_qat_fw_mmp_rsa_dp1_4096_input_s::d d @endlink @link + * icp_qat_fw_mmp_rsa_dp1_4096_input_s::n n @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_dp1_4096_output_s::m m + * @endlink + */ +#define PKE_RSA_DP2_4096 0xb1111fb2 +/**< Functionality ID for RSA 4096 Decryption with CRT + * @li 6 input parameters : @link icp_qat_fw_mmp_rsa_dp2_4096_input_s::c c + * @endlink @link icp_qat_fw_mmp_rsa_dp2_4096_input_s::p p @endlink @link + * icp_qat_fw_mmp_rsa_dp2_4096_input_s::q q @endlink @link + * icp_qat_fw_mmp_rsa_dp2_4096_input_s::dp dp @endlink @link + * icp_qat_fw_mmp_rsa_dp2_4096_input_s::dq dq @endlink @link + * icp_qat_fw_mmp_rsa_dp2_4096_input_s::qinv qinv @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_rsa_dp2_4096_output_s::m m + * @endlink + */ +#define PKE_GCD_PT_192 0x19201fcd +/**< Functionality ID for GCD primality test for 192-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_gcd_pt_192_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_GCD_PT_256 0x19201ff7 +/**< Functionality ID for GCD primality test for 256-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_gcd_pt_256_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_GCD_PT_384 0x19202021 +/**< Functionality ID for GCD primality test for 384-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_gcd_pt_384_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_GCD_PT_512 0x1b1b204b +/**< Functionality ID for GCD primality 
test for 512-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_gcd_pt_512_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_GCD_PT_768 0x170c2070 +/**< Functionality ID for GCD primality test for 768-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_gcd_pt_768_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_GCD_PT_1024 0x130f2085 +/**< Functionality ID for GCD primality test for 1024-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_gcd_pt_1024_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_GCD_PT_1536 0x1d0c2094 +/**< Functionality ID for GCD primality test for 1536-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_gcd_pt_1536_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_GCD_PT_2048 0x210c20a5 +/**< Functionality ID for GCD primality test for 2048-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_gcd_pt_2048_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_GCD_PT_3072 0x290c20b6 +/**< Functionality ID for GCD primality test for 3072-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_gcd_pt_3072_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_GCD_PT_4096 0x310c20c7 +/**< Functionality ID for GCD primality test for 4096-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_gcd_pt_4096_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_FERMAT_PT_160 0x0e1120d8 +/**< Functionality ID for Fermat primality test for 160-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_fermat_pt_160_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_FERMAT_PT_512 0x121120ee +/**< Functionality ID for Fermat primality test for 512-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_fermat_pt_512_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_FERMAT_PT_L512 0x19162104 +/**< 
Functionality ID for Fermat primality test for less than 512-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_fermat_pt_l512_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_FERMAT_PT_768 0x19112124 +/**< Functionality ID for Fermat primality test for 768-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_fermat_pt_768_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_FERMAT_PT_1024 0x1f11213a +/**< Functionality ID for Fermat primality test for 1024-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_fermat_pt_1024_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_FERMAT_PT_1536 0x2b112150 +/**< Functionality ID for Fermat primality test for 1536-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_fermat_pt_1536_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_FERMAT_PT_2048 0x3b112166 +/**< Functionality ID for Fermat primality test for 2048-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_fermat_pt_2048_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_FERMAT_PT_3072 0x3a11217c +/**< Functionality ID for Fermat primality test for 3072-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_fermat_pt_3072_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_FERMAT_PT_4096 0x4a112192 +/**< Functionality ID for Fermat primality test for 4096-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_fermat_pt_4096_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_MR_PT_160 0x0e1221a8 +/**< Functionality ID for Miller-Rabin primality test for 160-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_mr_pt_160_input_s::x x @endlink + * @link icp_qat_fw_mmp_mr_pt_160_input_s::m m @endlink + * @li no output parameters + */ +#define PKE_MR_PT_512 0x111221bf +/**< Functionality ID for Miller-Rabin primality test for 512-bit numbers + * @li 2 input 
parameters : @link icp_qat_fw_mmp_mr_pt_512_input_s::x x @endlink + * @link icp_qat_fw_mmp_mr_pt_512_input_s::m m @endlink + * @li no output parameters + */ +#define PKE_MR_PT_768 0x1d0d21d6 +/**< Functionality ID for Miller-Rabin primality test for 768-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_mr_pt_768_input_s::x x @endlink + * @link icp_qat_fw_mmp_mr_pt_768_input_s::m m @endlink + * @li no output parameters + */ +#define PKE_MR_PT_1024 0x250d21ed +/**< Functionality ID for Miller-Rabin primality test for 1024-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_mr_pt_1024_input_s::x x + * @endlink @link icp_qat_fw_mmp_mr_pt_1024_input_s::m m @endlink + * @li no output parameters + */ +#define PKE_MR_PT_1536 0x350d2204 +/**< Functionality ID for Miller-Rabin primality test for 1536-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_mr_pt_1536_input_s::x x + * @endlink @link icp_qat_fw_mmp_mr_pt_1536_input_s::m m @endlink + * @li no output parameters + */ +#define PKE_MR_PT_2048 0x490d221b +/**< Functionality ID for Miller-Rabin primality test for 2048-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_mr_pt_2048_input_s::x x + * @endlink @link icp_qat_fw_mmp_mr_pt_2048_input_s::m m @endlink + * @li no output parameters + */ +#define PKE_MR_PT_3072 0x4d0d2232 +/**< Functionality ID for Miller-Rabin primality test for 3072-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_mr_pt_3072_input_s::x x + * @endlink @link icp_qat_fw_mmp_mr_pt_3072_input_s::m m @endlink + * @li no output parameters + */ +#define PKE_MR_PT_4096 0x650d2249 +/**< Functionality ID for Miller-Rabin primality test for 4096-bit numbers + * @li 2 input parameters : @link icp_qat_fw_mmp_mr_pt_4096_input_s::x x + * @endlink @link icp_qat_fw_mmp_mr_pt_4096_input_s::m m @endlink + * @li no output parameters + */ +#define PKE_MR_PT_L512 0x18182260 +/**< Functionality ID for Miller-Rabin primality test for 512-bit numbers + * @li 2 input 
parameters : @link icp_qat_fw_mmp_mr_pt_l512_input_s::x x + * @endlink @link icp_qat_fw_mmp_mr_pt_l512_input_s::m m @endlink + * @li no output parameters + */ +#define PKE_LUCAS_PT_160 0x0e0c227e +/**< Functionality ID for Lucas primality test for 160-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_lucas_pt_160_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_LUCAS_PT_512 0x110c228f +/**< Functionality ID for Lucas primality test for 512-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_lucas_pt_512_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_LUCAS_PT_768 0x130c22a0 +/**< Functionality ID for Lucas primality test for 768-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_lucas_pt_768_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_LUCAS_PT_1024 0x150c22b1 +/**< Functionality ID for Lucas primality test for 1024-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_lucas_pt_1024_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_LUCAS_PT_1536 0x190c22c2 +/**< Functionality ID for Lucas primality test for 1536-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_lucas_pt_1536_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_LUCAS_PT_2048 0x1d0c22d3 +/**< Functionality ID for Lucas primality test for 2048-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_lucas_pt_2048_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_LUCAS_PT_3072 0x250c22e4 +/**< Functionality ID for Lucas primality test for 3072-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_lucas_pt_3072_input_s::m m + * @endlink + * @li no output parameters + */ +#define PKE_LUCAS_PT_4096 0x661522f5 +/**< Functionality ID for Lucas primality test for 4096-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_lucas_pt_4096_input_s::m m + * @endlink + * @li no output parameters + */ 
+#define PKE_LUCAS_PT_L512 0x1617230a +/**< Functionality ID for Lucas primality test for L512-bit numbers + * @li 1 input parameters : @link icp_qat_fw_mmp_lucas_pt_l512_input_s::m m + * @endlink + * @li no output parameters + */ +#define MATHS_MODEXP_L512 0x150c2327 +/**< Functionality ID for Modular exponentiation for numbers less than 512-bits + * @li 3 input parameters : @link icp_qat_fw_maths_modexp_l512_input_s::g g + * @endlink @link icp_qat_fw_maths_modexp_l512_input_s::e e @endlink @link + * icp_qat_fw_maths_modexp_l512_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modexp_l512_output_s::r r + * @endlink + */ +#define MATHS_MODEXP_L1024 0x2d0c233e +/**< Functionality ID for Modular exponentiation for numbers less than 1024-bit + * @li 3 input parameters : @link icp_qat_fw_maths_modexp_l1024_input_s::g g + * @endlink @link icp_qat_fw_maths_modexp_l1024_input_s::e e @endlink @link + * icp_qat_fw_maths_modexp_l1024_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modexp_l1024_output_s::r r + * @endlink + */ +#define MATHS_MODEXP_L1536 0x410c2355 +/**< Functionality ID for Modular exponentiation for numbers less than 1536-bits + * @li 3 input parameters : @link icp_qat_fw_maths_modexp_l1536_input_s::g g + * @endlink @link icp_qat_fw_maths_modexp_l1536_input_s::e e @endlink @link + * icp_qat_fw_maths_modexp_l1536_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modexp_l1536_output_s::r r + * @endlink + */ +#define MATHS_MODEXP_L2048 0x5e12236c +/**< Functionality ID for Modular exponentiation for numbers less than 2048-bit + * @li 3 input parameters : @link icp_qat_fw_maths_modexp_l2048_input_s::g g + * @endlink @link icp_qat_fw_maths_modexp_l2048_input_s::e e @endlink @link + * icp_qat_fw_maths_modexp_l2048_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modexp_l2048_output_s::r r + * @endlink + */ +#define MATHS_MODEXP_L2560 0x60162388 +/**< 
Functionality ID for Modular exponentiation for numbers less than 2560-bits + * @li 3 input parameters : @link icp_qat_fw_maths_modexp_l2560_input_s::g g + * @endlink @link icp_qat_fw_maths_modexp_l2560_input_s::e e @endlink @link + * icp_qat_fw_maths_modexp_l2560_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modexp_l2560_output_s::r r + * @endlink + */ +#define MATHS_MODEXP_L3072 0x650c23a9 +/**< Functionality ID for Modular exponentiation for numbers less than 3072-bits + * @li 3 input parameters : @link icp_qat_fw_maths_modexp_l3072_input_s::g g + * @endlink @link icp_qat_fw_maths_modexp_l3072_input_s::e e @endlink @link + * icp_qat_fw_maths_modexp_l3072_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modexp_l3072_output_s::r r + * @endlink + */ +#define MATHS_MODEXP_L3584 0x801623c0 +/**< Functionality ID for Modular exponentiation for numbers less than 3584-bits + * @li 3 input parameters : @link icp_qat_fw_maths_modexp_l3584_input_s::g g + * @endlink @link icp_qat_fw_maths_modexp_l3584_input_s::e e @endlink @link + * icp_qat_fw_maths_modexp_l3584_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modexp_l3584_output_s::r r + * @endlink + */ +#define MATHS_MODEXP_L4096 0x850c23e1 +/**< Functionality ID for Modular exponentiation for numbers less than 4096-bit + * @li 3 input parameters : @link icp_qat_fw_maths_modexp_l4096_input_s::g g + * @endlink @link icp_qat_fw_maths_modexp_l4096_input_s::e e @endlink @link + * icp_qat_fw_maths_modexp_l4096_input_s::m m @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modexp_l4096_output_s::r r + * @endlink + */ +#define MATHS_MODINV_ODD_L128 0x090623f8 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 128 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_odd_l128_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_odd_l128_input_s::b b @endlink + * @li 1 output parameters : @link 
icp_qat_fw_maths_modinv_odd_l128_output_s::c + * c @endlink + */ +#define MATHS_MODINV_ODD_L192 0x0a0623fe +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 192 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_odd_l192_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_odd_l192_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_odd_l192_output_s::c + * c @endlink + */ +#define MATHS_MODINV_ODD_L256 0x0a062404 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 256 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_odd_l256_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_odd_l256_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_odd_l256_output_s::c + * c @endlink + */ +#define MATHS_MODINV_ODD_L384 0x0b06240a +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 384 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_odd_l384_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_odd_l384_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_odd_l384_output_s::c + * c @endlink + */ +#define MATHS_MODINV_ODD_L512 0x0c062410 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 512 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_odd_l512_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_odd_l512_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_odd_l512_output_s::c + * c @endlink + */ +#define MATHS_MODINV_ODD_L768 0x0e062416 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 768 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_odd_l768_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_odd_l768_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_odd_l768_output_s::c 
+ * c @endlink + */ +#define MATHS_MODINV_ODD_L1024 0x1006241c +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 1024 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_odd_l1024_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_odd_l1024_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_odd_l1024_output_s::c + * c @endlink + */ +#define MATHS_MODINV_ODD_L1536 0x18062422 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 1536 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_odd_l1536_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_odd_l1536_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_odd_l1536_output_s::c + * c @endlink + */ +#define MATHS_MODINV_ODD_L2048 0x20062428 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 2048 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_odd_l2048_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_odd_l2048_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_odd_l2048_output_s::c + * c @endlink + */ +#define MATHS_MODINV_ODD_L3072 0x3006242e +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 3072 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_odd_l3072_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_odd_l3072_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_odd_l3072_output_s::c + * c @endlink + */ +#define MATHS_MODINV_ODD_L4096 0x40062434 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 4096 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_odd_l4096_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_odd_l4096_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_odd_l4096_output_s::c + * c @endlink + */ 
+#define MATHS_MODINV_EVEN_L128 0x0906243a +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 128 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_even_l128_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_even_l128_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_even_l128_output_s::c + * c @endlink + */ +#define MATHS_MODINV_EVEN_L192 0x0a062440 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 192 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_even_l192_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_even_l192_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_even_l192_output_s::c + * c @endlink + */ +#define MATHS_MODINV_EVEN_L256 0x0a062446 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 256 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_even_l256_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_even_l256_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_even_l256_output_s::c + * c @endlink + */ +#define MATHS_MODINV_EVEN_L384 0x0e0b244c +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 384 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_even_l384_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_even_l384_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_even_l384_output_s::c + * c @endlink + */ +#define MATHS_MODINV_EVEN_L512 0x110b2457 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 512 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_even_l512_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_even_l512_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_even_l512_output_s::c + * c @endlink + */ +#define 
MATHS_MODINV_EVEN_L768 0x170b2462 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 768 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_even_l768_input_s::a a + * @endlink @link icp_qat_fw_maths_modinv_even_l768_input_s::b b @endlink + * @li 1 output parameters : @link icp_qat_fw_maths_modinv_even_l768_output_s::c + * c @endlink + */ +#define MATHS_MODINV_EVEN_L1024 0x1d0b246d +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 1024 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_even_l1024_input_s::a + * a @endlink @link icp_qat_fw_maths_modinv_even_l1024_input_s::b b @endlink + * @li 1 output parameters : @link + * icp_qat_fw_maths_modinv_even_l1024_output_s::c c @endlink + */ +#define MATHS_MODINV_EVEN_L1536 0x290b2478 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 1536 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_even_l1536_input_s::a + * a @endlink @link icp_qat_fw_maths_modinv_even_l1536_input_s::b b @endlink + * @li 1 output parameters : @link + * icp_qat_fw_maths_modinv_even_l1536_output_s::c c @endlink + */ +#define MATHS_MODINV_EVEN_L2048 0x350b2483 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 2048 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_even_l2048_input_s::a + * a @endlink @link icp_qat_fw_maths_modinv_even_l2048_input_s::b b @endlink + * @li 1 output parameters : @link + * icp_qat_fw_maths_modinv_even_l2048_output_s::c c @endlink + */ +#define MATHS_MODINV_EVEN_L3072 0x4d0b248e +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 3072 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_even_l3072_input_s::a + * a @endlink @link icp_qat_fw_maths_modinv_even_l3072_input_s::b b @endlink + * @li 1 output parameters : @link + * icp_qat_fw_maths_modinv_even_l3072_output_s::c c @endlink + */ +#define 
MATHS_MODINV_EVEN_L4096 0x650b2499 +/**< Functionality ID for Modular multiplicative inverse for numbers less than + * 4096 bits + * @li 2 input parameters : @link icp_qat_fw_maths_modinv_even_l4096_input_s::a + * a @endlink @link icp_qat_fw_maths_modinv_even_l4096_input_s::b b @endlink + * @li 1 output parameters : @link + * icp_qat_fw_maths_modinv_even_l4096_output_s::c c @endlink + */ +#define PKE_DSA_GEN_P_1024_160 0x381824a4 +/**< Functionality ID for DSA parameter generation P + * @li 2 input parameters : @link icp_qat_fw_mmp_dsa_gen_p_1024_160_input_s::x x + * @endlink @link icp_qat_fw_mmp_dsa_gen_p_1024_160_input_s::q q @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_gen_p_1024_160_output_s::p + * p @endlink + */ +#define PKE_DSA_GEN_G_1024 0x261424d4 +/**< Functionality ID for DSA key generation G + * @li 3 input parameters : @link icp_qat_fw_mmp_dsa_gen_g_1024_input_s::p p + * @endlink @link icp_qat_fw_mmp_dsa_gen_g_1024_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_gen_g_1024_input_s::h h @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_gen_g_1024_output_s::g g + * @endlink + */ +#define PKE_DSA_GEN_Y_1024 0x291224ed +/**< Functionality ID for DSA key generation Y + * @li 3 input parameters : @link icp_qat_fw_mmp_dsa_gen_y_1024_input_s::p p + * @endlink @link icp_qat_fw_mmp_dsa_gen_y_1024_input_s::g g @endlink @link + * icp_qat_fw_mmp_dsa_gen_y_1024_input_s::x x @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_gen_y_1024_output_s::y y + * @endlink + */ +#define PKE_DSA_SIGN_R_1024_160 0x2c1c2504 +/**< Functionality ID for DSA Sign R + * @li 4 input parameters : @link icp_qat_fw_mmp_dsa_sign_r_1024_160_input_s::k + * k @endlink @link icp_qat_fw_mmp_dsa_sign_r_1024_160_input_s::p p @endlink + * @link icp_qat_fw_mmp_dsa_sign_r_1024_160_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_1024_160_input_s::g g @endlink + * @li 1 output parameters : @link + * 
icp_qat_fw_mmp_dsa_sign_r_1024_160_output_s::r r @endlink + */ +#define PKE_DSA_SIGN_S_160 0x12142526 +/**< Functionality ID for DSA Sign S + * @li 5 input parameters : @link icp_qat_fw_mmp_dsa_sign_s_160_input_s::m m + * @endlink @link icp_qat_fw_mmp_dsa_sign_s_160_input_s::k k @endlink @link + * icp_qat_fw_mmp_dsa_sign_s_160_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_sign_s_160_input_s::r r @endlink @link + * icp_qat_fw_mmp_dsa_sign_s_160_input_s::x x @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_sign_s_160_output_s::s s + * @endlink + */ +#define PKE_DSA_SIGN_R_S_1024_160 0x301e2540 +/**< Functionality ID for DSA Sign R S + * @li 6 input parameters : @link + * icp_qat_fw_mmp_dsa_sign_r_s_1024_160_input_s::m m @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_1024_160_input_s::k k @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_1024_160_input_s::p p @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_1024_160_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_1024_160_input_s::g g @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_1024_160_input_s::x x @endlink + * @li 2 output parameters : @link + * icp_qat_fw_mmp_dsa_sign_r_s_1024_160_output_s::r r @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_1024_160_output_s::s s @endlink + */ +#define PKE_DSA_VERIFY_1024_160 0x323a2570 +/**< Functionality ID for DSA Verify + * @li 7 input parameters : @link icp_qat_fw_mmp_dsa_verify_1024_160_input_s::r + * r @endlink @link icp_qat_fw_mmp_dsa_verify_1024_160_input_s::s s @endlink + * @link icp_qat_fw_mmp_dsa_verify_1024_160_input_s::m m @endlink @link + * icp_qat_fw_mmp_dsa_verify_1024_160_input_s::p p @endlink @link + * icp_qat_fw_mmp_dsa_verify_1024_160_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_verify_1024_160_input_s::g g @endlink @link + * icp_qat_fw_mmp_dsa_verify_1024_160_input_s::y y @endlink + * @li no output parameters + */ +#define PKE_DSA_GEN_P_2048_224 0x341d25be +/**< Functionality ID for DSA parameter generation P + * @li 2 input 
parameters : @link icp_qat_fw_mmp_dsa_gen_p_2048_224_input_s::x x + * @endlink @link icp_qat_fw_mmp_dsa_gen_p_2048_224_input_s::q q @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_gen_p_2048_224_output_s::p + * p @endlink + */ +#define PKE_DSA_GEN_Y_2048 0x4d1225ea +/**< Functionality ID for DSA key generation Y + * @li 3 input parameters : @link icp_qat_fw_mmp_dsa_gen_y_2048_input_s::p p + * @endlink @link icp_qat_fw_mmp_dsa_gen_y_2048_input_s::g g @endlink @link + * icp_qat_fw_mmp_dsa_gen_y_2048_input_s::x x @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_gen_y_2048_output_s::y y + * @endlink + */ +#define PKE_DSA_SIGN_R_2048_224 0x511c2601 +/**< Functionality ID for DSA Sign R + * @li 4 input parameters : @link icp_qat_fw_mmp_dsa_sign_r_2048_224_input_s::k + * k @endlink @link icp_qat_fw_mmp_dsa_sign_r_2048_224_input_s::p p @endlink + * @link icp_qat_fw_mmp_dsa_sign_r_2048_224_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_2048_224_input_s::g g @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_dsa_sign_r_2048_224_output_s::r r @endlink + */ +#define PKE_DSA_SIGN_S_224 0x15142623 +/**< Functionality ID for DSA Sign S + * @li 5 input parameters : @link icp_qat_fw_mmp_dsa_sign_s_224_input_s::m m + * @endlink @link icp_qat_fw_mmp_dsa_sign_s_224_input_s::k k @endlink @link + * icp_qat_fw_mmp_dsa_sign_s_224_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_sign_s_224_input_s::r r @endlink @link + * icp_qat_fw_mmp_dsa_sign_s_224_input_s::x x @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_sign_s_224_output_s::s s + * @endlink + */ +#define PKE_DSA_SIGN_R_S_2048_224 0x571e263d +/**< Functionality ID for DSA Sign R S + * @li 6 input parameters : @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_224_input_s::m m @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_224_input_s::k k @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_224_input_s::p p @endlink @link + * 
icp_qat_fw_mmp_dsa_sign_r_s_2048_224_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_224_input_s::g g @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_224_input_s::x x @endlink + * @li 2 output parameters : @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_224_output_s::r r @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_224_output_s::s s @endlink + */ +#define PKE_DSA_VERIFY_2048_224 0x6930266d +/**< Functionality ID for DSA Verify + * @li 7 input parameters : @link icp_qat_fw_mmp_dsa_verify_2048_224_input_s::r + * r @endlink @link icp_qat_fw_mmp_dsa_verify_2048_224_input_s::s s @endlink + * @link icp_qat_fw_mmp_dsa_verify_2048_224_input_s::m m @endlink @link + * icp_qat_fw_mmp_dsa_verify_2048_224_input_s::p p @endlink @link + * icp_qat_fw_mmp_dsa_verify_2048_224_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_verify_2048_224_input_s::g g @endlink @link + * icp_qat_fw_mmp_dsa_verify_2048_224_input_s::y y @endlink + * @li no output parameters + */ +#define PKE_DSA_GEN_P_2048_256 0x431126b7 +/**< Functionality ID for DSA parameter generation P + * @li 2 input parameters : @link icp_qat_fw_mmp_dsa_gen_p_2048_256_input_s::x x + * @endlink @link icp_qat_fw_mmp_dsa_gen_p_2048_256_input_s::q q @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_gen_p_2048_256_output_s::p + * p @endlink + */ +#define PKE_DSA_GEN_G_2048 0x4b1426ed +/**< Functionality ID for DSA key generation G + * @li 3 input parameters : @link icp_qat_fw_mmp_dsa_gen_g_2048_input_s::p p + * @endlink @link icp_qat_fw_mmp_dsa_gen_g_2048_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_gen_g_2048_input_s::h h @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_gen_g_2048_output_s::g g + * @endlink + */ +#define PKE_DSA_SIGN_R_2048_256 0x5b182706 +/**< Functionality ID for DSA Sign R + * @li 4 input parameters : @link icp_qat_fw_mmp_dsa_sign_r_2048_256_input_s::k + * k @endlink @link icp_qat_fw_mmp_dsa_sign_r_2048_256_input_s::p p @endlink + * @link 
icp_qat_fw_mmp_dsa_sign_r_2048_256_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_2048_256_input_s::g g @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_dsa_sign_r_2048_256_output_s::r r @endlink + */ +#define PKE_DSA_SIGN_S_256 0x15142733 +/**< Functionality ID for DSA Sign S + * @li 5 input parameters : @link icp_qat_fw_mmp_dsa_sign_s_256_input_s::m m + * @endlink @link icp_qat_fw_mmp_dsa_sign_s_256_input_s::k k @endlink @link + * icp_qat_fw_mmp_dsa_sign_s_256_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_sign_s_256_input_s::r r @endlink @link + * icp_qat_fw_mmp_dsa_sign_s_256_input_s::x x @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_sign_s_256_output_s::s s + * @endlink + */ +#define PKE_DSA_SIGN_R_S_2048_256 0x5a2a274d +/**< Functionality ID for DSA Sign R S + * @li 6 input parameters : @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_256_input_s::m m @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_256_input_s::k k @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_256_input_s::p p @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_256_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_256_input_s::g g @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_256_input_s::x x @endlink + * @li 2 output parameters : @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_256_output_s::r r @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_2048_256_output_s::s s @endlink + */ +#define PKE_DSA_VERIFY_2048_256 0x723a2789 +/**< Functionality ID for DSA Verify + * @li 7 input parameters : @link icp_qat_fw_mmp_dsa_verify_2048_256_input_s::r + * r @endlink @link icp_qat_fw_mmp_dsa_verify_2048_256_input_s::s s @endlink + * @link icp_qat_fw_mmp_dsa_verify_2048_256_input_s::m m @endlink @link + * icp_qat_fw_mmp_dsa_verify_2048_256_input_s::p p @endlink @link + * icp_qat_fw_mmp_dsa_verify_2048_256_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_verify_2048_256_input_s::g g @endlink @link + * 
icp_qat_fw_mmp_dsa_verify_2048_256_input_s::y y @endlink + * @li no output parameters + */ +#define PKE_DSA_GEN_P_3072_256 0x4b1127e0 +/**< Functionality ID for DSA parameter generation P + * @li 2 input parameters : @link icp_qat_fw_mmp_dsa_gen_p_3072_256_input_s::x x + * @endlink @link icp_qat_fw_mmp_dsa_gen_p_3072_256_input_s::q q @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_gen_p_3072_256_output_s::p + * p @endlink + */ +#define PKE_DSA_GEN_G_3072 0x4f142816 +/**< Functionality ID for DSA key generation G + * @li 3 input parameters : @link icp_qat_fw_mmp_dsa_gen_g_3072_input_s::p p + * @endlink @link icp_qat_fw_mmp_dsa_gen_g_3072_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_gen_g_3072_input_s::h h @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_gen_g_3072_output_s::g g + * @endlink + */ +#define PKE_DSA_GEN_Y_3072 0x5112282f +/**< Functionality ID for DSA key generation Y + * @li 3 input parameters : @link icp_qat_fw_mmp_dsa_gen_y_3072_input_s::p p + * @endlink @link icp_qat_fw_mmp_dsa_gen_y_3072_input_s::g g @endlink @link + * icp_qat_fw_mmp_dsa_gen_y_3072_input_s::x x @endlink + * @li 1 output parameters : @link icp_qat_fw_mmp_dsa_gen_y_3072_output_s::y y + * @endlink + */ +#define PKE_DSA_SIGN_R_3072_256 0x59282846 +/**< Functionality ID for DSA Sign R + * @li 4 input parameters : @link icp_qat_fw_mmp_dsa_sign_r_3072_256_input_s::k + * k @endlink @link icp_qat_fw_mmp_dsa_sign_r_3072_256_input_s::p p @endlink + * @link icp_qat_fw_mmp_dsa_sign_r_3072_256_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_3072_256_input_s::g g @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_dsa_sign_r_3072_256_output_s::r r @endlink + */ +#define PKE_DSA_SIGN_R_S_3072_256 0x61292874 +/**< Functionality ID for DSA Sign R S + * @li 6 input parameters : @link + * icp_qat_fw_mmp_dsa_sign_r_s_3072_256_input_s::m m @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_3072_256_input_s::k k @endlink @link + * 
icp_qat_fw_mmp_dsa_sign_r_s_3072_256_input_s::p p @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_3072_256_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_3072_256_input_s::g g @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_3072_256_input_s::x x @endlink + * @li 2 output parameters : @link + * icp_qat_fw_mmp_dsa_sign_r_s_3072_256_output_s::r r @endlink @link + * icp_qat_fw_mmp_dsa_sign_r_s_3072_256_output_s::s s @endlink + */ +#define PKE_DSA_VERIFY_3072_256 0x7f4328ae +/**< Functionality ID for DSA Verify + * @li 7 input parameters : @link icp_qat_fw_mmp_dsa_verify_3072_256_input_s::r + * r @endlink @link icp_qat_fw_mmp_dsa_verify_3072_256_input_s::s s @endlink + * @link icp_qat_fw_mmp_dsa_verify_3072_256_input_s::m m @endlink @link + * icp_qat_fw_mmp_dsa_verify_3072_256_input_s::p p @endlink @link + * icp_qat_fw_mmp_dsa_verify_3072_256_input_s::q q @endlink @link + * icp_qat_fw_mmp_dsa_verify_3072_256_input_s::g g @endlink @link + * icp_qat_fw_mmp_dsa_verify_3072_256_input_s::y y @endlink + * @li no output parameters + */ +#define PKE_ECDSA_SIGN_RS_GF2_L256 0x46512907 +/**< Functionality ID for ECDSA Sign RS for curves B/K-163 and B/K-233 + * @li 1 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l256_input_s::in in @endlink + * @li 2 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l256_output_s::r r @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l256_output_s::s s @endlink + */ +#define PKE_ECDSA_SIGN_R_GF2_L256 0x323a298f +/**< Functionality ID for ECDSA Sign R for curves B/K-163 and B/K-233 + * @li 7 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_input_s::xg xg @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_input_s::yg yg @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_input_s::n n @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_input_s::q q @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_input_s::a a @endlink @link + * 
icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_input_s::b b @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_input_s::k k @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l256_output_s::r r @endlink + */ +#define PKE_ECDSA_SIGN_S_GF2_L256 0x2b2229e6 +/**< Functionality ID for ECDSA Sign S for curves with n < 2^256 + * @li 5 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_l256_input_s::e e @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_l256_input_s::d d @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_l256_input_s::r r @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_l256_input_s::k k @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_l256_input_s::n n @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_l256_output_s::s s @endlink + */ +#define PKE_ECDSA_VERIFY_GF2_L256 0x337e2a27 +/**< Functionality ID for ECDSA Verify for curves B/K-163 and B/K-233 + * @li 1 input parameters : @link + * icp_qat_fw_mmp_ecdsa_verify_gf2_l256_input_s::in in @endlink + * @li no output parameters + */ +#define PKE_ECDSA_SIGN_RS_GF2_L512 0x5e5f2ad7 +/**< Functionality ID for ECDSA Sign RS + * @li 1 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l512_input_s::in in @endlink + * @li 2 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l512_output_s::r r @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gf2_l512_output_s::s s @endlink + */ +#define PKE_ECDSA_SIGN_R_GF2_L512 0x84312b6a +/**< Functionality ID for ECDSA GF2 Sign R + * @li 7 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_input_s::xg xg @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_input_s::yg yg @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_input_s::n n @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_input_s::q q @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_input_s::a a @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_input_s::b b @endlink @link + * 
icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_input_s::k k @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_l512_output_s::r r @endlink + */ +#define PKE_ECDSA_SIGN_S_GF2_L512 0x26182bbe +/**< Functionality ID for ECDSA GF2 Sign S + * @li 5 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_l512_input_s::e e @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_l512_input_s::d d @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_l512_input_s::r r @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_l512_input_s::k k @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_l512_input_s::n n @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_l512_output_s::s s @endlink + */ +#define PKE_ECDSA_VERIFY_GF2_L512 0x58892bea +/**< Functionality ID for ECDSA GF2 Verify + * @li 1 input parameters : @link + * icp_qat_fw_mmp_ecdsa_verify_gf2_l512_input_s::in in @endlink + * @li no output parameters + */ +#define PKE_ECDSA_SIGN_RS_GF2_571 0x554a2c93 +/**< Functionality ID for ECDSA GF2 Sign RS for curves B-571/K-571 + * @li 1 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gf2_571_input_s::in in @endlink + * @li 2 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gf2_571_output_s::r r @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gf2_571_output_s::s s @endlink + */ +#define PKE_ECDSA_SIGN_S_GF2_571 0x52332d09 +/**< Functionality ID for ECDSA GF2 Sign S for curves with deg(q) < 576 + * @li 5 input parameters : @link icp_qat_fw_mmp_ecdsa_sign_s_gf2_571_input_s::e + * e @endlink @link icp_qat_fw_mmp_ecdsa_sign_s_gf2_571_input_s::d d @endlink + * @link icp_qat_fw_mmp_ecdsa_sign_s_gf2_571_input_s::r r @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_571_input_s::k k @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_571_input_s::n n @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_s_gf2_571_output_s::s s @endlink + */ +#define PKE_ECDSA_SIGN_R_GF2_571 0x731a2d51 +/**< Functionality 
ID for ECDSA GF2 Sign R for degree 571 + * @li 7 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_input_s::xg xg @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_input_s::yg yg @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_input_s::n n @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_input_s::q q @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_input_s::a a @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_input_s::b b @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_input_s::k k @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_r_gf2_571_output_s::r r @endlink + */ +#define PKE_ECDSA_VERIFY_GF2_571 0x4f6c2d91 +/**< Functionality ID for ECDSA GF2 Verify for degree 571 + * @li 1 input parameters : @link + * icp_qat_fw_mmp_ecdsa_verify_gf2_571_input_s::in in @endlink + * @li no output parameters + */ +#define MATHS_POINT_MULTIPLICATION_GF2_L256 0x3b242e38 +/**< Functionality ID for MATHS GF2 Point Multiplication + * @li 7 input parameters : @link + * icp_qat_fw_maths_point_multiplication_gf2_l256_input_s::k k @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l256_input_s::xg xg @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l256_input_s::yg yg @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l256_input_s::a a @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l256_input_s::b b @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l256_input_s::q q @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l256_input_s::h h @endlink + * @li 2 output parameters : @link + * icp_qat_fw_maths_point_multiplication_gf2_l256_output_s::xk xk @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l256_output_s::yk yk @endlink + */ +#define MATHS_POINT_VERIFY_GF2_L256 0x231a2e7c +/**< Functionality ID for MATHS GF2 Point Verification + * @li 5 input parameters : @link + * icp_qat_fw_maths_point_verify_gf2_l256_input_s::xq xq @endlink @link 
+ * icp_qat_fw_maths_point_verify_gf2_l256_input_s::yq yq @endlink @link + * icp_qat_fw_maths_point_verify_gf2_l256_input_s::q q @endlink @link + * icp_qat_fw_maths_point_verify_gf2_l256_input_s::a a @endlink @link + * icp_qat_fw_maths_point_verify_gf2_l256_input_s::b b @endlink + * @li no output parameters + */ +#define MATHS_POINT_MULTIPLICATION_GF2_L512 0x722c2e96 +/**< Functionality ID for MATHS GF2 Point Multiplication + * @li 7 input parameters : @link + * icp_qat_fw_maths_point_multiplication_gf2_l512_input_s::k k @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l512_input_s::xg xg @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l512_input_s::yg yg @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l512_input_s::a a @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l512_input_s::b b @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l512_input_s::q q @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l512_input_s::h h @endlink + * @li 2 output parameters : @link + * icp_qat_fw_maths_point_multiplication_gf2_l512_output_s::xk xk @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_l512_output_s::yk yk @endlink + */ +#define MATHS_POINT_VERIFY_GF2_L512 0x25132ee2 +/**< Functionality ID for MATHS GF2 Point Verification + * @li 5 input parameters : @link + * icp_qat_fw_maths_point_verify_gf2_l512_input_s::xq xq @endlink @link + * icp_qat_fw_maths_point_verify_gf2_l512_input_s::yq yq @endlink @link + * icp_qat_fw_maths_point_verify_gf2_l512_input_s::q q @endlink @link + * icp_qat_fw_maths_point_verify_gf2_l512_input_s::a a @endlink @link + * icp_qat_fw_maths_point_verify_gf2_l512_input_s::b b @endlink + * @li no output parameters + */ +#define MATHS_POINT_MULTIPLICATION_GF2_571 0x44152ef5 +/**< Functionality ID for ECC GF2 Point Multiplication for curves B-571/K-571 + * @li 7 input parameters : @link + * icp_qat_fw_maths_point_multiplication_gf2_571_input_s::k k @endlink @link + * 
icp_qat_fw_maths_point_multiplication_gf2_571_input_s::xg xg @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_571_input_s::yg yg @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_571_input_s::a a @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_571_input_s::b b @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_571_input_s::q q @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_571_input_s::h h @endlink + * @li 2 output parameters : @link + * icp_qat_fw_maths_point_multiplication_gf2_571_output_s::xk xk @endlink @link + * icp_qat_fw_maths_point_multiplication_gf2_571_output_s::yk yk @endlink + */ +#define MATHS_POINT_VERIFY_GF2_571 0x12072f1b +/**< Functionality ID for ECC GF2 Point Verification for degree 571 + * @li 5 input parameters : @link + * icp_qat_fw_maths_point_verify_gf2_571_input_s::xq xq @endlink @link + * icp_qat_fw_maths_point_verify_gf2_571_input_s::yq yq @endlink @link + * icp_qat_fw_maths_point_verify_gf2_571_input_s::q q @endlink @link + * icp_qat_fw_maths_point_verify_gf2_571_input_s::a a @endlink @link + * icp_qat_fw_maths_point_verify_gf2_571_input_s::b b @endlink + * @li no output parameters + */ +#define PKE_ECDSA_SIGN_R_GFP_L256 0x431b2f22 +/**< Functionality ID for ECDSA GFP Sign R + * @li 7 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_input_s::xg xg @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_input_s::yg yg @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_input_s::n n @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_input_s::q q @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_input_s::a a @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_input_s::b b @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_input_s::k k @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l256_output_s::r r @endlink + */ +#define PKE_ECDSA_SIGN_S_GFP_L256 0x2b252f6d +/**< Functionality ID for ECDSA GFP Sign S + * 
@li 5 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_l256_input_s::e e @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_l256_input_s::d d @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_l256_input_s::r r @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_l256_input_s::k k @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_l256_input_s::n n @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_l256_output_s::s s @endlink + */ +#define PKE_ECDSA_SIGN_RS_GFP_L256 0x6a3c2fa6 +/**< Functionality ID for ECDSA GFP Sign RS + * @li 1 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l256_input_s::in in @endlink + * @li 2 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l256_output_s::r r @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l256_output_s::s s @endlink + */ +#define PKE_ECDSA_VERIFY_GFP_L256 0x325b3023 +/**< Functionality ID for ECDSA GFP Verify + * @li 1 input parameters : @link + * icp_qat_fw_mmp_ecdsa_verify_gfp_l256_input_s::in in @endlink + * @li no output parameters + */ +#define PKE_ECDSA_SIGN_R_GFP_L512 0x4e2530b3 +/**< Functionality ID for ECDSA GFP Sign R + * @li 7 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_input_s::xg xg @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_input_s::yg yg @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_input_s::n n @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_input_s::q q @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_input_s::a a @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_input_s::b b @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_input_s::k k @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_l512_output_s::r r @endlink + */ +#define PKE_ECDSA_SIGN_S_GFP_L512 0x251830fa +/**< Functionality ID for ECDSA GFP Sign S + * @li 5 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_l512_input_s::e e @endlink @link + * 
icp_qat_fw_mmp_ecdsa_sign_s_gfp_l512_input_s::d d @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_l512_input_s::r r @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_l512_input_s::k k @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_l512_input_s::n n @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_l512_output_s::s s @endlink + */ +#define PKE_ECDSA_SIGN_RS_GFP_L512 0x5a2b3127 +/**< Functionality ID for ECDSA GFP Sign RS + * @li 1 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l512_input_s::in in @endlink + * @li 2 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l512_output_s::r r @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gfp_l512_output_s::s s @endlink + */ +#define PKE_ECDSA_VERIFY_GFP_L512 0x3553318a +/**< Functionality ID for ECDSA GFP Verify + * @li 1 input parameters : @link + * icp_qat_fw_mmp_ecdsa_verify_gfp_l512_input_s::in in @endlink + * @li no output parameters + */ +#define PKE_ECDSA_SIGN_R_GFP_521 0x772c31fe +/**< Functionality ID for ECDSA GFP Sign R + * @li 7 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_input_s::xg xg @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_input_s::yg yg @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_input_s::n n @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_input_s::q q @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_input_s::a a @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_input_s::b b @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_input_s::k k @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_r_gfp_521_output_s::r r @endlink + */ +#define PKE_ECDSA_SIGN_S_GFP_521 0x52343251 +/**< Functionality ID for ECDSA GFP Sign S + * @li 5 input parameters : @link icp_qat_fw_mmp_ecdsa_sign_s_gfp_521_input_s::e + * e @endlink @link icp_qat_fw_mmp_ecdsa_sign_s_gfp_521_input_s::d d @endlink + * @link icp_qat_fw_mmp_ecdsa_sign_s_gfp_521_input_s::r r @endlink @link + * 
icp_qat_fw_mmp_ecdsa_sign_s_gfp_521_input_s::k k @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_521_input_s::n n @endlink + * @li 1 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_s_gfp_521_output_s::s s @endlink + */ +#define PKE_ECDSA_SIGN_RS_GFP_521 0x494a329b +/**< Functionality ID for ECDSA GFP Sign RS + * @li 1 input parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gfp_521_input_s::in in @endlink + * @li 2 output parameters : @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gfp_521_output_s::r r @endlink @link + * icp_qat_fw_mmp_ecdsa_sign_rs_gfp_521_output_s::s s @endlink + */ +#define PKE_ECDSA_VERIFY_GFP_521 0x554c331f +/**< Functionality ID for ECDSA GFP Verify + * @li 1 input parameters : @link + * icp_qat_fw_mmp_ecdsa_verify_gfp_521_input_s::in in @endlink + * @li no output parameters + */ +#define MATHS_POINT_MULTIPLICATION_GFP_L256 0x432033a6 +/**< Functionality ID for ECC GFP Point Multiplication + * @li 7 input parameters : @link + * icp_qat_fw_maths_point_multiplication_gfp_l256_input_s::k k @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l256_input_s::xg xg @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l256_input_s::yg yg @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l256_input_s::a a @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l256_input_s::b b @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l256_input_s::q q @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l256_input_s::h h @endlink + * @li 2 output parameters : @link + * icp_qat_fw_maths_point_multiplication_gfp_l256_output_s::xk xk @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l256_output_s::yk yk @endlink + */ +#define MATHS_POINT_VERIFY_GFP_L256 0x1f0c33fc +/**< Functionality ID for ECC GFP Partial Point Verification + * @li 5 input parameters : @link + * icp_qat_fw_maths_point_verify_gfp_l256_input_s::xq xq @endlink @link + * icp_qat_fw_maths_point_verify_gfp_l256_input_s::yq yq @endlink 
@link + * icp_qat_fw_maths_point_verify_gfp_l256_input_s::q q @endlink @link + * icp_qat_fw_maths_point_verify_gfp_l256_input_s::a a @endlink @link + * icp_qat_fw_maths_point_verify_gfp_l256_input_s::b b @endlink + * @li no output parameters + */ +#define MATHS_POINT_MULTIPLICATION_GFP_L512 0x41253419 +/**< Functionality ID for ECC GFP Point Multiplication + * @li 7 input parameters : @link + * icp_qat_fw_maths_point_multiplication_gfp_l512_input_s::k k @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l512_input_s::xg xg @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l512_input_s::yg yg @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l512_input_s::a a @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l512_input_s::b b @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l512_input_s::q q @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l512_input_s::h h @endlink + * @li 2 output parameters : @link + * icp_qat_fw_maths_point_multiplication_gfp_l512_output_s::xk xk @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_l512_output_s::yk yk @endlink + */ +#define MATHS_POINT_VERIFY_GFP_L512 0x2612345c +/**< Functionality ID for ECC GFP Partial Point Verification + * @li 5 input parameters : @link + * icp_qat_fw_maths_point_verify_gfp_l512_input_s::xq xq @endlink @link + * icp_qat_fw_maths_point_verify_gfp_l512_input_s::yq yq @endlink @link + * icp_qat_fw_maths_point_verify_gfp_l512_input_s::q q @endlink @link + * icp_qat_fw_maths_point_verify_gfp_l512_input_s::a a @endlink @link + * icp_qat_fw_maths_point_verify_gfp_l512_input_s::b b @endlink + * @li no output parameters + */ +#define MATHS_POINT_MULTIPLICATION_GFP_521 0x5511346e +/**< Functionality ID for ECC GFP Point Multiplication + * @li 7 input parameters : @link + * icp_qat_fw_maths_point_multiplication_gfp_521_input_s::k k @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_521_input_s::xg xg @endlink @link + * 
icp_qat_fw_maths_point_multiplication_gfp_521_input_s::yg yg @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_521_input_s::a a @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_521_input_s::b b @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_521_input_s::q q @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_521_input_s::h h @endlink + * @li 2 output parameters : @link + * icp_qat_fw_maths_point_multiplication_gfp_521_output_s::xk xk @endlink @link + * icp_qat_fw_maths_point_multiplication_gfp_521_output_s::yk yk @endlink + */ +#define MATHS_POINT_VERIFY_GFP_521 0x0e0734be +/**< Functionality ID for ECC GFP Partial Point Verification + * @li 5 input parameters : @link + * icp_qat_fw_maths_point_verify_gfp_521_input_s::xq xq @endlink @link + * icp_qat_fw_maths_point_verify_gfp_521_input_s::yq yq @endlink @link + * icp_qat_fw_maths_point_verify_gfp_521_input_s::q q @endlink @link + * icp_qat_fw_maths_point_verify_gfp_521_input_s::a a @endlink @link + * icp_qat_fw_maths_point_verify_gfp_521_input_s::b b @endlink + * @li no output parameters + */ +#define POINT_MULTIPLICATION_C25519 0x0a0634c6 +/**< Functionality ID for ECC curve25519 Variable Point Multiplication [k]P(x), + * as specified in RFC7748 + * @li 2 input parameters : @link + * icp_qat_fw_point_multiplication_c25519_input_s::xp xp @endlink @link + * icp_qat_fw_point_multiplication_c25519_input_s::k k @endlink + * @li 1 output parameters : @link + * icp_qat_fw_point_multiplication_c25519_output_s::xr xr @endlink + */ +#define GENERATOR_MULTIPLICATION_C25519 0x0a0634d6 +/**< Functionality ID for ECC curve25519 Generator Point Multiplication [k]G(x), + * as specified in RFC7748 + * @li 1 input parameters : @link + * icp_qat_fw_generator_multiplication_c25519_input_s::k k @endlink + * @li 1 output parameters : @link + * icp_qat_fw_generator_multiplication_c25519_output_s::xr xr @endlink + */ +#define POINT_MULTIPLICATION_ED25519 0x100b34e6 +/**< Functionality ID for 
ECC edwards25519 Variable Point Multiplication [k]P, + * as specified in RFC8032 + * @li 3 input parameters : @link + * icp_qat_fw_point_multiplication_ed25519_input_s::xp xp @endlink @link + * icp_qat_fw_point_multiplication_ed25519_input_s::yp yp @endlink @link + * icp_qat_fw_point_multiplication_ed25519_input_s::k k @endlink + * @li 2 output parameters : @link + * icp_qat_fw_point_multiplication_ed25519_output_s::xr xr @endlink @link + * icp_qat_fw_point_multiplication_ed25519_output_s::yr yr @endlink + */ +#define GENERATOR_MULTIPLICATION_ED25519 0x100a34f6 +/**< Functionality ID for ECC edwards25519 Generator Point Multiplication [k]G, + * as specified in RFC8032 + * @li 1 input parameters : @link + * icp_qat_fw_generator_multiplication_ed25519_input_s::k k @endlink + * @li 2 output parameters : @link + * icp_qat_fw_generator_multiplication_ed25519_output_s::xr xr @endlink @link + * icp_qat_fw_generator_multiplication_ed25519_output_s::yr yr @endlink + */ +#define POINT_MULTIPLICATION_C448 0x0c063506 +/**< Functionality ID for ECC curve448 Variable Point Multiplication [k]P(x), as + * specified in RFC7748 + * @li 2 input parameters : @link + * icp_qat_fw_point_multiplication_c448_input_s::xp xp @endlink @link + * icp_qat_fw_point_multiplication_c448_input_s::k k @endlink + * @li 1 output parameters : @link + * icp_qat_fw_point_multiplication_c448_output_s::xr xr @endlink + */ +#define GENERATOR_MULTIPLICATION_C448 0x0c063516 +/**< Functionality ID for ECC curve448 Generator Point Multiplication [k]G(x), + * as specified in RFC7748 + * @li 1 input parameters : @link + * icp_qat_fw_generator_multiplication_c448_input_s::k k @endlink + * @li 1 output parameters : @link + * icp_qat_fw_generator_multiplication_c448_output_s::xr xr @endlink + */ +#define POINT_MULTIPLICATION_ED448 0x1a0b3526 +/**< Functionality ID for ECC edwards448 Variable Point Multiplication [k]P, as + * specified in RFC8032 + * @li 3 input parameters : @link + * 
icp_qat_fw_point_multiplication_ed448_input_s::xp xp @endlink @link + * icp_qat_fw_point_multiplication_ed448_input_s::yp yp @endlink @link + * icp_qat_fw_point_multiplication_ed448_input_s::k k @endlink + * @li 2 output parameters : @link + * icp_qat_fw_point_multiplication_ed448_output_s::xr xr @endlink @link + * icp_qat_fw_point_multiplication_ed448_output_s::yr yr @endlink + */ +#define GENERATOR_MULTIPLICATION_ED448 0x1a0a3536 +/**< Functionality ID for ECC edwards448 Generator Point Multiplication [k]G, as + * specified in RFC8032 + * @li 1 input parameters : @link + * icp_qat_fw_generator_multiplication_ed448_input_s::k k @endlink + * @li 2 output parameters : @link + * icp_qat_fw_generator_multiplication_ed448_output_s::xr xr @endlink @link + * icp_qat_fw_generator_multiplication_ed448_output_s::yr yr @endlink + */ + +#define PKE_LIVENESS 0x00000001 +/**< Functionality ID for PKE_LIVENESS + * @li 0 input parameter(s) + * @li 1 output parameter(s) (8 qwords) + */ +#define PKE_INTERFACE_SIGNATURE 0x972ded54 +/**< Encoded signature of the interface specifications + */ + +#define PKE_INVALID_FUNC_ID 0xffffffff + +#endif /* __ICP_QAT_FW_MMP_IDS__ */ + +/* --- (Automatically generated (relocation v. 
1.3), do not modify manually) --- + */ + +/* --- end of file --- */ Index: sys/dev/qat/qat_api/firmware/include/icp_qat_fw_pke.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/firmware/include/icp_qat_fw_pke.h @@ -0,0 +1,418 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + * @file icp_qat_fw_pke.h + * @defgroup icp_qat_fw_pke ICP QAT FW PKE Processing Definitions + * @ingroup icp_qat_fw + * $Revision: 0.1 $ + * @brief + * This file documents the external interfaces that the QAT FW running + * on the QAT Acceleration Engine provides to clients wanting to + * accelerate crypto asymmetric applications + */ + +#ifndef _ICP_QAT_FW_PKE_ +#define _ICP_QAT_FW_PKE_ + +/* +**************************************************************************** +* Include local header files +**************************************************************************** +*/ +#include "icp_qat_fw.h" + +/** + ***************************************************************************** + * + * @ingroup icp_qat_fw_pke + * + * @brief + * PKE response status field structure contained + * within LW1, comprising the common error codes and + * the response flags. + * + *****************************************************************************/ +typedef struct icp_qat_fw_pke_resp_status_s { + uint8_t comn_err_code; + /**< 8 bit common error code */ + + uint8_t pke_resp_flags; + /**< 8-bit PKE response flags */ + +} icp_qat_fw_pke_resp_status_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_pke + * Definition of the QAT FW PKE request header pars field. + * Structure differs from the DH895xxCC common base header structure, hence + * redefined here. 
+ * @description + * PKE request message header pars structure + * + *****************************************************************************/ +typedef struct icp_qat_fw_req_hdr_pke_cd_pars_s { + /**< LWs 2-3 */ + uint64_t content_desc_addr; + /**< Content descriptor pointer */ + + /**< LW 4 */ + uint32_t content_desc_resrvd; + /**< Content descriptor reserved field */ + + /**< LW 5 */ + uint32_t func_id; + /**< MMP functionality Id */ + +} icp_qat_fw_req_hdr_pke_cd_pars_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_pke + * Definition of the QAT FW PKE request header mid section. + * Structure differs from the DH895xxCC common base header structure, + * instead following the DH89xxCC format, hence redefined here. + * @description + * PKE request message header middle structure + * + *****************************************************************************/ +typedef struct icp_qat_fw_req_pke_mid_s { + /**< LWs 6-11 */ + uint64_t opaque_data; + /**< Opaque data passed unmodified from the request to response messages + * by + * firmware (fw) */ + + uint64_t src_data_addr; + /**< Generic definition of the source data supplied to the QAT AE. The + * common flags are used to further describe the attributes of this + * field */ + + uint64_t dest_data_addr; + /**< Generic definition of the destination data supplied to the QAT AE. + * The + * common flags are used to further describe the attributes of this + * field */ + + /**< Following DH89xxCC structure format - footer is excluded */ + +} icp_qat_fw_req_pke_mid_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_fw_pke + * Definition of the QAT FW PKE request header. + * Structure differs from the DH895xxCC common base header structure, + * instead following the DH89xxCC format, hence redefined here. 
+ * @description + * PKE request message header structure + * + *****************************************************************************/ +typedef struct icp_qat_fw_req_pke_hdr_s { + /**< LW0 */ + uint8_t resrvd1; + /**< reserved field */ + + uint8_t resrvd2; + /**< reserved field */ + + uint8_t service_type; + /**< Service type */ + + uint8_t hdr_flags; + /**< This represents a flags field for the Service Request. + * The most significant bit is the 'valid' flag and the only + * one used. All remaining bit positions are unused and + * are therefore reserved and need to be set to 0. */ + + /**< LW1 */ + icp_qat_fw_comn_flags comn_req_flags; + /**< Common Request flags must indicate flat buffer (as per DH89xxCC) + * Common Request flags - PKE slice flags no longer used - slice + * allocated to a threadstrand.*/ + + uint16_t resrvd4; + /**< (DH89xxCC) CD Header Size and CD Params Size unused. Set to zero. + */ + + /**< LWs 2-5 */ + icp_qat_fw_req_hdr_pke_cd_pars_t cd_pars; + /**< PKE request message header pars structure - this differs + * from the DH895xxCC common base structure */ + +} icp_qat_fw_req_pke_hdr_t; + +/** + *************************************************************************** + * + * @ingroup icp_qat_fw_pke + * + * @brief + * PKE request message structure (64 bytes) + * + *****************************************************************************/ +typedef struct icp_qat_fw_pke_request_s { + /**< LWs 0-5 */ + icp_qat_fw_req_pke_hdr_t pke_hdr; + /**< Request header for PKE - CD Header/Param size + * must be zero */ + + /**< LWs 6-11 (same as DH89xxCC) */ + icp_qat_fw_req_pke_mid_t pke_mid; + /**< Request middle section for PKE */ + + /**< LW 12 */ + uint8_t output_param_count; + /**< Number of output large integers + * for request */ + + uint8_t input_param_count; + /**< Number of input large integers + * for request */ + + uint16_t resrvd1; + /** Reserved **/ + + /**< LW 13 */ + uint32_t resrvd2; + /**< Reserved */ + + /**< LWs 14-15 */ + 
uint64_t next_req_adr; + /** < PKE - next request address */ + +} icp_qat_fw_pke_request_t; + +/** + ***************************************************************************** + * + * @ingroup icp_qat_fw_pke + * + * @brief + * PKE response message header structure + * + *****************************************************************************/ +typedef struct icp_qat_fw_resp_pke_hdr_s { + /**< LW0 */ + uint8_t resrvd1; + /**< The Response Destination Id has been removed + * from first QWord */ + + uint8_t resrvd2; + /**< Response Pipe Id field is unused (reserved) + * - Functionality within DH895xxCC uses arbiter instead */ + + uint8_t response_type; + /**< Response type - copied from the request to + * the response message */ + + uint8_t hdr_flags; + /**< This represents a flags field for the Response. + * The most significant bit is the 'valid' flag and the only + * one used. All remaining bit positions are unused and + * are therefore reserved */ + + /**< LW1 */ + icp_qat_fw_pke_resp_status_t resp_status; + + uint16_t resrvd4; + /**< (DH89xxCC) CD Header Size and CD Params Size fields unused. + * Set to zero. */ + +} icp_qat_fw_resp_pke_hdr_t; + +/** + ***************************************************************************** + * + * @ingroup icp_qat_fw_pke + * + * @brief + * PKE response message structure (32 bytes) + * + *****************************************************************************/ +typedef struct icp_qat_fw_pke_resp_s { + /**< LWs 0-1 */ + icp_qat_fw_resp_pke_hdr_t pke_resp_hdr; + /**< Response header for PKE */ + + /**< LWs 2-3 */ + uint64_t opaque_data; + /**< Opaque data passed from the request to the response message */ + + /**< LWs 4-5 */ + uint64_t src_data_addr; + /**< Generic definition of the source data supplied to the QAT AE. 
The + * common flags are used to further describe the attributes of this + * field */ + + /**< LWs 6-7 */ + uint64_t dest_data_addr; + /**< Generic definition of the destination data supplied to the QAT AE. + * The + * common flags are used to further describe the attributes of this + * field */ + +} icp_qat_fw_pke_resp_t; + +/* ========================================================================= */ +/* MACRO DEFINITIONS */ +/* ========================================================================= */ + +/**< @ingroup icp_qat_fw_pke + * Macro defining the bit position and mask of the 'valid' flag, within the + * hdr_flags field of LW0 (service request and response) of the PKE request */ +#define ICP_QAT_FW_PKE_HDR_VALID_FLAG_BITPOS 7 +#define ICP_QAT_FW_PKE_HDR_VALID_FLAG_MASK 0x1 + +/**< @ingroup icp_qat_fw_pke + * Macro defining the bit position and mask of the PKE status flag, within the + * status field LW1 of a PKE response message */ +#define QAT_COMN_RESP_PKE_STATUS_BITPOS 6 +/**< @ingroup icp_qat_fw_pke + * Starting bit position indicating the PKE status flag within the PKE response + * pke_resp_flags byte. */ + +#define QAT_COMN_RESP_PKE_STATUS_MASK 0x1 +/**< @ingroup icp_qat_fw_pke + * One bit mask used to determine PKE status mask */ + +/* + * < @ingroup icp_qat_fw_pke + * *** PKE Response Status Field Definition *** + * The PKE response follows the CPM 1.5 message format. The status field is 16 bits + * wide, where the status flags are contained within the most significant byte of the + * icp_qat_fw_pke_resp_status_t structure. The lower 8 bits of this word now contain + * the common error codes, which are defined in the common header file(*). 
+ */ +/* + ===== + ----- + ---- + ----- + ----- + ----- + ----- + ----- + ----- + ----------------------- + + * | Bit | 15 | 14 | 13 | 12 | 11 | 10 | 9 | 8 | [7....0] | + * + ===== + ----- + ---- + ----- + ----- + ----- + ----- + ----- + ----- + ----------------------- + + * | Flags | Rsrvd | Pke | Rsrvd | Rsrvd | Rsrvd | Rsrvd | Rsrvd | Rsrvd | Common error codes(*) | + * + ===== + ----- + ---- + ----- + ----- + ----- + ----- + ----- + ----- + ----------------------- + + */ + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_pke + * + * @description + * Macro for extraction of the PKE bit from the 16-bit status field + * particular to a PKE response. The status flags are contained within + * the most significant byte of the word. The lower 8 bits of this status + * word now contain the common error codes, which are defined in the common + * header file. The appropriate macro definition to extract the PKE status + * flag from the PKE response assumes that a single byte i.e. + *pke_resp_flags + * is passed to the macro. + * + * @param status + * Status to extract the PKE status bit + * + *****************************************************************************/ +#define ICP_QAT_FW_PKE_RESP_PKE_STAT_GET(flags) \ + QAT_FIELD_GET((flags), \ + QAT_COMN_RESP_PKE_STATUS_BITPOS, \ + QAT_COMN_RESP_PKE_STATUS_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_pke + * + * @description + * Extract the valid flag from the PKE Request's header flags. Note that + * this invokes the common macro which may be used by either the request + * or the response. + * + * @param icp_qat_fw_req_pke_hdr_t Structure passed to extract the valid bit + * from the 'hdr_flags' field. 
+ * + *****************************************************************************/ +#define ICP_QAT_FW_PKE_RQ_VALID_FLAG_GET(icp_qat_fw_req_pke_hdr_t) \ + ICP_QAT_FW_PKE_HDR_VALID_FLAG_GET(icp_qat_fw_req_pke_hdr_t) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_pke + * + * @description + * Set the valid bit in the PKE Request's header flags. Note that + * this invokes the common macro which may be used by either the request + * or the response. + * + * @param icp_qat_fw_req_pke_hdr_t Structure passed to set the valid bit. + * @param val Value of the valid bit flag. + * + *****************************************************************************/ +#define ICP_QAT_FW_PKE_RQ_VALID_FLAG_SET(icp_qat_fw_req_pke_hdr_t, val) \ + ICP_QAT_FW_PKE_HDR_VALID_FLAG_SET(icp_qat_fw_req_pke_hdr_t, val) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_pke + * + * @description + * Extract the valid flag from the PKE Response's header flags. Note that + * this invokes the common macro which may be used by either the request + * or the response. + * + * @param icp_qat_fw_resp_pke_hdr_t Structure to extract the valid bit + * from the 'hdr_flags' field. + * + *****************************************************************************/ +#define ICP_QAT_FW_PKE_RESP_VALID_FLAG_GET(icp_qat_fw_resp_pke_hdr_t) \ + ICP_QAT_FW_PKE_HDR_VALID_FLAG_GET(icp_qat_fw_resp_pke_hdr_t) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_pke + * + * @description + * Set the valid bit in the PKE Response's header flags. Note that + * this invokes the common macro which may be used by either the + * request or the response. + * + * @param icp_qat_fw_resp_pke_hdr_t Structure to set the valid bit + * @param val Value of the valid bit flag.
+ * + *****************************************************************************/ +#define ICP_QAT_FW_PKE_RESP_VALID_FLAG_SET(icp_qat_fw_resp_pke_hdr_t, val) \ + ICP_QAT_FW_PKE_HDR_VALID_FLAG_SET(icp_qat_fw_resp_pke_hdr_t, val) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_pke + * + * @description + * Common macro to extract the valid flag from the header flags field + * within the header structure (request or response). + * + * @param hdr_t Structure (request or response) to extract the + * valid bit from the 'hdr_flags' field. + * + *****************************************************************************/ +#define ICP_QAT_FW_PKE_HDR_VALID_FLAG_GET(hdr_t) \ + QAT_FIELD_GET(hdr_t.hdr_flags, \ + ICP_QAT_FW_PKE_HDR_VALID_FLAG_BITPOS, \ + ICP_QAT_FW_PKE_HDR_VALID_FLAG_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_fw_pke + * + * @description + * Common macro to set the valid bit in the header flags field within + * the header structure (request or response). + * + * @param hdr_t Structure (request or response) containing the header + * flags field, to allow the valid bit to be set. + * @param val Value of the valid bit flag. 
+ * + *****************************************************************************/ +#define ICP_QAT_FW_PKE_HDR_VALID_FLAG_SET(hdr_t, val) \ + QAT_FIELD_SET((hdr_t.hdr_flags), \ + (val), \ + ICP_QAT_FW_PKE_HDR_VALID_FLAG_BITPOS, \ + ICP_QAT_FW_PKE_HDR_VALID_FLAG_MASK) + +#endif /* _ICP_QAT_FW_PKE_ */ Index: sys/dev/qat/qat_api/firmware/include/icp_qat_hw.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/firmware/include/icp_qat_hw.h @@ -0,0 +1,1552 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file icp_qat_hw.h + * @defgroup icp_qat_hw_defs ICP QAT HW definitions + * @ingroup icp_qat_hw + * @description + * This file documents definitions for the QAT HW + * + *****************************************************************************/ + +#ifndef _ICP_QAT_HW_H_ +#define _ICP_QAT_HW_H_ + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +/* ========================================================================= */ +/* AccelerationEngine */ +/* ========================================================================= */ + +typedef enum { + ICP_QAT_HW_AE_0 = 0, /*!< ID of AE0 */ + ICP_QAT_HW_AE_1 = 1, /*!< ID of AE1 */ + ICP_QAT_HW_AE_2 = 2, /*!< ID of AE2 */ + ICP_QAT_HW_AE_3 = 3, /*!< ID of AE3 */ + ICP_QAT_HW_AE_4 = 4, /*!< ID of AE4 */ + ICP_QAT_HW_AE_5 = 5, /*!< ID of AE5 */ + ICP_QAT_HW_AE_6 = 6, /*!< ID of AE6 */ + ICP_QAT_HW_AE_7 = 7, /*!< ID of AE7 */ + ICP_QAT_HW_AE_8 = 8, /*!< ID of AE8 */ + ICP_QAT_HW_AE_9 = 9, /*!< ID of AE9 */ + ICP_QAT_HW_AE_10 = 10, /*!< ID of AE10 */ + ICP_QAT_HW_AE_11 = 11, /*!< ID of AE11 */ + ICP_QAT_HW_AE_12 = 12, /*!< ID of AE12 */ + ICP_QAT_HW_AE_13 = 13, 
/*!< ID of AE13 */ + ICP_QAT_HW_AE_14 = 14, /*!< ID of AE14 */ + ICP_QAT_HW_AE_15 = 15, /*!< ID of AE15 */ + ICP_QAT_HW_AE_DELIMITER = 16 /**< Delimiter type */ +} icp_qat_hw_ae_id_t; + +/* ========================================================================= */ +/* QAT */ +/* ========================================================================= */ + +typedef enum { + ICP_QAT_HW_QAT_0 = 0, /*!< ID of QAT0 */ + ICP_QAT_HW_QAT_1 = 1, /*!< ID of QAT1 */ + ICP_QAT_HW_QAT_2 = 2, /*!< ID of QAT2 */ + ICP_QAT_HW_QAT_3 = 3, /*!< ID of QAT3 */ + ICP_QAT_HW_QAT_4 = 4, /*!< ID of QAT4 */ + ICP_QAT_HW_QAT_5 = 5, /*!< ID of QAT5 */ + ICP_QAT_HW_QAT_DELIMITER = 6 /**< Delimiter type */ +} icp_qat_hw_qat_id_t; + +/* ========================================================================= */ +/* AUTH SLICE */ +/* ========================================================================= */ + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Supported Authentication Algorithm types + * @description + * Enumeration which is used to define the authenticate algorithms + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_HW_AUTH_ALGO_NULL = 0, /*!< Null hashing */ + ICP_QAT_HW_AUTH_ALGO_SHA1 = 1, /*!< SHA1 hashing */ + ICP_QAT_HW_AUTH_ALGO_MD5 = 2, /*!< MD5 hashing */ + ICP_QAT_HW_AUTH_ALGO_SHA224 = 3, /*!< SHA-224 hashing */ + ICP_QAT_HW_AUTH_ALGO_SHA256 = 4, /*!< SHA-256 hashing */ + ICP_QAT_HW_AUTH_ALGO_SHA384 = 5, /*!< SHA-384 hashing */ + ICP_QAT_HW_AUTH_ALGO_SHA512 = 6, /*!< SHA-512 hashing */ + ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7, /*!< AES-XCBC-MAC hashing */ + ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8, /*!< AES-CBC-MAC hashing */ + ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9, /*!< AES F9 hashing */ + ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10, /*!< Galois 128 bit hashing */ + ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11, /*!< Galois 64 hashing */ + 
ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12, /*!< Kasumi F9 hashing */ + ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13, /*!< UIA2/SNOW_3G F9 hashing */ + ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = + 14, /*!< 128_EIA3/ZUC_3G hashing */ + ICP_QAT_HW_AUTH_ALGO_SM3 = 15, /*!< SM3 hashing */ + ICP_QAT_HW_AUTH_ALGO_SHA3_224 = 16, /*!< SHA3-224 hashing */ + ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17, /*!< SHA3-256 hashing */ + ICP_QAT_HW_AUTH_ALGO_SHA3_384 = 18, /*!< SHA3-384 hashing */ + ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19, /*!< SHA3-512 hashing */ + ICP_QAT_HW_AUTH_ALGO_SHAKE_128 = 20, /*!< SHAKE-128 hashing */ + ICP_QAT_HW_AUTH_ALGO_SHAKE_256 = 21, /*!< SHAKE-256 hashing */ + ICP_QAT_HW_AUTH_ALGO_POLY = 22, /*!< POLY hashing */ + ICP_QAT_HW_AUTH_ALGO_DELIMITER = 23 /**< Delimiter type */ +} icp_qat_hw_auth_algo_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of the supported Authentication modes + * @description + * Enumeration which is used to define the authentication slice modes. + * The concept of modes is very specific to the QAT implementation. Its + * main use is to differentiate how the algorithms are used i.e. mode0 SHA1 + * will configure the QAT Auth Slice to do plain SHA1 hashing while mode1 + * configures it to do SHA1 HMAC with precomputes and mode2 sets up the + * slice to do SHA1 HMAC with no precomputes (uses key directly) + * + * @Note + * Only some algorithms are valid in some of the modes.
If you don't know + * what you are doing then refer back to the HW documentation + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_HW_AUTH_MODE0 = 0, /*!< QAT Auth Mode0 configuration */ + ICP_QAT_HW_AUTH_MODE1 = 1, /*!< QAT Auth Mode1 configuration */ + ICP_QAT_HW_AUTH_MODE2 = 2, /*!< QAT Auth Mode2 configuration */ + ICP_QAT_HW_AUTH_MODE_DELIMITER = 3 /**< Delimiter type */ +} icp_qat_hw_auth_mode_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Auth configuration structure + * + * @description + * Definition of the format of the authentication slice configuration + * + *****************************************************************************/ +typedef struct icp_qat_hw_auth_config_s { + uint32_t config; + /**< Configuration used for setting up the slice */ + + uint32_t reserved; + /**< Reserved */ +} icp_qat_hw_auth_config_t; + +/* Private defines */ + +/* Note: Bit positions have been defined for little endian ordering */ +/* +* AUTH CONFIG WORD BITMAP +* + ===== + ------ + ------ + ------- + ------ + ------ + ----- + ----- + ------ + ------ + ---- + ----- + ----- + ----- + +* | Bit | 63:56 | 55:52 | 51:48 | 47:32 | 31:24 | 23:22 | 21:18 | 17 | 16 | 15 | 14:8 | 7:4 | 3:0 | +* + ===== + ------ + ------ + ------- + ------ + ------ + ----- + ----- + ------ + ------ + ---- + ----- + ------+ ----- + +* | Usage | Prog | Resvd | Prog | Resvd | Resvd | Algo | Rsvrd | SHA3 | SHA3 |Rsvrd | Cmp | Mode | Algo | +* | |padding | Bits=0 | padding | Bits=0 | Bits=0 | SHA3 | |Padding |Padding | | | | | +* | | SHA3 | | SHA3 | | | | |Override|Disable | | | | | +* | |(prefix)| |(postfix)| | | | | | | | | | | +* + ===== + ------ + ------ + ------- + ------ + ------ + ----- + ----- + ------ + ------ + ---- + ----- + ----- + ------+ +*/ + +/**< Flag mask & bit position */ + +#define QAT_AUTH_MODE_BITPOS 4 +/**< @ingroup icp_qat_hw_defs + *
Starting bit position indicating the Auth mode */ + +#define QAT_AUTH_MODE_MASK 0xF +/**< @ingroup icp_qat_hw_defs + * Four bit mask used for determining the Auth mode */ + +#define QAT_AUTH_ALGO_BITPOS 0 +/**< @ingroup icp_qat_hw_defs + * Starting bit position indicating the Auth Algo */ + +#define QAT_AUTH_ALGO_MASK 0xF +/**< @ingroup icp_qat_hw_defs + * Four bit mask used for determining the Auth algo */ + +#define QAT_AUTH_CMP_BITPOS 8 +/**< @ingroup icp_qat_hw_defs + * Starting bit position indicating the Auth Compare */ + +#define QAT_AUTH_CMP_MASK 0x7F +/**< @ingroup icp_qat_hw_defs + * Seven bit mask used to determine the Auth Compare */ + +#define QAT_AUTH_SHA3_PADDING_DISABLE_BITPOS 16 +/**< @ingroup icp_qat_hw_defs + * Starting bit position indicating the Auth h/w + * padding disable for SHA3. + * Flag set to 0 => h/w is required to pad (default) + * Flag set to 1 => No padding in h/w + */ + +#define QAT_AUTH_SHA3_PADDING_DISABLE_MASK 0x1 +/**< @ingroup icp_qat_hw_defs + * Single bit mask used to determine the Auth h/w + * padding disable for SHA3. + */ + +#define QAT_AUTH_SHA3_PADDING_OVERRIDE_BITPOS 17 +/**< @ingroup icp_qat_hw_defs + * Starting bit position indicating the Auth h/w + * padding override for SHA3. + * Flag set to 0 => default padding behaviour + * implemented in SHA3-256 slice will take effect + * (default hardware setting upon h/w reset) + * Flag set to 1 => SHA3-core will not use the padding + * sequence built into the SHA3 core. Instead, the + * padding sequence specified in bits 48-51 and 56-63 + * of the 64-bit auth config word will apply + * (corresponds with EAS bits 32-43). + */ + +#define QAT_AUTH_SHA3_PADDING_OVERRIDE_MASK 0x1 +/**< @ingroup icp_qat_hw_defs + * Single bit mask used to determine the Auth h/w + * padding override for SHA3.
+ */ + +#define QAT_AUTH_ALGO_SHA3_BITPOS 22 +/**< @ingroup icp_qat_hw_defs + * Starting bit position for indicating the + * SHA3 Auth Algo + */ + +#define QAT_AUTH_ALGO_SHA3_MASK 0x3 +/**< @ingroup icp_qat_hw_defs + * Two bit mask used for determining the + * SHA3 Auth algo + */ + +/**< Flag mask & bit position */ + +#define QAT_AUTH_SHA3_PROG_PADDING_POSTFIX_BITPOS 16 +/**< @ingroup icp_qat_hw_defs + * Starting bit position indicating the SHA3 + * flexible programmable padding postfix. + * Note that these bits are set using macro + * ICP_QAT_HW_AUTH_CONFIG_BUILD_UPPER and are + * defined relative to the 32-bit value that + * this macro returns. In effect, therefore, this + * defines starting bit position 48 within the + * 64-bit auth config word. + */ + +#define QAT_AUTH_SHA3_PROG_PADDING_POSTFIX_MASK 0xF +/**< @ingroup icp_qat_hw_defs + * Four-bit mask used to determine the SHA3 + * flexible programmable padding postfix + */ + +#define QAT_AUTH_SHA3_PROG_PADDING_PREFIX_BITPOS 24 +/**< @ingroup icp_qat_hw_defs + * Starting bit position indicating the SHA3 + * flexible programmable padding prefix + * Note that these bits are set using macro + * ICP_QAT_HW_AUTH_CONFIG_BUILD_UPPER and are + * defined relative to the 32-bit value that + * this macro returns. In effect, therefore, this + * defines starting bit position 56 within the + * 64-bit auth config word. + */ + +#define QAT_AUTH_SHA3_PROG_PADDING_PREFIX_MASK 0xFF +/**< @ingroup icp_qat_hw_defs + * Eight-bit mask used to determine the SHA3 + * flexible programmable padding prefix + */ + +/**< Flag usage - see additional notes @description for + * ICP_QAT_HW_AUTH_CONFIG_BUILD and + * ICP_QAT_HW_AUTH_CONFIG_BUILD_UPPER macros. +*/ + +#define QAT_AUTH_SHA3_HW_PADDING_ENABLE 0 +/**< @ingroup icp_qat_hw_defs + * This setting enables h/w padding for SHA3. + */ + +#define QAT_AUTH_SHA3_HW_PADDING_DISABLE 1 +/**< @ingroup icp_qat_hw_defs + * This setting disables h/w padding for SHA3. 
+ */ + +#define QAT_AUTH_SHA3_PADDING_DISABLE_USE_DEFAULT 0 +/**< @ingroup icp_qat_hw_defs + * Default value for the Auth h/w padding disable. + * If set to 0 for SHA3-256, h/w padding is enabled. + * Padding_Disable is undefined for all non-SHA3-256 + * algos and is consequently set to the default of 0. + */ + +#define QAT_AUTH_SHA3_PADDING_OVERRIDE_USE_DEFAULT 0 +/**< @ingroup icp_qat_hw_defs + * Value for the Auth h/w padding override for SHA3. + * Flag set to 0 => default padding behaviour + * implemented in SHA3-256 slice will take effect + * (default hardware setting upon h/w reset) + * For this setting of the override flag, all the + * bits of the padding sequence specified + * in bits 48-51 and 56-63 of the 64-bit + * auth config word are set to 0 (reserved). + */ + +#define QAT_AUTH_SHA3_PADDING_OVERRIDE_PROGRAMMABLE 1 +/**< @ingroup icp_qat_hw_defs + * Value for the Auth h/w padding override for SHA3. + * Flag set to 1 => SHA3-core will not use the padding + * sequence built into the SHA3 core. Instead, the + * padding sequence specified in bits 48-51 and 56-63 + * of the 64-bit auth config word will apply + * (corresponds with EAS bits 32-43). + */ + +#define QAT_AUTH_SHA3_PROG_PADDING_POSTFIX_RESERVED 0 +/**< @ingroup icp_qat_hw_defs + * All the bits of the padding sequence specified in + * bits 48-51 of the 64-bit auth config word are set + * to 0 (reserved) if the padding override bit is set + * to 0, indicating default padding. + */ + +#define QAT_AUTH_SHA3_PROG_PADDING_PREFIX_RESERVED 0 +/**< @ingroup icp_qat_hw_defs + * All the bits of the padding sequence specified in + * bits 56-63 of the 64-bit auth config word are set + * to 0 (reserved) if the padding override bit is set + * to 0, indicating default padding. 
+ */ + +/** + *************************************************************************************** + * @ingroup icp_qat_hw_defs + * + * @description + * The derived configuration word for the auth slice is based on the inputs + * of mode, algorithm type and compare length. The total size of the auth + * config word in the setup block is 64 bits however the size of the value + * returned by this macro is assumed to be only 32 bits (for now) and sets + * the lower 32 bits of the auth config word. Unfortunately, changing the + * size of the returned value to 64 bits will also require changes to the + * shared RAM constants table so the macro size will remain at 32 bits. + * This means that the padding sequence bits specified in bits 48-51 and + * 56-63 of the 64-bit auth config word are NOT included in the + * ICP_QAT_HW_AUTH_CONFIG_BUILD macro and are defined in a + * separate macro, namely, ICP_QAT_HW_AUTH_CONFIG_BUILD_UPPER. + * + * For the digest generation case the compare length is a don't care value. + * Furthermore, if the client will be doing the digest validation, the + * compare_length will not be used. + * The padding and padding override bits for SHA3 are set internally + * by the macro. + * Padding_Disable is set to 0 for SHA3-256 algo only i.e. we want to + * enable this to provide the ability to test with h/w padding enabled. + * Padding_Disable has no meaning for all non-SHA3-256 algos and is + * consequently set to the default of 0. + * Padding Override is set to 0, implying that the padding behaviour + * implemented in the SHA3-256 slice will take effect (default hardware + * setting upon h/w reset). + * This flag has no meaning for other algos, so is also set to the default + * for non-SHA3-256 algos.
+ * + * @param mode Authentication mode to use + * @param algo Auth Algorithm to use + * @param cmp_len The length of the digest if the QAT is to do the check + * + ****************************************************************************************/ +#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \ + ((((mode)&QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \ + (((algo)&QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \ + (((algo >> 4) & QAT_AUTH_ALGO_SHA3_MASK) \ + << QAT_AUTH_ALGO_SHA3_BITPOS) | \ + (((QAT_AUTH_SHA3_PADDING_DISABLE_USE_DEFAULT)&QAT_AUTH_SHA3_PADDING_DISABLE_MASK) \ + << QAT_AUTH_SHA3_PADDING_DISABLE_BITPOS) | \ + (((QAT_AUTH_SHA3_PADDING_OVERRIDE_USE_DEFAULT)&QAT_AUTH_SHA3_PADDING_OVERRIDE_MASK) \ + << QAT_AUTH_SHA3_PADDING_OVERRIDE_BITPOS) | \ + (((cmp_len)&QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS)) + +/** + *************************************************************************************** + * @ingroup icp_qat_hw_defs + * + * @description + * This macro sets the upper 32 bits of the 64-bit auth config word. + * The sequence bits specified in bits 48-51 and 56-63 of the 64-bit auth + * config word are included in this macro, which is therefore assumed to + * return a 32-bit value. + * Note that the Padding Override bit is set in macro + * ICP_QAT_HW_AUTH_CONFIG_BUILD. + * Since the Padding Override is set to 0 regardless, for now, all the bits + * of the padding sequence specified in bits 48-51 and 56-63 of the 64-bit + * auth config word are set to 0 (reserved). Note that the bit positions of + * the padding sequence bits are defined relative to the 32-bit value that + * this macro returns.
+ * + ****************************************************************************************/ +#define ICP_QAT_HW_AUTH_CONFIG_BUILD_UPPER \ + ((((QAT_AUTH_SHA3_PROG_PADDING_POSTFIX_RESERVED)&QAT_AUTH_SHA3_PROG_PADDING_POSTFIX_MASK) \ + << QAT_AUTH_SHA3_PROG_PADDING_POSTFIX_BITPOS) | \ + (((QAT_AUTH_SHA3_PROG_PADDING_PREFIX_RESERVED)&QAT_AUTH_SHA3_PROG_PADDING_PREFIX_MASK) \ + << QAT_AUTH_SHA3_PROG_PADDING_PREFIX_BITPOS)) + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Auth Counter structure + * + * @description + * 32 bit counter that tracks the number of data bytes passed through + * the slice. This is used by the padding logic for some algorithms. Note + * only the upper 32 bits are set. + * + *****************************************************************************/ +typedef struct icp_qat_hw_auth_counter_s { + uint32_t counter; + /**< Counter value */ + uint32_t reserved; + /**< Reserved */ +} icp_qat_hw_auth_counter_t; + +/* Private defines */ +#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF +/**< @ingroup icp_qat_hw_defs + * Thirty two bit mask used for determining the Auth count */ + +#define QAT_AUTH_COUNT_BITPOS 0 +/**< @ingroup icp_qat_hw_defs + * Starting bit position indicating the Auth count. 
*/ + +/** + ****************************************************************************** + * @ingroup icp_qat_hw_defs + * + * @description + * Macro to build the auth counter quad word + * + * @param val Counter value to set + * + *****************************************************************************/ +#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \ + (((val)&QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS) + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of the common auth parameters + * @description + * This part of the configuration is constant for each service + * + *****************************************************************************/ +typedef struct icp_qat_hw_auth_setup_s { + icp_qat_hw_auth_config_t auth_config; + /**< Configuration word for the auth slice */ + icp_qat_hw_auth_counter_t auth_counter; + /**< Auth counter value for this request */ +} icp_qat_hw_auth_setup_t; + +/* ************************************************************************* */ +/* ************************************************************************* */ + +#define QAT_HW_DEFAULT_ALIGNMENT 8 +#define QAT_HW_ROUND_UP(val, n) (((val) + ((n)-1)) & (~(n - 1))) + +/* State1 */ +#define ICP_QAT_HW_NULL_STATE1_SZ 64 +/**< @ingroup icp_qat_hw_defs + * State1 block size for NULL hashing */ +#define ICP_QAT_HW_MD5_STATE1_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State1 block size for MD5 */ +#define ICP_QAT_HW_SHA1_STATE1_SZ 20 +/**< @ingroup icp_qat_hw_defs + * Define the state1 block size for SHA1 - Note that for the QAT HW the state + * is rounded to the nearest 8 byte multiple */ +#define ICP_QAT_HW_SHA224_STATE1_SZ 32 +/**< @ingroup icp_qat_hw_defs + * State1 block size for SHA224 */ +#define ICP_QAT_HW_SHA3_224_STATE1_SZ 32 +/**< @ingroup icp_qat_hw_defs + * State1 block size for SHA3_224 */ +#define ICP_QAT_HW_SHA256_STATE1_SZ 32 +/**< @ingroup icp_qat_hw_defs + * State1 block size for SHA256
*/ +#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32 +/**< @ingroup icp_qat_hw_defs + * State1 block size for SHA3_256 */ +#define ICP_QAT_HW_SHA384_STATE1_SZ 64 +/**< @ingroup icp_qat_hw_defs + * State1 block size for SHA384 */ +#define ICP_QAT_HW_SHA3_384_STATE1_SZ 64 +/**< @ingroup icp_qat_hw_defs + * State1 block size for SHA3_384 */ +#define ICP_QAT_HW_SHA512_STATE1_SZ 64 +/**< @ingroup icp_qat_hw_defs + * State1 block size for SHA512 */ +#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64 +/**< @ingroup icp_qat_hw_defs + * State1 block size for SHA3_512 */ +#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State1 block size for XCBC */ +#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State1 block size for CBC */ +#define ICP_QAT_HW_AES_F9_STATE1_SZ 32 +/**< @ingroup icp_qat_hw_defs + * State1 block size for AES F9 */ +#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State1 block size for Kasumi F9 */ +#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State1 block size for Galois128 */ +#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8 +/**< @ingroup icp_cpm_hw_defs + * State1 block size for UIA2 */ +#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8 +/**< @ingroup icp_cpm_hw_defs + * State1 block size for EIA3 */ +#define ICP_QAT_HW_SHA3_STATEFUL_STATE1_SZ 200 +/** <@ingroup icp_cpm_hw_defs + * State1 block size for stateful SHA3 processing*/ +#define ICP_QAT_HW_SM3_STATE1_SZ 32 +/**< @ingroup icp_cpm_hw_defs + * State1 block size for SM3 */ + +/* State2 */ +#define ICP_QAT_HW_NULL_STATE2_SZ 64 +/**< @ingroup icp_qat_hw_defs + * State2 block size for NULL hashing */ +#define ICP_QAT_HW_MD5_STATE2_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State2 block size for MD5 */ +#define ICP_QAT_HW_SHA1_STATE2_SZ 20 +/**< @ingroup icp_qat_hw_defs + * State2 block size for SHA1 - Note that for the QAT HW the state is rounded + * to the nearest 8 byte multiple */ +#define 
ICP_QAT_HW_SHA224_STATE2_SZ 32 +/**< @ingroup icp_qat_hw_defs + * State2 block size for SHA224 */ +#define ICP_QAT_HW_SHA3_224_STATE2_SZ 32 +/**< @ingroup icp_qat_hw_defs + * State2 block size for SHA3_224 */ +#define ICP_QAT_HW_SHA256_STATE2_SZ 32 +/**< @ingroup icp_qat_hw_defs + * State2 block size for SHA256 */ +#define ICP_QAT_HW_SHA3_256_STATE2_SZ 32 +/**< @ingroup icp_qat_hw_defs + * State2 block size for SHA3_256 */ +#define ICP_QAT_HW_SHA384_STATE2_SZ 64 +/**< @ingroup icp_qat_hw_defs + * State2 block size for SHA384 */ +#define ICP_QAT_HW_SHA3_384_STATE2_SZ 64 +/**< @ingroup icp_qat_hw_defs + * State2 block size for SHA3_384 */ +#define ICP_QAT_HW_SHA512_STATE2_SZ 64 +/**< @ingroup icp_qat_hw_defs + * State2 block size for SHA512 */ +#define ICP_QAT_HW_SHA3_512_STATE2_SZ 64 +/**< @ingroup icp_qat_hw_defs + * State2 block size for SHA3_512 */ +#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State2 block size for XCBC */ +#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State2 block size for CBC */ +#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State2 block size for AES Encrypted Counter 0 */ +#define ICP_QAT_HW_F9_IK_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State2 block size for F9 IK */ +#define ICP_QAT_HW_F9_FK_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State2 block size for F9 FK */ +#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ \ + (ICP_QAT_HW_F9_IK_SZ + ICP_QAT_HW_F9_FK_SZ) +/**< @ingroup icp_qat_hw_defs + * State2 complete size for Kasumi F9 */ +#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ +/**< @ingroup icp_qat_hw_defs + * State2 complete size for AES F9 */ +#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24 +/**< @ingroup icp_cpm_hw_defs + * State2 block size for UIA2 */ +#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32 +/**< @ingroup icp_cpm_hw_defs + * State2 block size for EIA3 */ +#define ICP_QAT_HW_GALOIS_H_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State2 
block size for Galois Multiplier H */ +#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8 +/**< @ingroup icp_qat_hw_defs + * State2 block size for Galois AAD length */ +#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16 +/**< @ingroup icp_qat_hw_defs + * State2 block size for Galois Encrypted Counter 0 */ +#define ICP_QAT_HW_SM3_STATE2_SZ 32 +/**< @ingroup icp_qat_hw_defs + * State2 block size for SM3 */ + +/* ************************************************************************* */ +/* ************************************************************************* */ + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of SHA512 auth algorithm processing struct + * @description + * This struct describes the parameters to pass to the slice for + * configuring it for SHA512 processing. This is the largest possible + * setup block for authentication + * + *****************************************************************************/ +typedef struct icp_qat_hw_auth_sha512_s { + icp_qat_hw_auth_setup_t inner_setup; + /**< Inner loop configuration word for the slice */ + + uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ]; + /**< Slice state1 variable */ + + icp_qat_hw_auth_setup_t outer_setup; + /**< Outer configuration word for the slice */ + + uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ]; + /**< Slice state2 variable */ + +} icp_qat_hw_auth_sha512_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of SHA3_512 auth algorithm processing struct + * @description + * This struct describes the parameters to pass to the slice for + * configuring it for SHA3_512 processing.
This is the largest possible + * setup block for authentication + * + *****************************************************************************/ +typedef struct icp_qat_hw_auth_sha3_512_s { + icp_qat_hw_auth_setup_t inner_setup; + /**< Inner loop configuration word for the slice */ + + uint8_t state1[ICP_QAT_HW_SHA3_512_STATE1_SZ]; + /**< Slice state1 variable */ + + icp_qat_hw_auth_setup_t outer_setup; + /**< Outer configuration word for the slice */ + + /* State2 size is zero - this may change for future implementations */ + uint8_t state2[ICP_QAT_HW_SHA3_512_STATE2_SZ]; +} icp_qat_hw_auth_sha3_512_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Supported hardware authentication algorithms + * @description + * Common grouping of the auth algorithm types supported by the QAT + * + *****************************************************************************/ +typedef union icp_qat_hw_auth_algo_blk_u { + icp_qat_hw_auth_sha512_t sha512; + /**< SHA512 Hashing */ + +} icp_qat_hw_auth_algo_blk_t; + +#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0 +/**< @ingroup icp_qat_hw_defs + * Bit position of the 32 bit A value in the 64 bit A configuration sent to + * the QAT */ + +#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF +/**< @ingroup icp_qat_hw_defs + * Mask value for A value */ + +/* ========================================================================= */ +/* CIPHER SLICE */ +/* ========================================================================= */ + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of the supported Cipher Algorithm types + * @description + * Enumeration used to define the cipher algorithms + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_HW_CIPHER_ALGO_NULL = 0, /*!< Null ciphering */ + ICP_QAT_HW_CIPHER_ALGO_DES = 1, /*!< DES 
ciphering */ + ICP_QAT_HW_CIPHER_ALGO_3DES = 2, /*!< 3DES ciphering */ + ICP_QAT_HW_CIPHER_ALGO_AES128 = 3, /*!< AES-128 ciphering */ + ICP_QAT_HW_CIPHER_ALGO_AES192 = 4, /*!< AES-192 ciphering */ + ICP_QAT_HW_CIPHER_ALGO_AES256 = 5, /*!< AES-256 ciphering */ + ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6, /*!< ARC4 ciphering */ + ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7, /*!< Kasumi */ + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8, /*!< Snow_3G */ + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9, /*!< ZUC_3G */ + ICP_QAT_HW_CIPHER_ALGO_SM4 = 10, /*!< SM4 ciphering */ + ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305 = + 11, /*!< CHACHA POLY SPC AEAD */ + ICP_QAT_HW_CIPHER_DELIMITER = 12 /**< Delimiter type */ +} icp_qat_hw_cipher_algo_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of the supported cipher modes of operation + * @description + * Enumeration used to define the cipher slice modes. + * + * @Note + * Only some algorithms are valid in some of the modes. 
If you don't know + what you are doing then refer back to the EAS + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_HW_CIPHER_ECB_MODE = 0, /*!< ECB mode */ + ICP_QAT_HW_CIPHER_CBC_MODE = 1, /*!< CBC mode */ + ICP_QAT_HW_CIPHER_CTR_MODE = 2, /*!< CTR mode */ + ICP_QAT_HW_CIPHER_F8_MODE = 3, /*!< F8 mode */ + ICP_QAT_HW_CIPHER_AEAD_MODE = 4, /*!< AES-GCM SPC AEAD mode */ + ICP_QAT_HW_CIPHER_RESERVED_MODE = 5, /*!< Reserved */ + ICP_QAT_HW_CIPHER_XTS_MODE = 6, /*!< XTS mode */ + ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7 /**< Delimiter type */ +} icp_qat_hw_cipher_mode_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Cipher Configuration Struct + * + * @description + * Configuration data used for setting up the QAT Cipher Slice + * + *****************************************************************************/ + +typedef struct icp_qat_hw_cipher_config_s { + uint32_t val; + /**< Cipher slice configuration */ + + uint32_t reserved; + /**< Reserved */ +} icp_qat_hw_cipher_config_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of the cipher direction + * @description + * Enumeration which is used to define the cipher direction to apply + * + *****************************************************************************/ + +typedef enum { + /*!< Flag to indicate that encryption is required */ + ICP_QAT_HW_CIPHER_ENCRYPT = 0, + /*!< Flag to indicate that decryption is required */ + ICP_QAT_HW_CIPHER_DECRYPT = 1, + +} icp_qat_hw_cipher_dir_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of the cipher key conversion modes + * @description + * Enumeration which is used to define if cipher key conversion is needed + *
*****************************************************************************/ + +typedef enum { + /*!< Flag to indicate that no key convert is required */ + ICP_QAT_HW_CIPHER_NO_CONVERT = 0, + /*!< Flag to indicate that key conversion is required */ + ICP_QAT_HW_CIPHER_KEY_CONVERT = 1, +} icp_qat_hw_cipher_convert_t; + +/* Private defines */ + +/* Note: Bit positions have been arranged for little endian ordering */ + +#define QAT_CIPHER_MODE_BITPOS 4 +/**< @ingroup icp_qat_hw_defs + * Define for the cipher mode bit position */ + +#define QAT_CIPHER_MODE_MASK 0xF +/**< @ingroup icp_qat_hw_defs + * Define for the cipher mode mask (four bits) */ + +#define QAT_CIPHER_ALGO_BITPOS 0 +/**< @ingroup icp_qat_hw_defs + * Define for the cipher algo bit position */ + +#define QAT_CIPHER_ALGO_MASK 0xF +/**< @ingroup icp_qat_hw_defs + * Define for the cipher algo mask (four bits) */ + +#define QAT_CIPHER_CONVERT_BITPOS 9 +/**< @ingroup icp_qat_hw_defs + * Define the cipher convert key bit position */ + +#define QAT_CIPHER_CONVERT_MASK 0x1 +/**< @ingroup icp_qat_hw_defs + * Define for the cipher convert key mask (one bit)*/ + +#define QAT_CIPHER_DIR_BITPOS 8 +/**< @ingroup icp_qat_hw_defs + * Define for the cipher direction bit position */ + +#define QAT_CIPHER_DIR_MASK 0x1 +/**< @ingroup icp_qat_hw_defs + * Define for the cipher direction mask (one bit) */ + +#define QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK 0x1F +/**< @ingroup icp_qat_hw_defs + * Define for the cipher AEAD Hash compare length mask (5 bits)*/ + +#define QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS 10 +/**< @ingroup icp_qat_hw_defs + * Define for the cipher AEAD Hash compare length (5 bits)*/ + +#define QAT_CIPHER_AEAD_AAD_SIZE_LOWER_MASK 0xFF +/**< @ingroup icp_qat_hw_defs + * Define for the cipher AEAD AAD size lower byte mask */ + +#define QAT_CIPHER_AEAD_AAD_SIZE_UPPER_MASK 0x3F +/**< @ingroup icp_qat_hw_defs + * Define for the cipher AEAD AAD size upper 6 bits mask */ + +#define QAT_CIPHER_AEAD_AAD_UPPER_SHIFT 8 +/**< 
@ingroup icp_qat_hw_defs + * Define for the cipher AEAD AAD size Upper byte shift */ + +#define QAT_CIPHER_AEAD_AAD_LOWER_SHIFT 24 +/**< @ingroup icp_qat_hw_defs + * Define for the cipher AEAD AAD size Lower byte shift */ + +#define QAT_CIPHER_AEAD_AAD_SIZE_BITPOS 16 +/**< @ingroup icp_qat_hw_defs + * Define for the cipher AEAD AAD size (14 bits)*/ + +#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2 +/**< @ingroup icp_qat_hw_defs + * Define for the cipher mode F8 key size */ + +#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2 +/**< @ingroup icp_qat_hw_defs + * Define for the cipher XTS mode key size */ + +/** + ****************************************************************************** + * @ingroup icp_qat_hw_defs + * + * @description + * Build the cipher configuration field + * + * @param mode Cipher Mode to use + * @param algo Cipher Algorithm to use + * @param convert Specify if the key is to be converted + * @param dir Specify the cipher direction either encrypt or decrypt + * @param aead_hash_cmp_len Specify the AEAD hash compare length + * + *****************************************************************************/ +#define ICP_QAT_HW_CIPHER_CONFIG_BUILD( \ + mode, algo, convert, dir, aead_hash_cmp_len) \ + ((((mode)&QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \ + (((algo)&QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \ + (((convert)&QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \ + (((dir)&QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS) | \ + (((aead_hash_cmp_len)&QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK) \ + << QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS)) + +/** + ****************************************************************************** + * @ingroup icp_qat_hw_defs + * + * @description + * Build the second QW of cipher slice config + * + * @param aad_size Specify the size of associated authentication data + * for AEAD processing + * + ******************************************************************************/ +#define ICP_QAT_HW_CIPHER_CONFIG_BUILD_UPPER(aad_size) \ + (((((aad_size) >>
QAT_CIPHER_AEAD_AAD_UPPER_SHIFT) & \ + QAT_CIPHER_AEAD_AAD_SIZE_UPPER_MASK) \ + << QAT_CIPHER_AEAD_AAD_SIZE_BITPOS) | \ + (((aad_size)&QAT_CIPHER_AEAD_AAD_SIZE_LOWER_MASK) \ + << QAT_CIPHER_AEAD_AAD_LOWER_SHIFT)) + +#define ICP_QAT_HW_DES_BLK_SZ 8 +/**< @ingroup icp_qat_hw_defs + * Define the block size for DES. + * This used as either the size of the IV or CTR input value */ +#define ICP_QAT_HW_3DES_BLK_SZ 8 +/**< @ingroup icp_qat_hw_defs + * Define the processing block size for 3DES */ +#define ICP_QAT_HW_NULL_BLK_SZ 8 +/**< @ingroup icp_qat_hw_defs + * Define the processing block size for NULL */ +#define ICP_QAT_HW_AES_BLK_SZ 16 +/**< @ingroup icp_qat_hw_defs + * Define the processing block size for AES 128, 192 and 256 */ +#define ICP_QAT_HW_KASUMI_BLK_SZ 8 +/**< @ingroup icp_qat_hw_defs + * Define the processing block size for KASUMI */ +#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8 +/**< @ingroup icp_qat_hw_defs + * Define the processing block size for SNOW_3G */ +#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8 +/**< @ingroup icp_qat_hw_defs + * Define the processing block size for ZUC_3G */ +#define ICP_QAT_HW_NULL_KEY_SZ 256 +/**< @ingroup icp_qat_hw_defs + * Define the key size for NULL */ +#define ICP_QAT_HW_DES_KEY_SZ 8 +/**< @ingroup icp_qat_hw_defs + * Define the key size for DES */ +#define ICP_QAT_HW_3DES_KEY_SZ 24 +/**< @ingroup icp_qat_hw_defs + * Define the key size for 3DES */ +#define ICP_QAT_HW_AES_128_KEY_SZ 16 +/**< @ingroup icp_qat_hw_defs + * Define the key size for AES128 */ +#define ICP_QAT_HW_AES_192_KEY_SZ 24 +/**< @ingroup icp_qat_hw_defs + * Define the key size for AES192 */ +#define ICP_QAT_HW_AES_256_KEY_SZ 32 +/**< @ingroup icp_qat_hw_defs + * Define the key size for AES256 */ +#define ICP_QAT_HW_AES_128_F8_KEY_SZ \ + (ICP_QAT_HW_AES_128_KEY_SZ * QAT_CIPHER_MODE_F8_KEY_SZ_MULT) +/**< @ingroup icp_qat_hw_defs + * Define the key size for AES128 F8 */ +#define ICP_QAT_HW_AES_192_F8_KEY_SZ \ + (ICP_QAT_HW_AES_192_KEY_SZ * QAT_CIPHER_MODE_F8_KEY_SZ_MULT) 
+/**< @ingroup icp_qat_hw_defs + * Define the key size for AES192 F8 */ +#define ICP_QAT_HW_AES_256_F8_KEY_SZ \ + (ICP_QAT_HW_AES_256_KEY_SZ * QAT_CIPHER_MODE_F8_KEY_SZ_MULT) +/**< @ingroup icp_qat_hw_defs + * Define the key size for AES256 F8 */ +#define ICP_QAT_HW_AES_128_XTS_KEY_SZ \ + (ICP_QAT_HW_AES_128_KEY_SZ * QAT_CIPHER_MODE_XTS_KEY_SZ_MULT) +/**< @ingroup icp_qat_hw_defs + * Define the key size for AES128 XTS */ +#define ICP_QAT_HW_AES_256_XTS_KEY_SZ \ + (ICP_QAT_HW_AES_256_KEY_SZ * QAT_CIPHER_MODE_XTS_KEY_SZ_MULT) +/**< @ingroup icp_qat_hw_defs + * Define the key size for AES256 XTS */ +#define ICP_QAT_HW_KASUMI_KEY_SZ 16 +/**< @ingroup icp_qat_hw_defs + * Define the key size for Kasumi */ +#define ICP_QAT_HW_KASUMI_F8_KEY_SZ \ + (ICP_QAT_HW_KASUMI_KEY_SZ * QAT_CIPHER_MODE_F8_KEY_SZ_MULT) +/**< @ingroup icp_qat_hw_defs + * Define the key size for Kasumi F8 */ +#define ICP_QAT_HW_ARC4_KEY_SZ 256 +/**< @ingroup icp_qat_hw_defs + * Define the key size for ARC4 */ +#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16 +/**< @ingroup icp_cpm_hw_defs + * Define the key size for SNOW_3G_UEA2 */ +#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16 +/**< @ingroup icp_cpm_hw_defs + * Define the iv size for SNOW_3G_UEA2 */ +#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16 +/**< @ingroup icp_cpm_hw_defs + * Define the key size for ZUC_3G_EEA3 */ +#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16 +/**< @ingroup icp_cpm_hw_defs + * Define the iv size for ZUC_3G_EEA3 */ +#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2 +/**< @ingroup icp_cpm_hw_defs + * Number of the HW register to clear in F8 mode */ +#define ICP_QAT_HW_CHACHAPOLY_KEY_SZ 32 +/**< @ingroup icp_qat_hw_defs + * Define the key size for CHACHA20-Poly1305 */ +#define ICP_QAT_HW_CHACHAPOLY_IV_SZ 12 +/**< @ingroup icp_qat_hw_defs + * Define the IV size for CHACHA20-Poly1305 */ +#define ICP_QAT_HW_CHACHAPOLY_BLK_SZ 64 +/**< @ingroup icp_qat_hw_defs + * Define the block size for CHACHA20-Poly1305 */ +#define ICP_QAT_HW_CHACHAPOLY_CTR_SZ 16 +/**< @ingroup icp_qat_hw_defs + * Define the counter size for CHACHA20-Poly1305 */ +#define ICP_QAT_HW_SPC_CTR_SZ 16 +/**< @ingroup icp_qat_hw_defs + * Define the Single Pass counter size */ +#define ICP_QAT_HW_CHACHAPOLY_ICV__SZ 16 +/**< @ingroup icp_qat_hw_defs + * Define the ICV (tag) size for CHACHA20-Poly1305 */ +#define ICP_QAT_HW_CHACHAPOLY_AAD_MAX_LOG 14 +/**< @ingroup icp_qat_hw_defs + * Define the maximum AAD size (log2) for CHACHA20-Poly1305 */ +#define ICP_QAT_HW_SM4_BLK_SZ 16 +/**< @ingroup icp_qat_hw_defs + * Define the processing block size for SM4 */ +#define ICP_QAT_HW_SM4_KEY_SZ 16 +/**< @ingroup icp_qat_hw_defs + * Define the key size for SM4 */ +#define ICP_QAT_HW_SM4_IV_SZ 16 +/**< @ingroup icp_qat_hw_defs + * Define the iv size for SM4 */ + +/* + * SHRAM constants definitions + */ +#define INIT_SHRAM_CONSTANTS_TABLE_SZ (1024) +#define SHRAM_CONSTANTS_TABLE_SIZE_QWS (INIT_SHRAM_CONSTANTS_TABLE_SZ / 4 / 2) + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of AES-256 F8 cipher algorithm processing struct + * @description + * This struct describes the parameters to pass to the slice for + * configuring it for AES-256 F8 processing + * + *****************************************************************************/ +typedef struct icp_qat_hw_cipher_aes256_f8_s { + icp_qat_hw_cipher_config_t cipher_config; + /**< Cipher configuration word for the slice set to + * AES-256 and the F8 mode */ + + uint8_t
key[ICP_QAT_HW_AES_256_F8_KEY_SZ]; + /**< Cipher key */ + +} icp_qat_hw_cipher_aes256_f8_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Supported hardware cipher algorithms + * @description + * Common grouping of the cipher algorithm types supported by the QAT. + * This is the largest possible cipher setup block size + * + *****************************************************************************/ +typedef union icp_qat_hw_cipher_algo_blk_u { + + icp_qat_hw_cipher_aes256_f8_t aes256_f8; + /**< AES-256 F8 Cipher */ + +} icp_qat_hw_cipher_algo_blk_t; + +/* ========================================================================= */ +/* TRNG SLICE */ +/* ========================================================================= */ + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of the supported TRNG configuration modes + * @description + * Enumeration used to define the TRNG modes. Used by clients when + * configuring the TRNG for use + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_HW_TRNG_DBL = 0, /*!< TRNG Disabled mode */ + ICP_QAT_HW_TRNG_NHT = 1, /*!< TRNG Normal Health Test mode */ + ICP_QAT_HW_TRNG_KAT = 4, /*!< TRNG Known Answer Test mode */ + ICP_QAT_HW_TRNG_DELIMITER = 8 /**< Delimiter type */ +} icp_qat_hw_trng_cfg_mode_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of the supported TRNG KAT (known answer test) modes + * @description + * Enumeration which is used to define the TRNG KAT modes. 
Used by clients + * when configuring the TRNG for testing + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_HW_TRNG_NEG_0 = 0, /*!< TRNG Neg Zero Test */ + ICP_QAT_HW_TRNG_NEG_1 = 1, /*!< TRNG Neg One Test */ + ICP_QAT_HW_TRNG_POS = 2, /*!< TRNG POS Test */ + ICP_QAT_HW_TRNG_POS_VNC = 3, /*!< TRNG POS VNC Test */ + ICP_QAT_HW_TRNG_KAT_DELIMITER = 4 /**< Delimiter type */ +} icp_qat_hw_trng_kat_mode_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * TRNG mode configuration structure. + * + * @description + * Definition of the format of the TRNG slice configuration. Used + * internally by the QAT FW for configuration of the KAT unit or the + * TRNG depending on the slice command i.e. either a set_slice_config or + * slice_wr_KAT_type + * + *****************************************************************************/ + +typedef struct icp_qat_hw_trng_config_s { + uint32_t val; + /**< Configuration used for setting up the TRNG slice */ + + uint32_t reserved; + /**< Reserved */ +} icp_qat_hw_trng_config_t; + +/* Private Defines */ + +/* Note: Bit positions have been arranged for little endian ordering */ + +#define QAT_TRNG_CONFIG_MODE_MASK 0x7 +/**< @ingroup icp_qat_hw_defs + * Mask for the TRNG configuration mode. 
(Three bits) */ + +#define QAT_TRNG_CONFIG_MODE_BITPOS 5 +/**< @ingroup icp_qat_hw_defs + * TRNG configuration mode bit positions start */ + +#define QAT_TRNG_KAT_MODE_MASK 0x3 +/**< @ingroup icp_qat_hw_defs + * Mask of two bits for the TRNG known answer test mode */ + +#define QAT_TRNG_KAT_MODE_BITPOS 6 +/**< @ingroup icp_qat_hw_defs + * TRNG known answer test mode bit positions start */ + +/** + ****************************************************************************** + * @ingroup icp_qat_hw_defs + * + * @description + * Build the configuration byte for the TRNG slice based on the mode + * + * @param mode Configuration mode parameter + * + *****************************************************************************/ +#define ICP_QAT_HW_TRNG_CONFIG_MODE_BUILD(mode) \ + (((mode)&QAT_TRNG_CONFIG_MODE_MASK) << QAT_TRNG_CONFIG_MODE_BITPOS) + +/** + ****************************************************************************** + * @ingroup icp_qat_hw_defs + * + * @description + * Build the configuration byte for the TRNG KAT based on the mode + * + * @param mode Configuration mode parameter + * + *****************************************************************************/ +#define ICP_QAT_HW_TRNG_KAT_MODE_BUILD(mode) \ + ((((mode)&QAT_TRNG_KAT_MODE_MASK) << QAT_TRNG_KAT_MODE_BITPOS)) + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * TRNG test status structure. + * + * @description + * Definition of the format of the TRNG slice test status structure. Used + * internally by the QAT FW. 
+ * + *****************************************************************************/ + +typedef struct icp_qat_hw_trng_test_status_s { + + uint32_t status; + /**< Status used for setting up the TRNG slice */ + + uint32_t fail_count; + /**< Comparator fail count */ +} icp_qat_hw_trng_test_status_t; + +#define ICP_QAT_HW_TRNG_TEST_NO_FAILURES 1 +/**< @ingroup icp_qat_hw_defs + * Flag to indicate that there were no Test Failures */ + +#define ICP_QAT_HW_TRNG_TEST_FAILURES_FOUND 0 +/**< @ingroup icp_qat_hw_defs + * Flag to indicate that there were Test Failures */ + +#define ICP_QAT_HW_TRNG_TEST_STATUS_VALID 1 +/**< @ingroup icp_qat_hw_defs + * Flag to indicate that there is valid Test output */ + +#define ICP_QAT_HW_TRNG_TEST_STATUS_INVALID 0 +/**< @ingroup icp_qat_hw_defs + * Flag to indicate that the Test output is still invalid */ + +/* Private defines */ +#define QAT_TRNG_TEST_FAILURE_FLAG_MASK 0x1 +/**< @ingroup icp_qat_hw_defs + * Mask of one bit used to determine the TRNG Test pass/fail */ + +#define QAT_TRNG_TEST_FAILURE_FLAG_BITPOS 4 +/**< @ingroup icp_qat_hw_defs + * Flag position to indicate that the TRNG Test status is pass or fail */ + +#define QAT_TRNG_TEST_STATUS_MASK 0x1 +/**< @ingroup icp_qat_hw_defs + * Mask of one bit used to determine the TRNG Test status */ + +#define QAT_TRNG_TEST_STATUS_BITPOS 1 +/**< @ingroup icp_qat_hw_defs + * Flag position to indicate the TRNG Test status */ + +/** + ****************************************************************************** + * @ingroup icp_qat_hw_defs + * + * @description + * Extract the fail bit for the TRNG slice + * + * @param status TRNG status value + * + *****************************************************************************/ + +#define ICP_QAT_HW_TRNG_FAIL_FLAG_GET(status) \ + (((status) >> QAT_TRNG_TEST_FAILURE_FLAG_BITPOS) & \ + QAT_TRNG_TEST_FAILURE_FLAG_MASK) + +/** + ****************************************************************************** + * @ingroup icp_qat_hw_defs + * + *
@description + * Extract the status valid bit for the TRNG slice + * + * @param status TRNG status value + * + *****************************************************************************/ +#define ICP_QAT_HW_TRNG_STATUS_VALID_GET(status) \ + (((status) >> QAT_TRNG_TEST_STATUS_BITPOS) & QAT_TRNG_TEST_STATUS_MASK) + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * TRNG entropy counters + * + * @description + * Definition of the format of the TRNG entropy counters. Used internally + * by the QAT FW. + * + *****************************************************************************/ + +typedef struct icp_qat_hw_trng_entropy_counts_s { + uint64_t raw_ones_count; + /**< Count of raw ones of entropy */ + + uint64_t raw_zeros_count; + /**< Count of raw zeros of entropy */ + + uint64_t cond_ones_count; + /**< Count of conditioned ones entropy */ + + uint64_t cond_zeros_count; + /**< Count of conditioned zeros entropy */ +} icp_qat_hw_trng_entropy_counts_t; + +/* Private defines */ +#define QAT_HW_TRNG_ENTROPY_STS_RSVD_SZ 4 +/**< @ingroup icp_qat_hw_defs + * TRNG entropy status reserved size in bytes */ + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * TRNG entropy available status. + * + * @description + * Definition of the format of the TRNG slice entropy status available. + * struct. Used internally by the QAT FW. 
+ * + *****************************************************************************/ +typedef struct icp_qat_hw_trng_entropy_status_s { + uint32_t status; + /**< Entropy status in the TRNG */ + + uint8_t reserved[QAT_HW_TRNG_ENTROPY_STS_RSVD_SZ]; + /**< Reserved */ +} icp_qat_hw_trng_entropy_status_t; + +#define ICP_QAT_HW_TRNG_ENTROPY_AVAIL 1 +/**< @ingroup icp_qat_hw_defs + * Flag indicating that entropy data is available in the QAT TRNG slice */ + +#define ICP_QAT_HW_TRNG_ENTROPY_NOT_AVAIL 0 +/**< @ingroup icp_qat_hw_defs + * Flag indicating that no entropy data is available in the QAT TRNG slice */ + +/* Private defines */ +#define QAT_TRNG_ENTROPY_STATUS_MASK 1 +/**< @ingroup icp_qat_hw_defs + * Mask of one bit used to determine the TRNG Entropy status */ + +#define QAT_TRNG_ENTROPY_STATUS_BITPOS 0 +/**< @ingroup icp_qat_hw_defs + * Starting bit position for TRNG Entropy status. */ + +/** + ****************************************************************************** + * @ingroup icp_qat_hw_defs + * + * @description + * Extract the entropy available status bit + * + * @param status TRNG status value + * + *****************************************************************************/ +#define ICP_QAT_HW_TRNG_ENTROPY_STATUS_GET(status) \ + (((status) >> QAT_TRNG_ENTROPY_STATUS_BITPOS) & \ + QAT_TRNG_ENTROPY_STATUS_MASK) + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Entropy seed data + * + * @description + * This type is used for the definition of the entropy generated by a read + * of the TRNG slice + * + *****************************************************************************/ +typedef uint64_t icp_qat_hw_trng_entropy; + +/* ========================================================================= */ +/* COMPRESSION SLICE */ +/* ========================================================================= */ + +/** + 
***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of the supported compression directions + * @description + * Enumeration used to define the compression directions + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_HW_COMPRESSION_DIR_COMPRESS = 0, /*!< Compression */ + ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS = 1, /*!< Decompression */ + ICP_QAT_HW_COMPRESSION_DIR_DELIMITER = 2 /**< Delimiter type */ +} icp_qat_hw_compression_direction_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of the supported delayed match modes + * @description + * Enumeration used to define whether delayed match is enabled + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_DISABLED = 0, + /*!< Delayed match disabled */ + + ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED = 1, + /*!< Delayed match enabled + Note: This is the only valid mode - refer to CPM1.6 SAS */ + + ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_DELIMITER = 2 + /**< Delimiter type */ + +} icp_qat_hw_compression_delayed_match_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of the supported compression algorithms + * @description + * Enumeration used to define the compression algorithms + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE = 0, /*!< Deflate compression */ + ICP_QAT_HW_COMPRESSION_DEPRECATED = 1, /*!< Deprecated */ + ICP_QAT_HW_COMPRESSION_ALGO_DELIMITER = 2 /**< Delimiter type */ +} icp_qat_hw_compression_algo_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of 
the supported compression depths + * @description + * Enumeration used to define the compression slice depths. + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_HW_COMPRESSION_DEPTH_1 = 0, + /*!< Search depth 1 (Fastest least exhaustive) */ + + ICP_QAT_HW_COMPRESSION_DEPTH_4 = 1, + /*!< Search depth 4 */ + + ICP_QAT_HW_COMPRESSION_DEPTH_8 = 2, + /*!< Search depth 8 */ + + ICP_QAT_HW_COMPRESSION_DEPTH_16 = 3, + /*!< Search depth 16 */ + + ICP_QAT_HW_COMPRESSION_DEPTH_128 = 4, + /*!< Search depth 128 (Slowest, most exhaustive) */ + + ICP_QAT_HW_COMPRESSION_DEPTH_DELIMITER = 5 + /**< Delimiter type */ + +} icp_qat_hw_compression_depth_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Definition of the supported file types + * @description + * Enumeration used to define the compression file types. + * + *****************************************************************************/ + +typedef enum { + ICP_QAT_HW_COMPRESSION_FILE_TYPE_0 = 0, + /*!< Use Static Trees */ + + ICP_QAT_HW_COMPRESSION_FILE_TYPE_1 = 1, + /*!< Use Semi-Dynamic Trees at offset 0 */ + + ICP_QAT_HW_COMPRESSION_FILE_TYPE_2 = 2, + /*!< Use Semi-Dynamic Trees at offset 320 */ + + ICP_QAT_HW_COMPRESSION_FILE_TYPE_3 = 3, + /*!< Use Semi-Dynamic Trees at offset 640 */ + + ICP_QAT_HW_COMPRESSION_FILE_TYPE_4 = 4, + /*!< Use Semi-Dynamic Trees at offset 960 */ + + ICP_QAT_HW_COMPRESSION_FILE_TYPE_DELIMITER = 5 + /**< Delimiter type */ + +} icp_qat_hw_compression_file_type_t; + +typedef enum { + BNP_SKIP_MODE_DISABLED = 0, + BNP_SKIP_MODE_AT_START = 1, + BNP_SKIP_MODE_AT_END = 2, + BNP_SKIP_MODE_STRIDE = 3 +} icp_qat_bnp_skip_mode_t; + +/** + ***************************************************************************** + * @ingroup icp_qat_hw_defs + * Compression Configuration Struct + * + * @description + * Configuration data used for setting up the QAT Compression Slice + * 
+ *****************************************************************************/ + +typedef struct icp_qat_hw_compression_config_s { + uint32_t val; + /**< Compression slice configuration */ + + uint32_t reserved; + /**< Reserved */ +} icp_qat_hw_compression_config_t; + +/* Private defines */ +#define QAT_COMPRESSION_DIR_BITPOS 4 +/**< @ingroup icp_qat_hw_defs + * Define for the compression direction bit position */ + +#define QAT_COMPRESSION_DIR_MASK 0x7 +/**< @ingroup icp_qat_hw_defs + * Define for the compression direction mask (three bits) */ + +#define QAT_COMPRESSION_DELAYED_MATCH_BITPOS 16 +/**< @ingroup icp_qat_hw_defs + * Define for the compression delayed match bit position */ + +#define QAT_COMPRESSION_DELAYED_MATCH_MASK 0x1 +/**< @ingroup icp_qat_hw_defs + * Define for the delayed match mask (one bit) */ + +#define QAT_COMPRESSION_ALGO_BITPOS 31 +/**< @ingroup icp_qat_hw_defs + * Define for the compression algorithm bit position */ + +#define QAT_COMPRESSION_ALGO_MASK 0x1 +/**< @ingroup icp_qat_hw_defs + * Define for the compression algorithm mask (one bit) */ + +#define QAT_COMPRESSION_DEPTH_BITPOS 28 +/**< @ingroup icp_qat_hw_defs + * Define for the compression depth bit position */ + +#define QAT_COMPRESSION_DEPTH_MASK 0x7 +/**< @ingroup icp_qat_hw_defs + * Define for the compression depth mask (three bits) */ + +#define QAT_COMPRESSION_FILE_TYPE_BITPOS 24 +/**< @ingroup icp_qat_hw_defs + * Define for the compression file type bit position */ + +#define QAT_COMPRESSION_FILE_TYPE_MASK 0xF +/**< @ingroup icp_qat_hw_defs + * Define for the compression file type mask (four bits) */ + +/** + ****************************************************************************** + * @ingroup icp_qat_hw_defs + * + * @description + * Build the compression slice configuration field + * + * @param dir Compression Direction to use, compress or decompress + * @param delayed Specify if delayed match should be enabled + * @param algo Compression algorithm to use + * @param 
depth Compression search depth to use + * @param filetype Compression file type to use, static or semi dynamic trees + * + *****************************************************************************/ +#define ICP_QAT_HW_COMPRESSION_CONFIG_BUILD( \ + dir, delayed, algo, depth, filetype) \ + ((((dir)&QAT_COMPRESSION_DIR_MASK) << QAT_COMPRESSION_DIR_BITPOS) | \ + (((delayed)&QAT_COMPRESSION_DELAYED_MATCH_MASK) \ + << QAT_COMPRESSION_DELAYED_MATCH_BITPOS) | \ + (((algo)&QAT_COMPRESSION_ALGO_MASK) << QAT_COMPRESSION_ALGO_BITPOS) | \ + (((depth)&QAT_COMPRESSION_DEPTH_MASK) \ + << QAT_COMPRESSION_DEPTH_BITPOS) | \ + (((filetype)&QAT_COMPRESSION_FILE_TYPE_MASK) \ + << QAT_COMPRESSION_FILE_TYPE_BITPOS)) + +/* ========================================================================= */ +/* TRANSLATOR SLICE */ +/* ========================================================================= */ + +/**< Translator slice configuration is set internally by the firmware */ + +#endif /* _ICP_QAT_HW_H_ */ Index: sys/dev/qat/qat_api/freebsd_module.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/freebsd_module.c @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_cfg.h" +#include "cpa.h" +#include "icp_accel_devices.h" +#include "adf_common_drv.h" +#include "icp_adf_debug.h" +#include "icp_adf_init.h" +#include "lac_sal_ctrl.h" + +extern struct mtx *adfDevicesLock; + +static int +adf_module_load(void) +{ + CpaStatus ret = CPA_STATUS_SUCCESS; + + qatUtilsMutexInit(&adfDevicesLock); + ret = SalCtrl_AdfServicesRegister(); + if (ret != CPA_STATUS_SUCCESS) { + qatUtilsMutexDestroy(&adfDevicesLock); + return EFAULT; + } + + return 0; +} + +static int +adf_module_unload(void) +{ + CpaStatus ret = CPA_STATUS_SUCCESS; + + ret = SalCtrl_AdfServicesUnregister(); + if (ret != CPA_STATUS_SUCCESS) { + return EBUSY; + } + 
qatUtilsMutexDestroy(&adfDevicesLock); + + return 0; +} + +static int +adf_modevent(module_t mod, int type, void *arg) +{ + int error; + + switch (type) { + case MOD_LOAD: + error = adf_module_load(); + break; + case MOD_UNLOAD: + error = adf_module_unload(); + break; + default: + error = EOPNOTSUPP; + break; + } + + return (error); +} + +static moduledata_t adf_mod = { "qat_api", adf_modevent, 0 }; + +DECLARE_MODULE(qat_api, adf_mod, SI_SUB_DRIVERS, SI_ORDER_SECOND); +MODULE_VERSION(qat_api, 1); +MODULE_DEPEND(qat_api, qat_common, 1, 1, 1); +MODULE_DEPEND(qat_api, linuxkpi, 1, 1, 1); Index: sys/dev/qat/qat_api/include/cpa.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/cpa.h @@ -0,0 +1,677 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa.h + * + * @defgroup cpa CPA API + * + * @description + * This is the top level API definition for Intel(R) QuickAssist Technology. + * It contains structures, data types and definitions that are common + * across the interface. + * + *****************************************************************************/ + +/** + ***************************************************************************** + * @defgroup cpa_BaseDataTypes Base Data Types + * @file cpa.h + * + * @ingroup cpa + * + * @description + * The base data types for the Intel CPA API. + * + *****************************************************************************/ + +#ifndef CPA_H +#define CPA_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_types.h" + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Instance handle type. + * + * @description + * Handle used to uniquely identify an instance. 
+ * + * @note + * Where only a single instantiation exists this field may be set to + * @ref CPA_INSTANCE_HANDLE_SINGLE. + * + *****************************************************************************/ +typedef void * CpaInstanceHandle; + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Default instantiation handle value where there is only a single instance + * + * @description + * Used as an instance handle value where only one instance exists. + * + *****************************************************************************/ +#define CPA_INSTANCE_HANDLE_SINGLE ((CpaInstanceHandle)0) + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Physical memory address. + * @description + * Type for physical memory addresses. + *****************************************************************************/ +typedef Cpa64U CpaPhysicalAddr; + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Virtual to physical address conversion routine. + * + * @description + * This function is used to convert virtual addresses to physical + * addresses. + * + * @context + * The function shall not be called in an interrupt context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pVirtualAddr Virtual address to be converted. + * + * @return + * Returns the corresponding physical address. + * On error, the value NULL is returned. 
+ * + * @post + * None + * @see + * None + * + *****************************************************************************/ +typedef CpaPhysicalAddr (*CpaVirtualToPhysical)(void * pVirtualAddr); + + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Flat buffer structure containing a pointer and length member. + * + * @description + * A flat buffer structure. The data pointer, pData, is a virtual address. + * An API instance may require the actual data to be in contiguous + * physical memory as determined by @ref CpaInstanceInfo2. + * + *****************************************************************************/ +typedef struct _CpaFlatBuffer { + Cpa32U dataLenInBytes; + /**< Data length specified in bytes. + * When used as an input parameter to a function, the length specifies + * the current length of the buffer. + * When used as an output parameter to a function, the length passed in + * specifies the maximum length of the buffer on return (i.e. the allocated + * length). The implementation will not write past this length. On return, + * the length is always unchanged. */ + Cpa8U *pData; + /**< The data pointer is a virtual address, however the actual data pointed + * to is required to be in contiguous physical memory unless the field + requiresPhysicallyContiguousMemory in CpaInstanceInfo2 is false. */ +} CpaFlatBuffer; + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Scatter/Gather buffer list containing an array of flat buffers. + * + * @description + * A scatter/gather buffer list structure. This buffer structure is + * typically used to represent a region of memory which is not + * physically contiguous, by describing it as a collection of + * buffers, each of which is physically contiguous. 
+ * + * @note + * The memory for the pPrivateMetaData member must be allocated + * by the client as physically contiguous memory. When allocating + * memory for pPrivateMetaData, a call to the corresponding + * BufferListGetMetaSize function (e.g. cpaCyBufferListGetMetaSize) + * MUST be made to determine the size of the Meta Data Buffer. The + * returned size (in bytes) may then be passed in a memory allocation + * routine to allocate the pPrivateMetaData memory. + *****************************************************************************/ +typedef struct _CpaBufferList { + Cpa32U numBuffers; + /**< Number of buffers in the list */ + CpaFlatBuffer *pBuffers; + /**< Pointer to an unbounded array containing the number of CpaFlatBuffers + * defined by numBuffers + */ + void *pUserData; + /**< This is an opaque field that is not read or modified internally. */ + void *pPrivateMetaData; + /**< Private representation of this buffer list. The memory for this + * buffer needs to be allocated by the client as contiguous data. + * The amount of memory required is returned with a call to + * the corresponding BufferListGetMetaSize function. If that function + * returns a size of zero then no memory needs to be allocated, and this + * parameter can be NULL. + */ +} CpaBufferList; + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Flat buffer structure with physical address. + * + * @description + * Functions taking this structure do not need to do any virtual to + * physical address translation before writing the buffer to hardware. + *****************************************************************************/ +typedef struct _CpaPhysFlatBuffer { + Cpa32U dataLenInBytes; + /**< Data length specified in bytes. + * When used as an input parameter to a function, the length specifies + * the current length of the buffer. 
+ * When used as an output parameter to a function, the length passed in + * specifies the maximum length of the buffer on return (i.e. the allocated + * length). The implementation will not write past this length. On return, + * the length is always unchanged. + */ + Cpa32U reserved; + /**< Reserved for alignment */ + CpaPhysicalAddr bufferPhysAddr; + /**< The physical address at which the data resides. The data pointed + * to is required to be in contiguous physical memory. + */ +} CpaPhysFlatBuffer; + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Scatter/gather list containing an array of flat buffers with + * physical addresses. + * + * @description + * Similar to @ref CpaBufferList, this buffer structure is typically + * used to represent a region of memory which is not physically + * contiguous, by describing it as a collection of buffers, each of + * which is physically contiguous. The difference is that, in this + * case, the individual "flat" buffers are represented using + * physical, rather than virtual, addresses. + *****************************************************************************/ +typedef struct _CpaPhysBufferList { + Cpa64U reserved0; + /**< Reserved for internal usage */ + Cpa32U numBuffers; + /**< Number of buffers in the list */ + Cpa32U reserved1; + /**< Reserved for alignment */ + CpaPhysFlatBuffer flatBuffers[]; + /**< Array of flat buffer structures, of size numBuffers */ +} CpaPhysBufferList; + + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Special value which can be taken by length fields on some of the + * "data plane" APIs to indicate that the buffer in question is of + * type CpaPhysBufferList, rather than simply an array of bytes. 
+ ****************************************************************************/ +#define CPA_DP_BUFLIST ((Cpa32U)0xFFFFFFFF) + + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * API status value type definition + * + * @description + * This type definition is used for the return values used in all the + * API functions. Common values are defined, for example see + * @ref CPA_STATUS_SUCCESS, @ref CPA_STATUS_FAIL, etc. + *****************************************************************************/ +typedef Cpa32S CpaStatus; + +#define CPA_STATUS_SUCCESS (0) +/**< + * @ingroup cpa_BaseDataTypes + * Success status value. */ +#define CPA_STATUS_FAIL (-1) +/**< + * @ingroup cpa_BaseDataTypes + * Fail status value. */ +#define CPA_STATUS_RETRY (-2) +/**< + * @ingroup cpa_BaseDataTypes + * Retry status value. */ +#define CPA_STATUS_RESOURCE (-3) +/**< + * @ingroup cpa_BaseDataTypes + * The resource that has been requested is unavailable. Refer + * to relevant sections of the API for specifics on what the suggested + * course of action is. */ +#define CPA_STATUS_INVALID_PARAM (-4) +/**< + * @ingroup cpa_BaseDataTypes + * Invalid parameter has been passed in. */ +#define CPA_STATUS_FATAL (-5) +/**< + * @ingroup cpa_BaseDataTypes + * A serious error has occurred. Recommended course of action + * is to shutdown and restart the component. */ +#define CPA_STATUS_UNSUPPORTED (-6) +/**< + * @ingroup cpa_BaseDataTypes + * The function is not supported, at least not with the specific + * parameters supplied. This may be because a particular + * capability is not supported by the current implementation. */ +#define CPA_STATUS_RESTARTING (-7) +/**< + * @ingroup cpa_BaseDataTypes + * The API implementation is restarting. This may be reported if, for example, + * a hardware implementation is undergoing a reset. Recommended course of + * action is to retry the request. 
*/ + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * API status string type definition + * @description + * This type definition is used for the generic status text strings + * provided by cpaXxGetStatusText API functions. Common values are + * defined, for example see @ref CPA_STATUS_STR_SUCCESS, + * @ref CPA_STATUS_FAIL, etc., as well as the maximum size + * @ref CPA_STATUS_MAX_STR_LENGTH_IN_BYTES. + *****************************************************************************/ +#define CPA_STATUS_MAX_STR_LENGTH_IN_BYTES (255) +/**< + * @ingroup cpa_BaseDataTypes + * Maximum length of the Overall Status String (including generic and specific + * strings returned by calls to cpaXxGetStatusText) */ + +#define CPA_STATUS_STR_SUCCESS ("Operation was successful:") +/**< + * @ingroup cpa_BaseDataTypes + * Status string for @ref CPA_STATUS_SUCCESS. */ +#define CPA_STATUS_STR_FAIL ("General or unspecified error occurred:") +/**< + * @ingroup cpa_BaseDataTypes + * Status string for @ref CPA_STATUS_FAIL. */ +#define CPA_STATUS_STR_RETRY ("Recoverable error occurred:") +/**< + * @ingroup cpa_BaseDataTypes + * Status string for @ref CPA_STATUS_RETRY. */ +#define CPA_STATUS_STR_RESOURCE ("Required resource unavailable:") +/**< + * @ingroup cpa_BaseDataTypes + * Status string for @ref CPA_STATUS_RESOURCE. */ +#define CPA_STATUS_STR_INVALID_PARAM ("Invalid parameter supplied:") +/**< + * @ingroup cpa_BaseDataTypes + * Status string for @ref CPA_STATUS_INVALID_PARAM. */ +#define CPA_STATUS_STR_FATAL ("Fatal error has occurred:") +/**< + * @ingroup cpa_BaseDataTypes + * Status string for @ref CPA_STATUS_FATAL. */ +#define CPA_STATUS_STR_UNSUPPORTED ("Operation not supported:") +/**< + * @ingroup cpa_BaseDataTypes + * Status string for @ref CPA_STATUS_UNSUPPORTED. 
*/ + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Instance Types + * + * @deprecated + * As of v1.3 of the Crypto API, this enum has been deprecated, + * replaced by @ref CpaAccelerationServiceType. + * + * @description + * Enumeration of the different instance types. + * + *****************************************************************************/ +typedef enum _CpaInstanceType +{ + CPA_INSTANCE_TYPE_CRYPTO = 0, + /**< Cryptographic instance type */ + CPA_INSTANCE_TYPE_DATA_COMPRESSION, + /**< Data compression instance type */ + CPA_INSTANCE_TYPE_RAID, + /**< RAID instance type */ + CPA_INSTANCE_TYPE_XML, + /**< XML instance type */ + CPA_INSTANCE_TYPE_REGEX + /**< Regular Expression instance type */ +} CpaInstanceType CPA_DEPRECATED; + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Service Type + * @description + * Enumeration of the different service types. + * + *****************************************************************************/ +typedef enum _CpaAccelerationServiceType +{ + CPA_ACC_SVC_TYPE_CRYPTO = CPA_INSTANCE_TYPE_CRYPTO, + /**< Cryptography */ + CPA_ACC_SVC_TYPE_DATA_COMPRESSION = CPA_INSTANCE_TYPE_DATA_COMPRESSION, + /**< Data Compression */ + CPA_ACC_SVC_TYPE_PATTERN_MATCH = CPA_INSTANCE_TYPE_REGEX, + /**< Pattern Match */ + CPA_ACC_SVC_TYPE_RAID = CPA_INSTANCE_TYPE_RAID, + /**< RAID */ + CPA_ACC_SVC_TYPE_XML = CPA_INSTANCE_TYPE_XML, + /**< XML */ + CPA_ACC_SVC_TYPE_VIDEO_ANALYTICS + /**< Video Analytics */ +} CpaAccelerationServiceType; + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Instance State + * + * @deprecated + * As of v1.3 of the Crypto API, this enum has been deprecated, + * replaced by @ref CpaOperationalState. + * + * @description + * Enumeration of the different instance states that are possible. 
+ * + *****************************************************************************/ +typedef enum _CpaInstanceState +{ + CPA_INSTANCE_STATE_INITIALISED = 0, + /**< Instance is in the initialized state and ready for use. */ + CPA_INSTANCE_STATE_SHUTDOWN + /**< Instance is in the shutdown state and not available for use. */ +} CpaInstanceState CPA_DEPRECATED; + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Instance operational state + * @description + * Enumeration of the different operational states that are possible. + * + *****************************************************************************/ +typedef enum _CpaOperationalState +{ + CPA_OPER_STATE_DOWN= 0, + /**< Instance is not available for use. May not yet be initialized, + * or stopped. */ + CPA_OPER_STATE_UP + /**< Instance is available for use. Has been initialized and started. */ +} CpaOperationalState; + +#define CPA_INSTANCE_MAX_NAME_SIZE_IN_BYTES 64 +/**< + * @ingroup cpa_BaseDataTypes + * Maximum instance info name string length in bytes */ +#define CPA_INSTANCE_MAX_ID_SIZE_IN_BYTES 128 +/**< + * @ingroup cpa_BaseDataTypes + * Maximum instance info id string length in bytes */ +#define CPA_INSTANCE_MAX_VERSION_SIZE_IN_BYTES 64 +/**< + * @ingroup cpa_BaseDataTypes + * Maximum instance info version string length in bytes */ + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Instance Info Structure + * + * @deprecated + * As of v1.3 of the Crypto API, this structure has been deprecated, + * replaced by CpaInstanceInfo2. + * + * @description + * Structure that contains the information to describe the instance. + * + *****************************************************************************/ +typedef struct _CpaInstanceInfo { + enum _CpaInstanceType type; + /**< Type definition for this instance. 
*/
+ enum _CpaInstanceState state;
+ /**< Operational state of the instance. */
+ Cpa8U name[CPA_INSTANCE_MAX_NAME_SIZE_IN_BYTES];
+ /**< Simple text string identifier for the instance. */
+ Cpa8U version[CPA_INSTANCE_MAX_VERSION_SIZE_IN_BYTES];
+ /**< Version string. There may be multiple versions of the same type of
+ * instance accessible through a particular library. */
+} CpaInstanceInfo CPA_DEPRECATED;
+
+/**
+ *****************************************************************************
+ * @ingroup cpa_BaseDataTypes
+ * Physical Instance ID
+ * @description
+ * Identifies the physical instance of an accelerator execution
+ * engine.
+ *
+ * Accelerators are grouped into "packages". Each accelerator can in
+ * turn contain one or more execution engines. Implementations of
+ * this API will define the packageId, acceleratorId,
+ * executionEngineId and busAddress as appropriate for the
+ * implementation. For example, for hardware-based accelerators,
+ * the packageId might identify the chip, which might contain
+ * multiple accelerators, each of which might contain multiple
+ * execution engines. The combination of packageId, acceleratorId
+ * and executionEngineId uniquely identifies the instance.
+ *
+ * Hardware based accelerators implementing this API may also provide
+ * information on the location of the accelerator in the busAddress
+ * field. This field will be defined as appropriate for the
+ * implementation. For example, for PCIe attached accelerators,
+ * the busAddress may contain the PCIe bus, device and function
+ * number of the accelerators.
+ *
+ *****************************************************************************/
+typedef struct _CpaPhysicalInstanceId {
+ Cpa16U packageId;
+ /**< Identifies the package within which the accelerator is
+ * contained. */
+ Cpa16U acceleratorId;
+ /**< Identifies the specific accelerator within the package.
*/ + Cpa16U executionEngineId; + /**< Identifies the specific execution engine within the + * accelerator. */ + Cpa16U busAddress; + /**< Identifies the bus address associated with the accelerator + * execution engine. */ + Cpa32U kptAcHandle; + /**< Identifies the achandle of the accelerator. */ +} CpaPhysicalInstanceId; + +/** + ***************************************************************************** + * @ingroup cpa_BaseDataTypes + * Instance Info Structure, version 2 + * @description + * Structure that contains the information to describe the instance. + * + *****************************************************************************/ +typedef struct _CpaInstanceInfo2 { + CpaAccelerationServiceType accelerationServiceType; + /**< Type of service provided by this instance. */ +#define CPA_INST_VENDOR_NAME_SIZE CPA_INSTANCE_MAX_NAME_SIZE_IN_BYTES + /**< Maximum length of the vendor name. */ + Cpa8U vendorName[CPA_INST_VENDOR_NAME_SIZE]; + /**< String identifying the vendor of the accelerator. */ + +#define CPA_INST_PART_NAME_SIZE CPA_INSTANCE_MAX_NAME_SIZE_IN_BYTES + /**< Maximum length of the part name. */ + Cpa8U partName[CPA_INST_PART_NAME_SIZE]; + /**< String identifying the part (name and/or number). */ + +#define CPA_INST_SW_VERSION_SIZE CPA_INSTANCE_MAX_VERSION_SIZE_IN_BYTES + /**< Maximum length of the software version string. */ + Cpa8U swVersion[CPA_INST_SW_VERSION_SIZE]; + /**< String identifying the version of the software associated with + * the instance. For hardware-based implementations of the API, + * this should be the driver version. For software-based + * implementations of the API, this should be the version of the + * library. + * + * Note that this should NOT be used to store the version of the + * API, nor should it be used to report the hardware revision + * (which can be captured as part of the @ref partName, if required). 
*/ + +#define CPA_INST_NAME_SIZE CPA_INSTANCE_MAX_NAME_SIZE_IN_BYTES + /**< Maximum length of the instance name. */ + Cpa8U instName[CPA_INST_NAME_SIZE]; + /**< String identifying the name of the instance. */ + +#define CPA_INST_ID_SIZE CPA_INSTANCE_MAX_ID_SIZE_IN_BYTES + Cpa8U instID[CPA_INST_ID_SIZE]; + /**< String containing a unique identifier for the instance */ + + CpaPhysicalInstanceId physInstId; + /**< Identifies the "physical instance" of the accelerator. */ + +#define CPA_MAX_CORES 256 + /**< Maximum number of cores to support in the coreAffinity bitmap. */ + CPA_BITMAP(coreAffinity, CPA_MAX_CORES); + /**< A bitmap identifying the core or cores to which the instance + * is affinitized in an SMP operating system. + * + * The term core here is used to mean a "logical" core - for example, + * in a dual-processor, quad-core system with hyperthreading (two + * threads per core), there would be 16 such cores (2 processors x + * 4 cores/processor x 2 threads/core). The numbering of these cores + * and the corresponding bit positions is OS-specific. Note that Linux + * refers to this as "processor affinity" or "CPU affinity", and refers + * to the bitmap as a "cpumask". + * + * The term "affinity" is used to mean that this is the core on which + * the callback function will be invoked when using the asynchronous + * mode of the API. In a hardware-based implementation of the API, + * this might be the core to which the interrupt is affinitized. + * In a software-based implementation, this might be the core to which + * the process running the algorithm is affinitized. Where there is + * no affinity, the bitmap can be set to all zeroes. + * + * This bitmap should be manipulated using the macros @ref + * CPA_BITMAP_BIT_SET, @ref CPA_BITMAP_BIT_CLEAR and @ref + * CPA_BITMAP_BIT_TEST. */ + + Cpa32U nodeAffinity; + /**< Identifies the processor complex, or node, to which the accelerator + * is physically connected, to help identify locality in NUMA systems. 
+ *
+ * The values taken by this attribute will typically be in the range
+ * 0..n-1, where n is the number of nodes (processor complexes) in the
+ * system. For example, in a dual-processor configuration, n=2. The
+ * precise values and their interpretation are OS-specific. */
+
+ CpaOperationalState operState;
+ /**< Operational state of the instance. */
+ CpaBoolean requiresPhysicallyContiguousMemory;
+ /**< Specifies whether the data pointed to by flat buffers
+ * (CpaFlatBuffer::pData) supplied to this instance must be in
+ * physically contiguous memory. */
+ CpaBoolean isPolled;
+ /**< Specifies whether the instance must be polled, or is event driven.
+ * For hardware accelerators, the alternative to polling would be
+ * interrupts. */
+ CpaBoolean isOffloaded;
+ /**< Identifies whether the instance uses hardware offload, or is a
+ * software-only implementation. */
+} CpaInstanceInfo2;
+
+/**
+ *****************************************************************************
+ * @ingroup cpa_BaseDataTypes
+ * Instance Events
+ * @description
+ * Enumeration of the different events that will cause the registered
+ * Instance notification callback function to be invoked.
+ *
+ *****************************************************************************/
+typedef enum _CpaInstanceEvent
+{
+ CPA_INSTANCE_EVENT_RESTARTING = 0,
+ /**< Event type that triggers the registered instance notification callback
+ * function when an instance is restarting. The reason why an instance is
+ * restarting is implementation specific. For example a hardware
+ * implementation may send this event if the hardware device is about to
+ * be reset.
+ */
+ CPA_INSTANCE_EVENT_RESTARTED,
+ /**< Event type that triggers the registered instance notification callback
+ * function when an instance has restarted. The reason why an instance has
+ * restarted is implementation specific. For example a hardware
+ * implementation may send this event after the hardware device has
+ * been reset.
+ */ + CPA_INSTANCE_EVENT_FATAL_ERROR + /**< Event type that triggers the registered instance notification callback + * function when an error has been detected that requires the device + * to be reset. + * This event will be sent by all instances using the device, both on the + * host and guests. + */ +} CpaInstanceEvent; + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /* CPA_H */ Index: sys/dev/qat/qat_api/include/cpa_dev.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/cpa_dev.h @@ -0,0 +1,144 @@ +/**************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ ***************************************************************************/
+
+/*
+ *****************************************************************************
+ * Doxygen group definitions
+ ****************************************************************************/
+
+/**
+ *****************************************************************************
+ * @file cpa_dev.h
+ *
+ * @defgroup cpaDev Device API
+ *
+ * @ingroup cpa
+ *
+ * @description
+ * These functions specify the API for device level operation.
+ *
+ * @remarks
+ *
+ *
+ *****************************************************************************/
+
+#ifndef CPA_DEV_H
+#define CPA_DEV_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+
+#ifndef CPA_H
+#include "cpa.h"
+#endif
+
+
+ /*****************************************************************************
+ * @ingroup cpaDev
+ * Returns device information
+ *
+ * @description
+ * This data structure contains the device information. The device
+ * information is available to both Physical and Virtual Functions.
+ * Depending on the resource partitioning configuration, the services
+ * available may change. This configuration will impact the size of the
+ * Security Association Database (SADB). Other properties such as device
+ * SKU and device ID are also reported.
+ *
+ *****************************************************************************/
+typedef struct _CpaDeviceInfo {
+ Cpa32U sku;
+ /**< Identifies the SKU of the device. */
+ Cpa16U bdf;
+ /**< Identifies the Bus Device Function of the device.
+ * Format is reported as follows:
+ * - bits<2:0> represent the function number.
+ * - bits<7:3> represent the device
+ * - bits<15:8> represent the bus
+ */
+ Cpa32U deviceId;
+ /**< Returns the device ID. */
+ Cpa32U numaNode;
+ /**< Returns the local NUMA node mapped to the device. */
+ CpaBoolean isVf;
+ /**< Returns whether the device is currently used in a virtual function
+ * or not. */
+ CpaBoolean dcEnabled;
+ /**< Compression service enabled */
+ CpaBoolean cySymEnabled;
+ /**< Symmetric crypto service enabled */
+ CpaBoolean cyAsymEnabled;
+ /**< Asymmetric crypto service enabled */
+ CpaBoolean inlineEnabled;
+ /**< Inline service enabled */
+ Cpa32U deviceMemorySizeAvailable;
+ /**< Returns the size of the device memory available. This device memory
+ * section could be used for the intermediate buffers in the
+ * compression service.
+ */
+} CpaDeviceInfo;
+
+
+/*****************************************************************************
+* @ingroup cpaDev
+* Returns the number of devices.
+*
+* @description
+* This API returns the number of devices available to the application.
+* If used on the host, it will return the number of physical devices.
+* If used on the guest, it will return the number of functions mapped
+* to the virtual machine.
+*
+*****************************************************************************/
+CpaStatus cpaGetNumDevices (Cpa16U *numDevices);
+
+/*****************************************************************************
+* @ingroup cpaDev
+* Returns device information for a given device index.
+*
+* @description
+* Returns device information for a given device index. This API must
+* be used in conjunction with cpaGetNumDevices().
+*****************************************************************************/
+CpaStatus cpaGetDeviceInfo (Cpa16U device, CpaDeviceInfo *deviceInfo);
+
+#ifdef __cplusplus
+} /* close the extern "C" { */
+#endif
+
+#endif /* CPA_DEV_H */
Index: sys/dev/qat/qat_api/include/cpa_types.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/include/cpa_types.h
@@ -0,0 +1,244 @@
+/***************************************************************************
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2007-2022 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ ***************************************************************************/
+
+/*
+ *****************************************************************************
+ * Doxygen group definitions
+ ****************************************************************************/
+
+/**
+ *****************************************************************************
+ * @file cpa_types.h
+ *
+ * @defgroup cpa_Types CPA Type Definition
+ *
+ * @ingroup cpa
+ *
+ * @description
+ * These are the CPA type definitions.
+ *
+ *****************************************************************************/
+
+#ifndef CPA_TYPES_H
+#define CPA_TYPES_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#if defined (__FreeBSD__) && defined (_KERNEL)
+
+/* FreeBSD kernel mode */
+#include
+#include
+#include
+
+#else
+
+/* Linux, FreeBSD, or Windows user mode */
+#include
+#include
+#include
+
+#endif
+
+#if defined (WIN32) || defined (_WIN64)
+/* nonstandard extension used : zero-sized array in struct/union */
+#pragma warning (disable: 4200)
+#endif
+
+typedef uint8_t Cpa8U;
+/**<
+ * @file cpa_types.h
+ * @ingroup cpa_Types
+ * Unsigned byte base type. */
+typedef int8_t Cpa8S;
+/**<
+ * @file cpa_types.h
+ * @ingroup cpa_Types
+ * Signed byte base type. */
+typedef uint16_t Cpa16U;
+/**<
+ * @file cpa_types.h
+ * @ingroup cpa_Types
+ * Unsigned double-byte base type.
*/
+typedef int16_t Cpa16S;
+/**<
+ * @file cpa_types.h
+ * @ingroup cpa_Types
+ * Signed double-byte base type. */
+typedef uint32_t Cpa32U;
+/**<
+ * @file cpa_types.h
+ * @ingroup cpa_Types
+ * Unsigned quad-byte base type. */
+typedef int32_t Cpa32S;
+/**<
+ * @file cpa_types.h
+ * @ingroup cpa_Types
+ * Signed quad-byte base type. */
+typedef uint64_t Cpa64U;
+/**<
+ * @file cpa_types.h
+ * @ingroup cpa_Types
+ * Unsigned double-quad-byte base type. */
+typedef int64_t Cpa64S;
+/**<
+ * @file cpa_types.h
+ * @ingroup cpa_Types
+ * Signed double-quad-byte base type. */
+
+/*****************************************************************************
+ * Generic Base Data Type definitions
+ *****************************************************************************/
+#ifndef NULL
+#define NULL (0)
+/**<
+ * @file cpa_types.h
+ * @ingroup cpa_Types
+ * NULL definition. */
+#endif
+
+#ifndef TRUE
+#define TRUE (1==1)
+/**<
+ * @file cpa_types.h
+ * @ingroup cpa_Types
+ * True value definition. */
+#endif
+#ifndef FALSE
+#define FALSE (0==1)
+/**<
+ * @file cpa_types.h
+ * @ingroup cpa_Types
+ * False value definition. */
+#endif
+
+/**
+ *****************************************************************************
+ * @ingroup cpa_Types
+ * Boolean type.
+ *
+ * @description
+ * Functions in this API use this type for Boolean variables that take
+ * true or false values.
+ *
+ *****************************************************************************/
+typedef enum _CpaBoolean
+{
+ CPA_FALSE = FALSE, /**< False value */
+ CPA_TRUE = TRUE /**< True value */
+} CpaBoolean;
+
+
+/**
+ *****************************************************************************
+ * @ingroup cpa_Types
+ * Declare a bitmap of specified size (in bits).
+ *
+ * @description
+ * This macro is used to declare a bitmap of arbitrary size.
+ *
+ * To test whether a bit in the bitmap is set, use @ref
+ * CPA_BITMAP_BIT_TEST.
+ *
+ * While most uses of bitmaps on the API are read-only, macros are also
+ * provided to set (see @ref CPA_BITMAP_BIT_SET) and clear (see @ref
+ * CPA_BITMAP_BIT_CLEAR) bits in the bitmap.
+ *****************************************************************************/
+#define CPA_BITMAP(name, sizeInBits) \
+ Cpa32U name[((sizeInBits)+31)/32]
+
+#define CPA_BITMAP_BIT_TEST(bitmask, bit) \
+ ((bitmask[(bit)/32]) & (0x1 << ((bit)%32)))
+/**<
+ * @ingroup cpa_Types
+ * Test a specified bit in the specified bitmap. The bitmap may have been
+ * declared using @ref CPA_BITMAP. Returns a Boolean (true if the bit is
+ * set, false otherwise). */
+
+#define CPA_BITMAP_BIT_SET(bitmask, bit) \
+ (bitmask[(bit)/32] |= (0x1 << ((bit)%32)))
+/**<
+ * @file cpa_types.h
+ * @ingroup cpa_Types
+ * Set a specified bit in the specified bitmap. The bitmap may have been
+ * declared using @ref CPA_BITMAP. */
+
+#define CPA_BITMAP_BIT_CLEAR(bitmask, bit) \
+ (bitmask[(bit)/32] &= ~(0x1 << ((bit)%32)))
+/**<
+ * @ingroup cpa_Types
+ * Clear a specified bit in the specified bitmap. The bitmap may have been
+ * declared using @ref CPA_BITMAP. */
+
+
+/**
+ **********************************************************************
+ *
+ * @ingroup cpa_Types
+ *
+ * @description
+ * Declare a function or type and mark it as deprecated so that
+ * usages get flagged with a warning.
+ *
+ **********************************************************************
+ */
+#if defined(__GNUC__) || defined(__INTEL_COMPILER) || defined(_WIN64)
+/*
+ * gcc and icc support the __attribute__ ((deprecated)) syntax for marking
+ * functions and other constructs as deprecated.
+ */
+/*
+ * Uncomment the deprecated macro if you need to see which structs are
+ * deprecated.
+ */
+#define CPA_DEPRECATED
+/* #define CPA_DEPRECATED __attribute__ ((deprecated)) */
+#else
+/*
+ * For all other compilers, define deprecated to do nothing.
+ */
+/* #define CPA_DEPRECATED_FUNC(func) func; #pragma deprecated(func) */
+#pragma message("WARNING: You need to implement the CPA_DEPRECATED macro for this compiler")
+#define CPA_DEPRECATED
+#endif
+
+#ifdef __cplusplus
+} /* close the extern "C" { */
+#endif
+
+#endif /* CPA_TYPES_H */
Index: sys/dev/qat/qat_api/include/dc/cpa_dc.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/include/dc/cpa_dc.h
@@ -0,0 +1,2461 @@
+/****************************************************************************
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2007-2022 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ ***************************************************************************/
+
+/*
+ *****************************************************************************
+ * Doxygen group definitions
+ ****************************************************************************/
+
+/**
+ *****************************************************************************
+ * @file cpa_dc.h
+ *
+ * @defgroup cpaDc Data Compression API
+ *
+ * @ingroup cpa
+ *
+ * @description
+ * These functions specify the API for Data Compression operations.
+ *
+ * @remarks
+ *
+ *
+ *****************************************************************************/
+
+#ifndef CPA_DC_H
+#define CPA_DC_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+
+#ifndef CPA_H
+#include "cpa.h"
+#endif
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * CPA Dc Major Version Number
+ * @description
+ * The CPA_DC API major version number. This number will be incremented
+ * when significant churn to the API has occurred. The combination of the
+ * major and minor number definitions represent the complete version number
+ * for this interface.
+ *
+ *****************************************************************************/
+#define CPA_DC_API_VERSION_NUM_MAJOR (2)
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * CPA DC Minor Version Number
+ * @description
+ * The CPA_DC API minor version number. This number will be incremented
+ * when minor changes to the API have occurred. The combination of the major
+ * and minor number definitions represent the complete version number for
+ * this interface.
+ *
+ *****************************************************************************/
+#define CPA_DC_API_VERSION_NUM_MINOR (2)
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Compression API session handle type
+ *
+ * @description
+ * Handle used to uniquely identify a Compression API session. This
+ * handle is established upon registration with the API using
+ * cpaDcInitSession().
+ *
+ *
+ *
+ *****************************************************************************/
+typedef void * CpaDcSessionHandle;
+
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Supported file types
+ *
+ * @description
+ * This enumerated list identifies file types. Used to select Huffman
+ * trees.
+ * File types are associated with Precompiled Huffman Trees.
+ *
+ * @deprecated
+ * As of v1.6 of the Compression API, this enum has been deprecated.
+ *
+ *****************************************************************************/
+typedef enum _CpaDcFileType
+{
+ CPA_DC_FT_ASCII,
+ /**< ASCII File Type */
+ CPA_DC_FT_CSS,
+ /**< Cascading Style Sheet File Type */
+ CPA_DC_FT_HTML,
+ /**< HTML or XML (or similar) file type */
+ CPA_DC_FT_JAVA,
+ /**< Java code (or similar) file type */
+ CPA_DC_FT_OTHER
+ /**< Other file types */
+} CpaDcFileType;
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Supported flush flags
+ *
+ * @description
+ * This enumerated list identifies the types of flush that can be
+ * specified for stateful and stateless cpaDcCompressData and
+ * cpaDcDecompressData functions.
+ *
+ *****************************************************************************/
+typedef enum _CpaDcFlush
+{
+ CPA_DC_FLUSH_NONE = 0,
+ /**< No flush request. */
+ CPA_DC_FLUSH_FINAL,
+ /**< Indicates that the input buffer contains all of the data for
+ the compression session allowing any buffered data to be released.
+ For Deflate, BFINAL is set in the compression header. */
+ CPA_DC_FLUSH_SYNC,
+ /**< Used for stateful deflate compression to indicate that all pending
+ output is flushed, byte aligned, to the output buffer. The session state
+ is not reset. */
+ CPA_DC_FLUSH_FULL
+ /**< Used for deflate compression to indicate that all pending output is
+ flushed to the output buffer and the session state is reset. */
+} CpaDcFlush;
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Supported Huffman Tree types
+ *
+ * @description
+ * This enumeration lists the supported Huffman Tree types.
+ * Selecting Static Huffman trees generates compressed blocks with an RFC
+ * 1951 header specifying "compressed with fixed Huffman trees".
+ *
+ * Selecting Full Dynamic Huffman trees generates compressed blocks with
+ * an RFC 1951 header specifying "compressed with dynamic Huffman codes".
+ * The headers are calculated on the data being compressed, requiring two
+ * passes.
+ *
+ * Selecting Precompiled Huffman Trees generates blocks with RFC 1951
+ * dynamic headers. The headers are pre-calculated and are specified by
+ * the file type.
+ *
+ *****************************************************************************/
+typedef enum _CpaDcHuffType
+{
+ CPA_DC_HT_STATIC,
+ /**< Static Huffman Trees */
+ CPA_DC_HT_PRECOMP,
+ /**< Precompiled Huffman Trees */
+ CPA_DC_HT_FULL_DYNAMIC
+ /**< Full Dynamic Huffman Trees */
+} CpaDcHuffType;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Supported compression types
+ *
+ * @description
+ * This enumeration lists the supported data compression algorithms.
+ * In combination with CpaDcChecksum it is used to decide on the file
+ * header and footer format.
+ *
+ * @deprecated
+ * As of v1.6 of the Compression API, CPA_DC_LZS, CPA_DC_ELZS and
+ * CPA_DC_LZSS have been deprecated and should not be used.
+ *
+ *****************************************************************************/
+typedef enum _CpaDcCompType
+{
+ CPA_DC_LZS,
+ /**< LZS Compression */
+ CPA_DC_ELZS,
+ /**< Extended LZS Compression */
+ CPA_DC_LZSS,
+ /**< LZSS Compression */
+ CPA_DC_DEFLATE
+ /**< Deflate Compression */
+} CpaDcCompType;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Supported checksum algorithms
+ *
+ * @description
+ * This enumeration lists the supported checksum algorithms.
+ * Used to decide on file header and footer specifics.
+ *
+ *****************************************************************************/
+typedef enum _CpaDcChecksum
+{
+ CPA_DC_NONE,
+ /**< No checksums required */
+ CPA_DC_CRC32,
+ /**< Application requires a CRC32 checksum */
+ CPA_DC_ADLER32
+ /**< Application requires an Adler-32 checksum */
+} CpaDcChecksum;
+
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Supported session directions
+ *
+ * @description
+ * This enumerated list identifies the direction of a session.
+ * A session can be compress, decompress or both.
+ *
+ *****************************************************************************/
+typedef enum _CpaDcSessionDir
+{
+ CPA_DC_DIR_COMPRESS,
+ /**< Session will be used for compression */
+ CPA_DC_DIR_DECOMPRESS,
+ /**< Session will be used for decompression */
+ CPA_DC_DIR_COMBINED
+ /**< Session will be used for both compression and decompression */
+} CpaDcSessionDir;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Supported session state settings
+ *
+ * @description
+ * This enumerated list identifies the stateful setting of a session.
+ * A session can be either stateful or stateless.
+ *
+ * Stateful sessions are limited to have only one in-flight message per
+ * session. This means a compress or decompress request must be complete
+ * before a new request can be started. This applies equally to sessions
+ * that are uni-directional in nature and sessions that are combined
+ * compress and decompress. Completion occurs when the synchronous function
+ * returns, or when the asynchronous callback function has completed.
+ *
+ *****************************************************************************/
+typedef enum _CpaDcSessionState
+{
+ CPA_DC_STATEFUL,
+ /**< Session will be stateful, implying that state may need to be
+ saved in some situations */
+ CPA_DC_STATELESS
+ /**< Session will be stateless, implying no state will be stored */
+} CpaDcSessionState;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Supported compression levels
+ *
+ * @description
+ * This enumerated list identifies the supported compression levels.
+ * Lower values will result in less compression, achieved in less time.
+ *
+ *
+ *****************************************************************************/
+typedef enum _CpaDcCompLvl
+{
+ CPA_DC_L1 = 1,
+ /**< Compression level 1 */
+ CPA_DC_L2,
+ /**< Compression level 2 */
+ CPA_DC_L3,
+ /**< Compression level 3 */
+ CPA_DC_L4,
+ /**< Compression level 4 */
+ CPA_DC_L5,
+ /**< Compression level 5 */
+ CPA_DC_L6,
+ /**< Compression level 6 */
+ CPA_DC_L7,
+ /**< Compression level 7 */
+ CPA_DC_L8,
+ /**< Compression level 8 */
+ CPA_DC_L9
+ /**< Compression level 9 */
+} CpaDcCompLvl;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Supported additional details from accelerator
+ *
+ * @description
+ * This enumeration lists the supported additional details from the
+ * accelerator. These may be useful in determining the best way to
+ * recover from a failure.
+ *
+ *
+ *****************************************************************************/
+typedef enum _CpaDcReqStatus
+{
+ CPA_DC_OK = 0,
+ /**< No error detected by compression slice */
+ CPA_DC_INVALID_BLOCK_TYPE = -1,
+ /**< Invalid block type (type == 3) */
+ CPA_DC_BAD_STORED_BLOCK_LEN = -2,
+ /**< Stored block length did not match one's complement */
+ CPA_DC_TOO_MANY_CODES = -3,
+ /**< Too many length or distance codes */
+ CPA_DC_INCOMPLETE_CODE_LENS = -4,
+ /**< Code length codes incomplete */
+ CPA_DC_REPEATED_LENS = -5,
+ /**< Repeated lengths with no first length */
+ CPA_DC_MORE_REPEAT = -6,
+ /**< Repeat more than specified lengths */
+ CPA_DC_BAD_LITLEN_CODES = -7,
+ /**< Invalid literal/length code lengths */
+ CPA_DC_BAD_DIST_CODES = -8,
+ /**< Invalid distance code lengths */
+ CPA_DC_INVALID_CODE = -9,
+ /**< Invalid literal/length or distance code in fixed or dynamic block */
+ CPA_DC_INVALID_DIST = -10,
+ /**< Distance is too far back in fixed or dynamic block */
+ CPA_DC_OVERFLOW = -11,
+ /**< Overflow detected. This is an indication that the output buffer has
+ * overflowed.
+ * For stateful sessions, this is a warning (the input can be adjusted and
+ * resubmitted).
+ * For stateless sessions this is an error condition */
+ CPA_DC_SOFTERR = -12,
+ /**< Other non-fatal error detected */
+ CPA_DC_FATALERR = -13,
+ /**< Fatal error detected */
+ CPA_DC_MAX_RESUBITERR = -14,
+ /**< On an error being detected, the firmware attempted to correct and
+ * resubmitted the request, however, the maximum resubmit value was
+ * exceeded */
+ CPA_DC_INCOMPLETE_FILE_ERR = -15,
+ /**< The input file is incomplete.
Note this is an indication that the request was
+ * submitted with a CPA_DC_FLUSH_FINAL, however, a BFINAL bit was not found in the
+ * request */
+ CPA_DC_WDOG_TIMER_ERR = -16,
+ /**< The request was not completed as a watchdog timer hardware event occurred */
+ CPA_DC_EP_HARDWARE_ERR = -17,
+ /**< Request was not completed as an end point hardware error occurred (for
+ * example, a parity error) */
+ CPA_DC_VERIFY_ERROR = -18,
+ /**< Error detected during "compress and verify" operation */
+ CPA_DC_EMPTY_DYM_BLK = -19,
+ /**< Decompression request contained an empty dynamic stored block
+ * (not supported) */
+ CPA_DC_CRC_INTEG_ERR = -20,
+ /**< A data integrity CRC error was detected */
+} CpaDcReqStatus;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Supported modes for automatically selecting the best compression type.
+ *
+ * @description
+ * This enumeration lists the supported modes for automatically selecting
+ * the best Huffman encoding which would lead to the best compression
+ * results.
+ *
+ * The CPA_DC_ASB_UNCOMP_STATIC_DYNAMIC_WITH_NO_HDRS value is deprecated
+ * and should not be used.
+ *
+ *****************************************************************************/
+typedef enum _CpaDcAutoSelectBest
+{
+ CPA_DC_ASB_DISABLED = 0,
+ /**< Auto select best mode is disabled */
+ CPA_DC_ASB_STATIC_DYNAMIC = 1,
+ /**< Auto select between static and dynamic compression */
+ CPA_DC_ASB_UNCOMP_STATIC_DYNAMIC_WITH_STORED_HDRS = 2,
+ /**< Auto select between uncompressed, static and dynamic compression,
+ * using stored block deflate headers if uncompressed is selected */
+ CPA_DC_ASB_UNCOMP_STATIC_DYNAMIC_WITH_NO_HDRS = 3
+ /**< Auto select between uncompressed, static and dynamic compression,
+ * using no deflate headers if uncompressed is selected */
+} CpaDcAutoSelectBest;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Supported modes for skipping regions of input or output buffers.
+ *
+ * @description
+ * This enumeration lists the supported modes for skipping regions of
+ * input or output buffers.
+ *
+ *****************************************************************************/
+typedef enum _CpaDcSkipMode
+{
+ CPA_DC_SKIP_DISABLED = 0,
+ /**< Skip mode is disabled */
+ CPA_DC_SKIP_AT_START = 1,
+ /**< Skip region is at the start of the buffer. */
+ CPA_DC_SKIP_AT_END = 2,
+ /**< Skip region is at the end of the buffer. */
+ CPA_DC_SKIP_STRIDE = 3
+ /**< Skip region occurs at regular intervals within the buffer.
+ CpaDcSkipData.strideLength specifies the number of bytes between each
+ skip region. */
+} CpaDcSkipMode;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDc
+ * Service specific return codes
+ *
+ * @description
+ * Compression specific return codes
+ *
+ *
+ *****************************************************************************/
+
+#define CPA_DC_BAD_DATA (-100)
+ /**consumed arg.
+ * -# The implementation communicates the amount of data in the
+ * destination buffer list via pResults->produced arg.
+ *
+ * Source Buffer Setup Rules
+ * -# The buffer list must have the correct number of flat buffers. This
+ * is specified by the numBuffers element of the CpaBufferList.
+ * -# Each flat buffer must have a pointer to contiguous memory that has
+ * been allocated by the calling application. The
+ * number of octets to be compressed or decompressed must be stored
+ * in the dataLenInBytes element of the flat buffer.
+ * -# It is permissible to have one or more flat buffers with a zero length
+ * data store. This function will process all flat buffers until the
+ * destination buffer is full or all source data has been processed.
+ * If a buffer has zero length, then no data will be processed from
+ * that buffer.
+ *
+ * Source Buffer Processing Rules.
+ * -# The buffer list is processed in index order - SrcBuff->pBuffers[0]
+ * will be completely processed before SrcBuff->pBuffers[1] begins to
+ * be processed.
+ * -# The application must drain the destination buffers.
+ * If the source data was not completely consumed, the application
+ * must resubmit the request.
+ * -# On return, the pResults->consumed will indicate the number of bytes
+ * consumed from the input buffers.
+ *
+ * Destination Buffer Setup Rules
+ * -# The destination buffer list must have storage for processed data.
+ * This implies at least one flat buffer must exist in the buffer list.
+ * -# For each flat buffer in the buffer list, the dataLenInBytes element
+ * must be set to the size of the buffer space.
+ * -# It is permissible to have one or more flat buffers with a zero length
+ * data store.
+ * If a buffer has zero length, then no data will be added to
+ * that buffer.
+ *
+ * Destination Buffer Processing Rules.
+ * -# The buffer list is processed in index order - DestBuff->pBuffers[0]
+ * will be completely processed before DestBuff->pBuffers[1] begins to
+ * be processed.
+ * -# On return, the pResults->produced will indicate the number of bytes
+ * written to the output buffers.
+ * -# If processing has not been completed, the application must drain the
+ * destination buffers and resubmit the request. The application must
+ * reset the dataLenInBytes for each flat buffer in the destination
+ * buffer list.
+ *
+ * Checksum rules.
+ * If a checksum is specified in the session setup data, then:
+ * -# For the first request for a particular data segment the checksum
+ * is initialised internally by the implementation.
+ * -# The checksum is maintained by the implementation between calls
+ * until the flushFlag is set to CPA_DC_FLUSH_FINAL indicating the
+ * end of a particular data segment.
+ * -# Intermediate checksum values are returned to the application,
+ * via the CpaDcRqResults structure, in response to each request.
+ * However these checksum values are not guaranteed to be valid
+ * until the call with flushFlag set to CPA_DC_FLUSH_FINAL
+ * completes successfully.
+ *
+ * The application should set flushFlag to
+ * CPA_DC_FLUSH_FINAL to indicate processing a particular data segment
+ * is complete. It should be noted that this function may have to be
+ * called more than once to process data after the flushFlag parameter has
+ * been set to CPA_DC_FLUSH_FINAL if the destination buffer fills. Refer
+ * to buffer processing rules.
+ *
+ * For stateful operations, when the function is invoked with flushFlag
+ * set to CPA_DC_FLUSH_NONE or CPA_DC_FLUSH_SYNC, indicating more data
+ * is yet to come, the function may or may not retain data. When the
+ * function is invoked with flushFlag set to CPA_DC_FLUSH_FULL or
+ * CPA_DC_FLUSH_FINAL, the function will process all buffered data.
+ *
+ * For stateless operations, CPA_DC_FLUSH_FINAL will cause the BFINAL
+ * bit to be set for deflate compression. The initial checksum for the
+ * stateless operation should be set to 0. CPA_DC_FLUSH_NONE and
+ * CPA_DC_FLUSH_SYNC should not be used for stateless operations.
+ *
+ * It is possible to maintain checksum and length information across
+ * cpaDcCompressData() calls with a stateless session without maintaining
+ * the full history state that is maintained by a stateful session. In this
+ * mode of operation, an initial checksum value of 0 is passed into the
+ * first cpaDcCompressData() call with the flush flag set to
+ * CPA_DC_FLUSH_FULL. On subsequent calls to cpaDcCompressData() for this
+ * session, the checksum passed to cpaDcCompressData should be set to the
+ * checksum value produced by the previous call to cpaDcCompressData().
+ * When the last block of input data is passed to cpaDcCompressData(), the
+ * flush flag should be set to CPA_DC_FLUSH_FINAL. This will cause the BFINAL
+ * bit to be set in a deflate stream. It is the responsibility of the calling
+ * application to maintain overall lengths across the stateless requests
+ * and to pass the checksum produced by one request into the next request.
+ *
+ * When an instance supports compressAndVerifyAndRecover, it is enabled by
+ * default when using cpaDcCompressData(). If this feature needs to be
+ * disabled, cpaDcCompressData2() must be used.
+ *
+ * Synchronous or Asynchronous operation of the API is determined by
+ * the value of the callbackFn parameter passed to cpaDcInitSession()
+ * when the sessionHandle was setup. If a non-NULL value was specified
+ * then the supplied callback function will be invoked asynchronously
+ * with the response of this request.
+ *
+ * Response ordering:
+ * For each session, the implementation must maintain the order of
+ * responses. That is, if in asynchronous mode, the order of the callback
+ * functions must match the order of jobs submitted by this function.
+ * In a simple synchronous mode implementation, the practice of submitting
+ * a request and blocking on its completion ensures ordering is preserved.
+ * + * This limitation does not apply if the application employs multiple + * threads to service a single session. + * + * If this API is invoked asynchronously, the return code represents + * the success or failure of asynchronously scheduling the request. + * The results of the operation, along with the amount of data consumed + * and produced, become available when the callback function is invoked. + * As such, pResults->consumed and pResults->produced are available + * only when the operation is complete. + * + * The application must not use either the source or destination buffers + * until the callback has completed. + * + * @see + * None + * + *****************************************************************************/ +CpaStatus +cpaDcCompressData( CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + void *callbackTag ); + +/** + ***************************************************************************** + * @ingroup cpaDc + * Submit a request to compress a buffer of data. + * + * @description + * This API consumes data from the input buffer and generates compressed + * data in the output buffer. This API is very similar to + * cpaDcCompressData() except it provides a CpaDcOpData structure for + * passing additional input parameters not covered in cpaDcCompressData(). + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] dcInstance Target service instance. + * @param[in,out] pSessionHandle Session handle.
+ * @param[in] pSrcBuff Pointer to data buffer for compression. + * @param[in] pDestBuff Pointer to buffer space for data after + * compression. + * @param[in] pOpData Additional input parameters. + * @param[in,out] pResults Pointer to results structure + * @param[in] callbackTag User supplied value to help correlate + * the callback with its associated + * request. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_DC_BAD_DATA The input data was not properly formed. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * + * @pre + * pSessionHandle has been setup using cpaDcInitSession() + * @post + * pSessionHandle has session related state information + * @note + * This function passes control to the compression service for processing + * + * @see + * cpaDcCompressData() + * + *****************************************************************************/ +CpaStatus +cpaDcCompressData2( CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcOpData *pOpData, + CpaDcRqResults *pResults, + void *callbackTag ); + +/** + ***************************************************************************** + * @ingroup cpaDc + * Submit a request to decompress a buffer of data. + * + * @description + * This API consumes compressed data from the input buffer and generates + * uncompressed data in the output buffer. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. 
It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] dcInstance Target service instance. + * @param[in,out] pSessionHandle Session handle. + * @param[in] pSrcBuff Pointer to data buffer for decompression. + * @param[in] pDestBuff Pointer to buffer space for data + * after decompression. + * @param[in,out] pResults Pointer to results structure. + * @param[in] flushFlag When set to CPA_DC_FLUSH_FINAL, indicates + * that the input buffer contains all of + * the data for the decompression session, + * allowing the function to release + * history data. + * @param[in] callbackTag User supplied value to help correlate + * the callback with its associated + * request. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_DC_BAD_DATA The input data was not properly formed. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * pSessionHandle has been setup using cpaDcInitSession() + * @post + * pSessionHandle has session related state information + * @note + * This function passes control to the compression service for + * decompression. The function returns the status from the service. + * + * This function may be called repeatedly with input until all of the + * input has been provided and all the output has been consumed. + * + * This function has identical buffer processing rules as + * cpaDcCompressData(). + * + * This function has identical checksum processing rules as + * cpaDcCompressData().
+ * + * The application should set flushFlag to + * CPA_DC_FLUSH_FINAL to indicate processing a particular compressed + * data segment is complete. It should be noted that this function may + * have to be called more than once to process data after flushFlag + * has been set if the destination buffer fills. Refer to + * buffer processing rules in cpaDcCompressData(). + * + * Synchronous or Asynchronous operation of the API is determined by + * the value of the callbackFn parameter passed to cpaDcInitSession() + * when the sessionHandle was setup. If a non-NULL value was specified + * then the supplied callback function will be invoked asynchronously + * with the response of this request, along with the callbackTag + * specified in the function. + * + * The same response ordering constraints identified in the + * cpaDcCompressData API apply to this function. + * + * @see + * cpaDcCompressData() + * + *****************************************************************************/ +CpaStatus +cpaDcDecompressData( CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + void *callbackTag ); + + +/** + ***************************************************************************** + * @ingroup cpaDc + * Submit a request to decompress a buffer of data. + * + * @description + * This API consumes compressed data from the input buffer and generates + * uncompressed data in the output buffer. This API is very similar to + * cpaDcDecompressData() except it provides a CpaDcOpData structure for + * passing additional input parameters not covered in cpaDcDecompressData(). + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. 
+ * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] dcInstance Target service instance. + * @param[in,out] pSessionHandle Session handle. + * @param[in] pSrcBuff Pointer to data buffer for decompression. + * @param[in] pDestBuff Pointer to buffer space for data + * after decompression. + * @param[in] pOpData Additional input parameters. + * @param[in,out] pResults Pointer to results structure. + * @param[in] callbackTag User supplied value to help correlate + * the callback with its associated + * request. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_DC_BAD_DATA The input data was not properly formed. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * + * @pre + * pSessionHandle has been setup using cpaDcInitSession() + * @post + * pSessionHandle has session related state information + * @note + * This function passes control to the compression service for + * decompression. The function returns the status from the service. + * + * @see + * cpaDcDecompressData() + * cpaDcCompressData2() + * cpaDcCompressData() + * + *****************************************************************************/ +CpaStatus +cpaDcDecompressData2( CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcOpData *pOpData, + CpaDcRqResults *pResults, + void *callbackTag ); + +/** + ***************************************************************************** + * @ingroup cpaDc + * Generate compression header.
+ * + * @description + * This API generates the gzip or the zlib header and stores it in the + * output buffer. + * + * @context + * This function may be called from any context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in,out] pSessionHandle Session handle. + * @param[in] pDestBuff Pointer to data buffer where the + * compression header will go. + * @param[out] count Pointer to counter filled in with + * header size. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * pSessionHandle has been setup using cpaDcInitSession() + * + * @note + * This function can output a 10 byte gzip header or 2 byte zlib header to + * the destination buffer. The session properties are used to determine + * the header type. To output a header the session must have been + * initialized with CpaDcCompType CPA_DC_DEFLATE; for any other value no + * header is produced. To output a gzip header the session must have been + * initialized with CpaDcChecksum CPA_DC_CRC32. To output a zlib header + * the session must have been initialized with CpaDcChecksum CPA_DC_ADLER32. + * For CpaDcChecksum CPA_DC_NONE no header is output.
+ * + * If the compression requires a gzip header, then this header requires + * at a minimum the following fields, defined in RFC1952: + * ID1: 0x1f + * ID2: 0x8b + * CM: Compression method = 8 for deflate + * + * The zlib header is defined in RFC1950 and this function must implement + * as a minimum: + * CM: four bit compression method - 8 is deflate with window size to + * 32k + * CINFO: four bit window size (see RFC1950 for details), 7 is 32k + * window + * FLG: defined as: + * - Bits 0 - 4: check bits for CM, CINFO and FLG (see RFC1950) + * - Bit 5: FDICT 0 = default, 1 is preset dictionary + * - Bits 6 - 7: FLEVEL, compression level (see RFC 1950) + * + * The counter parameter will be set + * to the number of bytes added to the buffer. The pData will + * not be changed. + * + * @see + * None + * + *****************************************************************************/ +CpaStatus +cpaDcGenerateHeader( CpaDcSessionHandle pSessionHandle, + CpaFlatBuffer *pDestBuff, Cpa32U *count ); + +/** + ***************************************************************************** + * @ingroup cpaDc + * Generate compression footer. + * + * @description + * This API generates the footer for gzip or zlib and stores it in the + * output buffer. + * @context + * This function may be called from any context. + * @assumptions + * None + * @sideEffects + * All session variables are reset + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in,out] pSessionHandle Session handle. + * @param[in] pDestBuff Pointer to data buffer where the + * compression footer will go. + * @param[in,out] pResults Pointer to results structure filled by + * CpaDcCompressData. Updated with the + * results of this API call + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in.
+ * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * + * @pre + * pSessionHandle has been setup using cpaDcInitSession() + * pResults structure has been filled by CpaDcCompressData(). + * + * @note + * Depending on the session variables, this function can add the + * adler32 footer to the zlib compressed data as defined in RFC1950. If + * required, it can also add the gzip footer, which is the crc32 of the + * uncompressed data and the length of the uncompressed data. This + * section is defined in RFC1952. The session variables used to determine + * the header type are CpaDcCompType and CpaDcChecksum; see cpaDcGenerateHeader + * for more details. + * + * An artifact of invoking this function for writing the footer data is + * that all opaque session specific data is re-initialized. If the + * compression level and file types are consistent, the upper level + * application can continue processing compression requests using the + * same session handle. + * + * The produced element of the pResults structure will be incremented by the + * number of bytes added to the buffer. The pointer to the buffer + * will not be modified. + * + * This function is not supported for stateless sessions. + * + * @see + * None + * + *****************************************************************************/ +CpaStatus +cpaDcGenerateFooter( CpaDcSessionHandle pSessionHandle, + CpaFlatBuffer *pDestBuff, CpaDcRqResults *pResults ); + + +/** + ***************************************************************************** + * @ingroup cpaDc + * Retrieve statistics + * + * @description + * This API retrieves the current statistics for a compression instance. + * + * @context + * This function may be called from any context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] dcInstance Instance handle.
+ * @param[out] pStatistics Pointer to statistics structure. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * None + * @post + * None + * + * @see + * None + * + *****************************************************************************/ +CpaStatus +cpaDcGetStats( CpaInstanceHandle dcInstance, + CpaDcStats *pStatistics ); + +/*****************************************************************************/ +/* Instance Discovery Functions */ + +/** + ***************************************************************************** + * @ingroup cpaDc + * Get the number of device instances that are supported by the API + * implementation. + * + * @description + * + * This function will get the number of device instances that are supported + * by an implementation of the compression API. This number is then used to + * determine the size of the array that must be passed to + * cpaDcGetInstances(). + * + * @context + * This function MUST NOT be called from an interrupt context as it MAY + * sleep. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[out] pNumInstances Pointer to where the number of + * instances will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. 
+ * + * @pre + * None + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * + * @see + * cpaDcGetInstances + * + *****************************************************************************/ +CpaStatus +cpaDcGetNumInstances(Cpa16U* pNumInstances); + +/** + ***************************************************************************** + * @ingroup cpaDc + * Get the handles to the device instances that are supported by the + * API implementation. + * + * @description + * + * This function will return handles to the device instances that are + * supported by an implementation of the compression API. These instance + * handles can then be used as input parameters with other compression API + * functions. + * + * This function will populate an array that has been allocated by the + * caller. The size of this array is determined by the + * cpaDcGetNumInstances() function. + * + * @context + * This function MUST NOT be called from an interrupt context as it MAY + * sleep. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] numInstances Size of the array. + * @param[out] dcInstances Pointer to where the instance + * handles will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported.
+ * + * @pre + * None + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * + * @see + * cpaDcGetNumInstances + * + *****************************************************************************/ +CpaStatus +cpaDcGetInstances(Cpa16U numInstances, + CpaInstanceHandle* dcInstances); + +/** + ***************************************************************************** + * @ingroup cpaDc + * Compression Component utility function to determine the number of + * intermediate buffers required by an implementation. + * + * @description + * This function will determine the number of intermediate buffer lists + * required by an implementation for a compression instance. These buffers + * should then be allocated and provided when calling @ref cpaDcStartInstance() + * to start a compression instance that will use dynamic compression. + * + * @context + * This function may sleep, and MUST NOT be called in interrupt context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * @param[in,out] instanceHandle Handle to an instance of this API to be + * initialized. + * @param[out] pNumBuffers When the function returns, this will + * specify the number of buffer lists that + * should be used as intermediate buffers + * when calling cpaDcStartInstance(). + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. Suggested course of action + * is to shutdown and restart. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * None + * @post + * None + * @note + * Note that this is a synchronous function and has no completion callback + * associated with it.
+ * + * @see + * cpaDcStartInstance() + * + *****************************************************************************/ +CpaStatus +cpaDcGetNumIntermediateBuffers(CpaInstanceHandle instanceHandle, + Cpa16U *pNumBuffers); + +/** + ***************************************************************************** + * @ingroup cpaDc + * Compression Component Initialization and Start function. + * + * @description + * This function will initialize and start the compression component. + * It MUST be called before any other compress function is called. This + * function SHOULD be called only once (either for the very first time, + * or after a cpaDcStopInstance call which succeeded) per instance. + * Subsequent calls will have no effect. + * + * If required by an implementation, this function can be provided with + * instance specific intermediate buffers. The intent is to provide an + * instance specific location to store intermediate results during dynamic + * Huffman tree compression requests. The memory should be + * accessible by the compression engine. The buffers are to support + * deflate compression with dynamic Huffman Trees. Each buffer list + * should be similar in size to twice the destination buffer size passed + * to the compress API. The number of intermediate buffer lists may vary + * between implementations and so @ref cpaDcGetNumIntermediateBuffers() + * should be called first to determine the number of intermediate + * buffers required by the implementation. + * + * If not required, this parameter can be passed in as NULL. + * + * @context + * This function may sleep, and MUST NOT be called in interrupt context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * @param[in,out] instanceHandle Handle to an instance of this API to be + * initialized.
+ * @param[in] numBuffers Number of buffer lists represented by + * the pIntermediateBuffers parameter. + * Note: @ref cpaDcGetNumIntermediateBuffers() + * can be used to determine the number of + * intermediate buffers that an implementation + * requires. + * @param[in] pIntermediateBuffers Optional pointer to Instance specific + * DRAM buffer. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. Suggested course of action + * is to shutdown and restart. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * None + * @post + * None + * @note + * Note that this is a synchronous function and has no completion callback + * associated with it. + * + * @see + * cpaDcStopInstance() + * cpaDcGetNumIntermediateBuffers() + * + *****************************************************************************/ +CpaStatus +cpaDcStartInstance(CpaInstanceHandle instanceHandle, + Cpa16U numBuffers, + CpaBufferList **pIntermediateBuffers); + +/** + ***************************************************************************** + * @ingroup cpaDc + * Compress Component Stop function. + * + * @description + * This function will stop the Compression component and free + * all system resources associated with it. The client MUST ensure that + * all outstanding operations have completed before calling this function. + * The recommended approach to ensure this is to deregister all session or + * callback handles before calling this function. If outstanding + * operations still exist when this function is invoked, the callback + * function for each of those operations will NOT be invoked and the + * shutdown will continue. If the component is to be restarted, then a + * call to cpaDcStartInstance is required. + * + * @context + * This function may sleep, and so MUST NOT be called in interrupt + * context. 
+ * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * @param[in] instanceHandle Handle to an instance of this API to be + * shutdown. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. Suggested course of action + * is to ensure requests are not still being + * submitted and that all sessions are + * deregistered. If this does not help, then + * forcefully remove the component from the + * system. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaDcStartInstance + * @post + * None + * @note + * Note that this is a synchronous function and has no completion callback + * associated with it. + * + * @see + * cpaDcStartInstance() + * + *****************************************************************************/ +CpaStatus +cpaDcStopInstance(CpaInstanceHandle instanceHandle); + + +/** + ***************************************************************************** + * @ingroup cpaDc + * Function to get information on a particular instance. + * + * @description + * This function will provide instance specific information through a + * @ref CpaInstanceInfo2 structure. + * + * @context + * This function will be executed in a context that requires that sleeping + * MUST NOT be permitted. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Handle to an instance of this API to be + * initialized. + * @param[out] pInstanceInfo2 Pointer to the memory location allocated by + * the client into which the CpaInstanceInfo2 + * structure will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. 
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The client has retrieved an instanceHandle from successive calls to + * @ref cpaDcGetNumInstances and @ref cpaDcGetInstances. + * @post + * None + * @note + * None + * @see + * cpaDcGetNumInstances, + * cpaDcGetInstances, + * CpaInstanceInfo2 + * + *****************************************************************************/ +CpaStatus +cpaDcInstanceGetInfo2(const CpaInstanceHandle instanceHandle, + CpaInstanceInfo2 * pInstanceInfo2); + +/*****************************************************************************/ +/* Instance Notification Functions */ +/*****************************************************************************/ +/** + ***************************************************************************** + * @ingroup cpaDc + * Callback function for instance notification support. + * + * @description + * This is the prototype for the instance notification callback function. + * The callback function is passed in as a parameter to the + * @ref cpaDcInstanceSetNotificationCb function. + * + * @context + * This function will be executed in a context that requires that sleeping + * MUST NOT be permitted. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCallbackTag Opaque value provided by user while making + * individual function calls. + * @param[in] instanceEvent The event that will trigger this function to + * get invoked. + * + * @retval + * None + * @pre + * Component has been initialized and the notification function has been + * set via the cpaDcInstanceSetNotificationCb function. 
+ * @post + * None + * @note + * None + * @see + * cpaDcInstanceSetNotificationCb(), + * + *****************************************************************************/ +typedef void (*CpaDcInstanceNotificationCbFunc)( + const CpaInstanceHandle instanceHandle, + void * pCallbackTag, + const CpaInstanceEvent instanceEvent); + +/** + ***************************************************************************** + * @ingroup cpaDc + * Subscribe for instance notifications. + * + * @description + * Clients of the CpaDc interface can subscribe for instance notifications + * by registering a @ref CpaDcInstanceNotificationCbFunc function. + * + * @context + * This function may be called from any context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pInstanceNotificationCb Instance notification callback + * function pointer. + * @param[in] pCallbackTag Opaque value provided by user while + * making individual function calls. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Instance has been initialized. + * @post + * None + * @note + * None + * @see + * CpaDcInstanceNotificationCbFunc + * + *****************************************************************************/ +CpaStatus +cpaDcInstanceSetNotificationCb( + const CpaInstanceHandle instanceHandle, + const CpaDcInstanceNotificationCbFunc pInstanceNotificationCb, + void *pCallbackTag); + + +/** + ***************************************************************************** + * @ingroup cpaDc + * Get the size of the memory required to hold the session information. 
+ * + * @description + * + * The client of the Data Compression API is responsible for + * allocating sufficient memory to hold session information and the context + * data. This function provides a means for determining the size of the + * session information and the size of the context data. + * + * @context + * No restrictions + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] dcInstance Instance handle. + * @param[in] pSessionData Pointer to a user instantiated structure + * containing session data. + * @param[out] pSessionSize On return, this parameter will be the size + * of the memory that will be + * required by cpaDcInitSession() for session + * data. + * @param[out] pContextSize On return, this parameter will be the size + * of the memory that will be required + * for context data. Context data is + * save/restore data including history and + * any implementation specific data that is + * required for a save/restore operation. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * None + * @post + * None + * @note + * Only a synchronous version of this function is provided. + * + * It is expected that context data is comprised of the history and + * any data stores that are specific to the history such as linked + * lists or hash tables. + * For stateless sessions the context size returned from this function + * will be zero. For stateful sessions the context size returned will + * depend on the session setup data. + * + * Session data is expected to include interim checksum values, various + * counters and other session related data that needs to persist + * between invocations. 
+ * For a given implementation of this API, it is safe to assume that + * cpaDcGetSessionSize() will always return the same session size and + * that the size will not be different for different setup data + * parameters. However, it should be noted that the size may change: + * (1) between different implementations of the API (e.g. between software + * and hardware implementations or between different hardware + * implementations) + * (2) between different releases of the same API implementation. + * + * @see + * cpaDcInitSession() + * + *****************************************************************************/ +CpaStatus +cpaDcGetSessionSize(CpaInstanceHandle dcInstance, + CpaDcSessionSetupData* pSessionData, + Cpa32U* pSessionSize, Cpa32U* pContextSize ); + +/** + ***************************************************************************** + * @ingroup cpaDc + * Function to return the size of the memory which must be allocated for + * the pPrivateMetaData member of CpaBufferList. + * + * @description + * This function is used to obtain the size (in bytes) required to allocate + * a buffer descriptor for the pPrivateMetaData member in the + * CpaBufferList structure. + * Should the function return zero then no meta data is required for the + * buffer list. + * + * @context + * This function may be called from any context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Handle to an instance of this API. + * @param[in] numBuffers The number of pointers in the CpaBufferList. + * This is the maximum number of CpaFlatBuffers + * which may be contained in this CpaBufferList. + * @param[out] pSizeInBytes Pointer to the size in bytes of memory to be + * allocated when the client wishes to allocate + * a cpaFlatBuffer. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. 
+ * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * None + * @post + * None + * @note + * None + * @see + * cpaDcGetInstances() + * + *****************************************************************************/ +CpaStatus +cpaDcBufferListGetMetaSize(const CpaInstanceHandle instanceHandle, + Cpa32U numBuffers, + Cpa32U *pSizeInBytes); + + +/** + ***************************************************************************** + * @ingroup cpaDc + * Function to return a string indicating the specific error that occurred + * within the system. + * + * @description + * When a function returns any error including CPA_STATUS_SUCCESS, the + * client can invoke this function to get a string which describes the + * general error condition, and if available additional information on + * the specific error. + * The Client MUST allocate CPA_STATUS_MAX_STR_LENGTH_IN_BYTES bytes for the buffer + * string. + * + * @context + * This function may be called from any context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] dcInstance Handle to an instance of this API. + * @param[in] errStatus The error condition that occurred. + * @param[in,out] pStatusText Pointer to the string buffer that will + * be updated with the status text. The invoking + * application MUST allocate this buffer to be + * exactly CPA_STATUS_MAX_STR_LENGTH_IN_BYTES. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. Note, in this scenario + * it is INVALID to call this function a + * second time. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. 
+ * + * @pre + * None + * @post + * None + * @note + * None + * @see + * CpaStatus + * + *****************************************************************************/ + +CpaStatus +cpaDcGetStatusText(const CpaInstanceHandle dcInstance, + const CpaStatus errStatus, + Cpa8S * pStatusText); + + +/** + ***************************************************************************** + * @ingroup cpaDc + * Set Address Translation function + * + * @description + * This function is used to set the virtual to physical address + * translation routine for the instance. The specified routine + * is used by the instance to perform any required translation of + * a virtual address to a physical address. If the application + * does not invoke this function, then the instance will use its + * default method, such as virt2phys, for address translation. + * + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Data Compression API instance handle. + * @param[in] virtual2Physical Routine that performs virtual to + * physical address translation. + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. 
+ * + * @pre + * None + * @post + * None + * @see + * None + * + *****************************************************************************/ +CpaStatus +cpaDcSetAddressTranslation(const CpaInstanceHandle instanceHandle, + CpaVirtualToPhysical virtual2Physical); +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /* CPA_DC_H */ Index: sys/dev/qat/qat_api/include/dc/cpa_dc_bp.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/dc/cpa_dc_bp.h @@ -0,0 +1,320 @@ +/**************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_dc_bp.h + * + * @defgroup cpaDcBp Data Compression Batch and Pack API + * + * @ingroup cpaDc + * + * @description + * These functions specify the API for Data Compression operations related + * to the 'Batch and Pack' mode of operation. + * + * @remarks + * + * + *****************************************************************************/ + +#ifndef CPA_DC_BP_H +#define CPA_DC_BP_H + +#ifdef __cplusplus +extern"C" { +#endif + + +#include "cpa_dc.h" + +/** + ***************************************************************************** + * @ingroup cpaDcBp + * Batch request input parameters. + * @description + * This structure contains the request information for use with batched + * compression operations. + * + * + ****************************************************************************/ +typedef struct _CpaDcBatchOpData { + CpaDcOpData opData; + /**< Compression input parameters */ + CpaBufferList *pSrcBuff; + /**< Input buffer list containing the data to be compressed. 
 */
+ CpaBoolean resetSessionState;
+ /**< Reset the session state at the beginning of this request within
+ * the batch. Only applies to stateful sessions. When this flag is
+ * set, the history from previous requests in this session will not be
+ * used when compressing the input data for this request in the batch.
+ * */
+} CpaDcBatchOpData ;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDcBp
+ * Submit a batch of requests to compress a batch of input buffers into
+ * a common output buffer. The same output buffer is used for each request
+ * in the batch. This is termed 'batch and pack'.
+ *
+ * @description
+ * This API consumes data from the input buffer and generates compressed
+ * data in the output buffer.
+ * This API compresses a batch of input buffers and concatenates the
+ * compressed data into the output buffer. A results structure is also
+ * generated for each request in the batch.
+ *
+ * @context
+ * When called as an asynchronous function it cannot sleep. It can be
+ * executed in a context that does not permit sleeping.
+ * When called as a synchronous function it may sleep. It MUST NOT be
+ * executed in a context that DOES NOT permit sleeping.
+ * @assumptions
+ * None
+ * @sideEffects
+ * None
+ * @blocking
+ * Yes when configured to operate in synchronous mode.
+ * @reentrant
+ * No
+ * @threadSafe
+ * Yes
+ *
+ * @param[in] dcInstance Target service instance.
+ * @param[in,out] pSessionHandle Session handle.
+ * @param[in] numRequests Number of requests in the batch.
+ * @param[in] pBatchOpData Pointer to an array of CpaDcBatchOpData
+ * structures which contain the input buffers
+ * and parameters for each request in the
+ * batch. There should be numRequests entries
+ * in the array.
+ * @param[in] pDestBuff Pointer to buffer space for data after
+ * compression.
+ * @param[in,out] pResults Pointer to an array of results structures.
+ * There should be numRequests entries in the
+ * array.
+ * @param[in] callbackTag User supplied value to help correlate
+ * the callback with its associated
+ * request.
+ *
+ * @retval CPA_STATUS_SUCCESS Function executed successfully.
+ * @retval CPA_STATUS_FAIL Function failed.
+ * @retval CPA_STATUS_RETRY Resubmit the request.
+ * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in.
+ * @retval CPA_STATUS_RESOURCE Error related to system resources.
+ * @retval CPA_DC_BAD_DATA The input data was not properly formed.
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported.
+ * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit
+ * the request.
+ *
+ * @pre
+ * pSessionHandle has been setup using cpaDcInitSession()
+ * Session must be setup as a stateless session.
+ * @note
+ * This function passes control to the compression service for processing.
+ *
+ * In synchronous mode the function returns the error status returned from the
+ * service. In asynchronous mode the status is returned by the callback
+ * function.
+ *
+ * This function may be called repetitively with input until all of the input
+ * has been consumed by the compression service and all the output has been
+ * produced.
+ *
+ * When this function returns, it may be that all of the available buffers in
+ * the input list have not been compressed. This situation will occur when
+ * there is insufficient space in the output buffer. The calling application
+ * should note the number of buffers processed, and then submit the request
+ * again, with a new output buffer and with the input buffer list containing
+ * the buffers that were not previously compressed.
+ *
+ * Relationship between input buffers and results buffers.
+ * -# Implementations of this API must not modify the individual
+ * flat buffers of the input buffer list.
+ * -# The implementation communicates the number of buffers
+ * consumed from the source buffer list via pResults->consumed arg.
+ * -# The implementation communicates the amount of data in the
+ * destination buffer list via pResults->produced arg.
+ *
+ * Source Buffer Setup Rules
+ * -# The buffer list must have the correct number of flat buffers. This
+ * is specified by the numBuffers element of the CpaBufferList.
+ * -# Each flat buffer must have a pointer to contiguous memory that has
+ * been allocated by the calling application. The number of octets to be
+ * compressed or decompressed must be stored in the dataLenInBytes element
+ * of the flat buffer.
+ * -# It is permissible to have one or more flat buffers with a zero length
+ * data store. This function will process all flat buffers until the
+ * destination buffer is full or all source data has been processed.
+ * If a buffer has zero length, then no data will be processed from
+ * that buffer.
+ *
+ * Source Buffer Processing Rules.
+ * -# The buffer list is processed in index order - SrcBuff->pBuffers[0]
+ * will be completely processed before SrcBuff->pBuffers[1] begins to
+ * be processed.
+ * -# The application must drain the destination buffers.
+ * If the source data was not completely consumed, the application
+ * must resubmit the request.
+ * -# On return, the pResults->consumed will indicate the number of buffers
+ * consumed from the input buffer list.
+ *
+ * Destination Buffer Setup Rules
+ * -# The destination buffer list must have storage for processed data and
+ * for the packed header information.
+ * This means that at least two flat buffers must exist in the buffer list.
+ * The first buffer entry will be used for the header information.
+ * Subsequent entries will be used for the compressed data.
+ * -# For each flat buffer in the buffer list, the dataLenInBytes element
+ * must be set to the size of the buffer space.
+ * -# It is permissible to have one or more flat buffers with a zero length
+ * data store.
+ * If a buffer has zero length, then no data will be added to
+ * that buffer.
+ *
+ * Destination Buffer Processing Rules.
+ * -# The buffer list is processed in index order.
+ * -# On return, the pResults->produced will indicate the number of bytes
+ * of compressed data written to the output buffers. Note that this
+ * will not include the header information buffer.
+ * -# If processing has not been completed, the application must drain the
+ * destination buffers and resubmit the request. The application must reset
+ * the dataLenInBytes for each flat buffer in the destination buffer list.
+ *
+ * Synchronous or Asynchronous operation of the API is determined by
+ * the value of the callbackFn parameter passed to cpaDcInitSession()
+ * when the sessionHandle was setup. If a non-NULL value was specified
+ * then the supplied callback function will be invoked asynchronously
+ * with the response of this request.
+ *
+ * Response ordering:
+ * For each session, the implementation must maintain the order of
+ * responses. That is, if in asynchronous mode, the order of the callback
+ * functions must match the order of jobs submitted by this function.
+ * In a simple synchronous mode implementation, the practice of submitting
+ * a request and blocking on its completion ensures ordering is preserved.
+ *
+ * This limitation does not apply if the application employs multiple
+ * threads to service a single session.
+ *
+ * If this API is invoked asynchronously, the return code represents
+ * the success or failure of asynchronously scheduling the request.
+ * The results of the operation, along with the amount of data consumed
+ * and produced become available when the callback function is invoked.
+ * As such, pResults->consumed and pResults->produced are available
+ * only when the operation is complete.
+ *
+ * The application must not use either the source or destination buffers
+ * until the callback has completed.
+ * + * @see + * None + * + *****************************************************************************/ +CpaStatus +cpaDcBPCompressData( CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + const Cpa32U numRequests, + CpaDcBatchOpData *pBatchOpData, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + void *callbackTag ); + +/** +***************************************************************************** +* @ingroup cpaDcBp +* Function to return the size of the memory which must be allocated for +* the pPrivateMetaData member of CpaBufferList contained within +* CpaDcBatchOpData. +* +* @description +* This function is used to obtain the size (in bytes) required to allocate +* a buffer descriptor for the pPrivateMetaData member in the +* CpaBufferList structure when Batch and Pack API are used. +* Should the function return zero then no meta data is required for the +* buffer list. +* +* @context +* This function may be called from any context. +* @assumptions +* None +* @sideEffects +* None +* @blocking +* No +* @reentrant +* No +* @threadSafe +* Yes +* +* @param[in] instanceHandle Handle to an instance of this API. +* @param[in] numJobs The number of jobs defined in the CpaDcBatchOpData +* table. +* @param[out] pSizeInBytes Pointer to the size in bytes of memory to be +* allocated when the client wishes to allocate +* a cpaFlatBuffer and the Batch and Pack OP data. +* +* @retval CPA_STATUS_SUCCESS Function executed successfully. +* @retval CPA_STATUS_FAIL Function failed. +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. 
+* +* @pre +* None +* @post +* None +* @note +* None +* @see +* cpaDcBPCompressData() +* +*****************************************************************************/ +CpaStatus +cpaDcBnpBufferListGetMetaSize(const CpaInstanceHandle instanceHandle, + Cpa32U numJobs, + Cpa32U *pSizeInBytes); + + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /* CPA_DC_BP_H */ Index: sys/dev/qat/qat_api/include/dc/cpa_dc_dp.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/dc/cpa_dc_dp.h @@ -0,0 +1,746 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_dc_dp.h + * + * @defgroup cpaDcDp Data Compression Data Plane API + * + * @ingroup cpaDc + * + * @description + * These data structures and functions specify the Data Plane API + * for compression and decompression operations. + * + * This API is recommended for data plane applications, in which the + * cost of offload - that is, the cycles consumed by the driver in + * sending requests to the hardware, and processing responses - needs + * to be minimized. In particular, use of this API is recommended + * if the following constraints are acceptable to your application: + * + * - Thread safety is not guaranteed. Each software thread should + * have access to its own unique instance (CpaInstanceHandle) to + * avoid contention. + * - Polling is used, rather than interrupts (which are expensive). + * Implementations of this API will provide a function (not + * defined as part of this API) to read responses from the hardware + * response queue and dispatch callback functions, as specified on + * this API. 
+ * - Buffers and buffer lists are passed using physical addresses, + * to avoid virtual to physical address translation costs. + * - The ability to enqueue one or more requests without submitting + * them to the hardware allows for certain costs to be amortized + * across multiple requests. + * - Only asynchronous invocation is supported. + * - There is no support for partial packets. + * - Implementations may provide certain features as optional at + * build time, such as atomic counters. + * - There is no support for stateful operations. + * - The "default" instance (CPA_INSTANCE_HANDLE_SINGLE) is not + * supported on this API. The specific handle should be obtained + * using the instance discovery functions (@ref cpaDcGetNumInstances, + * @ref cpaDcGetInstances). + * + *****************************************************************************/ + +#ifndef CPA_DC_DP_H +#define CPA_DC_DP_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_dc.h" + +/** + ***************************************************************************** + * @ingroup cpaDcDp + * Operation Data for compression data plane API. + * + * @description + * This structure contains data relating to a request to perform + * compression processing on one or more data buffers. + * + * The physical memory to which this structure points should be + * at least 8-byte aligned. + * + * All reserved fields SHOULD NOT be written or read by the + * calling code. + * + * @see + * cpaDcDpEnqueueOp, cpaDcDpEnqueueOpBatch + ****************************************************************************/ +typedef struct _CpaDcDpOpData +{ + Cpa64U reserved0; + /**< Reserved for internal use. Source code should not read or write + * this field. + */ + Cpa32U bufferLenToCompress; + /**< The number of bytes from the source buffer to compress. This must be + * less than, or more typically equal to, the total size of the source + * buffer (or buffer list). 
+ */ + + Cpa32U bufferLenForData; + /**< The maximum number of bytes that should be written to the destination + * buffer. This must be less than, or more typically equal to, the total + * size of the destination buffer (or buffer list). + */ + + Cpa64U reserved1; + /**< Reserved for internal use. Source code should not read or write */ + + Cpa64U reserved2; + /**< Reserved for internal use. Source code should not read or write */ + + Cpa64U reserved3; + /**< Reserved for internal use. Source code should not read or write */ + + CpaDcRqResults results; + /**< Results of the operation. Contents are valid upon completion. */ + + CpaInstanceHandle dcInstance; + /**< Instance to which the request is to be enqueued */ + + CpaDcSessionHandle pSessionHandle; + /**< DC Session associated with the stream of requests */ + + CpaPhysicalAddr srcBuffer; + /**< Physical address of the source buffer on which to operate. + * This is either the location of the data, of length srcBufferLen; or, + * if srcBufferLen has the special value @ref CPA_DP_BUFLIST, then + * srcBuffer contains the location where a @ref CpaPhysBufferList is + * stored. + */ + + Cpa32U srcBufferLen; + /**< If the source buffer is a "flat buffer", then this field + * specifies the size of the buffer, in bytes. If the source buffer + * is a "buffer list" (of type @ref CpaPhysBufferList), then this field + * should be set to the value @ref CPA_DP_BUFLIST. + */ + + CpaPhysicalAddr destBuffer; + /**< Physical address of the destination buffer on which to operate. + * This is either the location of the data, of length destBufferLen; or, + * if destBufferLen has the special value @ref CPA_DP_BUFLIST, then + * destBuffer contains the location where a @ref CpaPhysBufferList is + * stored. + */ + + Cpa32U destBufferLen; + /**< If the destination buffer is a "flat buffer", then this field + * specifies the size of the buffer, in bytes. 
If the destination buffer + * is a "buffer list" (of type @ref CpaPhysBufferList), then this field + * should be set to the value @ref CPA_DP_BUFLIST. + */ + + CpaDcSessionDir sessDirection; + /**pSessionHandle was setup using + * @ref cpaDcDpInitSession. + * The instance identified by pOpData->dcInstance has had a + * callback function registered via @ref cpaDcDpRegCbFunc. + * + * @post + * None + * + * @note + * A callback of type @ref CpaDcDpCallbackFn is generated in + * response to this function call. Any errors generated during + * processing are reported as part of the callback status code. + * + * @see + * @ref cpaDcDpPerformOpNow + *****************************************************************************/ + + +CpaStatus +cpaDcDpEnqueueOp(CpaDcDpOpData *pOpData, + const CpaBoolean performOpNow); + + +/** + ***************************************************************************** + * @ingroup cpaDcDp + * Enqueue multiple requests to the compression data plane API. + * + * @description + * This function enqueues multiple requests to perform compression or + * decompression operations. + * + * The function is asynchronous; control is returned to the user once + * the request has been submitted. On completion of the request, the + * application may poll for responses, which will cause a callback + * function (registered via @ref cpaDcDpRegCbFunc) to be invoked. + * Separate callbacks will be invoked for each request. + * Callbacks within a session and at the same priority are guaranteed + * to be in the same order in which they were submitted. + * + * The following restrictions apply to each element of the pOpData + * array: + * + * - The memory MUST be aligned on an 8-byte boundary. + * - The reserved fields of the structure MUST be set to zero. + * - The structure MUST reside in physically contiguous memory. + * + * @context + * This function will not sleep, and hence can be executed in a context + * that does not permit sleeping. 
+ *
+ * @assumptions
+ * Client MUST allocate the request parameters to 8 byte alignment.
+ * Reserved elements of the CpaDcDpOpData structure MUST NOT be used.
+ * The CpaDcDpOpData structure MUST reside in physically
+ * contiguous memory.
+ *
+ * @sideEffects
+ * None
+ * @blocking
+ * No
+ * @reentrant
+ * No
+ * @threadSafe
+ * No
+ *
+ * @param[in] numberRequests The number of requests in the array of
+ * CpaDcDpOpData structures.
+ * @param[in] pOpData An array of pointers to CpaDcDpOpData
+ * structures. Each CpaDcDpOpData
+ * structure contains the request parameters for
+ * that request. The client code allocates the
+ * memory for this structure. This component takes
+ * ownership of the memory until it is returned in
+ * the callback, which was registered on the
+ * instance via @ref cpaDcDpRegCbFunc.
+ * See the above Description for some restrictions
+ * that apply to this parameter.
+ * @param[in] performOpNow Flag to indicate whether the operation should be
+ * performed immediately (CPA_TRUE), or simply
+ * enqueued to be performed later (CPA_FALSE).
+ * In the latter case, the request is submitted
+ * to be performed either by calling this function
+ * again with this flag set to CPA_TRUE, or by
+ * invoking the function @ref
+ * cpaDcDpPerformOpNow.
+ *
+ * @retval CPA_STATUS_SUCCESS Function executed successfully.
+ * @retval CPA_STATUS_FAIL Function failed.
+ * @retval CPA_STATUS_RETRY Resubmit the request.
+ * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in.
+ * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit
+ * the request.
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported.
+ *
+ * @pre
+ * The session identified by pOpData[i]->pSessionHandle was setup using
+ * @ref cpaDcDpInitSession.
+ * The instance identified by pOpData[i]->dcInstance has had a
+ * callback function registered via @ref cpaDcDpRegCbFunc.
+ *
+ * @post
+ * None
+ *
+ * @note
+ * Multiple callbacks of type @ref CpaDcDpCallbackFn are generated in
+ * response to this function call (one per request). Any errors
+ * generated during processing are reported as part of the callback
+ * status code.
+ *
+ * @see
+ * cpaDcDpEnqueueOp
+ *****************************************************************************/
+CpaStatus
+cpaDcDpEnqueueOpBatch(const Cpa32U numberRequests,
+ CpaDcDpOpData *pOpData[],
+ const CpaBoolean performOpNow);
+
+
+/**
+ *****************************************************************************
+ * @ingroup cpaDcDp
+ * Submit any previously enqueued requests to be performed now on the
+ * compression data plane API.
+ *
+ * @description
+ * This function triggers processing of previously enqueued requests on the
+ * referenced instance.
+ *
+ *
+ * @context
+ * Will not sleep. It can be executed in a context that does not
+ * permit sleeping.
+ *
+ * @sideEffects
+ * None
+ * @blocking
+ * No
+ * @reentrant
+ * No
+ * @threadSafe
+ * No
+ *
+ * @param[in] dcInstance Instance to which the requests will be
+ * submitted.
+ *
+ * @retval CPA_STATUS_SUCCESS Function executed successfully.
+ * @retval CPA_STATUS_FAIL Function failed.
+ * @retval CPA_STATUS_RETRY Resubmit the request.
+ * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in.
+ * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit
+ * the request.
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported.
+ *
+ * @pre
+ * The component has been initialized via @ref cpaDcStartInstance function.
+ * A compression session has been previously setup using the
+ * @ref cpaDcDpInitSession function call.
+ *
+ * @post
+ * None
+ *
+ * @see
+ * cpaDcDpEnqueueOp, cpaDcDpEnqueueOpBatch
+ *****************************************************************************/
+CpaStatus
+cpaDcDpPerformOpNow(CpaInstanceHandle dcInstance);
+
+
+
+#ifdef __cplusplus
+} /* close the extern "C" { */
+#endif
+
+#endif /* CPA_DC_DP_H */
+
Index: sys/dev/qat/qat_api/include/icp_buffer_desc.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/include/icp_buffer_desc.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+/**
+ *****************************************************************************
+ * @file icp_buffer_desc.h
+ *
+ * @defgroup icp_BufferDesc Buffer descriptor for LAC
+ *
+ * @ingroup LacCommon
+ *
+ * @description
+ * This file contains details of the hardware buffer descriptors used to
+ * communicate with the QAT.
+ *
+ *****************************************************************************/
+#ifndef ICP_BUFFER_DESC_H
+#define ICP_BUFFER_DESC_H
+
+#include "cpa.h"
+
+typedef Cpa64U icp_qat_addr_width_t; // hi32 first, lo32 second
+
+// Alignment constraint of the buffer list.
+#define ICP_DESCRIPTOR_ALIGNMENT_BYTES 8
+
+/**
+ *****************************************************************************
+ * @ingroup icp_BufferDesc
+ * Buffer descriptors for FlatBuffers - used in communications with
+ * the QAT.
+ *
+ * @description
+ * A QAT friendly buffer descriptor.
+ * All buffer descriptors described in this structure are physical
+ * and are 64 bits wide.
+ *
+ * Updates in the CpaFlatBuffer should also be reflected in this
+ * structure
+ *
+ *****************************************************************************/
+typedef struct icp_flat_buffer_desc_s {
+ Cpa32U dataLenInBytes;
+ Cpa32U reserved;
+ icp_qat_addr_width_t phyBuffer;
+ /**< The client will allocate memory for this using API function calls
+ * and the access layer will fill it and the QAT will read it.
+ */
+} icp_flat_buffer_desc_t;
+
+/**
+ *****************************************************************************
+ * @ingroup icp_BufferDesc
+ * Buffer descriptors for BuffersLists - used in communications with
+ * the QAT.
+ *
+ * @description
+ * A QAT friendly buffer descriptor.
+ * All buffer descriptors described in this structure are physical
+ * and are 64 bits wide.
+ *
+ * Updates in the CpaBufferList should also be reflected in this structure
+ *
+ *****************************************************************************/
+typedef struct icp_buffer_list_desc_s {
+ Cpa64U resrvd;
+ Cpa32U numBuffers;
+ Cpa32U reserved;
+ icp_flat_buffer_desc_t phyBuffers[];
+ /**< Unbounded array of physical buffer pointers, these point to the
+ * FlatBufferDescs. The client will allocate memory for this using
+ * API function calls and the access layer will fill it and the QAT
+ * will read it.
+ */
+} icp_buffer_list_desc_t;
+
+#endif /* ICP_BUFFER_DESC_H */
Index: sys/dev/qat/qat_api/include/icp_sal.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_api/include/icp_sal.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+/**
+ ***************************************************************************
+ * @file icp_sal.h
+ *
+ * @ingroup SalCommon
+ *
+ * Functions for both user space and kernel space.
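Because `phyBuffers[]` in `icp_buffer_list_desc_t` is a flexible array member, the allocation size is the fixed header plus one flat descriptor per buffer, aligned per `ICP_DESCRIPTOR_ALIGNMENT_BYTES`. A sketch, with the two descriptor layouts mirrored locally so it compiles stand-alone (the `Cpa*` typedefs are assumed to match `cpa.h`):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef uint32_t Cpa32U;
typedef uint64_t Cpa64U;
typedef Cpa64U icp_qat_addr_width_t;

#define ICP_DESCRIPTOR_ALIGNMENT_BYTES 8

/* Mirrors of the descriptor layouts shown in the header above. */
typedef struct icp_flat_buffer_desc_s {
	Cpa32U dataLenInBytes;
	Cpa32U reserved;
	icp_qat_addr_width_t phyBuffer;
} icp_flat_buffer_desc_t;

typedef struct icp_buffer_list_desc_s {
	Cpa64U resrvd;
	Cpa32U numBuffers;
	Cpa32U reserved;
	icp_flat_buffer_desc_t phyBuffers[];
} icp_buffer_list_desc_t;

int
main(void)
{
	/* The flexible phyBuffers[] array means the allocation size is the
	 * fixed header plus one flat descriptor per buffer. */
	Cpa32U numBuffers = 4;
	size_t sz = sizeof(icp_buffer_list_desc_t) +
	    (size_t)numBuffers * sizeof(icp_flat_buffer_desc_t);

	icp_buffer_list_desc_t *desc =
	    aligned_alloc(ICP_DESCRIPTOR_ALIGNMENT_BYTES, sz);
	assert(desc != NULL && ((uintptr_t)desc % 8) == 0);
	desc->numBuffers = numBuffers;
	printf("descriptor bytes=%zu (header=%zu, per-buffer=%zu)\n",
	    sz, sizeof(icp_buffer_list_desc_t),
	    sizeof(icp_flat_buffer_desc_t));
	free(desc);
	return 0;
}
```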
+ * + ***************************************************************************/ + +#ifndef ICP_SAL_H +#define ICP_SAL_H + +/* + * icp_sal_get_dc_error + * + * @description: + * This function returns the number of occurrences of the compression error + * specified in the input parameter + * + * @context + * This function is called from the user process context + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No + * @param[in] dcError DC Error Type + * + * returns Number of failing requests of type dcError + */ +Cpa64U icp_sal_get_dc_error(Cpa8S dcError); + +#endif Index: sys/dev/qat/qat_api/include/icp_sal_iommu.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/icp_sal_iommu.h @@ -0,0 +1,84 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + *************************************************************************** + * @file icp_sal_iommu.h + * + * @ingroup SalUser + * + * Sal iommu wrapper functions. + * + ***************************************************************************/ + +#ifndef ICP_SAL_IOMMU_H +#define ICP_SAL_IOMMU_H + +/************************************************************************* + * @ingroup Sal + * @description + * Function returns page_size rounded size for iommu remapping + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No + * + * @param[in] size Minimum required size. + * + * @retval page_size rounded size for iommu remapping. 
+ * + *************************************************************************/ +size_t icp_sal_iommu_get_remap_size(size_t size); + +/************************************************************************* + * @ingroup Sal + * @description + * Function adds an entry into iommu remapping table + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No + * + * @param[in] phaddr Host physical address. + * @param[in] iova Guest physical address. + * @param[in] size Size of the remapped region. + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + *************************************************************************/ +CpaStatus icp_sal_iommu_map(Cpa64U phaddr, Cpa64U iova, size_t size); + +/************************************************************************* + * @ingroup Sal + * @description + * Function removes an entry from iommu remapping table + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No + * + * @param[in] iova Guest physical address to be removed. + * @param[in] size Size of the remapped region. + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + *************************************************************************/ +CpaStatus icp_sal_iommu_unmap(Cpa64U iova, size_t size); +#endif Index: sys/dev/qat/qat_api/include/icp_sal_nrbg_ht.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/icp_sal_nrbg_ht.h @@ -0,0 +1,66 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file icp_sal_nrbg_ht.h + * + * @ingroup LacSym + * + * @description + * This file contains declaration of function used to test the health + * of NRBG entropy source. 
+ * + *****************************************************************************/ +#ifndef ICP_SAL_NRBG_HT_H +#define ICP_SAL_NRBG_HT_H + +/** + ****************************************************************************** + * @ingroup LacSym + * NRBG Health Test + * + * @description + * This function performs a check on the deterministic parts of the + * NRBG. It also provides the caller the value of continuous random + * number generator test failures for n=64 bits, refer to FIPS 140-2 + * section 4.9.2 for details. A non-zero value for the counter does + * not necessarily indicate a failure; it is statistically possible + * that consecutive blocks of 64 bits will be identical, and the RNG + * will discard the identical block in such cases. This counter allows + * the calling application to monitor changes in this counter and to + * use this to decide whether to mark the NRBG as faulty, based on + * local policy or statistical model. + * + * @context + * MUST NOT be executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pContinuousRngTestFailures Number of continuous random number + * generator test failures. + * + * @retval CPA_STATUS_SUCCESS Health test passed. + * @retval CPA_STATUS_FAIL Health test failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * + * @note + * The return value of this function is not impacted by the value + * of continuous random generator test failures. 
+ * + *****************************************************************************/ +CpaStatus icp_sal_nrbgHealthTest(const CpaInstanceHandle instanceHandle, + Cpa32U *pContinuousRngTestFailures); + +#endif Index: sys/dev/qat/qat_api/include/icp_sal_poll.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/icp_sal_poll.h @@ -0,0 +1,366 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + *************************************************************************** + * @file icp_sal_poll.h + * + * @defgroup SalPoll + * + * @ingroup SalPoll + * + * @description + * Polling APIs for instance polling. + * These functions retrieve requests on appropriate response rings and + * dispatch the associated callbacks. Callbacks are called in the + * context of the polling function itself. + * + * + ***************************************************************************/ + +#ifndef ICP_SAL_POLL_H +#define ICP_SAL_POLL_H + +/************************************************************************* + * @ingroup SalPoll + * @description + * Poll a Cy logical instance to retrieve requests that are on the + * response rings associated with that instance and dispatch the + * associated callbacks. + * + * @context + * This function is called from both the user and kernel context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] response_quota The maximum number of messages that + * will be read in one polling. Setting + * the response quota to zero means that + * all messages on the ring will be read. 
+ * + * @retval CPA_STATUS_SUCCESS Successfully polled a ring with data + * @retval CPA_STATUS_RETRY There are no responses on the rings + * associated with this instance + * @retval CPA_STATUS_FAIL Indicates a failure + *************************************************************************/ +CpaStatus icp_sal_CyPollInstance(CpaInstanceHandle instanceHandle, + Cpa32U response_quota); + +/************************************************************************* + * @ingroup SalPoll + * @description + * Poll a Sym Cy ring to retrieve requests that are on the + * response rings associated with that instance and dispatch the + * associated callbacks. + * + * @context + * This function is called from both the user and kernel context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] response_quota The maximum number of messages that + * will be read in one polling. Setting + * the response quota to zero means that + * all messages on the ring will be read. + * + * @retval CPA_STATUS_SUCCESS Successfully polled a ring with data + * @retval CPA_STATUS_RETRY There are no responses on the rings + * associated with this instance + * @retval CPA_STATUS_FAIL Indicates a failure + *************************************************************************/ +CpaStatus icp_sal_CyPollSymRing(CpaInstanceHandle instanceHandle, + Cpa32U response_quota); + +/************************************************************************* + * @ingroup SalPoll + * @description + * Poll an Asym Cy ring to retrieve requests that are on the + * response rings associated with that instance and dispatch the + * associated callbacks. + * + * @context + * This function is called from both the user and kernel context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. 
+ * @param[in] response_quota The maximum number of messages that + * will be read in one polling. Setting + * the response quota to zero means that + * all messages on the ring will be read. + * + * @retval CPA_STATUS_SUCCESS Successfully polled a ring with data + * @retval CPA_STATUS_RETRY There are no responses on the rings + * associated with this instance + * @retval CPA_STATUS_FAIL Indicates a failure + *************************************************************************/ +CpaStatus icp_sal_CyPollAsymRing(CpaInstanceHandle instanceHandle, + Cpa32U response_quota); + +/************************************************************************* + * @ingroup SalPoll + * @description + * Poll a Cy NRBG ring to retrieve requests that are on the + * response rings associated with that instance and dispatch the + * associated callbacks. + * + * @context + * This function is called from both the user and kernel context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] response_quota The maximum number of messages that + * will be read in one polling. Setting + * the response quota to zero means that + * all messages on the ring will be read. + * + * @retval CPA_STATUS_SUCCESS Successfully polled a ring with data + * @retval CPA_STATUS_RETRY There are no responses on the rings + * associated with this instance + * @retval CPA_STATUS_FAIL Indicates a failure + *************************************************************************/ +CpaStatus icp_sal_CyPollNRBGRing(CpaInstanceHandle instanceHandle, + Cpa32U response_quota); + +/************************************************************************* + * @ingroup SalPoll + * @description + * Poll the high priority symmetric response ring associated with a Cy + * logical instance to retrieve requests and dispatch the + * associated callbacks. 
+ * + * This API is recommended for data plane applications, in which the + * cost of offload - that is, the cycles consumed by the driver in + * sending requests to the hardware, and processing responses - needs + * to be minimized. In particular, use of this API is recommended + * if the following constraints are acceptable to your application: + * + * - Thread safety is not guaranteed. Each software thread should + * have access to its own unique instance (CpaInstanceHandle) to + * avoid contention. + * - The "default" instance (@ref CPA_INSTANCE_HANDLE_SINGLE) is not + * supported on this API. The specific handle should be obtained + * using the instance discovery functions (@ref cpaCyGetNumInstances, + * @ref cpaCyGetInstances). + * + * This polling function should be used with the functions described + * in cpa_cy_sym_dp.h + * + * @context + * This function is called from both the user and kernel context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No + * + * @param[in] instanceHandle Instance handle. + * @param[in] response_quota The maximum number of messages that + * will be read in one polling. Setting + * the response quota to zero means that + * all messages on the ring will be read. + * + * @retval CPA_STATUS_SUCCESS Successfully polled a ring with data + * @retval CPA_STATUS_RETRY There are no responses on the ring + * associated with this instance + * @retval CPA_STATUS_FAIL Indicates a failure + *************************************************************************/ +CpaStatus icp_sal_CyPollDpInstance(const CpaInstanceHandle instanceHandle, + const Cpa32U response_quota); + +/************************************************************************* + * @ingroup SalPoll + * @description + * Poll a Dc logical instance to retrieve requests that are on the + * response ring associated with that instance and dispatch the + * associated callbacks. 
+ * + * @context + * This function is called from both the user and kernel context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] response_quota The maximum number of messages that + * will be read in one polling. Setting + * the response quota to zero means that + * all messages on the ring will be read. + * + * @retval CPA_STATUS_SUCCESS Successfully polled a ring with data + * @retval CPA_STATUS_RETRY There are no responses on the ring + * associated with this instance + * @retval CPA_STATUS_FAIL Indicates a failure + *************************************************************************/ +CpaStatus icp_sal_DcPollInstance(CpaInstanceHandle instanceHandle, + Cpa32U response_quota); + +/************************************************************************* + * @ingroup SalPoll + * @description + * Poll the response ring associated with a Dc logical instance to + * retrieve requests and dispatch the associated callbacks. + * + * This API is recommended for data plane applications, in which the + * cost of offload - that is, the cycles consumed by the driver in + * sending requests to the hardware, and processing responses - needs + * to be minimized. In particular, use of this API is recommended + * if the following constraints are acceptable to your application: + * + * - Thread safety is not guaranteed. Each software thread should + * have access to its own unique instance (CpaInstanceHandle) to + * avoid contention. + * - The "default" instance (@ref CPA_INSTANCE_HANDLE_SINGLE) is not + * supported on this API. The specific handle should be obtained + * using the instance discovery functions (@ref cpaDcGetNumInstances, + * @ref cpaDcGetInstances). 
+ * + * This polling function should be used with the functions described + * in cpa_dc_dp.h + * + * @context + * This function is called from both the user and kernel context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No + * + * @param[in] instanceHandle Instance handle. + * @param[in] response_quota The maximum number of messages that + * will be read in one polling. Setting + * the response quota to zero means that + * all messages on the ring will be read. + * + * @retval CPA_STATUS_SUCCESS Successfully polled a ring with data + * @retval CPA_STATUS_RETRY There are no responses on the ring + * associated with this instance + * @retval CPA_STATUS_FAIL Indicates a failure + *************************************************************************/ +CpaStatus icp_sal_DcPollDpInstance(CpaInstanceHandle dcInstance, + Cpa32U responseQuota); + +/************************************************************************* + * @ingroup SalPoll + * @description + * This function polls the rings on the given bank to determine + * if any of the rings contain messages to be read. The + * response quota is per ring. + * + * @context + * This function is called from both the user and kernel context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] accelId Acceleration device Id, also known as + * packageId. This can be obtained using + * instance info functions ( + * @ref cpaCyInstanceGetInfo2 + * and @ref cpaDcInstanceGetInfo2) + * + * @param[in] bank_number Bank number + * + * @param[in] response_quota The maximum number of messages that + * will be read in one polling. Setting + * the response quota to zero means that + * all messages on the ring will be read. 
+ * + * @retval CPA_STATUS_SUCCESS Successfully polled a ring with data + * @retval CPA_STATUS_RETRY There is no data on any ring on the bank + * or the bank is already being polled + * @retval CPA_STATUS_FAIL Indicates a failure + *************************************************************************/ +CpaStatus +icp_sal_pollBank(Cpa32U accelId, Cpa32U bank_number, Cpa32U response_quota); + +/************************************************************************* + * @ingroup SalPoll + * @description + * This function polls the rings on all banks to determine + * if any of the rings contain messages to be read. The + * response quota is per ring. + * + * @context + * This function is called from both the user and kernel context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] accelId Acceleration device Id, also known as + * packageId. This can be obtained using + * instance info functions ( + * @ref cpaCyInstanceGetInfo2 + * and @ref cpaDcInstanceGetInfo2) + * + * @param[in] response_quota The maximum number of messages that + * will be read in one polling. Setting + * the response quota to zero means that + * all messages on the ring will be read. 
+ * + * @retval CPA_STATUS_SUCCESS Successfully polled a ring with data + * @retval CPA_STATUS_RETRY There is no data on any ring on any bank + * or the banks are already being polled + * @retval CPA_STATUS_FAIL Indicates a failure + *************************************************************************/ +CpaStatus icp_sal_pollAllBanks(Cpa32U accelId, Cpa32U response_quota); + +#endif Index: sys/dev/qat/qat_api/include/icp_sal_user.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/icp_sal_user.h @@ -0,0 +1,871 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + *************************************************************************** + * @file icp_sal_user.h + * + * @ingroup SalUser + * + * User space process init and shutdown functions. + * + ***************************************************************************/ + +#ifndef ICP_SAL_USER_H +#define ICP_SAL_USER_H + +/************************************************************************* + * @ingroup SalUser + * @description + * This function initialises and starts user space service access layer + * (SAL) - it registers SAL with ADF and initialises the ADF proxy. + * This function must only be called once per user space process. 
+ * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pProcessName Process address space name described in + * the config file for this device + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + *************************************************************************/ +CpaStatus icp_sal_userStart(const char *pProcessName); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function is to be used with a simplified config file, where the + * user defines many user space processes. The driver generates unique + * process names based on the pProcessName provided. + * For example: + * If a config file in simplified format contains: + * [SSL] + * NumProcesses = 3 + * + * Then three internal sections will be generated and the three + * applications can be started at a given time. Each application can call + * icp_sal_userStartMultiProcess("SSL"). In this case the driver will + * figure out the unique name to use for each process. + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pProcessName Process address space name described in + * the new format of the config file + * for this device. + * + * @param[in] limitDevAccess Specifies if the address space is limited + * to one device (true) or if it spans + * across multiple devices. + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed. In this case the user + * can wait and retry. 
+ * + *************************************************************************/ +CpaStatus icp_sal_userStartMultiProcess(const char *pProcessName, + CpaBoolean limitDevAccess); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function stops and shuts down user space SAL + * - it deregisters SAL with ADF and shuts down the ADF proxy + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus icp_sal_userStop(void); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function gets the number of available dynamically allocated + * crypto instances + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus icp_sal_userCyGetAvailableNumDynInstances(Cpa32U *pNumCyInstances); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function gets the number of available dynamically allocated + * compression instances + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus 
icp_sal_userDcGetAvailableNumDynInstances(Cpa32U *pNumDcInstances); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function gets the number of available dynamically allocated + * crypto instances which are from the specific device package. + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus +icp_sal_userCyGetAvailableNumDynInstancesByDevPkg(Cpa32U *pNumCyInstances, + Cpa32U devPkgID); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function gets the number of available dynamically allocated + * crypto instances which are from the specific device package and specific + * accelerator. + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus +icp_sal_userCyGetAvailableNumDynInstancesByPkgAccel(Cpa32U *pNumCyInstances, + Cpa32U devPkgID, + Cpa32U accelerator_number); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function gets the number of available dynamically allocated + * compression instances which are from the specific device package. 
+ * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus +icp_sal_userDcGetAvailableNumDynInstancesByDevPkg(Cpa32U *pNumDcInstances, + Cpa32U devPkgID); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function allocates crypto instances + * from the dynamic crypto instance pool + * - it adds newly allocated instances into crypto_services + * - it initializes newly allocated instances + * - it starts newly allocated instances + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus icp_sal_userCyInstancesAlloc(Cpa32U numCyInstances, + CpaInstanceHandle *pCyInstances); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function allocates crypto instances + * from the dynamic crypto instance pool + * which are from the specific device package. 
+ * - it adds newly allocated instances into crypto_services + * - it initializes newly allocated instances + * - it starts newly allocated instances + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus icp_sal_userCyInstancesAllocByDevPkg(Cpa32U numCyInstances, + CpaInstanceHandle *pCyInstances, + Cpa32U devPkgID); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function allocates crypto instances + * from the dynamic crypto instance pool + * which are from the specific device package and specific accelerator + * - it adds newly allocated instances into crypto_services + * - it initializes newly allocated instances + * - it starts newly allocated instances + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus +icp_sal_userCyInstancesAllocByPkgAccel(Cpa32U numCyInstances, + CpaInstanceHandle *pCyInstances, + Cpa32U devPkgID, + Cpa32U accelerator_number); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function frees crypto instances allocated + * from the dynamic crypto instance pool + * - it stops the instances + * - it shuts down the instances + * - it removes the instances from crypto_services + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant 
+ * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus icp_sal_userCyFreeInstances(Cpa32U numCyInstances, + CpaInstanceHandle *pCyInstances); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function allocates compression instances + * from the dynamic compression instance pool + * - it adds newly allocated instances into compression_services + * - it initializes newly allocated instances + * - it starts newly allocated instances + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus icp_sal_userDcInstancesAlloc(Cpa32U numDcInstances, + CpaInstanceHandle *pDcInstances); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function allocates compression instances + * from the dynamic compression instance pool + * which are from the specific device package. 
+ * - it adds newly allocated instances into compression_services + * - it initializes newly allocated instances + * - it starts newly allocated instances + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus icp_sal_userDcInstancesAllocByDevPkg(Cpa32U numDcInstances, + CpaInstanceHandle *pDcInstances, + Cpa32U devPkgID); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function frees compression instances allocated + * from the dynamic compression instance pool + * - it stops the instances + * - it shuts down the instances + * - it removes the instances from compression_services + * + * @context + * This function is called from the user process context + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus icp_sal_userDcFreeInstances(Cpa32U numDcInstances, + CpaInstanceHandle *pDcInstances); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function checks if new devices have been started and if so + * starts to use them. 
+ * + * @context + * This function is called from the user process context + * in threadless mode + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus icp_sal_find_new_devices(void); + +/************************************************************************* + * @ingroup SalUser + * @description + * This function polls device events. + * + * @context + * This function is called from the user process context + * in threadless mode + * + * @assumptions + * None + * @sideEffects + * In case a device has been stopped or restarted the application + * will get restarting/stop/shutdown events + * @reentrant + * No + * @threadSafe + * No + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + * + ************************************************************************/ +CpaStatus icp_sal_poll_device_events(void); + +/* + * icp_sal_check_device + * + * @description: + * This function checks the status of the firmware/hardware for a given device. + * This function is used as part of the heartbeat functionality. + * + * @context + * This function is called from the user process context + * @assumptions + * None + * @sideEffects + * In case a device is unresponsive the device will + * be restarted. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] accelId Device Id. + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + */ +CpaStatus icp_sal_check_device(Cpa32U accelId); + +/* + * icp_sal_check_all_devices + * + * @description: + * This function checks the status of the firmware/hardware for all devices. + * This function is used as part of the heartbeat functionality. 
+ * + * @context + * This function is called from the user process context + * @assumptions + * None + * @sideEffects + * In case a device is unresponsive the device will + * be restarted. + * @reentrant + * No + * @threadSafe + * Yes + * + * @retval CPA_STATUS_SUCCESS No error + * @retval CPA_STATUS_FAIL Operation failed + */ +CpaStatus icp_sal_check_all_devices(void); + +/* + * @ingroup icp_sal_user + * @description + * This is a stub function to send messages to VF + * + * @context + * None + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * Yes + * @threadSafe + * Yes + * +*/ +CpaStatus icp_sal_userSendMsgToVf(Cpa32U accelId, Cpa32U vfNum, Cpa32U message); + +/* + * @ingroup icp_sal_user + * @description + * This is a stub function to send messages to PF + * + * @context + * None + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * Yes + * @threadSafe + * Yes + * +*/ +CpaStatus icp_sal_userSendMsgToPf(Cpa32U accelId, Cpa32U message); + +/* + * @ingroup icp_sal_user + * @description + * This is a stub function to get messages from VF + * + * @context + * None + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * Yes + * @threadSafe + * Yes + * +*/ +CpaStatus icp_sal_userGetMsgFromVf(Cpa32U accelId, + Cpa32U vfNum, + Cpa32U *message, + Cpa32U *messageCounter); + +/* + * @ingroup icp_sal_user + * @description + * This is a stub function to get messages from PF + * + * @context + * None + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * Yes + * @threadSafe + * Yes + * +*/ +CpaStatus icp_sal_userGetMsgFromPf(Cpa32U accelId, + Cpa32U *message, + Cpa32U *messageCounter); + +/* + * @ingroup icp_sal_user + * @description + * This is a stub function to get pfvf comms status + * + * @context + * None + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * Yes + * @threadSafe + * Yes + * +*/ +CpaStatus icp_sal_userGetPfVfcommsStatus(CpaBoolean *unreadMessage); + +/* 
+ * @ingroup icp_sal_user + * @description + * This is a stub function to reset the device + * + * @context + * None + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * Yes + * @threadSafe + * Yes + * +*/ +CpaStatus icp_sal_reset_device(Cpa32U accelId); + +/** + ***************************************************************************** + * @ingroup icp_sal_user + * Retrieve number of in flight requests for a nrbg tx ring + * from a crypto instance (Traditional API). + * + * @description + * This function is a part of back-pressure mechanism. + * Applications can query for inflight requests in + * the appropriate service/ring on each instance + * and select any instance with sufficient space or + * the instance with the lowest number. + * + * @assumptions + * None + * @sideEffects + * None + * @blocking + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Crypto API instance handle. + * @param[out] maxInflightRequests Maximal number of in flight requests. + * @param[out] numInflightRequests Current number of in flight requests. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @pre + * None + * @post + * None + * @see + * None + * + *****************************************************************************/ +CpaStatus icp_sal_NrbgGetInflightRequests(CpaInstanceHandle instanceHandle, + Cpa32U *maxInflightRequests, + Cpa32U *numInflightRequests); + +/** + ***************************************************************************** + * @ingroup icp_sal_user + * Retrieve number of in flight requests for a symmetric tx ring + * from a crypto instance (Traditional API). + * + * @description + * This function is a part of back-pressure mechanism. + * Applications can query for inflight requests in + * the appropriate service/ring on each instance + * and select any instance with sufficient space or + * the instance with the lowest number. 
+ * + * @assumptions + * None + * @sideEffects + * None + * @blocking + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Crypto API instance handle. + * @param[out] maxInflightRequests Maximal number of in flight requests. + * @param[out] numInflightRequests Current number of in flight requests. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @pre + * None + * @post + * None + * @see + * None + * + *****************************************************************************/ +CpaStatus icp_sal_SymGetInflightRequests(CpaInstanceHandle instanceHandle, + Cpa32U *maxInflightRequests, + Cpa32U *numInflightRequests); + +/** + ***************************************************************************** + * @ingroup icp_sal_user + * Retrieve number of in flight requests for an asymmetric tx ring + * from a crypto instance (Traditional API). + * + * @description + * This function is a part of back-pressure mechanism. + * Applications can query the appropriate service/ring on each instance + * and select any instance with sufficient space or + * the instance with the lowest number. + * + * @assumptions + * None + * @sideEffects + * None + * @blocking + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Crypto API instance handle. + * @param[out] maxInflightRequests Maximal number of in flight requests. + * @param[out] numInflightRequests Current number of in flight requests. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. 
+ * @pre
+ *      None
+ * @post
+ *      None
+ * @see
+ *      None
+ *
+ *****************************************************************************/
+CpaStatus icp_sal_AsymGetInflightRequests(CpaInstanceHandle instanceHandle,
+                                          Cpa32U *maxInflightRequests,
+                                          Cpa32U *numInflightRequests);
+
+/**
+ *****************************************************************************
+ * @ingroup icp_sal_user
+ *      Retrieve number of in flight requests for a symmetric tx ring
+ *      from a crypto instance (Data Plane API).
+ *
+ * @description
+ *      This function is a part of back-pressure mechanism.
+ *      Applications can query the appropriate service/ring on each instance
+ *      and select any instance with sufficient space or
+ *      the instance with the lowest number.
+ *
+ * @assumptions
+ *      None
+ * @sideEffects
+ *      None
+ * @blocking
+ *      None
+ * @reentrant
+ *      No
+ * @threadSafe
+ *      Yes
+ *
+ * @param[in]  instanceHandle       Crypto API instance handle.
+ * @param[out] maxInflightRequests  Maximal number of in flight requests.
+ * @param[out] numInflightRequests  Current number of in flight requests.
+ *
+ * @retval CPA_STATUS_SUCCESS       Function executed successfully.
+ * @retval CPA_STATUS_FAIL          Function failed.
+ * @pre
+ *      None
+ * @post
+ *      None
+ * @see
+ *      None
+ *
+ *****************************************************************************/
+CpaStatus icp_sal_dp_SymGetInflightRequests(CpaInstanceHandle instanceHandle,
+                                            Cpa32U *maxInflightRequests,
+                                            Cpa32U *numInflightRequests);
+
+/**
+ *****************************************************************************
+ * @ingroup icp_sal_user
+ *      Updates the CSR with queued requests in the asymmetric tx ring.
+ *
+ * @description
+ *      The function writes the current shadow tail pointer of the asymmetric
+ *      TX ring into the ring's CSR. Updating the CSR notifies the HW that
+ *      there are request(s) queued to be processed. The CSR is always
+ *      updated, regardless of the current value of the shadow tail pointer
+ *      and the current CSR tail value.
+ * + * @assumptions + * None + * @sideEffects + * None + * @blocking + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Crypto API instance handle. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @pre + * None + * @post + * None + * @see + * None + * + *****************************************************************************/ +CpaStatus icp_sal_AsymPerformOpNow(CpaInstanceHandle instanceHandle); +#endif Index: sys/dev/qat/qat_api/include/icp_sal_versions.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/icp_sal_versions.h @@ -0,0 +1,97 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + *************************************************************************** + * @file icp_sal_versions.h + * + * @defgroup SalVersions + * + * @ingroup SalVersions + * + * API and structures definition for obtaining software and hardware versions + * + ***************************************************************************/ + +#ifndef _ICP_SAL_VERSIONS_H_ +#define _ICP_SAL_VERSIONS_H_ + +#define ICP_SAL_VERSIONS_FW_VERSION_SIZE 16 +/**< Max length of firmware version string */ +#define ICP_SAL_VERSIONS_SW_VERSION_SIZE 16 +/**< Max length of software version string */ +#define ICP_SAL_VERSIONS_MMP_VERSION_SIZE 16 +/**< Max length of MMP binary version string */ +#define ICP_SAL_VERSIONS_HW_VERSION_SIZE 4 +/**< Max length of hardware version string */ + +/* Part name and number of the accelerator device */ +#define SAL_INFO2_DRIVER_SW_VERSION_MAJ_NUMBER 3 +#define SAL_INFO2_DRIVER_SW_VERSION_MIN_NUMBER 11 +#define SAL_INFO2_DRIVER_SW_VERSION_PATCH_NUMBER 0 + +/** +******************************************************************************* + * @ingroup SalVersions + * Structure holding versions information + * + * @description + * This 
structure stores information about versions of software + * and hardware being run on a particular device. + *****************************************************************************/ +typedef struct icp_sal_dev_version_info_s { + Cpa32U devId; + /**< Number of acceleration device for which this structure holds + * version + * information */ + Cpa8U firmwareVersion[ICP_SAL_VERSIONS_FW_VERSION_SIZE]; + /**< String identifying the version of the firmware associated with + * the device. */ + Cpa8U mmpVersion[ICP_SAL_VERSIONS_MMP_VERSION_SIZE]; + /**< String identifying the version of the MMP binary associated with + * the device. */ + Cpa8U softwareVersion[ICP_SAL_VERSIONS_SW_VERSION_SIZE]; + /**< String identifying the version of the software associated with + * the device. */ + Cpa8U hardwareVersion[ICP_SAL_VERSIONS_HW_VERSION_SIZE]; + /**< String identifying the version of the hardware (stepping and + * revision ID) associated with the device. */ +} icp_sal_dev_version_info_t; + +/** +******************************************************************************* + * @ingroup SalVersions + * Obtains the version information for a given device + * @description + * This function obtains hardware and software version information + * associated with a given device. + * + * @param[in] accelId ID of the acceleration device for which version + * information is to be obtained. + * @param[out] pVerInfo Pointer to a structure that will hold version + * information + * + * @context + * This function might sleep. It cannot be executed in a context that + * does not permit sleeping. 
+ * @assumptions + * The system has been started + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @return CPA_STATUS_SUCCESS Operation finished successfully + * @return CPA_STATUS_INVALID_PARAM Invalid parameter passed to the function + * @return CPA_STATUS_RESOURCE System resources problem + * @return CPA_STATUS_FAIL Operation failed + * + *****************************************************************************/ +CpaStatus icp_sal_getDevVersionInfo(Cpa32U accelId, + icp_sal_dev_version_info_t *pVerInfo); + +#endif Index: sys/dev/qat/qat_api/include/lac/cpa_cy_common.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_common.h @@ -0,0 +1,649 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_common.h + * + * @defgroup cpaCy Cryptographic API + * + * @ingroup cpa + * + * @description + * These functions specify the Cryptographic API. + * + *****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_common.h + * @defgroup cpaCyCommon Cryptographic Common API + * + * @ingroup cpaCy + * + * @description + * This file specifies items which are common for both the asymmetric + * (public key cryptography) and the symmetric operations for the + * Cryptographic API. + * + *****************************************************************************/ +#ifndef CPA_CY_COMMON_H +#define CPA_CY_COMMON_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa.h" + +/** + ***************************************************************************** + * @ingroup cpa_cyCommon + * CPA CY Major Version Number + * @description + * The CPA_CY API major version number. 
This number will be incremented
+ *      when significant churn to the API has occurred. The combination of the
+ *      major and minor number definitions represents the complete version number
+ *      for this interface.
+ *
+ *****************************************************************************/
+#define CPA_CY_API_VERSION_NUM_MAJOR (2)
+
+/**
+ *****************************************************************************
+ * @ingroup cpa_cyCommon
+ *      CPA CY Minor Version Number
+ * @description
+ *      The CPA_CY API minor version number. This number will be incremented
+ *      when minor changes to the API have occurred. The combination of the major
+ *      and minor number definitions represents the complete version number for
+ *      this interface.
+ *
+ *****************************************************************************/
+#define CPA_CY_API_VERSION_NUM_MINOR (3)
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCyCommon
+ *      Request priority
+ * @description
+ *      Enumeration of priority of the request to be given to the API.
+ *      Currently two levels - HIGH and NORMAL are supported. HIGH priority
+ *      requests will be prioritized on a "best-effort" basis over requests
+ *      that are marked with a NORMAL priority.
+ * + *****************************************************************************/ +typedef enum _CpaCyPriority +{ + CPA_CY_PRIORITY_NORMAL = 1, /**< Normal priority */ + CPA_CY_PRIORITY_HIGH /**< High priority */ +} CpaCyPriority; + +/*****************************************************************************/ +/* Callback Definitions */ +/*****************************************************************************/ +/** + ***************************************************************************** + * @ingroup cpaCyCommon + * Definition of the crypto generic callback function + * + * @description + * This data structure specifies the prototype for a generic callback + * function + * + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. + * + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag Opaque value provided by user while making individual + * function call. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. + * @param[in] pOpData Opaque Pointer to the operation data that was + * submitted in the request + * + * @retval + * None + * @pre + * Component has been initialized. + * @post + * None + * @note + * None + * @see + * cpaCyKeyGenSsl() + * + *****************************************************************************/ +typedef void (*CpaCyGenericCbFunc)(void *pCallbackTag, + CpaStatus status, + void *pOpData); + +/** + ***************************************************************************** + * @ingroup cpaCyCommon + * Definition of generic callback function with an additional output + * CpaFlatBuffer parameter. + * + * @description + * This data structure specifies the prototype for a generic callback + * function which provides an output buffer (of type CpaFlatBuffer). 
+ *
+ * @context
+ *      This callback function can be executed in a context that DOES NOT
+ *      permit sleeping to occur.
+ *
+ * @assumptions
+ *      None
+ * @sideEffects
+ *      None
+ * @reentrant
+ *      No
+ * @threadSafe
+ *      Yes
+ *
+ * @param[in] pCallbackTag Opaque value provided by user while making individual
+ *                         function call.
+ * @param[in] status       Status of the operation. Valid values are
+ *                         CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and
+ *                         CPA_STATUS_UNSUPPORTED.
+ * @param[in] pOpData      Opaque pointer to the operation data that was
+ *                         submitted in the request
+ * @param[in] pOut         Pointer to the output buffer provided in the request
+ *                         invoking this callback.
+ *
+ * @retval
+ *      None
+ * @pre
+ *      Component has been initialized.
+ * @post
+ *      None
+ * @note
+ *      None
+ * @see
+ *      None
+ *
+ *****************************************************************************/
+typedef void (*CpaCyGenFlatBufCbFunc)(void *pCallbackTag,
+                                      CpaStatus status,
+                                      void *pOpdata,
+                                      CpaFlatBuffer *pOut);
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCyCommon
+ *      Function to return the size of the memory which must be allocated for
+ *      the pPrivateMetaData member of CpaBufferList.
+ *
+ * @description
+ *      This function is used to obtain the size (in bytes) required to allocate
+ *      a buffer descriptor for the pPrivateMetaData member in the
+ *      CpaBufferList structure.
+ *      Should the function return zero then no meta data is required for the
+ *      buffer list.
+ *
+ * @context
+ *      This function may be called from any context.
+ * @assumptions
+ *      None
+ * @sideEffects
+ *      None
+ * @blocking
+ *      No
+ * @reentrant
+ *      No
+ * @threadSafe
+ *      Yes
+ *
+ * @param[in] instanceHandle      Handle to an instance of this API.
+ * @param[in] numBuffers          The number of pointers in the CpaBufferList.
+ *                                This is the maximum number of CpaFlatBuffers
+ *                                which may be contained in this CpaBufferList.
+ * @param[out] pSizeInBytes Pointer to the size in bytes of memory to be + * allocated when the client wishes to allocate + * a cpaFlatBuffer + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * None. + * @post + * None + * @note + * None + * @see + * cpaCyGetInstances() + * + *****************************************************************************/ +CpaStatus +cpaCyBufferListGetMetaSize(const CpaInstanceHandle instanceHandle, + Cpa32U numBuffers, + Cpa32U *pSizeInBytes); + +/** + ***************************************************************************** + * @ingroup cpaCyCommon + * Function to return a string indicating the specific error that occurred + * for a particular instance. + * + * @description + * When a function invocation on a particular instance returns an error, + * the client can invoke this function to query the instance for a null + * terminated string which describes the general error condition, and if + * available additional text on the specific error. + * The Client MUST allocate CPA_STATUS_MAX_STR_LENGTH_IN_BYTES bytes for + * the buffer string. + * + * @context + * This function may be called from any context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Handle to an instance of this API. + * @param[in] errStatus The error condition that occurred + * @param[out] pStatusText Pointer to the string buffer that will be + * updated with a null terminated status text + * string. + * The invoking application MUST allocate this + * buffer to be CPA_STATUS_MAX_STR_LENGTH_IN_BYTES. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. 
Note, In this scenario it + * is INVALID to call this function a further + * time. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * None. + * @post + * None + * @note + * None + * @see + * CpaStatus + * + *****************************************************************************/ +CpaStatus +cpaCyGetStatusText(const CpaInstanceHandle instanceHandle, + CpaStatus errStatus, + Cpa8S *pStatusText); + +/*****************************************************************************/ +/* Instance Discovery Functions */ +/*****************************************************************************/ +/** + ***************************************************************************** + * @ingroup cpaCyCommon + * Get the number of instances that are supported by the API + * implementation. + * + * @description + * This function will get the number of instances that are supported + * by an implementation of the Cryptographic API. This number is then + * used to determine the size of the array that must be passed to + * @ref cpaCyGetInstances(). + * + * @context + * This function MUST NOT be called from an interrupt context as it MAY + * sleep. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[out] pNumInstances Pointer to where the number of + * instances will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. 
+ *
+ * @pre
+ *      None
+ * @post
+ *      None
+ * @note
+ *      This function operates in a synchronous manner and no asynchronous
+ *      callback will be generated
+ *
+ * @see
+ *      cpaCyGetInstances
+ *
+ *****************************************************************************/
+CpaStatus
+cpaCyGetNumInstances(Cpa16U *pNumInstances);
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCyCommon
+ *      Get the handles to the instances that are supported by the
+ *      API implementation.
+ *
+ * @description
+ *      This function will return handles to the instances that are
+ *      supported by an implementation of the Cryptographic API. These
+ *      instance handles can then be used as input parameters with other
+ *      Cryptographic API functions.
+ *
+ *      This function will populate an array that has been allocated by the
+ *      caller. The size of this array will have been determined by the
+ *      cpaCyGetNumInstances() function.
+ *
+ * @context
+ *      This function MUST NOT be called from an interrupt context as it MAY
+ *      sleep.
+ * @assumptions
+ *      None
+ * @sideEffects
+ *      None
+ * @blocking
+ *      This function is synchronous and blocking.
+ * @reentrant
+ *      No
+ * @threadSafe
+ *      Yes
+ *
+ * @param[in]     numInstances    Size of the array. If the value is not
+ *                                the same as the number of instances
+ *                                supported, then an error (@ref
+ *                                CPA_STATUS_INVALID_PARAM) is returned.
+ * @param[in,out] cyInstances     Pointer to where the instance
+ *                                handles will be written.
+ *
+ * @retval CPA_STATUS_SUCCESS        Function executed successfully.
+ * @retval CPA_STATUS_FAIL           Function failed.
+ * @retval CPA_STATUS_INVALID_PARAM  Invalid parameter passed in.
+ * @retval CPA_STATUS_UNSUPPORTED    Function is not supported.
+ * + * @pre + * None + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated + * + * @see + * cpaCyGetNumInstances + * + *****************************************************************************/ +CpaStatus +cpaCyGetInstances(Cpa16U numInstances, + CpaInstanceHandle *cyInstances); + +/** + ***************************************************************************** + * @ingroup cpaCyCommon + * Function to get information on a particular instance. + * + * @deprecated + * As of v1.3 of the Crypto API, this function has been deprecated, + * replaced by @ref cpaCyInstanceGetInfo2. + * + * @description + * This function will provide instance specific information through a + * @ref CpaInstanceInfo structure. + * + * @context + * This function may be called from any context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Handle to an instance of this API to be + * initialized. + * @param[out] pInstanceInfo Pointer to the memory location allocated by + * the client into which the CpaInstanceInfo + * structure will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The client has retrieved an instanceHandle from successive calls to + * @ref cpaCyGetNumInstances and @ref cpaCyGetInstances. 
+ * @post + * None + * @note + * None + * @see + * cpaCyGetNumInstances, + * cpaCyGetInstances, + * CpaInstanceInfo + * + *****************************************************************************/ +CpaStatus CPA_DEPRECATED +cpaCyInstanceGetInfo(const CpaInstanceHandle instanceHandle, + struct _CpaInstanceInfo * pInstanceInfo); + +/** + ***************************************************************************** + * @ingroup cpaCyCommon + * Function to get information on a particular instance. + * + * @description + * This function will provide instance specific information through a + * @ref CpaInstanceInfo2 structure. + * Supersedes @ref cpaCyInstanceGetInfo. + * + * @context + * This function may be called from any context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Handle to an instance of this API to be + * initialized. + * @param[out] pInstanceInfo2 Pointer to the memory location allocated by + * the client into which the CpaInstanceInfo2 + * structure will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The client has retrieved an instanceHandle from successive calls to + * @ref cpaCyGetNumInstances and @ref cpaCyGetInstances. 
+ * @post + * None + * @note + * None + * @see + * cpaCyGetNumInstances, + * cpaCyGetInstances, + * CpaInstanceInfo + * + *****************************************************************************/ +CpaStatus +cpaCyInstanceGetInfo2(const CpaInstanceHandle instanceHandle, + CpaInstanceInfo2 * pInstanceInfo2); + +/*****************************************************************************/ +/* Instance Notification Functions */ +/*****************************************************************************/ +/** + ***************************************************************************** + * @ingroup cpaCyCommon + * Callback function for instance notification support. + * + * @description + * This is the prototype for the instance notification callback function. + * The callback function is passed in as a parameter to the + * @ref cpaCyInstanceSetNotificationCb function. + * + * @context + * This function will be executed in a context that requires that sleeping + * MUST NOT be permitted. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCallbackTag Opaque value provided by user while making + * individual function calls. + * @param[in] instanceEvent The event that will trigger this function to + * get invoked. + * + * @retval + * None + * @pre + * Component has been initialized and the notification function has been + * set via the cpaCyInstanceSetNotificationCb function. 
+ * @post + * None + * @note + * None + * @see + * cpaCyInstanceSetNotificationCb(), + * + *****************************************************************************/ +typedef void (*CpaCyInstanceNotificationCbFunc)( + const CpaInstanceHandle instanceHandle, + void * pCallbackTag, + const CpaInstanceEvent instanceEvent); + +/** + ***************************************************************************** + * @ingroup cpaCyCommon + * Subscribe for instance notifications. + * + * @description + * Clients of the CpaCy interface can subscribe for instance notifications + * by registering a @ref CpaCyInstanceNotificationCbFunc function. + * + * @context + * This function may be called from any context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pInstanceNotificationCb Instance notification callback + * function pointer. + * @param[in] pCallbackTag Opaque value provided by user while + * making individual function calls. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Instance has been initialized. 
+ * @post + * None + * @note + * None + * @see + * CpaCyInstanceNotificationCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyInstanceSetNotificationCb( + const CpaInstanceHandle instanceHandle, + const CpaCyInstanceNotificationCbFunc pInstanceNotificationCb, + void *pCallbackTag); + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /* CPA_CY_COMMON_H */ Index: sys/dev/qat/qat_api/include/lac/cpa_cy_dh.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_dh.h @@ -0,0 +1,514 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_dh.h + * + * @defgroup cpaCyDh Diffie-Hellman (DH) API + * + * @ingroup cpaCy + * + * @description + * These functions specify the API for Public Key Encryption + * (Cryptography) operations for use with Diffie-Hellman algorithm. + * + * @note + * Large numbers are represented on the QuickAssist API as described + * in the Large Number API (@ref cpaCyLn). + *****************************************************************************/ + +#ifndef CPA_CY_DH_H +#define CPA_CY_DH_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_cy_common.h" +/** + ***************************************************************************** + * @ingroup cpaCyDh + * Diffie-Hellman Phase 1 Key Generation Data. + * @description + * This structure lists the different items that are required in the + * cpaCyDhKeyGenPhase1 function. The client MUST allocate the memory for + * this structure. When the structure is passed into the function, + * ownership of the memory passes to the function. 
Ownership of the memory + * returns to the client when this structure is returned in the + * callback. + * + * @note + * If the client modifies or frees the memory referenced in this structure + * after it has been submitted to the cpaCyDhKeyGenPhase1 function, and + * before it has been returned in the callback, undefined behavior will + * result. + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. primeP.pData[0] = MSB. + * + *****************************************************************************/ +typedef struct _CpaCyDhPhase1KeyGenOpData { + CpaFlatBuffer primeP; + /**< Flat buffer containing a pointer to the random odd prime number (p). + * The bit-length of this number may be one of 768, 1024, 1536, 2048, + * 3072 or 4096. + */ + CpaFlatBuffer baseG; + /**< Flat buffer containing a pointer to the base (g). This MUST comply with + * the following: + * 0 < g < p. + */ + CpaFlatBuffer privateValueX; + /**< Flat buffer containing a pointer to the private value (x). This is a + * random value which MUST satisfy the following condition: + * 0 < privateValueX < (primeP - 1) + * + * Refer to PKCS #3: Diffie-Hellman Key-Agreement Standard for details. + * The client creating this data MUST ensure the compliance of this value + * with the standard. Note: This value is also needed to complete the local + * phase 2 Diffie-Hellman operation. */ +} CpaCyDhPhase1KeyGenOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyDh + * Diffie-Hellman Phase 2 Secret Key Generation Data. + * @description + * This structure lists the different items that are required in the + * cpaCyDhKeyGenPhase2Secret function. The client MUST allocate the + * memory for this structure. When the structure is passed into the + * function, ownership of the memory passes to the function.
Ownership of + * the memory returns to the client when this structure is returned in + * the callback. + * @note + * If the client modifies or frees the memory referenced in this structure + * after it has been submitted to the cpaCyDhKeyGenPhase2Secret + * function, and before it has been returned in the callback, undefined + * behavior will result. + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. primeP.pData[0] = MSB. + * + *****************************************************************************/ +typedef struct _CpaCyDhPhase2SecretKeyGenOpData { + CpaFlatBuffer primeP; + /**< Flat buffer containing a pointer to the random odd prime number (p). + * The bit-length of this number may be one of 768, 1024, 1536, 2048, + * 3072 or 4096. + * This SHOULD be the same prime number as was used in the phase 1 key + * generation operation. */ + CpaFlatBuffer remoteOctetStringPV; + /**< Flat buffer containing a pointer to the remote entity + * octet string Public Value (PV). */ + CpaFlatBuffer privateValueX; + /**< Flat buffer containing a pointer to the private value (x). This + * value may have been used in a call to the cpaCyDhKeyGenPhase1 function. + * This is a random value which MUST satisfy the following condition: + * 0 < privateValueX < (primeP - 1). */ +} CpaCyDhPhase2SecretKeyGenOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyDh + * Diffie-Hellman Statistics. + * @deprecated + * As of v1.3 of the Crypto API, this structure has been deprecated, + * replaced by @ref CpaCyDhStats64. + * @description + * This structure contains statistics on the Diffie-Hellman operations. + * Statistics are set to zero when the component is initialized, and are + * collected per instance.
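The op-data structures above carry every operand as a CpaFlatBuffer whose contents MUST be Most Significant Byte first (big-endian), e.g. primeP.pData[0] = MSB. A minimal sketch of that encoding convention follows; `FlatBuf` and `encode_msb_first` are hypothetical stand-ins for illustration only, not names from the QuickAssist API:

```c
#include <stdint.h>

/* Hypothetical stand-in mirroring the shape of the API's CpaFlatBuffer
 * (a length plus a pointer to the data). Illustrative only. */
typedef struct {
    uint32_t dataLenInBytes;
    uint8_t *pData;
} FlatBuf;

/* Encode a 32-bit value MSB-first into caller-provided storage, so that
 * buf->pData[0] holds the most significant byte, as the op-data
 * structures require. Real operands are 768-4096 bits, not 32. */
void encode_msb_first(FlatBuf *buf, uint8_t *storage, uint32_t v)
{
    storage[0] = (uint8_t)(v >> 24);   /* MSB lands in pData[0] */
    storage[1] = (uint8_t)(v >> 16);
    storage[2] = (uint8_t)(v >> 8);
    storage[3] = (uint8_t)v;
    buf->pData = storage;
    buf->dataLenInBytes = 4;
}
```

A caller would fill `primeP`, `baseG`, and `privateValueX` this way before submitting the op data, keeping the backing storage alive until the callback returns ownership.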
+ ****************************************************************************/ +typedef struct _CpaCyDhStats { + Cpa32U numDhPhase1KeyGenRequests; + /**< Total number of successful Diffie-Hellman phase 1 key + * generation requests. */ + Cpa32U numDhPhase1KeyGenRequestErrors; + /**< Total number of Diffie-Hellman phase 1 key generation requests + * that had an error and could not be processed. */ + Cpa32U numDhPhase1KeyGenCompleted; + /**< Total number of Diffie-Hellman phase 1 key generation operations + * that completed successfully. */ + Cpa32U numDhPhase1KeyGenCompletedErrors; + /**< Total number of Diffie-Hellman phase 1 key generation operations + * that could not be completed successfully due to errors. */ + Cpa32U numDhPhase2KeyGenRequests; + /**< Total number of successful Diffie-Hellman phase 2 key + * generation requests. */ + Cpa32U numDhPhase2KeyGenRequestErrors; + /**< Total number of Diffie-Hellman phase 2 key generation requests + * that had an error and could not be processed. */ + Cpa32U numDhPhase2KeyGenCompleted; + /**< Total number of Diffie-Hellman phase 2 key generation operations + * that completed successfully. */ + Cpa32U numDhPhase2KeyGenCompletedErrors; + /**< Total number of Diffie-Hellman phase 2 key generation operations + * that could not be completed successfully due to errors. */ +} CpaCyDhStats CPA_DEPRECATED; + +/** + ***************************************************************************** + * @ingroup cpaCyDh + * Diffie-Hellman Statistics (64-bit version). + * @description + * This structure contains the 64-bit version of the statistics on the + * Diffie-Hellman operations. + * Statistics are set to zero when the component is initialized, and are + * collected per instance. + ****************************************************************************/ +typedef struct _CpaCyDhStats64 { + Cpa64U numDhPhase1KeyGenRequests; + /**< Total number of successful Diffie-Hellman phase 1 key + * generation requests. 
*/ + Cpa64U numDhPhase1KeyGenRequestErrors; + /**< Total number of Diffie-Hellman phase 1 key generation requests + * that had an error and could not be processed. */ + Cpa64U numDhPhase1KeyGenCompleted; + /**< Total number of Diffie-Hellman phase 1 key generation operations + * that completed successfully. */ + Cpa64U numDhPhase1KeyGenCompletedErrors; + /**< Total number of Diffie-Hellman phase 1 key generation operations + * that could not be completed successfully due to errors. */ + Cpa64U numDhPhase2KeyGenRequests; + /**< Total number of successful Diffie-Hellman phase 2 key + * generation requests. */ + Cpa64U numDhPhase2KeyGenRequestErrors; + /**< Total number of Diffie-Hellman phase 2 key generation requests + * that had an error and could not be processed. */ + Cpa64U numDhPhase2KeyGenCompleted; + /**< Total number of Diffie-Hellman phase 2 key generation operations + * that completed successfully. */ + Cpa64U numDhPhase2KeyGenCompletedErrors; + /**< Total number of Diffie-Hellman phase 2 key generation operations + * that could not be completed successfully due to errors. */ +} CpaCyDhStats64; + +/** + ***************************************************************************** + * @ingroup cpaCyDh + * Function to implement Diffie-Hellman phase 1 operations. + * + * @description + * This function may be used to implement the Diffie-Hellman phase 1 + * operations as defined in the PKCS #3 standard. It may be used to + * generate the (local) octet string public value (PV) key. + * The prime number sizes specified in RFC 2409, 4306, and part of + * RFC 3526 are supported (bit sizes 6144 and 8192 from RFC 3526 are not + * supported). + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping.
+ * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pDhPhase1Cb Pointer to a callback function to be invoked + * when the operation is complete. If the + * pointer is set to a NULL value the function + * will operate synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific call. + * Will be returned unchanged in the callback + * @param[in] pPhase1KeyGenData Structure containing all the data needed + * to perform the DH Phase 1 key generation + * operation. The client code allocates the + * memory for this structure. This component + * takes ownership of the memory until it is + * returned in the callback. + * @param[out] pLocalOctetStringPV Pointer to memory allocated by the client + * into which the (local) octet string Public + * Value (PV) will be written. This value + * needs to be sent to the remote entity with + * which Diffie-Hellman is negotiating. + * The size of this buffer in bytes (as + * represented by the dataLenInBytes field) + * MUST be at least big enough to store + * the public value, which may have a bit + * length up to that of pPrimeP. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. 
+ * @post + * None + * @note + * When pDhPhase1Cb is non-NULL an asynchronous callback of type + * CpaCyGenFlatBufCbFunc is generated in response to this function + * call. Any errors generated during processing are reported in the + * structure returned in the callback. + * + * @see + * CpaCyGenFlatBufCbFunc, + * CpaCyDhPhase1KeyGenOpData + * + *****************************************************************************/ +CpaStatus +cpaCyDhKeyGenPhase1(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pDhPhase1Cb, + void *pCallbackTag, + const CpaCyDhPhase1KeyGenOpData *pPhase1KeyGenData, + CpaFlatBuffer *pLocalOctetStringPV); + +/** + ***************************************************************************** + * @ingroup cpaCyDh + * Function to implement Diffie-Hellman phase 2 operations. + * + * @description + * This function may be used to implement the Diffie-Hellman phase 2 + * operation as defined in the PKCS #3 standard. It may be used to + * generate the Diffie-Hellman shared secret key. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pDhPhase2Cb Pointer to a callback function to be + * invoked when the operation is complete. + * If the pointer is set to a NULL value + * the function will operate synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific + * call. Will be returned unchanged in + * the callback. + * @param[in] pPhase2SecretKeyGenData Structure containing all the data + * needed to perform the DH Phase 2 + * secret key generation operation. 
The + * client code allocates the memory for + * this structure. This component takes + * ownership of the memory until it is + * returned in the callback. + * @param[out] pOctetStringSecretKey Pointer to memory allocated by the + * client into which the octet string + * secret key will be written. + * The size of this buffer in bytes (as + * represented by the dataLenInBytes field) + * MUST be at least big enough to store + * the secret key, which may have a bit + * length up to that of pPrimeP. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pDhPhase2Cb is non-NULL an asynchronous callback of type + * CpaCyGenFlatBufCbFunc is generated in response to this function + * call. Any errors generated during processing are reported in the + * structure returned in the callback.
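The two-phase flow documented above reduces to two modular exponentiations: phase 1 computes the public value PV = g^x mod p, and phase 2 combines the remote PV with the local private value to get the shared secret, remotePV^x mod p. The toy sketch below shows only that math with word-sized operands (the device handles 768- to 4096-bit operands); the function names are illustrative, not the QAT API:

```c
#include <stdint.h>

/* Toy square-and-multiply modular exponentiation. Word-sized operands
 * only; a real implementation uses multi-precision arithmetic. */
uint64_t modexp(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t r = 1 % mod;
    base %= mod;
    while (exp) {
        if (exp & 1)
            r = (r * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return r;
}

/* Phase 1 analogue (cpaCyDhKeyGenPhase1): local PV = g^x mod p. */
uint64_t dh_phase1(uint64_t g, uint64_t x, uint64_t p)
{
    return modexp(g, x, p);
}

/* Phase 2 analogue (cpaCyDhKeyGenPhase2Secret):
 * shared secret = remotePV^x mod p. */
uint64_t dh_phase2(uint64_t remote_pv, uint64_t x, uint64_t p)
{
    return modexp(remote_pv, x, p);
}
```

With the classic toy parameters p = 23, g = 5 and private values 6 and 15, both sides derive the same secret, which is what the two API calls accomplish at scale.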
+ * + * @see + * CpaCyGenFlatBufCbFunc, + * CpaCyDhPhase2SecretKeyGenOpData + * + *****************************************************************************/ +CpaStatus +cpaCyDhKeyGenPhase2Secret(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pDhPhase2Cb, + void *pCallbackTag, + const CpaCyDhPhase2SecretKeyGenOpData *pPhase2SecretKeyGenData, + CpaFlatBuffer *pOctetStringSecretKey); + +/** + ***************************************************************************** + * @ingroup cpaCyDh + * Query statistics for Diffie-Hellman operations + * + * @deprecated + * As of v1.3 of the Crypto API, this function has been deprecated, + * replaced by @ref cpaCyDhQueryStats64(). + * + * @description + * This function will query a specific Instance handle for Diffie- + * Hellman statistics. The user MUST allocate the CpaCyDhStats + * structure and pass the reference to that structure into this function + * call. This function writes the statistic results into the passed in + * CpaCyDhStats structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pDhStats Pointer to memory into which the statistics + * will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. 
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * @see + * CpaCyDhStats + *****************************************************************************/ +CpaStatus CPA_DEPRECATED +cpaCyDhQueryStats(const CpaInstanceHandle instanceHandle, + struct _CpaCyDhStats *pDhStats); + +/** + ***************************************************************************** + * @ingroup cpaCyDh + * Query statistics (64-bit version) for Diffie-Hellman operations + * + * @description + * This function will query a specific Instance handle for the 64-bit + * version of the Diffie-Hellman statistics. The user MUST allocate the + * CpaCyDhStats64 structure and pass the reference to that structure into + * this function call. This function writes the statistic results into + * the passed in CpaCyDhStats64 structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pDhStats Pointer to memory into which the statistics + * will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. 
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * @see + * CpaCyDhStats64 + *****************************************************************************/ +CpaStatus +cpaCyDhQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyDhStats64 *pDhStats); + +/*****************************************************************************/ + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /* CPA_CY_DH_H */ Index: sys/dev/qat/qat_api/include/lac/cpa_cy_dsa.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_dsa.h @@ -0,0 +1,1443 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_dsa.h + * + * @defgroup cpaCyDsa Digital Signature Algorithm (DSA) API + * + * @ingroup cpaCy + * + * @description + * These functions specify the API for Public Key Encryption + * (Cryptography) Digital Signature Algorithm (DSA) operations. + * + * Support is provided for FIPS PUB 186-2 with Change Notice 1 + * specification, and optionally for FIPS PUB 186-3. If an + * implementation does not support FIPS PUB 186-3, then the + * corresponding functions may return a status of @ref + * CPA_STATUS_FAIL. 
+ * + * Support for FIPS PUB 186-2 with Change Notice 1 implies supporting + * the following choice for the pair L and N: + * - L = 1024, N = 160 + * + * Support for FIPS PUB 186-3 implies supporting the following choices + * for the pair L and N: + * + * - L = 1024, N = 160 + * - L = 2048, N = 224 + * - L = 2048, N = 256 + * - L = 3072, N = 256 + * + * Only the modular math aspects of DSA parameter generation and message + * signature generation and verification are implemented here. For full + * DSA support, this DSA API SHOULD be used in conjunction with other + * parts of this overall Cryptographic API. In particular the Symmetric + * functions (for hashing), the Random Number Generation functions, and + * the Prime Number Test functions will be required. + * + * @note + * Large numbers are represented on the QuickAssist API as described + * in the Large Number API (@ref cpaCyLn). + *****************************************************************************/ + +#ifndef CPA_CY_DSA_H +#define CPA_CY_DSA_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_cy_common.h" + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * DSA P Parameter Generation Operation Data. + * @description + * This structure contains the operation data for the cpaCyDsaGenPParam + * function. The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * For optimal performance all data buffers SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. X.pData[0] = MSB. 
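The modular math referred to above (signature generation r = (g^k mod p) mod q, s = k^-1 (z + x*r) mod q, and verification v = (g^u1 * y^u2 mod p) mod q) can be sketched with toy word-sized parameters. This is not the QAT API and the names are hypothetical; real L/N sizes are the 1024/160 through 3072/256 pairs listed above:

```c
#include <stdint.h>

/* Toy square-and-multiply modular exponentiation. */
uint64_t dsa_modexp(uint64_t b, uint64_t e, uint64_t m)
{
    uint64_t r = 1 % m;
    b %= m;
    while (e) {
        if (e & 1)
            r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

/* Modular inverse via Fermat's little theorem; m must be prime. */
uint64_t dsa_modinv(uint64_t a, uint64_t m)
{
    return dsa_modexp(a, m - 2, m);
}

/* Sign: r = (g^k mod p) mod q, s = k^-1 (z + x*r) mod q,
 * where z is the (truncated) message hash and k the secret per-message
 * value -- the roles of the R/S sign op-data fields above. */
void dsa_sign(uint64_t p, uint64_t q, uint64_t g, uint64_t x,
              uint64_t k, uint64_t z, uint64_t *r, uint64_t *s)
{
    *r = dsa_modexp(g, k, p) % q;
    *s = (dsa_modinv(k, q) * ((z + x * *r) % q)) % q;
}

/* Verify: with w = s^-1 mod q, u1 = z*w mod q, u2 = r*w mod q,
 * accept iff v = (g^u1 * y^u2 mod p) mod q equals r. */
int dsa_verify(uint64_t p, uint64_t q, uint64_t g, uint64_t y,
               uint64_t z, uint64_t r, uint64_t s)
{
    uint64_t w = dsa_modinv(s, q);
    uint64_t u1 = (z * w) % q;
    uint64_t u2 = (r * w) % q;
    uint64_t v = ((dsa_modexp(g, u1, p) * dsa_modexp(y, u2, p)) % p) % q;
    return v == r;
}
```

Hashing z, generating k, and testing p and q for primality are exactly the pieces the surrounding text says must come from the Symmetric, Random Number, and Prime Number Test parts of the API.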
+ * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyDsaGenPParam + * function, and before it has been returned in the callback, undefined + * behavior will result. + * + * @see + * cpaCyDsaGenPParam() + * + *****************************************************************************/ +typedef struct _CpaCyDsaPParamGenOpData { + CpaFlatBuffer X; + /**< 2^(L-1) <= X < 2^L (from FIPS 186-3) */ + CpaFlatBuffer Q; + /**< DSA group parameter q */ +} CpaCyDsaPParamGenOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * DSA G Parameter Generation Operation Data. + * @description + * This structure contains the operation data for the cpaCyDsaGenGParam + * function. The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. P.pData[0] = MSB. + * + * All numbers MUST be stored in big-endian order. + * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyDsaGenGParam + * function, and before it has been returned in the callback, undefined + * behavior will result. 
+ * + * @see + * cpaCyDsaGenGParam() + * + *****************************************************************************/ +typedef struct _CpaCyDsaGParamGenOpData { + CpaFlatBuffer P; + /**< DSA group parameter p */ + CpaFlatBuffer Q; + /**< DSA group parameter q */ + CpaFlatBuffer H; + /**< any integer with 1 < h < p - 1 */ +} CpaCyDsaGParamGenOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * DSA Y Parameter Generation Operation Data. + * @description + * This structure contains the operation data for the cpaCyDsaGenYParam + * function. The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * For optimal performance all data SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. P.pData[0] = MSB. + * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyDsaGenYParam + * function, and before it has been returned in the callback, undefined + * behavior will result. + * + * @see + * cpaCyDsaGenYParam() + * + *****************************************************************************/ +typedef struct _CpaCyDsaYParamGenOpData { + CpaFlatBuffer P; + /**< DSA group parameter p */ + CpaFlatBuffer G; + /**< DSA group parameter g */ + CpaFlatBuffer X; + /**< DSA private key x */ +} CpaCyDsaYParamGenOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * DSA R Sign Operation Data. + * @description + * This structure contains the operation data for the cpaCyDsaSignR + * function. 
The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * For optimal performance all data SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. P.pData[0] = MSB. + * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyDsaSignR + * function, and before it has been returned in the callback, undefined + * behavior will result. + * + * @see + * cpaCyDsaSignR() + * + *****************************************************************************/ +typedef struct _CpaCyDsaRSignOpData { + CpaFlatBuffer P; + /**< DSA group parameter p */ + CpaFlatBuffer Q; + /**< DSA group parameter q */ + CpaFlatBuffer G; + /**< DSA group parameter g */ + CpaFlatBuffer K; + /**< DSA secret parameter k for signing */ +} CpaCyDsaRSignOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * DSA S Sign Operation Data. + * @description + * This structure contains the operation data for the cpaCyDsaSignS + * function. The client MUST allocate the memory for this structure and + * the items pointed to by this structure. When the structure is passed + * into the function, ownership of the memory passes to the function. + * Ownership of the memory returns to the client when this structure is + * returned in the callback function. + * + * For optimal performance all data SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. Q.pData[0] = MSB. 
+ * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyDsaSignS + * function, and before it has been returned in the callback, undefined + * behavior will result. + * + * @see + * cpaCyDsaSignS() + * + *****************************************************************************/ +typedef struct _CpaCyDsaSSignOpData { + CpaFlatBuffer Q; + /**< DSA group parameter q */ + CpaFlatBuffer X; + /**< DSA private key x */ + CpaFlatBuffer K; + /**< DSA secret parameter k for signing */ + CpaFlatBuffer R; + /**< DSA message signature r */ + CpaFlatBuffer Z; + /**< The leftmost min(N, outlen) bits of Hash(M), where: + * - N is the bit length of q + * - outlen is the bit length of the hash function output block + * - M is the message to be signed + */ +} CpaCyDsaSSignOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * DSA R & S Sign Operation Data. + * @description + * This structure contains the operation data for the cpaCyDsaSignRS + * function. The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * For optimal performance all data SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. P.pData[0] = MSB. + * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyDsaSignRS + * function, and before it has been returned in the callback, undefined + * behavior will result. 
+ * + * @see + * cpaCyDsaSignRS() + * + *****************************************************************************/ +typedef struct _CpaCyDsaRSSignOpData { + CpaFlatBuffer P; + /**< DSA group parameter p */ + CpaFlatBuffer Q; + /**< DSA group parameter q */ + CpaFlatBuffer G; + /**< DSA group parameter g */ + CpaFlatBuffer X; + /**< DSA private key x */ + CpaFlatBuffer K; + /**< DSA secret parameter k for signing */ + CpaFlatBuffer Z; + /**< The leftmost min(N, outlen) bits of Hash(M), where: + * - N is the bit length of q + * - outlen is the bit length of the hash function output block + * - M is the message to be signed + */ +} CpaCyDsaRSSignOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * DSA Verify Operation Data. + * @description + * This structure contains the operation data for the cpaCyDsaVerify + * function. The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * For optimal performance all data SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. P.pData[0] = MSB. + * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyDsaVerify + * function, and before it has been returned in the callback, undefined + * behavior will result. 
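The Z inputs above are defined as the leftmost min(N, outlen) bits of the hash, stored MSB-first. A small illustrative helper showing that truncation; the function name and types are hypothetical, not part of the API:

```c
#include <stdint.h>
#include <string.h>

/* Copy the leftmost 'bits' bits of an MSB-first digest into 'out',
 * masking any unused trailing bits of the final byte. Returns the
 * number of bytes written. Toy helper for the Z = leftmost
 * min(N, outlen) bits of Hash(M) convention. */
size_t leftmost_bits(const uint8_t *digest, size_t digest_len,
                     size_t bits, uint8_t *out)
{
    size_t nbytes = (bits + 7) / 8;
    if (nbytes > digest_len)        /* bits > outlen: take whole digest */
        nbytes = digest_len;
    memcpy(out, digest, nbytes);
    if (bits % 8)                   /* clear the low bits of the last byte */
        out[nbytes - 1] &= (uint8_t)(0xFF << (8 - bits % 8));
    return nbytes;
}
```

For example, keeping the leftmost 12 bits of the digest `AB CD EF` yields `AB C0`, which would then be placed in the Z flat buffer MSB-first.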
+ *
+ * @see
+ *      cpaCyDsaVerify()
+ *
+ *****************************************************************************/
+typedef struct _CpaCyDsaVerifyOpData {
+    CpaFlatBuffer P;
+    /**< DSA group parameter p */
+    CpaFlatBuffer Q;
+    /**< DSA group parameter q */
+    CpaFlatBuffer G;
+    /**< DSA group parameter g */
+    CpaFlatBuffer Y;
+    /**< DSA public key y */
+    CpaFlatBuffer Z;
+    /**< The leftmost min(N, outlen) bits of Hash(M'), where:
+     * - N is the bit length of q
+     * - outlen is the bit length of the hash function output block
+     * - M' is the message whose signature is being verified
+     */
+    CpaFlatBuffer R;
+    /**< DSA message signature r */
+    CpaFlatBuffer S;
+    /**< DSA message signature s */
+} CpaCyDsaVerifyOpData;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCyDsa
+ *      Cryptographic DSA Statistics.
+ * @deprecated
+ *      As of v1.3 of the Crypto API, this structure has been deprecated,
+ *      replaced by @ref CpaCyDsaStats64.
+ * @description
+ *      This structure contains statistics on the Cryptographic DSA
+ *      operations. Statistics are set to zero when the component is
+ *      initialized, and are collected per instance.
+ ****************************************************************************/
+typedef struct _CpaCyDsaStats {
+    Cpa32U numDsaPParamGenRequests;
+    /**< Total number of successful DSA P parameter generation requests. */
+    Cpa32U numDsaPParamGenRequestErrors;
+    /**< Total number of DSA P parameter generation requests that had an
+     * error and could not be processed. */
+    Cpa32U numDsaPParamGenCompleted;
+    /**< Total number of DSA P parameter generation operations that
+     * completed successfully. */
+    Cpa32U numDsaPParamGenCompletedErrors;
+    /**< Total number of DSA P parameter generation operations that could
+     * not be completed successfully due to errors. */
+    Cpa32U numDsaGParamGenRequests;
+    /**< Total number of successful DSA G parameter generation requests.
*/ + Cpa32U numDsaGParamGenRequestErrors; + /**< Total number of DSA G parameter generation requests that had an + * error and could not be processed. */ + Cpa32U numDsaGParamGenCompleted; + /**< Total number of DSA G parameter generation operations that + * completed successfully. */ + Cpa32U numDsaGParamGenCompletedErrors; + /**< Total number of DSA G parameter generation operations that could + * not be completed successfully due to errors. */ + Cpa32U numDsaYParamGenRequests; + /**< Total number of successful DSA Y parameter generation requests. */ + Cpa32U numDsaYParamGenRequestErrors; + /**< Total number of DSA Y parameter generation requests that had an + * error and could not be processed. */ + Cpa32U numDsaYParamGenCompleted; + /**< Total number of DSA Y parameter generation operations that + * completed successfully. */ + Cpa32U numDsaYParamGenCompletedErrors; + /**< Total number of DSA Y parameter generation operations that could + * not be completed successfully due to errors. */ + Cpa32U numDsaRSignRequests; + /**< Total number of successful DSA R sign generation requests. */ + Cpa32U numDsaRSignRequestErrors; + /**< Total number of DSA R sign requests that had an error and could + * not be processed. */ + Cpa32U numDsaRSignCompleted; + /**< Total number of DSA R sign operations that completed + * successfully. */ + Cpa32U numDsaRSignCompletedErrors; + /**< Total number of DSA R sign operations that could not be completed + * successfully due to errors. */ + Cpa32U numDsaSSignRequests; + /**< Total number of successful DSA S sign generation requests. */ + Cpa32U numDsaSSignRequestErrors; + /**< Total number of DSA S sign requests that had an error and could + * not be processed. */ + Cpa32U numDsaSSignCompleted; + /**< Total number of DSA S sign operations that completed + * successfully. */ + Cpa32U numDsaSSignCompletedErrors; + /**< Total number of DSA S sign operations that could not be completed + * successfully due to errors. 
*/ + Cpa32U numDsaRSSignRequests; + /**< Total number of successful DSA RS sign generation requests. */ + Cpa32U numDsaRSSignRequestErrors; + /**< Total number of DSA RS sign requests that had an error and could + * not be processed. */ + Cpa32U numDsaRSSignCompleted; + /**< Total number of DSA RS sign operations that completed + * successfully. */ + Cpa32U numDsaRSSignCompletedErrors; + /**< Total number of DSA RS sign operations that could not be completed + * successfully due to errors. */ + Cpa32U numDsaVerifyRequests; + /**< Total number of successful DSA verify generation requests. */ + Cpa32U numDsaVerifyRequestErrors; + /**< Total number of DSA verify requests that had an error and could + * not be processed. */ + Cpa32U numDsaVerifyCompleted; + /**< Total number of DSA verify operations that completed + * successfully. */ + Cpa32U numDsaVerifyCompletedErrors; + /**< Total number of DSA verify operations that could not be completed + * successfully due to errors. */ + Cpa32U numDsaVerifyFailures; + /**< Total number of DSA verify operations that executed successfully + * but the outcome of the test was that the verification failed. + * Note that this does not indicate an error. */ +} CpaCyDsaStats CPA_DEPRECATED; + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * Cryptographic DSA Statistics (64-bit version). + * @description + * This structure contains 64-bit version of the statistics on the + * Cryptographic DSA operations. + * Statistics are set to zero when the component is + * initialized, and are collected per instance. + ****************************************************************************/ +typedef struct _CpaCyDsaStats64 { + Cpa64U numDsaPParamGenRequests; + /**< Total number of successful DSA P parameter generation requests. */ + Cpa64U numDsaPParamGenRequestErrors; + /**< Total number of DSA P parameter generation requests that had an + * error and could not be processed. 
*/ + Cpa64U numDsaPParamGenCompleted; + /**< Total number of DSA P parameter generation operations that + * completed successfully. */ + Cpa64U numDsaPParamGenCompletedErrors; + /**< Total number of DSA P parameter generation operations that could + * not be completed successfully due to errors. */ + Cpa64U numDsaGParamGenRequests; + /**< Total number of successful DSA G parameter generation requests. */ + Cpa64U numDsaGParamGenRequestErrors; + /**< Total number of DSA G parameter generation requests that had an + * error and could not be processed. */ + Cpa64U numDsaGParamGenCompleted; + /**< Total number of DSA G parameter generation operations that + * completed successfully. */ + Cpa64U numDsaGParamGenCompletedErrors; + /**< Total number of DSA G parameter generation operations that could + * not be completed successfully due to errors. */ + Cpa64U numDsaYParamGenRequests; + /**< Total number of successful DSA Y parameter generation requests. */ + Cpa64U numDsaYParamGenRequestErrors; + /**< Total number of DSA Y parameter generation requests that had an + * error and could not be processed. */ + Cpa64U numDsaYParamGenCompleted; + /**< Total number of DSA Y parameter generation operations that + * completed successfully. */ + Cpa64U numDsaYParamGenCompletedErrors; + /**< Total number of DSA Y parameter generation operations that could + * not be completed successfully due to errors. */ + Cpa64U numDsaRSignRequests; + /**< Total number of successful DSA R sign generation requests. */ + Cpa64U numDsaRSignRequestErrors; + /**< Total number of DSA R sign requests that had an error and could + * not be processed. */ + Cpa64U numDsaRSignCompleted; + /**< Total number of DSA R sign operations that completed + * successfully. */ + Cpa64U numDsaRSignCompletedErrors; + /**< Total number of DSA R sign operations that could not be completed + * successfully due to errors. */ + Cpa64U numDsaSSignRequests; + /**< Total number of successful DSA S sign generation requests. 
 */
+    Cpa64U numDsaSSignRequestErrors;
+    /**< Total number of DSA S sign requests that had an error and could
+     * not be processed. */
+    Cpa64U numDsaSSignCompleted;
+    /**< Total number of DSA S sign operations that completed
+     * successfully. */
+    Cpa64U numDsaSSignCompletedErrors;
+    /**< Total number of DSA S sign operations that could not be completed
+     * successfully due to errors. */
+    Cpa64U numDsaRSSignRequests;
+    /**< Total number of successful DSA RS sign generation requests. */
+    Cpa64U numDsaRSSignRequestErrors;
+    /**< Total number of DSA RS sign requests that had an error and could
+     * not be processed. */
+    Cpa64U numDsaRSSignCompleted;
+    /**< Total number of DSA RS sign operations that completed
+     * successfully. */
+    Cpa64U numDsaRSSignCompletedErrors;
+    /**< Total number of DSA RS sign operations that could not be completed
+     * successfully due to errors. */
+    Cpa64U numDsaVerifyRequests;
+    /**< Total number of successful DSA verify generation requests. */
+    Cpa64U numDsaVerifyRequestErrors;
+    /**< Total number of DSA verify requests that had an error and could
+     * not be processed. */
+    Cpa64U numDsaVerifyCompleted;
+    /**< Total number of DSA verify operations that completed
+     * successfully. */
+    Cpa64U numDsaVerifyCompletedErrors;
+    /**< Total number of DSA verify operations that could not be completed
+     * successfully due to errors. */
+    Cpa64U numDsaVerifyFailures;
+    /**< Total number of DSA verify operations that executed successfully
+     * but the outcome of the test was that the verification failed.
+     * Note that this does not indicate an error. */
+} CpaCyDsaStats64;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCyDsa
+ *      Definition of a generic callback function invoked for a number of the
+ *      DSA API functions.
+ *
+ * @description
+ *      This is the prototype for the cpaCyDsaGenCbFunc callback function.
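+ *
+ *      For illustration only, a client callback matching this prototype
+ *      might be sketched as follows (myDsaGenCb, consumeResult and
+ *      releaseOpData are hypothetical client-side functions):
+ *
+ *          static void myDsaGenCb(void *pCallbackTag, CpaStatus status,
+ *                  void *pOpData, CpaBoolean protocolStatus,
+ *                  CpaFlatBuffer *pOut)
+ *          {
+ *              if (status == CPA_STATUS_SUCCESS &&
+ *                  protocolStatus == CPA_TRUE)
+ *                  consumeResult(pOut);
+ *              releaseOpData(pOpData);
+ *          }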
+ * + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. + * @param[in] pOpData Opaque pointer to Operation data supplied in + * request. + * @param[in] protocolStatus The result passes/fails the DSA protocol + * related checks. + * @param[in] pOut Output data from the request. + * + * @retval + * None + * @pre + * Component has been initialized. + * @post + * None + * @note + * None + * @see + * cpaCyDsaGenPParam() + * cpaCyDsaGenGParam() + * cpaCyDsaSignR() + * cpaCyDsaSignS() + * + *****************************************************************************/ +typedef void (*CpaCyDsaGenCbFunc)(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaBoolean protocolStatus, + CpaFlatBuffer *pOut); + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * Definition of callback function invoked for cpaCyDsaSignRS + * requests. + * + * @description + * This is the prototype for the cpaCyDsaSignRS callback function, which + * will provide the DSA message signature r and s parameters. + * + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. + * @param[in] pOpData Operation data pointer supplied in request. 
+ * @param[in] protocolStatus The result passes/fails the DSA protocol + * related checks. + * @param[in] pR DSA message signature r. + * @param[in] pS DSA message signature s. + * + * + * @retval + * None + * @pre + * Component has been initialized. + * @post + * None + * @note + * None + * @see + * cpaCyDsaSignRS() + * + *****************************************************************************/ +typedef void (*CpaCyDsaRSSignCbFunc)(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaBoolean protocolStatus, + CpaFlatBuffer *pR, + CpaFlatBuffer *pS); + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * Definition of callback function invoked for cpaCyDsaVerify + * requests. + * + * @description + * This is the prototype for the cpaCyDsaVerify callback function. + * + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. + * @param[in] pOpData Operation data pointer supplied in request. + * @param[in] verifyStatus The verification passed or failed. + * + * @retval + * None + * @pre + * Component has been initialized. + * @post + * None + * @note + * None + * @see + * cpaCyDsaVerify() + * + *****************************************************************************/ +typedef void (*CpaCyDsaVerifyCbFunc)(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaBoolean verifyStatus); + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * Generate DSA P Parameter. 
+ * + * @description + * + * This function performs FIPS 186-3 Appendix A.1.1.2 steps 11.4 and 11.5, + * and part of step 11.7: + * + * 11.4. c = X mod 2q. + * 11.5. p = X - (c - 1). + * 11.7. Test whether or not p is prime as specified in Appendix C.3. + * [Note that a GCD test against ~1400 small primes is performed + * on p to eliminate ~94% of composites - this is NOT a "robust" + * primality test, as specified in Appendix C.3.] + * + * The protocol status, returned in the callback function as parameter + * protocolStatus (or, in the case of synchronous invocation, in the + * parameter *pProtocolStatus) is used to indicate whether the value p is + * in the right range and has passed the limited primality test. + * + * Specifically, (protocolStatus == CPA_TRUE) means p is in the right range + * and SHOULD be subjected to a robust primality test as specified in + * FIPS 186-3 Appendix C.3 (for example, 40 rounds of Miller-Rabin). + * Meanwhile, (protocolStatus == CPA_FALSE) means p is either composite, + * or p < 2^(L-1), in which case the value of p gets set to zero. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is + * set to a NULL value the function will + * operate synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. 
This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pProtocolStatus The result passes/fails the DSA protocol + * related checks. + * @param[out] pP Candidate for DSA parameter p, p odd and + * 2^(L-1) < p < X + * On invocation the callback function will + * contain this parameter in the pOut parameter. + * + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback of type + * CpaCyDsaPParamGenCbFunc is generated in response to this + * function call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. + * + * @see + * CpaCyDsaPParamGenOpData, + * CpaCyDsaGenCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyDsaGenPParam(const CpaInstanceHandle instanceHandle, + const CpaCyDsaGenCbFunc pCb, + void * pCallbackTag, + const CpaCyDsaPParamGenOpData *pOpData, + CpaBoolean *pProtocolStatus, + CpaFlatBuffer *pP); + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * Generate DSA G Parameter. + * + * @description + * This function performs FIPS 186-3 Appendix A.2.1, steps 1 and 3, + * and part of step 4: + * + * 1. e = (p - 1)/q. + * 3. Set g = h^e mod p. + * 4. If (g = 1), then go to step 2. + * Here, the implementation will check for g == 1, and return + * status accordingly. 
+ * + * + * The protocol status, returned in the callback function as parameter + * protocolStatus (or, in the case of synchronous invocation, in the + * parameter *pProtocolStatus) is used to indicate whether the value g is + * acceptable. + * + * Specifically, (protocolStatus == CPA_TRUE) means g is acceptable. + * Meanwhile, (protocolStatus == CPA_FALSE) means g == 1, so a + * different value of h SHOULD be used to generate another value of g. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to a + * NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pProtocolStatus The result passes/fails the DSA protocol + * related checks. + * @param[out] pG g = h^((p-1)/q) mod p. + * On invocation the callback function will + * contain this parameter in the pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. 
Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback of type + * CpaCyDsaGParamGenCbFunc is generated in response to this + * function call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. + * + * @see + * CpaCyDsaGParamGenOpData, + * CpaCyDsaGenCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyDsaGenGParam(const CpaInstanceHandle instanceHandle, + const CpaCyDsaGenCbFunc pCb, + void *pCallbackTag, + const CpaCyDsaGParamGenOpData *pOpData, + CpaBoolean *pProtocolStatus, + CpaFlatBuffer *pG); + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * Generate DSA Y Parameter. + * + * @description + * + * This function performs modular exponentiation to generate y as + * described in FIPS 186-3 section 4.1: + * y = g^x mod p + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to a + * NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. 
This
+ *                          component takes ownership of the memory until
+ *                          it is returned in the callback.
+ * @param[out] pProtocolStatus   The result passes/fails the DSA protocol
+ *                          related checks.
+ * @param[out] pY           y = g^x mod p.
+ *                          On invocation the callback function will
+ *                          contain this parameter in the pOut parameter.
+ *
+ * @retval CPA_STATUS_SUCCESS       Function executed successfully.
+ * @retval CPA_STATUS_FAIL          Function failed.
+ * @retval CPA_STATUS_RETRY         Resubmit the request.
+ * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in.
+ * @retval CPA_STATUS_RESOURCE      Error related to system resources.
+ * @retval CPA_STATUS_RESTARTING    API implementation is restarting. Resubmit
+ *                                  the request.
+ * @retval CPA_STATUS_UNSUPPORTED   Function is not supported.
+ *
+ * @pre
+ *      The component has been initialized via cpaCyStartInstance function.
+ * @post
+ *      None
+ * @note
+ *      When pCb is non-NULL an asynchronous callback of type
+ *      CpaCyDsaYParamGenCbFunc is generated in response to this
+ *      function call.
+ *      For optimal performance, data pointers SHOULD be 8-byte aligned.
+ *
+ * @see
+ *      CpaCyDsaYParamGenOpData,
+ *      CpaCyDsaGenCbFunc
+ *
+ *****************************************************************************/
+CpaStatus
+cpaCyDsaGenYParam(const CpaInstanceHandle instanceHandle,
+        const CpaCyDsaGenCbFunc pCb,
+        void *pCallbackTag,
+        const CpaCyDsaYParamGenOpData *pOpData,
+        CpaBoolean *pProtocolStatus,
+        CpaFlatBuffer *pY);
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCyDsa
+ *      Generate DSA R Signature.
+ *
+ * @description
+ *      This function generates the DSA R signature as described in FIPS 186-3
+ *      Section 4.6:
+ *          r = (g^k mod p) mod q
+ *
+ *      The protocol status, returned in the callback function as parameter
+ *      protocolStatus (or, in the case of synchronous invocation, in the
+ *      parameter *pProtocolStatus) is used to indicate whether the value r == 0.
+ * + * Specifically, (protocolStatus == CPA_TRUE) means r != 0, while + * (protocolStatus == CPA_FALSE) means r == 0. + * + * Generation of signature r does not depend on the content of the message + * being signed, so this operation can be done in advance for different + * values of k. Then once each message becomes available only the + * signature s needs to be generated. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to a + * NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pProtocolStatus The result passes/fails the DSA protocol + * related checks. + * @param[out] pR DSA message signature r. + * On invocation the callback function will + * contain this parameter in the pOut parameter. + * + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. 
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback of type + * CpaCyDsaRSignCbFunc is generated in response to this function + * call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. + * + * @see + * CpaCyDsaRSignOpData, + * CpaCyDsaGenCbFunc, + * cpaCyDsaSignS(), + * cpaCyDsaSignRS() + * + *****************************************************************************/ +CpaStatus +cpaCyDsaSignR(const CpaInstanceHandle instanceHandle, + const CpaCyDsaGenCbFunc pCb, + void *pCallbackTag, + const CpaCyDsaRSignOpData *pOpData, + CpaBoolean *pProtocolStatus, + CpaFlatBuffer *pR); + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * Generate DSA S Signature. + * + * @description + * This function generates the DSA S signature as described in FIPS 186-3 + * Section 4.6: + * s = (k^-1(z + xr)) mod q + * + * Here, z = the leftmost min(N, outlen) bits of Hash(M). This function + * does not perform the SHA digest; z is computed by the caller and + * passed as a parameter in the pOpData field. + * + * The protocol status, returned in the callback function as parameter + * protocolStatus (or, in the case of synchronous invocation, in the + * parameter *pProtocolStatus) is used to indicate whether the value s == 0. + * + * Specifically, (protocolStatus == CPA_TRUE) means s != 0, while + * (protocolStatus == CPA_FALSE) means s == 0. + * + * If signature r has been generated in advance, then this function can be + * used to generate the signature s once the message becomes available. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. 
It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to a + * NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pProtocolStatus The result passes/fails the DSA protocol + * related checks. + * @param[out] pS DSA message signature s. + * On invocation the callback function will + * contain this parameter in the pOut parameter. + * + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback of type + * CpaCyDsaSSignCbFunc is generated in response to this function + * call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. 
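+ *
+ *      For illustration only, a synchronous invocation might be sketched
+ *      as follows (instanceHandle, opData and s are assumed to have been
+ *      set up by the caller beforehand):
+ *
+ *          CpaBoolean protocolStatus = CPA_FALSE;
+ *          CpaStatus status = cpaCyDsaSignS(instanceHandle, NULL, NULL,
+ *                  &opData, &protocolStatus, &s);
+ *
+ *      On return, s holds the signature only if status is
+ *      CPA_STATUS_SUCCESS and protocolStatus is CPA_TRUE (i.e. s != 0).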
+ * + * @see + * CpaCyDsaSSignOpData, + * CpaCyDsaGenCbFunc, + * cpaCyDsaSignR(), + * cpaCyDsaSignRS() + * + *****************************************************************************/ +CpaStatus +cpaCyDsaSignS(const CpaInstanceHandle instanceHandle, + const CpaCyDsaGenCbFunc pCb, + void *pCallbackTag, + const CpaCyDsaSSignOpData *pOpData, + CpaBoolean *pProtocolStatus, + CpaFlatBuffer *pS); + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * Generate DSA R and S Signatures. + * + * @description + * This function generates the DSA R and S signatures as described in + * FIPS 186-3 Section 4.6: + * + * r = (g^k mod p) mod q + * s = (k^-1(z + xr)) mod q + * + * Here, z = the leftmost min(N, outlen) bits of Hash(M). This function + * does not perform the SHA digest; z is computed by the caller and + * passed as a parameter in the pOpData field. + * + * The protocol status, returned in the callback function as parameter + * protocolStatus (or, in the case of synchronous invocation, in the + * parameter *pProtocolStatus) is used to indicate whether either of + * the values r or s are zero. + * + * Specifically, (protocolStatus == CPA_TRUE) means neither is zero (i.e. + * (r != 0) && (s != 0)), while (protocolStatus == CPA_FALSE) means that at + * least one of r or s is zero (i.e. (r == 0) || (s == 0)). + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to + * a NULL value the function will operate + * synchronously. 
+ * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pProtocolStatus The result passes/fails the DSA protocol + * related checks. + * @param[out] pR DSA message signature r. + * @param[out] pS DSA message signature s. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback of type + * CpaCyDsaRSSignCbFunc is generated in response to this function + * call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. + * + * @see + * CpaCyDsaRSSignOpData, + * CpaCyDsaRSSignCbFunc, + * cpaCyDsaSignR(), + * cpaCyDsaSignS() + * + *****************************************************************************/ +CpaStatus +cpaCyDsaSignRS(const CpaInstanceHandle instanceHandle, + const CpaCyDsaRSSignCbFunc pCb, + void *pCallbackTag, + const CpaCyDsaRSSignOpData *pOpData, + CpaBoolean *pProtocolStatus, + CpaFlatBuffer *pR, + CpaFlatBuffer *pS); + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * Verify DSA R and S signatures. 
+ * + * @description + * This function performs FIPS 186-3 Section 4.7: + * w = (s')^-1 mod q + * u1 = (zw) mod q + * u2 = ((r')w) mod q + * v = (((g)^u1 (y)^u2) mod p) mod q + * + * Here, z = the leftmost min(N, outlen) bits of Hash(M'). This function + * does not perform the SHA digest; z is computed by the caller and + * passed as a parameter in the pOpData field. + * + * A response status of ok (verifyStatus == CPA_TRUE) means v = r'. + * A response status of not ok (verifyStatus == CPA_FALSE) means v != r'. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to + * a NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pVerifyStatus The verification passed or failed. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. 
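The four equations in the description above can be checked against a toy parameter set. The sketch below is illustrative only: it does not touch the QAT API, uses insecurely small values (p = 23, q = 11, g = 4, all far below real DSA sizes), and computes s^-1 mod q via Fermat's little theorem rather than a production inverse routine.

```c
#include <stdint.h>

/* Square-and-multiply modular exponentiation; the toy values used below
 * stay well inside 64-bit range, so no big-number library is needed. */
static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

/* FIPS 186-3 Section 4.7 as quoted above: returns 1 iff v == r.
 * s^-1 mod q is computed as s^(q-2) mod q, valid because q is prime. */
static int dsa_verify_toy(uint64_t p, uint64_t q, uint64_t g, uint64_t y,
                          uint64_t z, uint64_t r, uint64_t s)
{
    uint64_t w  = modpow(s, q - 2, q);        /* w  = (s')^-1 mod q      */
    uint64_t u1 = (z * w) % q;                /* u1 = (z*w) mod q        */
    uint64_t u2 = (r * w) % q;                /* u2 = (r'*w) mod q       */
    uint64_t v  = (modpow(g, u1, p) *
                   modpow(y, u2, p)) % p % q; /* v = (g^u1 y^u2 mod p) mod q */
    return v == r;
}
```

With p = 23, q = 11, g = 4 (an element of order 11 mod 23), private key x = 7 (so y = g^x mod p = 8), hash z = 5 and nonce k = 3, signing gives (r, s) = (7, 7); `dsa_verify_toy(23, 11, 4, 8, 5, 7, 7)` accepts that signature and rejects a corrupted s.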
+ * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback of type + * CpaCyDsaVerifyCbFunc is generated in response to this function + * call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. + * + * @see + * CpaCyDsaVerifyOpData, + * CpaCyDsaVerifyCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyDsaVerify(const CpaInstanceHandle instanceHandle, + const CpaCyDsaVerifyCbFunc pCb, + void *pCallbackTag, + const CpaCyDsaVerifyOpData *pOpData, + CpaBoolean *pVerifyStatus); + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * Query statistics for a specific DSA instance. + * + * @deprecated + * As of v1.3 of the Crypto API, this function has been deprecated, + * replaced by @ref cpaCyDsaQueryStats64(). + * + * @description + * This function will query a specific instance of the DSA implementation + * for statistics. The user MUST allocate the CpaCyDsaStats structure + * and pass the reference to that structure into this function call. This + * function writes the statistic results into the passed in + * CpaCyDsaStats structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pDsaStats Pointer to memory into which the statistics + * will be written. 
+ * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * @see + * CpaCyDsaStats + *****************************************************************************/ +CpaStatus CPA_DEPRECATED +cpaCyDsaQueryStats(const CpaInstanceHandle instanceHandle, + struct _CpaCyDsaStats *pDsaStats); + +/** + ***************************************************************************** + * @ingroup cpaCyDsa + * Query 64-bit statistics for a specific DSA instance. + * + * @description + * This function will query a specific instance of the DSA implementation + * for 64-bit statistics. The user MUST allocate the CpaCyDsaStats64 + * structure and pass the reference to that structure into this function. + * This function writes the statistic results into the passed in + * CpaCyDsaStats64 structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pDsaStats Pointer to memory into which the statistics + * will be written. 
+ * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * @see + * CpaCyDsaStats + *****************************************************************************/ +CpaStatus +cpaCyDsaQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyDsaStats64 *pDsaStats); + +/*****************************************************************************/ + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /* CPA_CY_DSA_H */ Index: sys/dev/qat/qat_api/include/lac/cpa_cy_ec.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_ec.h @@ -0,0 +1,766 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_ec.h + * + * @defgroup cpaCyEc Elliptic Curve (EC) API + * + * @ingroup cpaCy + * + * @description + * These functions specify the API for Public Key Encryption + * (Cryptography) Elliptic Curve (EC) operations. 
+ * + * All implementations will support at least the following: + * + * - "NIST RECOMMENDED ELLIPTIC CURVES FOR FEDERAL GOVERNMENT USE" + * as defined by + * http://csrc.nist.gov/groups/ST/toolkit/documents/dss/NISTReCur.pdf + * + * - Random curves where the max(log2(q), log2(n) + log2(h)) <= 512 + * where q is the modulus, n is the order of the curve and h is the + * cofactor + * + * For Montgomery and Edwards 25519 and 448 elliptic curves, + * the following operations are supported: + * 1. Montgomery 25519 Curve | scalar point Multiplication + * Input: Montgomery affine coordinate X of point P + * Scalar k + * Output: Montgomery affine coordinate X of point [k]P + * Decode: Scalar k always decoded by implementation + * + * 2. Montgomery 25519 Curve | generator point Multiplication + * Input: Scalar k + * Output: Montgomery affine coordinate X of point [k]G + * Decode: Scalar k always decoded by implementation + * + * 3. Twisted Edwards 25519 Curve | scalar point Multiplication + * Input: Twisted Edwards affine coordinate X of point P + * Twisted Edwards affine coordinate Y of point P + * Scalar k + * Output: Twisted Edwards affine coordinate X of point [k]P + * Twisted Edwards affine coordinate Y of point [k]P + * Decode: Caller must specify if decoding is required + * + * 4. Twisted Edwards 25519 Curve | generator point Multiplication + * Input: Scalar k + * Output: Twisted Edwards affine coordinate X of point [k]G + * Twisted Edwards affine coordinate Y of point [k]G + * Decode: Caller must specify if decoding is required + * + * 5. Montgomery 448 Curve | scalar point Multiplication + * Input: Montgomery affine coordinate X of point P + * Scalar k + * Output: Montgomery affine coordinate X of point [k]P + * Decode: Scalar k always decoded by implementation + * + * 6. Montgomery 448 Curve | generator point Multiplication + * Input: Scalar k + * Output: Montgomery affine coordinate X of point [k]G + * Decode: Scalar k always decoded by implementation + * + * 7. 
Edwards 448 Curve | scalar point Multiplication + * Input: Edwards affine coordinate X of point P + * Edwards affine coordinate Y of point P + * Scalar k + * Output: Edwards affine coordinate X of point [k]P + * Edwards affine coordinate Y of point [k]P + * Decode: Caller must specify if decoding is required + * + * 8. Edwards 448 Curve | generator point Multiplication + * Input: Scalar k + * Output: Edwards affine coordinate X of point [k]G + * Edwards affine coordinate Y of point [k]G + * Decode: Caller must specify if decoding is required + * + * @note + * Large numbers are represented on the QuickAssist API as described + * in the Large Number API (@ref cpaCyLn). + * + * In addition, the bit length of large numbers passed to the API + * MUST NOT exceed 576 bits for Elliptic Curve operations. + *****************************************************************************/ + +#ifndef CPA_CY_EC_H_ +#define CPA_CY_EC_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_cy_common.h" + +/** + ***************************************************************************** + * @ingroup cpaCyEc + * Field types for Elliptic Curve + + * @description + * As defined by FIPS-186-3, for each cryptovariable length, there are + * two kinds of fields. + *
    + *
+ * <ul>
+ *   <li>A prime field is the field GF(p) which contains a prime number
+ * p of elements. The elements of this field are the integers modulo
+ * p, and the field arithmetic is implemented in terms of the
+ * arithmetic of integers modulo p.</li>
+ *
+ *   <li>A binary field is the field GF(2^m) which contains 2^m elements
+ * for some m (called the degree of the field). The elements of
+ * this field are the bit strings of length m, and the field
+ * arithmetic is implemented in terms of operations on the bits.</li>
+ * </ul>
+ *****************************************************************************/ +typedef enum _CpaCyEcFieldType +{ + CPA_CY_EC_FIELD_TYPE_PRIME = 1, + /**< A prime field, GF(p) */ + CPA_CY_EC_FIELD_TYPE_BINARY, + /**< A binary field, GF(2^m) */ +} CpaCyEcFieldType; + +/** + ***************************************************************************** + * @ingroup cpaCyEc + * Curve types for Elliptic Curves defined in RFC#7748 + + * @description + * As defined by RFC 7748, there are four elliptic curves in this + * group. The Montgomery curves are denoted curve25519 and curve448, + * and the birationally equivalent Twisted Edwards curves are denoted + * edwards25519 and edwards448 + * + *****************************************************************************/ +typedef enum _CpaCyEcMontEdwdsCurveType +{ + CPA_CY_EC_MONTEDWDS_CURVE25519_TYPE = 1, + /**< Montgomery 25519 curve */ + CPA_CY_EC_MONTEDWDS_ED25519_TYPE, + /**< Twisted Edwards 25519 curve */ + CPA_CY_EC_MONTEDWDS_CURVE448_TYPE, + /**< Montgomery 448 curve */ + CPA_CY_EC_MONTEDWDS_ED448_TYPE, + /**< Twisted Edwards 448 curve */ +} CpaCyEcMontEdwdsCurveType; + +/** + ***************************************************************************** + * @file cpa_cy_ec.h + * @ingroup cpaCyEc + * EC Point Multiplication Operation Data. + * + * @description + * This structure contains the operation data for the cpaCyEcPointMultiply + * function. The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * For optimal performance all data buffers SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. a.pData[0] = MSB. 
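The q, a and b fields above carry the curve parameters; for a prime field they define the short Weierstrass equation y^2 = x^3 + ax + b (mod q), the same equation the point-verification path checks. A minimal GF(p) membership sketch, independent of the QAT API and using toy-sized illustrative values (nowhere near real key sizes):

```c
#include <stdint.h>

/* Returns 1 iff (x, y) satisfies y^2 = x^3 + a*x + b (mod p), i.e. the
 * candidate point lies on the prime-field Weierstrass curve. Toy-sized
 * inputs only: every intermediate product must fit in 64 bits. */
static int ec_point_on_curve_gfp(uint64_t x, uint64_t y, uint64_t a,
                                 uint64_t b, uint64_t p)
{
    uint64_t lhs = (y * y) % p;
    uint64_t rhs = ((x * x % p) * x % p + a * x % p + b) % p;
    return lhs == rhs;
}
```

On the textbook curve y^2 = x^3 + 2x + 2 over GF(17), the point (5, 1) satisfies the equation while (5, 2) does not.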
+ * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyEcPointMultiply + * function, and before it has been returned in the callback, undefined + * behavior will result. + * + * @see + * cpaCyEcPointMultiply() + * + *****************************************************************************/ +typedef struct _CpaCyEcPointMultiplyOpData { + CpaFlatBuffer k; + /**< scalar multiplier (k > 0 and k < n) */ + CpaFlatBuffer xg; + /**< x coordinate of curve point */ + CpaFlatBuffer yg; + /**< y coordinate of curve point */ + CpaFlatBuffer a; + /**< a elliptic curve coefficient */ + CpaFlatBuffer b; + /**< b elliptic curve coefficient */ + CpaFlatBuffer q; + /**< prime modulus or irreducible polynomial over GF(2^m)*/ + CpaFlatBuffer h; + /**< cofactor of the operation. + * If the cofactor is NOT required then set the cofactor to 1 or the + * data pointer of the Flat Buffer to NULL. */ + CpaCyEcFieldType fieldType; + /**< field type for the operation */ +} CpaCyEcPointMultiplyOpData; + + +/** + ***************************************************************************** + * @ingroup cpaCyEc + * EC Point Verification Operation Data. + * + * @description + * This structure contains the operation data for the cpaCyEcPointVerify + * function. The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * For optimal performance all data buffers SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. a.pData[0] = MSB. 
+ * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyEcPointVerify + * function, and before it has been returned in the callback, undefined + * behavior will result. + * + * @see + * cpaCyEcPointVerify() + * + *****************************************************************************/ +typedef struct _CpaCyEcPointVerifyOpData { + CpaFlatBuffer xq; + /**< x coordinate candidate point */ + CpaFlatBuffer yq; + /**< y coordinate candidate point */ + CpaFlatBuffer q; + /**< prime modulus or irreducible polynomial over GF(2^m) */ + CpaFlatBuffer a; + /**< a elliptic curve coefficient */ + CpaFlatBuffer b; + /**< b elliptic curve coefficient */ + + CpaCyEcFieldType fieldType; + /**< field type for the operation */ +} CpaCyEcPointVerifyOpData; + +/** + ***************************************************************************** + * @file cpa_cy_ec.h + * @ingroup cpaCyEc + * EC Point Multiplication Operation Data for Edwards or + * Montgomery curves as specified in RFC#7748. + * + * @description + * This structure contains the operation data for the + * cpaCyEcMontEdwdsPointMultiply function. + * The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * For optimal performance all data buffers SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. a.pData[0] = MSB. + * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyEcPointMultiply + * function, and before it has been returned in the callback, undefined + * behavior will result. 
+ * + * All buffers in this structure need to be: + * - 32 bytes in size for 25519 curves + * - 64 bytes in size for 448 curves + * + * @see + * cpaCyEcMontEdwdsPointMultiply() + * + *****************************************************************************/ +typedef struct _CpaCyEcMontEdwdsPointMultiplyOpData { + CpaCyEcMontEdwdsCurveType curveType; + /**< curve type for the operation */ + CpaBoolean generator; + /**< True if the operation is a generator multiplication (kG) + * False if it is a variable point multiplication (kP). */ + CpaFlatBuffer k; + /**< k or generator for the operation */ + CpaFlatBuffer x; + /**< x value. Used in scalar variable point multiplication operations. + * Not required if the generator is True. Must be NULL if not required. + * The size of the buffer MUST be 32B for 25519 curves and 64B for 448 + * curves */ + CpaFlatBuffer y; + /**< y value. Used in variable point multiplication operations. + * Not required for curves defined only on scalar operations. + * Not required if the generator is True. + * Must be NULL if not required. + * The size of the buffer MUST be 32B for 25519 curves and 64B for 448 + * curves */ +} CpaCyEcMontEdwdsPointMultiplyOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyEc + * Cryptographic EC Statistics. + * + * @description + * This structure contains statistics on the Cryptographic EC + * operations. Statistics are set to zero when the component is + * initialized, and are collected per instance. + * + ****************************************************************************/ +typedef struct _CpaCyEcStats64 { + Cpa64U numEcPointMultiplyRequests; + /**< Total number of EC Point Multiplication operation requests. */ + Cpa64U numEcPointMultiplyRequestErrors; + /**< Total number of EC Point Multiplication operation requests that had an + * error and could not be processed. 
*/ + Cpa64U numEcPointMultiplyCompleted; + /**< Total number of EC Point Multiplication operation requests that + * completed successfully. */ + Cpa64U numEcPointMultiplyCompletedError; + /**< Total number of EC Point Multiplication operation requests that could + * not be completed successfully due to errors. */ + Cpa64U numEcPointMultiplyCompletedOutputInvalid; + /**< Total number of EC Point Multiplication operation requests that could + * not be completed successfully due to an invalid output. + * Note that this does not indicate an error. */ + Cpa64U numEcPointVerifyRequests; + /**< Total number of EC Point Verification operation requests. */ + Cpa64U numEcPointVerifyRequestErrors; + /**< Total number of EC Point Verification operation requests that had an + * error and could not be processed. */ + Cpa64U numEcPointVerifyCompleted; + /**< Total number of EC Point Verification operation requests that completed + * successfully. */ + Cpa64U numEcPointVerifyCompletedErrors; + /**< Total number of EC Point Verification operation requests that could + * not be completed successfully due to errors. */ + Cpa64U numEcPointVerifyCompletedOutputInvalid; + /**< Total number of EC Point Verification operation requests that had an + * invalid output. Note that this does not indicate an error. */ +} CpaCyEcStats64; + + +/** + ***************************************************************************** + * @ingroup cpaCyEc + * Definition of callback function invoked for cpaCyEcPointMultiply + * requests. + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. 
+ * @param[in] pOpData Opaque pointer to Operation data supplied in + * request. + * @param[in] multiplyStatus Status of the point multiplication. + * @param[in] pXk x coordinate of resultant EC point. + * @param[in] pYk y coordinate of resultant EC point. + * + * @retval + * None + * @pre + * Component has been initialized. + * @post + * None + * @note + * None + * @see + * cpaCyEcPointMultiply() + * + *****************************************************************************/ +typedef void (*CpaCyEcPointMultiplyCbFunc)(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaBoolean multiplyStatus, + CpaFlatBuffer *pXk, + CpaFlatBuffer *pYk); + + +/** + ***************************************************************************** + * @ingroup cpaCyEc + * Definition of callback function invoked for cpaCyEcPointVerify + * requests. + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. + * @param[in] pOpData Operation data pointer supplied in request. + * @param[in] verifyStatus Set to CPA_FALSE if the point is NOT on the + * curve or at infinity. Set to CPA_TRUE if the + * point is on the curve. + * + * @return + * None + * @pre + * Component has been initialized. + * @post + * None + * @note + * None + * @see + * cpaCyEcPointVerify() + * + *****************************************************************************/ +typedef void (*CpaCyEcPointVerifyCbFunc)(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaBoolean verifyStatus); + + +/** + ***************************************************************************** + * @ingroup cpaCyEc + * Perform EC Point Multiplication. 
+ * + * @description + * This function performs Elliptic Curve Point Multiplication as per + * ANSI X9.63 Annex D.3.2. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to + * a NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pMultiplyStatus In synchronous mode, the multiply output is + * valid (CPA_TRUE) or the output is invalid + * (CPA_FALSE). + * @param[out] pXk Pointer to xk flat buffer. + * @param[out] pYk Pointer to yk flat buffer. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback of type + * CpaCyEcPointMultiplyCbFunc is generated in response to this function + * call. 
+ * For optimal performance, data pointers SHOULD be 8-byte aligned. + * + * @see + * CpaCyEcPointMultiplyOpData, + * CpaCyEcPointMultiplyCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyEcPointMultiply(const CpaInstanceHandle instanceHandle, + const CpaCyEcPointMultiplyCbFunc pCb, + void *pCallbackTag, + const CpaCyEcPointMultiplyOpData *pOpData, + CpaBoolean *pMultiplyStatus, + CpaFlatBuffer *pXk, + CpaFlatBuffer *pYk); + + +/** + ***************************************************************************** + * @ingroup cpaCyEc + * Verify that a point is on an elliptic curve. + * + * @description + * This function performs Elliptic Curve Point Verification, as per + * steps a, b and c of ANSI X9.62 Annex A.4.2. (To perform the final + * step d, the user can call @ref cpaCyEcPointMultiply.) + * + * This function checks if the specified point satisfies the + * Weierstrass equation for an Elliptic Curve. + * + * For GF(p): + * y^2 = (x^3 + ax + b) mod p + * For GF(2^m): + * y^2 + xy = x^3 + ax^2 + b mod p + * where p is the irreducible polynomial over GF(2^m) + * + * Use this function to verify a point is in the correct range and is + * NOT the point at infinity. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to + * a NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. 
+ * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pVerifyStatus In synchronous mode, set to CPA_FALSE if the + * point is NOT on the curve or at infinity. Set + * to CPA_TRUE if the point is on the curve. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback of type + * CpaCyEcPointVerifyCbFunc is generated in response to this function + * call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. + * + * @see + * CpaCyEcPointVerifyOpData, + * CpaCyEcPointVerifyCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyEcPointVerify(const CpaInstanceHandle instanceHandle, + const CpaCyEcPointVerifyCbFunc pCb, + void *pCallbackTag, + const CpaCyEcPointVerifyOpData *pOpData, + CpaBoolean *pVerifyStatus); + +/** + ***************************************************************************** + * @file cpa_cy_ec.h + * @ingroup cpaCyEc + * Perform EC Point Multiplication on an Edwards or Montgomery curve as + * defined in RFC#7748. + * + * @description + * This function performs Elliptic Curve Point Multiplication as per + * RFC#7748 + * + * @context + * When called as an asynchronous function it cannot sleep. 
It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to + * a NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pMultiplyStatus In synchronous mode, the multiply output is + * valid (CPA_TRUE) or the output is invalid + * (CPA_FALSE). + * @param[out] pXk Pointer to xk flat buffer. + * @param[out] pYk Pointer to yk flat buffer. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback of type + * CpaCyEcPointMultiplyCbFunc is generated in response to this function + * call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. 
+ * + * @see + * CpaCyEcMontEdwdsPointMultiplyOpData, + * CpaCyEcMontEdwdsPointMultiplyCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyEcMontEdwdsPointMultiply(const CpaInstanceHandle instanceHandle, + const CpaCyEcPointMultiplyCbFunc pCb, + void *pCallbackTag, + const CpaCyEcMontEdwdsPointMultiplyOpData *pOpData, + CpaBoolean *pMultiplyStatus, + CpaFlatBuffer *pXk, + CpaFlatBuffer *pYk); + +/** + ***************************************************************************** + * @ingroup cpaCyEc + * Query statistics for a specific EC instance. + * + * @description + * This function will query a specific instance of the EC implementation + * for statistics. The user MUST allocate the CpaCyEcStats64 structure + * and pass the reference to that structure into this function call. This + * function writes the statistic results into the passed in + * CpaCyEcStats64 structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pEcStats Pointer to memory into which the statistics + * will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. 
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * @see + * CpaCyEcStats64 + *****************************************************************************/ +CpaStatus +cpaCyEcQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyEcStats64 *pEcStats); + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /*CPA_CY_EC_H_*/ Index: sys/dev/qat/qat_api/include/lac/cpa_cy_ecdh.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_ecdh.h @@ -0,0 +1,358 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_ecdh.h + * + * @defgroup cpaCyEcdh Elliptic Curve Diffie-Hellman (ECDH) API + * + * @ingroup cpaCy + * + * @description + * These functions specify the API for Public Key Encryption + * (Cryptography) Elliptic Curve Diffie-Hellman (ECDH) operations. + * + * @note + * Large numbers are represented on the QuickAssist API as described + * in the Large Number API (@ref cpaCyLn). + * + * In addition, the bit length of large numbers passed to the API + * MUST NOT exceed 576 bits for Elliptic Curve operations. + *****************************************************************************/ + +#ifndef CPA_CY_ECDH_H_ +#define CPA_CY_ECDH_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_cy_common.h" +#include "cpa_cy_ec.h" + +/** + ***************************************************************************** + * @ingroup cpaCyEcdh + * ECDH Point Multiplication Operation Data. + * + * @description + * This structure contains the operation data for the + * cpaCyEcdhPointMultiply function. 
The client MUST allocate the memory + * for this structure and the items pointed to by this structure. When + * the structure is passed into the function, ownership of the memory + * passes to the function. Ownership of the memory returns to the client + * when this structure is returned in the callback function. + * + * For optimal performance all data buffers SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. a.pData[0] = MSB. + * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyEcdhPointMultiply + * function, and before it has been returned in the callback, undefined + * behavior will result. + * + * @see + * cpaCyEcdhPointMultiply() + * + *****************************************************************************/ +typedef struct _CpaCyEcdhPointMultiplyOpData { + CpaFlatBuffer k; + /**< scalar multiplier (k > 0 and k < n) */ + CpaFlatBuffer xg; + /**< x coordinate of curve point */ + CpaFlatBuffer yg; + /**< y coordinate of curve point */ + CpaFlatBuffer a; + /**< a equation coefficient */ + CpaFlatBuffer b; + /**< b equation coefficient */ + CpaFlatBuffer q; + /**< prime modulus or irreducible polynomial over GF(2^r) */ + CpaFlatBuffer h; + /**< cofactor of the operation. + * If the cofactor is NOT required then set the cofactor to 1 or the + * data pointer of the Flat Buffer to NULL. + * There are some restrictions on the value of the cofactor. + * Implementations of this API will support at least the following: + *
+     * <ul>
+     *   <li>NIST standard curves and their cofactors (1, 2 and 4)</li>
+     *
+     *   <li>Random curves where max(log2(p), log2(n)+log2(h)) <= 512, where
+     *   p is the modulus, n is the order of the curve and h is the cofactor
+     *   </li>
+     * </ul>
+ */ + + CpaCyEcFieldType fieldType; + /**< field type for the operation */ + CpaBoolean pointVerify; + /**< set to CPA_TRUE to do a verification before the multiplication */ +} CpaCyEcdhPointMultiplyOpData; + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdh + * Cryptographic ECDH Statistics. + * @description + * This structure contains statistics on the Cryptographic ECDH + * operations. Statistics are set to zero when the component is + * initialized, and are collected per instance. + * + ****************************************************************************/ +typedef struct _CpaCyEcdhStats64 { + Cpa64U numEcdhPointMultiplyRequests; + /**< Total number of ECDH Point Multiplication operation requests. */ + Cpa64U numEcdhPointMultiplyRequestErrors; + /**< Total number of ECDH Point Multiplication operation requests that had + * an error and could not be processed. */ + Cpa64U numEcdhPointMultiplyCompleted; + /**< Total number of ECDH Point Multiplication operation requests that + * completed successfully. */ + Cpa64U numEcdhPointMultiplyCompletedError; + /**< Total number of ECDH Point Multiplication operation requests that could + * not be completed successfully due to errors. */ + Cpa64U numEcdhRequestCompletedOutputInvalid; + /**< Total number of ECDH Point Multiplication or Point Verify operation + * requests that could not be completed successfully due to an invalid + * output. + * Note that this does not indicate an error. */ +} CpaCyEcdhStats64; + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdh + * Definition of callback function invoked for cpaCyEcdhPointMultiply + * requests. + * + * @description + * This is the prototype for the CpaCyEcdhPointMultiplyCbFunc callback + * function + * + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. 
+ * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. + * @param[in] pOpData Opaque pointer to Operation data supplied in + * request. + * @param[in] pXk Output x coordinate from the request. + * @param[in] pYk Output y coordinate from the request. + * @param[in] multiplyStatus Status of the point multiplication and the + * verification when the pointVerify bit is set + * in the CpaCyEcdhPointMultiplyOpData structure. + * + * @retval + * None + * @pre + * Component has been initialized. + * @post + * None + * @note + * None + * @see + * cpaCyEcdhPointMultiply() + * + *****************************************************************************/ +typedef void (*CpaCyEcdhPointMultiplyCbFunc)(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaBoolean multiplyStatus, + CpaFlatBuffer *pXk, + CpaFlatBuffer *pYk); + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdh + * ECDH Point Multiplication. + * + * @description + * This function performs ECDH Point Multiplication as defined in + * ANSI X9.63 2001 section 5.4 + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to + * a NULL value the function will operate + * synchronously. 
+ * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pMultiplyStatus In synchronous mode, the status of the point + * multiplication and the verification when the + * pointVerify bit is set in the + * CpaCyEcdhPointMultiplyOpData structure. Set to + * CPA_FALSE if the point is NOT on the curve or + * at infinity. Set to CPA_TRUE if the point is + * on the curve. + * @param[out] pXk Pointer to x coordinate flat buffer. + * @param[out] pYk Pointer to y coordinate flat buffer. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback of type + * CpaCyEcdhPointMultiplyCbFunc is generated in response to this function + * call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. 
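The 8-byte alignment that the performance notes recommend can be obtained with C11 `aligned_alloc`. The CpaFlatBuffer below is a simplified stand-in for the real CPA type, and `flatbuf_alloc_aligned` is a hypothetical helper, not part of the API.

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint32_t dataLenInBytes;
    uint8_t *pData;
} CpaFlatBuffer; /* simplified stand-in for the CPA flat buffer type */

/* Allocate flat-buffer storage with 8-byte alignment. The size passed to
 * aligned_alloc is rounded up to a multiple of the alignment, as C11
 * requires; dataLenInBytes still records the caller's requested length. */
static int flatbuf_alloc_aligned(CpaFlatBuffer *buf, uint32_t len)
{
    size_t rounded = ((size_t)len + 7u) & ~(size_t)7u;
    buf->pData = aligned_alloc(8, rounded);
    if (buf->pData == NULL)
        return -1;
    buf->dataLenInBytes = len;
    return 0;
}
```

The alignment is a SHOULD, not a MUST: misaligned buffers are accepted but may cost performance.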
+ * + * @see + * CpaCyEcdhPointMultiplyOpData, + * CpaCyEcdhPointMultiplyCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyEcdhPointMultiply(const CpaInstanceHandle instanceHandle, + const CpaCyEcdhPointMultiplyCbFunc pCb, + void *pCallbackTag, + const CpaCyEcdhPointMultiplyOpData *pOpData, + CpaBoolean *pMultiplyStatus, + CpaFlatBuffer *pXk, + CpaFlatBuffer *pYk); + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdh + * Query statistics for a specific ECDH instance. + * + * @description + * This function will query a specific instance of the ECDH implementation + * for statistics. The user MUST allocate the CpaCyEcdhStats64 structure + * and pass the reference to that structure into this function call. This + * function writes the statistic results into the passed in + * CpaCyEcdhStats64 structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pEcdhStats Pointer to memory into which the statistics + * will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. 
+ * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * @see + * CpaCyEcdhStats64 + *****************************************************************************/ +CpaStatus +cpaCyEcdhQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyEcdhStats64 *pEcdhStats); + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /*CPA_CY_ECDH_H_*/ Index: sys/dev/qat/qat_api/include/lac/cpa_cy_ecdsa.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_ecdsa.h @@ -0,0 +1,839 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_ecdsa.h + * + * @defgroup cpaCyEcdsa Elliptic Curve Digital Signature Algorithm (ECDSA) API + * + * @ingroup cpaCy + * + * @description + * These functions specify the API for Public Key Encryption + * (Cryptography) Elliptic Curve Digital Signature Algorithm (ECDSA) + * operations. + * + * @note + * Large numbers are represented on the QuickAssist API as described + * in the Large Number API (@ref cpaCyLn). + * + * In addition, the bit length of large numbers passed to the API + * MUST NOT exceed 576 bits for Elliptic Curve operations. + *****************************************************************************/ + +#ifndef CPA_CY_ECDSA_H_ +#define CPA_CY_ECDSA_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_cy_common.h" +#include "cpa_cy_ec.h" + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * ECDSA Sign R Operation Data. + * @description + * This structure contains the operation data for the cpaCyEcdsaSignR + * function. 
The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * For optimal performance all data buffers SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. a.pData[0] = MSB. + * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyEcdsaSignR + * function, and before it has been returned in the callback, undefined + * behavior will result. + * + * @see + * cpaCyEcdsaSignR() + * + *****************************************************************************/ +typedef struct _CpaCyEcdsaSignROpData { + CpaFlatBuffer xg; + /**< x coordinate of base point G */ + CpaFlatBuffer yg; + /**< y coordinate of base point G */ + CpaFlatBuffer n; + /**< order of the base point G, which shall be prime */ + CpaFlatBuffer q; + /**< prime modulus or irreducible polynomial over GF(2^r) */ + CpaFlatBuffer a; + /**< a elliptic curve coefficient */ + CpaFlatBuffer b; + /**< b elliptic curve coefficient */ + CpaFlatBuffer k; + /**< random value (k > 0 and k < n) */ + + CpaCyEcFieldType fieldType; + /**< field type for the operation */ +} CpaCyEcdsaSignROpData; + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * ECDSA Sign S Operation Data. + * @description + * This structure contains the operation data for the cpaCyEcdsaSignS + * function. The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. 
Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * For optimal performance all data buffers SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. a.pData[0] = MSB. + * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyEcdsaSignS + * function, and before it has been returned in the callback, undefined + * behavior will result. + * + * @see + * cpaCyEcdsaSignS() + * + *****************************************************************************/ +typedef struct _CpaCyEcdsaSignSOpData { + CpaFlatBuffer m; + /**< digest of the message to be signed */ + CpaFlatBuffer d; + /**< private key */ + CpaFlatBuffer r; + /**< Ecdsa r signature value */ + CpaFlatBuffer k; + /**< random value (k > 0 and k < n) */ + CpaFlatBuffer n; + /**< order of the base point G, which shall be prime */ + CpaCyEcFieldType fieldType; + /**< field type for the operation */ +} CpaCyEcdsaSignSOpData; + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * ECDSA Sign R & S Operation Data. + * @description + * This structure contains the operation data for the cpaCyEcdsaSignRS + * function. The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * For optimal performance all data buffers SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. a.pData[0] = MSB. 
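The "Most Significant Byte first" requirement above (a.pData[0] = MSB) means every large number is packed big-endian into its flat buffer. A sketch of that packing for a machine-word-sized value, with a hypothetical helper name:

```c
#include <stdint.h>
#include <string.h>

/* Write `value` into an MSB-first (big-endian) buffer of length `len`,
 * left-padded with zeros, matching the a.pData[0] = MSB convention the
 * op data structures require. Returns 0, or -1 if the value does not
 * fit in `len` bytes. */
static int encode_msb_first(uint8_t *out, size_t len, uint64_t value)
{
    memset(out, 0, len);
    for (size_t i = 0; i < len && value != 0; i++) {
        out[len - 1 - i] = (uint8_t)(value & 0xff);
        value >>= 8;
    }
    return value == 0 ? 0 : -1;
}
```

Real operands (keys, curve parameters) are of course wider than 64 bits, but the byte order is the same: index 0 holds the most significant byte.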
+ * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyEcdsaSignRS + * function, and before it has been returned in the callback, undefined + * behavior will result. + * + * @see + * cpaCyEcdsaSignRS() + * + *****************************************************************************/ +typedef struct _CpaCyEcdsaSignRSOpData { + CpaFlatBuffer xg; + /**< x coordinate of base point G */ + CpaFlatBuffer yg; + /**< y coordinate of base point G */ + CpaFlatBuffer n; + /**< order of the base point G, which shall be prime */ + CpaFlatBuffer q; + /**< prime modulus or irreducible polynomial over GF(2^r) */ + CpaFlatBuffer a; + /**< a elliptic curve coefficient */ + CpaFlatBuffer b; + /**< b elliptic curve coefficient */ + CpaFlatBuffer k; + /**< random value (k > 0 and k < n) */ + CpaFlatBuffer m; + /**< digest of the message to be signed */ + CpaFlatBuffer d; + /**< private key */ + CpaCyEcFieldType fieldType; + /**< field type for the operation */ +} CpaCyEcdsaSignRSOpData; + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * ECDSA Verify Operation Data, for Public Key. + + * @description + * This structure contains the operation data for the CpaCyEcdsaVerify + * function. The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * For optimal performance all data buffers SHOULD be 8-byte aligned. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. a.pData[0] = MSB. 
+ * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyEcdsaVerify + * function, and before it has been returned in the callback, undefined + * behavior will result. + * + * @see + * CpaCyEcdsaVerify() + * + *****************************************************************************/ +typedef struct _CpaCyEcdsaVerifyOpData { + CpaFlatBuffer xg; + /**< x coordinate of base point G */ + CpaFlatBuffer yg; + /**< y coordinate of base point G */ + CpaFlatBuffer n; + /**< order of the base point G, which shall be prime */ + CpaFlatBuffer q; + /**< prime modulus or irreducible polynomial over GF(2^r) */ + CpaFlatBuffer a; + /**< a elliptic curve coefficient */ + CpaFlatBuffer b; + /**< b elliptic curve coefficient */ + CpaFlatBuffer m; + /**< digest of the message to be signed */ + CpaFlatBuffer r; + /**< ECDSA r signature value (r > 0 and r < n) */ + CpaFlatBuffer s; + /**< ECDSA s signature value (s > 0 and s < n) */ + CpaFlatBuffer xp; + /**< x coordinate of point P (public key) */ + CpaFlatBuffer yp; + /**< y coordinate of point P (public key) */ + CpaCyEcFieldType fieldType; + /**< field type for the operation */ +} CpaCyEcdsaVerifyOpData; + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * Cryptographic ECDSA Statistics. + * @description + * This structure contains statistics on the Cryptographic ECDSA + * operations. Statistics are set to zero when the component is + * initialized, and are collected per instance. + * + ****************************************************************************/ +typedef struct _CpaCyEcdsaStats64 { + Cpa64U numEcdsaSignRRequests; + /**< Total number of ECDSA Sign R operation requests. */ + Cpa64U numEcdsaSignRRequestErrors; + /**< Total number of ECDSA Sign R operation requests that had an error and + * could not be processed. 
*/ + Cpa64U numEcdsaSignRCompleted; + /**< Total number of ECDSA Sign R operation requests that completed + * successfully. */ + Cpa64U numEcdsaSignRCompletedErrors; + /**< Total number of ECDSA Sign R operation requests that could + * not be completed successfully due to errors. */ + Cpa64U numEcdsaSignRCompletedOutputInvalid; + /**< Total number of ECDSA Sign R operation requests could not be completed + * successfully due to an invalid output. + * Note that this does not indicate an error. */ + Cpa64U numEcdsaSignSRequests; + /**< Total number of ECDSA Sign S operation requests. */ + Cpa64U numEcdsaSignSRequestErrors; + /**< Total number of ECDSA Sign S operation requests that had an error and + * could not be processed. */ + Cpa64U numEcdsaSignSCompleted; + /**< Total number of ECDSA Sign S operation requests that completed + * successfully. */ + Cpa64U numEcdsaSignSCompletedErrors; + /**< Total number of ECDSA Sign S operation requests that could + * not be completed successfully due to errors. */ + Cpa64U numEcdsaSignSCompletedOutputInvalid; + /**< Total number of ECDSA Sign S operation requests could not be completed + * successfully due to an invalid output. + * Note that this does not indicate an error. */ + Cpa64U numEcdsaSignRSRequests; + /**< Total number of ECDSA Sign R & S operation requests. */ + Cpa64U numEcdsaSignRSRequestErrors; + /**< Total number of ECDSA Sign R & S operation requests that had an + * error and could not be processed. */ + Cpa64U numEcdsaSignRSCompleted; + /**< Total number of ECDSA Sign R & S operation requests that completed + * successfully. */ + Cpa64U numEcdsaSignRSCompletedErrors; + /**< Total number of ECDSA Sign R & S operation requests that could + * not be completed successfully due to errors. */ + Cpa64U numEcdsaSignRSCompletedOutputInvalid; + /**< Total number of ECDSA Sign R & S operation requests could not be + * completed successfully due to an invalid output. + * Note that this does not indicate an error. 
*/ + Cpa64U numEcdsaVerifyRequests; + /**< Total number of ECDSA Verification operation requests. */ + Cpa64U numEcdsaVerifyRequestErrors; + /**< Total number of ECDSA Verification operation requests that had an + * error and could not be processed. */ + Cpa64U numEcdsaVerifyCompleted; + /**< Total number of ECDSA Verification operation requests that completed + * successfully. */ + Cpa64U numEcdsaVerifyCompletedErrors; + /**< Total number of ECDSA Verification operation requests that could + * not be completed successfully due to errors. */ + Cpa64U numEcdsaVerifyCompletedOutputInvalid; + /**< Total number of ECDSA Verification operation requests that resulted + * in an invalid output. + * Note that this does not indicate an error. */ +} CpaCyEcdsaStats64; + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * Definition of a generic callback function invoked for a number of the + * ECDSA Sign API functions. + * + * @description + * This is the prototype for the CpaCyEcdsaGenSignCbFunc callback function. + * + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. + * @param[in] pOpData Opaque pointer to Operation data supplied in + * request. + * @param[in] multiplyStatus Status of the point multiplication. + * @param[in] pOut Output data from the request. + * + * @retval + * None + * @pre + * Component has been initialized. 
+ * @post + * None + * @note + * None + * @see + * cpaCyEcdsaSignR() + * cpaCyEcdsaSignS() + * + *****************************************************************************/ +typedef void (*CpaCyEcdsaGenSignCbFunc)(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaBoolean multiplyStatus, + CpaFlatBuffer *pOut); + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * Definition of callback function invoked for cpaCyEcdsaSignRS + * requests. + * + * @description + * This is the prototype for the CpaCyEcdsaSignRSCbFunc callback function, + * which will provide the ECDSA message signature r and s parameters. + * + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. + * @param[in] pOpData Operation data pointer supplied in request. + * @param[in] multiplyStatus Status of the point multiplication. + * @param[in] pR Ecdsa message signature r. + * @param[in] pS Ecdsa message signature s. + * + * + * @retval + * None + * @pre + * Component has been initialized. + * @post + * None + * @note + * None + * @see + * cpaCyEcdsaSignRS() + * + *****************************************************************************/ +typedef void (*CpaCyEcdsaSignRSCbFunc)(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaBoolean multiplyStatus, + CpaFlatBuffer *pR, + CpaFlatBuffer *pS); + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * Definition of callback function invoked for cpaCyEcdsaVerify requests. 
+ * + * @description + * This is the prototype for the CpaCyEcdsaVerifyCbFunc callback function. + * + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. + * @param[in] pOpData Operation data pointer supplied in request. + * @param[in] verifyStatus The verification status. + * + * @retval + * None + * @pre + * Component has been initialized. + * @post + * None + * @note + * None + * @see + * cpaCyEcdsaVerify() + * + *****************************************************************************/ +typedef void (*CpaCyEcdsaVerifyCbFunc)(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaBoolean verifyStatus); + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * Generate ECDSA Signature R. + * + * @description + * This function generates ECDSA Signature R as per ANSI X9.62 2005 + * section 7.3. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to a + * NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. 
+ * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pSignStatus In synchronous mode, the multiply output is + * valid (CPA_TRUE) or the output is invalid + * (CPA_FALSE). + * @param[out] pR ECDSA message signature r. + * + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback is generated in response + * to this function call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. + * + * @see + * None + *****************************************************************************/ +CpaStatus +cpaCyEcdsaSignR(const CpaInstanceHandle instanceHandle, + const CpaCyEcdsaGenSignCbFunc pCb, + void *pCallbackTag, + const CpaCyEcdsaSignROpData *pOpData, + CpaBoolean *pSignStatus, + CpaFlatBuffer *pR); + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * Generate ECDSA Signature S. + * + * @description + * This function generates ECDSA Signature S as per ANSI X9.62 2005 + * section 7.3. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. 
It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to a + * NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pSignStatus In synchronous mode, the multiply output is + * valid (CPA_TRUE) or the output is invalid + * (CPA_FALSE). + * @param[out] pS ECDSA message signature s. + * + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback is generated in response + * to this function call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. 
+ * + * @see + * None + *****************************************************************************/ +CpaStatus +cpaCyEcdsaSignS(const CpaInstanceHandle instanceHandle, + const CpaCyEcdsaGenSignCbFunc pCb, + void *pCallbackTag, + const CpaCyEcdsaSignSOpData *pOpData, + CpaBoolean *pSignStatus, + CpaFlatBuffer *pS); + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * Generate ECDSA Signature R & S. + * + * @description + * This function generates ECDSA Signature R & S as per ANSI X9.62 2005 + * section 7.3. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to a + * NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pSignStatus In synchronous mode, the multiply output is + * valid (CPA_TRUE) or the output is invalid + * (CPA_FALSE). + * @param[out] pR ECDSA message signature r. + * @param[out] pS ECDSA message signature s. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. 
+ * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback is generated in response + * to this function call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. + * + * @see + * None + *****************************************************************************/ +CpaStatus +cpaCyEcdsaSignRS(const CpaInstanceHandle instanceHandle, + const CpaCyEcdsaSignRSCbFunc pCb, + void *pCallbackTag, + const CpaCyEcdsaSignRSOpData *pOpData, + CpaBoolean *pSignStatus, + CpaFlatBuffer *pR, + CpaFlatBuffer *pS); + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * Verify ECDSA Public Key. + * + * @description + * This function performs ECDSA Verify as per ANSI X9.62 2005 section 7.4. + * + * A response status of ok (verifyStatus == CPA_TRUE) means that the + * signature was verified + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to + * a NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. 
The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pVerifyStatus In synchronous mode, set to CPA_FALSE if the + * point is NOT on the curve or at infinity. Set + * to CPA_TRUE if the point is on the curve. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback of type + * CpaCyEcdsaVerifyCbFunc is generated in response to this function + * call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. + * + * @see + * CpaCyEcdsaVerifyOpData, + * CpaCyEcdsaVerifyCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyEcdsaVerify(const CpaInstanceHandle instanceHandle, + const CpaCyEcdsaVerifyCbFunc pCb, + void *pCallbackTag, + const CpaCyEcdsaVerifyOpData *pOpData, + CpaBoolean *pVerifyStatus); + + +/** + ***************************************************************************** + * @ingroup cpaCyEcdsa + * Query statistics for a specific ECDSA instance. + * + * @description + * This function will query a specific instance of the ECDSA implementation + * for statistics. The user MUST allocate the CpaCyEcdsaStats64 structure + * and pass the reference to that structure into this function call. This + * function writes the statistic results into the passed in + * CpaCyEcdsaStats64 structure. 
+ * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pEcdsaStats Pointer to memory into which the statistics + * will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * @see + * CpaCyEcdsaStats64 + *****************************************************************************/ +CpaStatus +cpaCyEcdsaQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyEcdsaStats64 *pEcdsaStats); + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /*CPA_CY_ECDSA_H_*/ Index: sys/dev/qat/qat_api/include/lac/cpa_cy_im.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_im.h @@ -0,0 +1,339 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. 
+ * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_im.h + * + * @defgroup cpaCyInstMaint Cryptographic Instance Management API + * + * @ingroup cpaCy + * + * @description + * These functions specify the Instance Management API for available + * Cryptographic Instances. It is expected that these functions will only be + * called via a single system maintenance entity, rather than individual + * clients. + * + *****************************************************************************/ + +#ifndef CPA_CY_IM_H_ +#define CPA_CY_IM_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_cy_common.h" + +/** + ***************************************************************************** + * @ingroup cpaCyInstMaint + * Cryptographic Component Initialization and Start function. + * + * @description + * This function will initialize and start the Cryptographic component. + * It MUST be called before any other crypto function is called. This + * function SHOULD be called only once (either for the very first time, + * or after a cpaCyStopInstance call which succeeded) per instance. + * Subsequent calls will have no effect. + * + * @context + * This function may sleep, and MUST NOT be called in interrupt context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * @param[out] instanceHandle Handle to an instance of this API to be + * initialized. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. Suggested course of action + * is to shut down and restart.
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * None. + * @post + * None + * @note + * Note that this is a synchronous function and has no completion callback + * associated with it. + * + * @see + * cpaCyStopInstance() + * + *****************************************************************************/ +CpaStatus +cpaCyStartInstance(CpaInstanceHandle instanceHandle); + +/** + ***************************************************************************** + * @ingroup cpaCyInstMaint + * Cryptographic Component Stop function. + * + * @description + * This function will stop the Cryptographic component and free + * all system resources associated with it. The client MUST ensure that + * all outstanding operations have completed before calling this function. + * The recommended approach to ensure this is to deregister all session or + * callback handles before calling this function. If outstanding + * operations still exist when this function is invoked, the callback + * function for each of those operations will NOT be invoked and the + * shutdown will continue. If the component is to be restarted, then a + * call to cpaCyStartInstance is required. + * + * @context + * This function may sleep, and so MUST NOT be called in interrupt + * context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * @param[in] instanceHandle Handle to an instance of this API to be + * shutdown. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. Suggested course of action + * is to ensure requests are not still being + * submitted and that all sessions are + * deregistered. If this does not help, then + * forcefully remove the component from the + * system. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. 
+ * + * @pre + * The component has been initialized via cpaCyStartInstance. + * @post + * None + * @note + * Note that this is a synchronous function and has no completion callback + * associated with it. + * + * @see + * cpaCyStartInstance() + * + *****************************************************************************/ +CpaStatus +cpaCyStopInstance(CpaInstanceHandle instanceHandle); + +/** + ***************************************************************************** + * @ingroup cpaCyInstMaint + * Cryptographic Capabilities Info + * + * @description + * This structure contains the capabilities that vary across API + * implementations. This structure is used in conjunction with + * @ref cpaCyQueryCapabilities() to determine the capabilities supported + * by a particular API implementation. + * + * The client MUST allocate memory for this structure and any members + * that require memory. When the structure is passed into the function + * ownership of the memory passes to the function. Ownership of the + * memory returns to the client when the function returns. + *****************************************************************************/ +typedef struct _CpaCyCapabilitiesInfo +{ + CpaBoolean symSupported; + /**< CPA_TRUE if instance supports the symmetric cryptography API. + * See @ref cpaCySym. */ + CpaBoolean symDpSupported; + /**< CPA_TRUE if instance supports the symmetric cryptography + * data plane API. + * See @ref cpaCySymDp. */ + CpaBoolean dhSupported; + /**< CPA_TRUE if instance supports the Diffie Hellman API. + * See @ref cpaCyDh. */ + CpaBoolean dsaSupported; + /**< CPA_TRUE if instance supports the DSA API. + * See @ref cpaCyDsa. */ + CpaBoolean rsaSupported; + /**< CPA_TRUE if instance supports the RSA API. + * See @ref cpaCyRsa. */ + CpaBoolean ecSupported; + /**< CPA_TRUE if instance supports the Elliptic Curve API. + * See @ref cpaCyEc. 
*/ + CpaBoolean ecdhSupported; + /**< CPA_TRUE if instance supports the Elliptic Curve Diffie Hellman API. + * See @ref cpaCyEcdh. */ + CpaBoolean ecdsaSupported; + /**< CPA_TRUE if instance supports the Elliptic Curve DSA API. + * See @ref cpaCyEcdsa. */ + CpaBoolean keySupported; + /**< CPA_TRUE if instance supports the Key Generation API. + * See @ref cpaCyKeyGen. */ + CpaBoolean lnSupported; + /**< CPA_TRUE if instance supports the Large Number API. + * See @ref cpaCyLn. */ + CpaBoolean primeSupported; + /**< CPA_TRUE if instance supports the prime number testing API. + * See @ref cpaCyPrime. */ + CpaBoolean drbgSupported; + /**< CPA_TRUE if instance supports the DRBG API. + * See @ref cpaCyDrbg. */ + CpaBoolean nrbgSupported; + /**< CPA_TRUE if instance supports the NRBG API. + * See @ref cpaCyNrbg. */ + CpaBoolean randSupported; + /**< CPA_TRUE if instance supports the random bit/number generation API. + * See @ref cpaCyRand. */ + CpaBoolean kptSupported; + /**< CPA_TRUE if instance supports the Intel(R) KPT Cryptographic API. + * See @ref cpaCyKpt. */ + CpaBoolean hkdfSupported; + /**< CPA_TRUE if instance supports the HKDF components of the KeyGen API. + * See @ref cpaCyKeyGen. */ + CpaBoolean extAlgchainSupported; + /**< CPA_TRUE if instance supports algorithm chaining for certain + * wireless algorithms. Please refer to implementation for details. + * See @ref cpaCySym. */ + CpaBoolean ecEdMontSupported; + /**< CPA_TRUE if instance supports the Edwards and Montgomery elliptic + * curves of the EC API. + * See @ref cpaCyEc */ +} CpaCyCapabilitiesInfo; + +/** + ***************************************************************************** + * @ingroup cpaCyInstMaint + * Returns capabilities of a Cryptographic API instance + * + * @description + * This function is used to query the instance capabilities. + * + * @context + * The function shall not be called in an interrupt context. 
+ * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Handle to an instance of this API. + * @param[out] pCapInfo Pointer to capabilities info structure. + * All fields in the structure + * are populated by the API instance. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The instance has been initialized via the @ref cpaCyStartInstance + * function. + * @post + * None + *****************************************************************************/ +CpaStatus +cpaCyQueryCapabilities(const CpaInstanceHandle instanceHandle, + CpaCyCapabilitiesInfo * pCapInfo); + +/** + ***************************************************************************** + * @ingroup cpaCyInstMaint + * Sets the address translation function + * + * @description + * This function is used to set the virtual to physical address + * translation routine for the instance. The specified routine + * is used by the instance to perform any required translation of + * a virtual address to a physical address. If the application + * does not invoke this function, then the instance will use its + * default method, such as virt2phys, for address translation. + + * @context + * The function shall not be called in an interrupt context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Handle to an instance of this API. + * @param[in] virtual2Physical Routine that performs virtual to + * physical address translation. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. 
+ * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * None + * @post + * None + * @see + * None + * + *****************************************************************************/ +CpaStatus +cpaCySetAddressTranslation(const CpaInstanceHandle instanceHandle, + CpaVirtualToPhysical virtual2Physical); + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /*CPA_CY_IM_H_*/ Index: sys/dev/qat/qat_api/include/lac/cpa_cy_key.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_key.h @@ -0,0 +1,1207 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_key.h + * + * @defgroup cpaCyKeyGen Cryptographic Key and Mask Generation API + * + * @ingroup cpaCy + * + * @description + * These functions specify the API for key and mask generation + * operations. + * + *****************************************************************************/ + +#ifndef CPA_CY_KEY_H +#define CPA_CY_KEY_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_cy_common.h" +#include "cpa_cy_sym.h" /* needed for hash algorithm, for MGF */ + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * SSL or TLS key generation random number length. + * + * @description + * Defines the permitted SSL or TLS random number length in bytes that + * may be used with the functions @ref cpaCyKeyGenSsl and @ref + * cpaCyKeyGenTls. This is the length of the client or server random + * number values. 
+ *****************************************************************************/ +#define CPA_CY_KEY_GEN_SSL_TLS_RANDOM_LEN_IN_BYTES (32) + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * SSL Operation Types + * @description + * Enumeration of the different SSL operations that can be specified in + * the struct @ref CpaCyKeyGenSslOpData. It identifies the label. + *****************************************************************************/ +typedef enum _CpaCyKeySslOp +{ + CPA_CY_KEY_SSL_OP_MASTER_SECRET_DERIVE = 1, + /**< Derive the master secret */ + CPA_CY_KEY_SSL_OP_KEY_MATERIAL_DERIVE, + /**< Derive the key material */ + CPA_CY_KEY_SSL_OP_USER_DEFINED + /**< User Defined Operation for custom labels*/ +} CpaCyKeySslOp; + + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * SSL data for key generation functions + * @description + * This structure contains data for use in key generation operations for + * SSL. For specific SSL key generation operations, the structure fields + * MUST be set as follows: + * + * @par SSL Master-Secret Derivation: + *
<br> sslOp = CPA_CY_KEY_SSL_OP_MASTER_SECRET_DERIVE + *
<br> secret = pre-master secret key + *
<br> seed = client_random + server_random + *
<br> userLabel = NULL + * + * @par SSL Key-Material Derivation: + *
<br> sslOp = CPA_CY_KEY_SSL_OP_KEY_MATERIAL_DERIVE + *
<br> secret = master secret key + *
<br> seed = server_random + client_random + *
<br> userLabel = NULL + * + *
Note that the client/server random order is reversed from that + * used for master-secret derivation. + * + * @note Each of the client and server random numbers need to be of + * length CPA_CY_KEY_GEN_SSL_TLS_RANDOM_LEN_IN_BYTES. + * + * @note In each of the above descriptions, + indicates concatenation. + * + * @note The label used is predetermined by the SSL operation in line + * with the SSL 3.0 specification, and can be overridden by using + * a user defined operation CPA_CY_KEY_SSL_OP_USER_DEFINED and + * associated userLabel. + * + ****************************************************************************/ +typedef struct _CpaCyKeyGenSslOpData { + CpaCyKeySslOp sslOp; + /**< Indicate the SSL operation to be performed */ + CpaFlatBuffer secret; + /**< Flat buffer containing a pointer to either the master or pre-master + * secret key. The length field indicates the length of the secret key in + * bytes. Implementation-specific limits may apply to this length. */ + CpaFlatBuffer seed; + /**< Flat buffer containing a pointer to the seed data. + * Implementation-specific limits may apply to this length. */ + CpaFlatBuffer info; + /**< Flat buffer containing a pointer to the info data. + * Implementation-specific limits may apply to this length. */ + Cpa32U generatedKeyLenInBytes; + /**< The requested length of the generated key in bytes. + * Implementation-specific limits may apply to this length. */ + CpaFlatBuffer userLabel; + /**< Optional flat buffer containing a pointer to a user defined label. + * The length field indicates the length of the label in bytes. To use this + * field, the sslOp must be CPA_CY_KEY_SSL_OP_USER_DEFINED, + * or otherwise it is ignored and can be set to NULL. + * Implementation-specific limits + * may apply to this length. 
*/ +} CpaCyKeyGenSslOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * TLS Operation Types + * @description + * Enumeration of the different TLS operations that can be specified in + * the CpaCyKeyGenTlsOpData. It identifies the label. + * + * The functions @ref cpaCyKeyGenTls and @ref cpaCyKeyGenTls2 + * accelerate the TLS PRF, which is defined as part of RFC2246 (TLS + * v1.0), RFC4346 (TLS v1.1), and RFC5246 (TLS v1.2). + * One of the inputs to each of these functions is a label. + * This enumerated type defines values that correspond to some of + * the required labels. + * However, for some of the operations/labels required by these RFCs, + * no values are specified. + * + * In such cases, a user-defined value must be provided. The client + * should use the enum value @ref CPA_CY_KEY_TLS_OP_USER_DEFINED, and + * pass the label using the userLabel field of the @ref + * CpaCyKeyGenTlsOpData data structure. + * + *****************************************************************************/ +typedef enum _CpaCyKeyTlsOp +{ + CPA_CY_KEY_TLS_OP_MASTER_SECRET_DERIVE = 1, + /**< Derive the master secret using the TLS PRF. + * Corresponds to RFC2246/5246 section 8.1, operation "Computing the + * master secret", label "master secret". */ + CPA_CY_KEY_TLS_OP_KEY_MATERIAL_DERIVE, + /**< Derive the key material using the TLS PRF. + * Corresponds to RFC2246/5246 section 6.3, operation "Derive the key + * material", label "key expansion". */ + CPA_CY_KEY_TLS_OP_CLIENT_FINISHED_DERIVE, + /**< Derive the client finished tag using the TLS PRF. + * Corresponds to RFC2246/5246 section 7.4.9, operation "Client finished", + * label "client finished". */ + CPA_CY_KEY_TLS_OP_SERVER_FINISHED_DERIVE, + /**< Derive the server finished tag using the TLS PRF. + * Corresponds to RFC2246/5246 section 7.4.9, operation "Server finished", + * label "server finished". 
*/ + CPA_CY_KEY_TLS_OP_USER_DEFINED + /**< User Defined Operation for custom labels. */ + +} CpaCyKeyTlsOp; + + +/** + ***************************************************************************** + * @file cpa_cy_key.h + * @ingroup cpaCyKeyGen + * TLS Operation Types + * @description + * Enumeration of the different TLS operations that can be specified in + * the CpaCyKeyGenHKDFOpData. + * + * The function @ref cpaCyKeyGenTls3 + * accelerates the TLS HKDF, which is defined as part of RFC5869 (HKDF) + * and RFC8446 (TLS v1.3). + * + * This enumerated type defines the supported HKDF operations for + * extraction and expansion of keying material. + * + *****************************************************************************/ +typedef enum _CpaCyKeyHKDFOp +{ + CPA_CY_HKDF_KEY_EXTRACT = 12, + /**< HKDF Extract operation + * Corresponds to RFC5869 section 2.2, step 1 "Extract" */ + CPA_CY_HKDF_KEY_EXPAND, + /**< HKDF Expand operation + * Corresponds to RFC5869 section 2.3, step 2 "Expand" */ + CPA_CY_HKDF_KEY_EXTRACT_EXPAND, + /**< HKDF operation + * This performs HKDF_EXTRACT and HKDF_EXPAND in a single + * API invocation. */ + CPA_CY_HKDF_KEY_EXPAND_LABEL, + /**< HKDF Expand label operation for TLS 1.3 + * Corresponds to RFC8446 section 7.1 Key Schedule definition for + * HKDF-Expand-Label, which refers to HKDF-Expand defined in RFC5869. */ + CPA_CY_HKDF_KEY_EXTRACT_EXPAND_LABEL + /**< HKDF Extract plus Expand label operation for TLS 1.3 + * Corresponds to RFC5869 section 2.2, step 1 "Extract" followed by + * RFC8446 section 7.1 Key Schedule definition for + * HKDF-Expand-Label, which refers to HKDF-Expand defined in RFC5869. */ +} CpaCyKeyHKDFOp; + + +/** + ***************************************************************************** + * @file cpa_cy_key.h + * @ingroup cpaCyKeyGen + * TLS Operation Types + * @description + * Enumeration of the different cipher suites that may be used in a TLS + * v1.3 operation.
This value is used to infer the sizes of the key + * and iv sublabels. + * + * The function @ref cpaCyKeyGenTls3 + * accelerates the TLS HKDF, which is defined as part of RFC5869 (HKDF) + * and RFC8446 (TLS v1.3). + * + * This enumerated type defines the supported cipher suites in + * TLS operations that require HKDF key operations. + * + *****************************************************************************/ +typedef enum _CpaCyKeyHKDFCipherSuite +{ + CPA_CY_HKDF_TLS_AES_128_GCM_SHA256 = 1, + CPA_CY_HKDF_TLS_AES_256_GCM_SHA384, + CPA_CY_HKDF_TLS_CHACHA20_POLY1305_SHA256, + CPA_CY_HKDF_TLS_AES_128_CCM_SHA256, + CPA_CY_HKDF_TLS_AES_128_CCM_8_SHA256 +} CpaCyKeyHKDFCipherSuite; + + +/** + ***************************************************************************** + * @file cpa_cy_key.h + * @ingroup cpaCyKeyGen + * TLS Operation Types + * @description + * Bitwise constants for HKDF sublabels + * + * These definitions provide bit settings for sublabels for + * HKDF-ExpandLabel operations. + * + *
key sublabel to generate "key" keying material + *
iv sublabel to generate "iv" keying material + *
resumption sublabel to generate "resumption" keying material + *
finished sublabel to generate "finished" keying material + * + *****************************************************************************/ + +#define CPA_CY_HKDF_SUBLABEL_KEY ((Cpa16U)0x0001) + /**< Bit for creation of key material for 'key' sublabel */ +#define CPA_CY_HKDF_SUBLABEL_IV ((Cpa16U)0x0002) + /**< Bit for creation of key material for 'iv' sublabel */ +#define CPA_CY_HKDF_SUBLABEL_RESUMPTION ((Cpa16U)0x0004) + /**< Bit for creation of key material for 'resumption' sublabel */ +#define CPA_CY_HKDF_SUBLABEL_FINISHED ((Cpa16U)0x0008) + /**< Bit for creation of key material for 'finished' sublabel */ + +#define CPA_CY_HKDF_KEY_MAX_SECRET_SZ ((Cpa8U)64) + /** space in bytes for a PSK or (EC)DH shared secret */ +#define CPA_CY_HKDF_KEY_MAX_HMAC_SZ ((Cpa8U)48) + /** space in bytes of CPA_CY_SYM_HASH_SHA384 result */ +#define CPA_CY_HKDF_KEY_MAX_INFO_SZ ((Cpa8U)80) + /** space in bytes of largest info needed for TLS 1.3, + * rounded up to multiple of 8 */ +#define CPA_CY_HKDF_KEY_MAX_LABEL_SZ ((Cpa8U)78) + /** space in bytes of largest label for TLS 1.3 */ +#define CPA_CY_HKDF_KEY_MAX_LABEL_COUNT ((Cpa8U)4) + /** Maximum number of labels in op structure */ + +/** + ***************************************************************************** + * @file cpa_cy_key.h + * @ingroup cpaCyKeyGen + * TLS data for key generation functions + * @description + * This structure contains data describing a label for the + * HKDF Expand Label function + * + * @par Expand Label Function + *
labelLen = length of the label field + *
contextLen = length of the context field + *
sublabelFlag = Mask of sub labels required for this label. + *
label = label as defined in RFC8446 + *
context = context as defined in RFC8446 + * + ****************************************************************************/ +typedef struct _CpaCyKeyGenHKDFExpandLabel +{ + Cpa8U label[CPA_CY_HKDF_KEY_MAX_LABEL_SZ]; + /**< HKDFLabel field as defined in RFC8446 sec 7.1. + */ + Cpa8U labelLen; + /**< The length, in bytes of the label */ + Cpa8U sublabelFlag; + /**< mask of sublabels to be generated. + * This flag is composed of zero or more of: + * CPA_CY_HKDF_SUBLABEL_KEY + * CPA_CY_HKDF_SUBLABEL_IV + * CPA_CY_HKDF_SUBLABEL_RESUMPTION + * CPA_CY_HKDF_SUBLABEL_FINISHED + */ +} CpaCyKeyGenHKDFExpandLabel; + +/** + ***************************************************************************** + * @file cpa_cy_key.h + * @ingroup cpaCyKeyGen + * TLS data for key generation functions + * @description + * This structure contains data for all HKDF operations: + *
HKDF Extract + *
HKDF Expand + *
HKDF Expand Label + *
HKDF Extract and Expand + *
HKDF Extract and Expand Label + * + * @par HKDF Map Structure Elements + *
secret - IKM value for extract operations or PRK for expand + * or expand-label operations. + *
seed - contains the salt for extract + * operations + *
info - contains the info data for expand operations + *
labels - See notes above + * + ****************************************************************************/ +typedef struct _CpaCyKeyGenHKDFOpData +{ + CpaCyKeyHKDFOp hkdfKeyOp; + /**< Keying operation to be performed. */ + Cpa8U secretLen; + /**< Length of secret field */ + Cpa16U seedLen; + /**< Length of seed field */ + Cpa16U infoLen; + /**< Length of info field */ + Cpa16U numLabels; + /**< Number of filled CpaCyKeyGenHKDFExpandLabel elements */ + Cpa8U secret[CPA_CY_HKDF_KEY_MAX_SECRET_SZ]; + /**< Input Key Material or PRK */ + Cpa8U seed[CPA_CY_HKDF_KEY_MAX_HMAC_SZ]; + /**< Input salt */ + Cpa8U info[CPA_CY_HKDF_KEY_MAX_INFO_SZ]; + /**< info field */ + CpaCyKeyGenHKDFExpandLabel label[CPA_CY_HKDF_KEY_MAX_LABEL_COUNT]; + /**< array of Expand Label structures */ +} CpaCyKeyGenHKDFOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * TLS data for key generation functions + * @description + * This structure contains data for use in key generation operations for + * TLS. For specific TLS key generation operations, the structure fields + * MUST be set as follows: + * + * @par TLS Master-Secret Derivation: + *
tlsOp = CPA_CY_KEY_TLS_OP_MASTER_SECRET_DERIVE + *
secret = pre-master secret key + *
seed = client_random + server_random + *
userLabel = NULL + * + * @par TLS Key-Material Derivation: + *
tlsOp = CPA_CY_KEY_TLS_OP_KEY_MATERIAL_DERIVE + *
secret = master secret key + *
seed = server_random + client_random + *
userLabel = NULL + * + *
Note that the client/server random order is reversed from + * that used for Master-Secret Derivation. + * + * @par TLS Client finished/Server finished tag Derivation: + *
tlsOp = CPA_CY_KEY_TLS_OP_CLIENT_FINISHED_DERIVE (client) + *
or CPA_CY_KEY_TLS_OP_SERVER_FINISHED_DERIVE (server) + *
secret = master secret key + *
seed = MD5(handshake_messages) + SHA-1(handshake_messages) + *
userLabel = NULL + * + * @note Each of the client and server random seeds need to be of + * length CPA_CY_KEY_GEN_SSL_TLS_RANDOM_LEN_IN_BYTES. + * @note In each of the above descriptions, + indicates concatenation. + * @note The label used is predetermined by the TLS operation in line + * with the TLS specifications, and can be overridden by using + * a user defined operation CPA_CY_KEY_TLS_OP_USER_DEFINED + * and associated userLabel. + * + ****************************************************************************/ +typedef struct _CpaCyKeyGenTlsOpData { + CpaCyKeyTlsOp tlsOp; + /**< TLS operation to be performed */ + CpaFlatBuffer secret; + /**< Flat buffer containing a pointer to either the master or pre-master + * secret key. The length field indicates the length of the secret in + * bytes. */ + CpaFlatBuffer seed; + /**< Flat buffer containing a pointer to the seed data. + * Implementation-specific limits may apply to this length. */ + Cpa32U generatedKeyLenInBytes; + /**< The requested length of the generated key in bytes. + * Implementation-specific limits may apply to this length. */ + CpaFlatBuffer userLabel; + /**< Optional flat buffer containing a pointer to a user defined label. + * The length field indicates the length of the label in bytes. To use this + * field, the tlsOp must be CPA_CY_KEY_TLS_OP_USER_DEFINED. + * Implementation-specific limits may apply to this length. */ +} CpaCyKeyGenTlsOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * Key Generation Mask Generation Function (MGF) Data + * @description + * This structure contains data relating to Mask Generation Function + * key generation operations. + * + * @note The default hash algorithm used by the MGF is SHA-1. If a + * different hash algorithm is preferred, then see the extended + * version of this structure, @ref CpaCyKeyGenMgfOpDataExt. 
+ * @see + * cpaCyKeyGenMgf + ****************************************************************************/ +typedef struct _CpaCyKeyGenMgfOpData { + CpaFlatBuffer seedBuffer; + /**< Caller MUST allocate a buffer and populate with the input seed + * data. For optimal performance the start of the seed SHOULD be allocated + * on an 8-byte boundary. The length field represents the seed length in + * bytes. Implementation-specific limits may apply to this length. */ + Cpa32U maskLenInBytes; + /**< The requested length of the generated mask in bytes. + * Implementation-specific limits may apply to this length. */ +} CpaCyKeyGenMgfOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * Extension to the original Key Generation Mask Generation Function + * (MGF) Data + * @description + * This structure is an extension to the original MGF data structure. + * The extension allows the hash function to be specified. + * @note + * This structure is separate from the base @ref CpaCyKeyGenMgfOpData + * structure in order to retain backwards compatibility with the + * original version of the API. + * @see + * cpaCyKeyGenMgfExt + ****************************************************************************/ +typedef struct _CpaCyKeyGenMgfOpDataExt { + CpaCyKeyGenMgfOpData baseOpData; + /**< "Base" operational data for MGF generation */ + CpaCySymHashAlgorithm hashAlgorithm; + /**< Specifies the hash algorithm to be used by the Mask Generation + * Function */ +} CpaCyKeyGenMgfOpDataExt; + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * Key Generation Statistics. + * @deprecated + * As of v1.3 of the Crypto API, this structure has been deprecated, + * replaced by @ref CpaCyKeyGenStats64. + * @description + * This structure contains statistics on the key and mask generation + * operations. 
Statistics are set to zero when the component is + * initialized, and are collected per instance. + * + ****************************************************************************/ +typedef struct _CpaCyKeyGenStats { + Cpa32U numSslKeyGenRequests; + /**< Total number of successful SSL key generation requests. */ + Cpa32U numSslKeyGenRequestErrors; + /**< Total number of SSL key generation requests that had an error and + * could not be processed. */ + Cpa32U numSslKeyGenCompleted; + /**< Total number of SSL key generation operations that completed + * successfully. */ + Cpa32U numSslKeyGenCompletedErrors; + /**< Total number of SSL key generation operations that could not be + * completed successfully due to errors. */ + Cpa32U numTlsKeyGenRequests; + /**< Total number of successful TLS key generation requests. */ + Cpa32U numTlsKeyGenRequestErrors; + /**< Total number of TLS key generation requests that had an error and + * could not be processed. */ + Cpa32U numTlsKeyGenCompleted; + /**< Total number of TLS key generation operations that completed + * successfully. */ + Cpa32U numTlsKeyGenCompletedErrors; + /**< Total number of TLS key generation operations that could not be + * completed successfully due to errors. */ + Cpa32U numMgfKeyGenRequests; + /**< Total number of successful MGF key generation requests (including + * "extended" MGF requests). */ + Cpa32U numMgfKeyGenRequestErrors; + /**< Total number of MGF key generation requests that had an error and + * could not be processed. */ + Cpa32U numMgfKeyGenCompleted; + /**< Total number of MGF key generation operations that completed + * successfully. */ + Cpa32U numMgfKeyGenCompletedErrors; + /**< Total number of MGF key generation operations that could not be + * completed successfully due to errors. */ +} CpaCyKeyGenStats CPA_DEPRECATED; + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * Key Generation Statistics (64-bit version). 
+ * @description + * This structure contains the 64-bit version of the statistics + * on the key and mask generation operations. + * Statistics are set to zero when the component is + * initialized, and are collected per instance. + * + ****************************************************************************/ +typedef struct _CpaCyKeyGenStats64 { + Cpa64U numSslKeyGenRequests; + /**< Total number of successful SSL key generation requests. */ + Cpa64U numSslKeyGenRequestErrors; + /**< Total number of SSL key generation requests that had an error and + * could not be processed. */ + Cpa64U numSslKeyGenCompleted; + /**< Total number of SSL key generation operations that completed + * successfully. */ + Cpa64U numSslKeyGenCompletedErrors; + /**< Total number of SSL key generation operations that could not be + * completed successfully due to errors. */ + Cpa64U numTlsKeyGenRequests; + /**< Total number of successful TLS key generation requests. */ + Cpa64U numTlsKeyGenRequestErrors; + /**< Total number of TLS key generation requests that had an error and + * could not be processed. */ + Cpa64U numTlsKeyGenCompleted; + /**< Total number of TLS key generation operations that completed + * successfully. */ + Cpa64U numTlsKeyGenCompletedErrors; + /**< Total number of TLS key generation operations that could not be + * completed successfully due to errors. */ + Cpa64U numMgfKeyGenRequests; + /**< Total number of successful MGF key generation requests (including + * "extended" MGF requests). */ + Cpa64U numMgfKeyGenRequestErrors; + /**< Total number of MGF key generation requests that had an error and + * could not be processed. */ + Cpa64U numMgfKeyGenCompleted; + /**< Total number of MGF key generation operations that completed + * successfully. */ + Cpa64U numMgfKeyGenCompletedErrors; + /**< Total number of MGF key generation operations that could not be + * completed successfully due to errors. 
*/ +} CpaCyKeyGenStats64; + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * SSL Key Generation Function. + * @description + * This function is used for SSL key generation. It implements the key + * generation function defined in section 6.2.2 of the SSL 3.0 + * specification as described in + * http://www.mozilla.org/projects/security/pki/nss/ssl/draft302.txt. + * + * The input seed is taken as a flat buffer and the generated key is + * returned to caller in a flat destination data buffer. + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pKeyGenCb Pointer to callback function to be + * invoked when the operation is complete. + * If this is set to a NULL value the + * function will operate synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific + * call. Will be returned unchanged in the + * callback. + * @param[in] pKeyGenSslOpData Structure containing all the data + * needed to perform the SSL key + * generation operation. The client code + * allocates the memory for this + * structure. This component takes + * ownership of the memory until it is + * returned in the callback. + * @param[out] pGeneratedKeyBuffer Caller MUST allocate a sufficient + * buffer to hold the key generation + * output. The data pointer SHOULD be + * aligned on an 8-byte boundary. The + * length field passed in represents the + * size of the buffer in bytes. The value + * that is returned is the size of the + * result key in bytes. 
+ * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. + * Resubmit the request. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @see + * CpaCyKeyGenSslOpData, + * CpaCyGenFlatBufCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyKeyGenSsl(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const CpaCyKeyGenSslOpData *pKeyGenSslOpData, + CpaFlatBuffer *pGeneratedKeyBuffer); + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * TLS Key Generation Function. + * @description + * This function is used for TLS key generation. It implements the + * TLS PRF (Pseudo Random Function) as defined by RFC2246 (TLS v1.0) + * and RFC4346 (TLS v1.1). + * + * The input seed is taken as a flat buffer and the generated key is + * returned to caller in a flat destination data buffer. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. 
+ * @param[in] pKeyGenCb Pointer to callback function to be + * invoked when the operation is complete. + * If this is set to a NULL value the + * function will operate synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific + * call. Will be returned unchanged in the + * callback. + * @param[in] pKeyGenTlsOpData Structure containing all the data + * needed to perform the TLS key + * generation operation. The client code + * allocates the memory for this + * structure. This component takes + * ownership of the memory until it is + * returned in the callback. + * @param[out] pGeneratedKeyBuffer Caller MUST allocate a sufficient + * buffer to hold the key generation + * output. The data pointer SHOULD be + * aligned on an 8-byte boundary. The + * length field passed in represents the + * size of the buffer in bytes. The value + * that is returned is the size of the + * result key in bytes. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. + * Resubmit the request. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. 
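The PRF that this function offloads is built on the P_hash expansion of RFC2246 section 5 (TLS v1.0 combines P_MD5 and P_SHA-1; TLS v1.1 uses the same construction). The following is only a rough host-side sketch of that expansion, not the device implementation: `toy_hmac` is a deliberately fake 4-byte keyed digest standing in for a real HMAC, so that the A(i) chaining and block concatenation are visible in isolation.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TOY_HLEN 4 /* stand-in digest size; real P_hash uses HMAC-MD5/SHA-1 */

/* Toy keyed digest standing in for HMAC (NOT cryptographic, illustration only). */
static void toy_hmac(uint8_t out[TOY_HLEN], const uint8_t *key, size_t klen,
                     const uint8_t *msg, size_t mlen)
{
    out[0] = (uint8_t)mlen;
    out[1] = msg[0];
    out[2] = msg[mlen - 1];
    out[3] = key[0];
    (void)klen;
}

/* P_hash(secret, seed) per RFC2246 section 5:
 *   A(0) = seed, A(i) = HMAC(secret, A(i-1))
 *   output = HMAC(secret, A(1) + seed) + HMAC(secret, A(2) + seed) + ...
 * truncated to the requested length. */
static void p_hash(uint8_t *out, size_t outlen,
                   const uint8_t *secret, size_t slen,
                   const uint8_t *seed, size_t seedlen)
{
    uint8_t a[TOY_HLEN];
    uint8_t buf[TOY_HLEN + 64]; /* assumes seedlen <= 64 for this sketch */
    uint8_t block[TOY_HLEN];
    size_t done = 0;
    size_t n;

    toy_hmac(a, secret, slen, seed, seedlen);        /* A(1) */
    while (done < outlen) {
        memcpy(buf, a, TOY_HLEN);                    /* A(i) + seed */
        memcpy(buf + TOY_HLEN, seed, seedlen);
        toy_hmac(block, secret, slen, buf, TOY_HLEN + seedlen);
        n = outlen - done < TOY_HLEN ? outlen - done : TOY_HLEN;
        memcpy(out + done, block, n);
        done += n;
        toy_hmac(a, secret, slen, a, TOY_HLEN);      /* A(i+1) */
    }
}
```

The hardware performs this expansion with the real HMACs over the secret and seed supplied in CpaCyKeyGenTlsOpData, truncating to generatedKeyLenInBytes.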
+ * @post + * None + * @see + * CpaCyKeyGenTlsOpData, + * CpaCyGenFlatBufCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyKeyGenTls(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const CpaCyKeyGenTlsOpData *pKeyGenTlsOpData, + CpaFlatBuffer *pGeneratedKeyBuffer); + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * TLS Key Generation Function version 2. + * @description + * This function is used for TLS key generation. It implements the + * TLS PRF (Pseudo Random Function) as defined by RFC5246 (TLS v1.2). + * + * The input seed is taken as a flat buffer and the generated key is + * returned to caller in a flat destination data buffer. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pKeyGenCb Pointer to callback function to be + * invoked when the operation is complete. + * If this is set to a NULL value the + * function will operate synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific + * call. Will be returned unchanged in the + * callback. + * @param[in] pKeyGenTlsOpData Structure containing all the data + * needed to perform the TLS key + * generation operation. The client code + * allocates the memory for this + * structure. This component takes + * ownership of the memory until it is + * returned in the callback. + * @param[in] hashAlgorithm Specifies the hash algorithm to use. 
+ * According to RFC5246, this should be + * "SHA-256 or a stronger standard hash + * function." + * @param[out] pGeneratedKeyBuffer Caller MUST allocate a sufficient + * buffer to hold the key generation + * output. The data pointer SHOULD be + * aligned on an 8-byte boundary. The + * length field passed in represents the + * size of the buffer in bytes. The value + * that is returned is the size of the + * result key in bytes. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. + * Resubmit the request. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @see + * CpaCyKeyGenTlsOpData, + * CpaCyGenFlatBufCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyKeyGenTls2(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const CpaCyKeyGenTlsOpData *pKeyGenTlsOpData, + CpaCySymHashAlgorithm hashAlgorithm, + CpaFlatBuffer *pGeneratedKeyBuffer); + + +/** + ***************************************************************************** + * @file cpa_cy_key.h + * @ingroup cpaCyKeyGen + * TLS Key Generation Function version 3. + * @description + * This function is used for TLS key generation. It implements the + * TLS HKDF (HMAC-based Key Derivation Function) as defined by + * RFC5869 (HKDF) and RFC8446 (TLS 1.3). + * + * The input seed is taken as a flat buffer and the generated key is + * returned to caller in a flat destination data buffer.
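For the expand-label operations, RFC8446 section 7.1 defines the HkdfLabel structure that HKDF-Expand-Label feeds to plain HKDF-Expand. The sketch below serializes that structure; `build_hkdf_label` is illustrative only and is not a CPA API function.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Serialize the HkdfLabel structure from RFC8446 section 7.1:
 *     struct {
 *         uint16 length;          // desired output length, big-endian
 *         opaque label<7..255>;   // "tls13 " + Label
 *         opaque context<0..255>; // e.g. a transcript hash, may be empty
 *     } HkdfLabel;
 * Returns the number of bytes written to out. */
static size_t build_hkdf_label(uint8_t *out, uint16_t out_key_len,
                               const char *label,
                               const uint8_t *context, uint8_t context_len)
{
    static const char prefix[] = "tls13 ";
    size_t plen = sizeof(prefix) - 1;
    size_t llen = strlen(label);
    size_t i = 0;

    out[i++] = (uint8_t)(out_key_len >> 8);  /* length, big-endian uint16 */
    out[i++] = (uint8_t)(out_key_len & 0xffu);
    out[i++] = (uint8_t)(plen + llen);       /* label vector length byte */
    memcpy(out + i, prefix, plen);
    i += plen;
    memcpy(out + i, label, llen);
    i += llen;
    out[i++] = context_len;                  /* context vector length byte */
    if (context_len != 0)
        memcpy(out + i, context, context_len);
    return i + context_len;
}
```

For example, deriving a 16-byte "key" sublabel with an empty context produces the 13-byte encoding 00 10 09 "tls13 key" 00, which is then used as the info input to HKDF-Expand.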
+ * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pKeyGenCb Pointer to callback function to be + * invoked when the operation is complete. + * If this is set to a NULL value the + * function will operate synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific + * call. Will be returned unchanged in the + * callback. + * @param[in] pKeyGenTlsOpData Structure containing all the data + * needed to perform the TLS key + * generation operation. The client code + * allocates the memory for this + * structure. This component takes + * ownership of the memory until it is + * returned in the callback. The memory + * must be pinned and contiguous, suitable + * for DMA operations. + * @param[in] cipherSuite Specifies the TLS v1.3 cipher suite, + * which is used to infer the hash + * algorithm and the sizes of the key and + * iv sublabels. + * @param[out] pGeneratedKeyBuffer Caller MUST allocate a sufficient + * buffer to hold the key generation + * output. The data pointer SHOULD be + * aligned on an 8-byte boundary. The + * length field passed in represents the + * size of the buffer in bytes. The value + * that is returned is the size of the + * result key in bytes. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in.
+ * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. + * Resubmit the request. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @see + * CpaCyGenFlatBufCbFunc + * CpaCyKeyGenHKDFOpData + * + *****************************************************************************/ +CpaStatus +cpaCyKeyGenTls3(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const CpaCyKeyGenHKDFOpData *pKeyGenTlsOpData, + CpaCyKeyHKDFCipherSuite cipherSuite, + CpaFlatBuffer *pGeneratedKeyBuffer); + + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * Mask Generation Function. + * @description + * This function implements the mask generation function MGF1 as + * defined by PKCS#1 v2.1, and RFC3447. The input seed is taken + * as a flat buffer and the generated mask is returned to caller in a + * flat destination data buffer. + * + * @note The default hash algorithm used by the MGF is SHA-1. If a + * different hash algorithm is preferred, then see the "extended" + * version of this function, @ref cpaCyKeyGenMgfExt. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pKeyGenCb Pointer to callback function to be + * invoked when the operation is complete. + * If this is set to a NULL value the + * function will operate synchronously. 
+ * @param[in] pCallbackTag Opaque User Data for this specific call. + * Will be returned unchanged in the + * callback. + * @param[in] pKeyGenMgfOpData Structure containing all the data needed + * to perform the MGF key generation + * operation. The client code allocates the + * memory for this structure. This + * component takes ownership of the memory + * until it is returned in the callback. + * @param[out] pGeneratedMaskBuffer Caller MUST allocate a sufficient buffer + * to hold the generated mask. The data + * pointer SHOULD be aligned on an 8-byte + * boundary. The length field passed in + * represents the size of the buffer in + * bytes. The value that is returned is the + * size of the generated mask in bytes. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. + * Resubmit the request. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @see + * CpaCyKeyGenMgfOpData, + * CpaCyGenFlatBufCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyKeyGenMgf(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pKeyGenCb, + void *pCallbackTag, + const CpaCyKeyGenMgfOpData *pKeyGenMgfOpData, + CpaFlatBuffer *pGeneratedMaskBuffer); + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * Extended Mask Generation Function. + * @description + * This function is used for mask generation. 
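Both MGF entry points implement the MGF1 construction of PKCS#1 v2.1 (RFC3447 appendix B.2.1): hash the seed concatenated with a 4-byte big-endian counter, concatenate the successive digests, and truncate to the requested mask length. The sketch below shows only that loop; `toy_hash` is a fake digest standing in for SHA-1 (or the algorithm selected via the extended variant) so the example stays self-contained.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TOY_HLEN 4 /* stand-in digest size; MGF1 normally uses SHA-1 etc. */

/* Toy "hash" standing in for a real digest (NOT cryptographic): it simply
 * echoes the last TOY_HLEN bytes of its input, i.e. the counter. */
static void toy_hash(uint8_t out[TOY_HLEN], const uint8_t *in, size_t len)
{
    memcpy(out, in + len - TOY_HLEN, TOY_HLEN);
}

/* MGF1 per RFC3447 appendix B.2.1: concatenate Hash(seed || C(i)) for
 * i = 0, 1, ... where C(i) = I2OSP(i, 4) is the 4-byte big-endian
 * counter, then truncate to the requested mask length. */
static void mgf1(uint8_t *mask, size_t mask_len,
                 const uint8_t *seed, size_t seed_len)
{
    uint8_t buf[64 + 4]; /* assumes seed_len <= 64 for this sketch */
    uint8_t digest[TOY_HLEN];
    uint32_t counter = 0;
    size_t done = 0;
    size_t n;

    memcpy(buf, seed, seed_len);
    while (done < mask_len) {
        buf[seed_len + 0] = (uint8_t)(counter >> 24); /* C = I2OSP(counter, 4) */
        buf[seed_len + 1] = (uint8_t)(counter >> 16);
        buf[seed_len + 2] = (uint8_t)(counter >> 8);
        buf[seed_len + 3] = (uint8_t)counter;
        toy_hash(digest, buf, seed_len + 4);
        n = mask_len - done < TOY_HLEN ? mask_len - done : TOY_HLEN;
        memcpy(mask + done, digest, n);
        done += n;
        counter++;
    }
}
```

The device performs the same expansion over the seedBuffer from CpaCyKeyGenMgfOpData, truncating to maskLenInBytes.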
It differs from the "base" + * version of the function (@ref cpaCyKeyGenMgf) in that it allows + * the hash function used by the Mask Generation Function to be + * specified. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pKeyGenCb Pointer to callback function to be + * invoked when the operation is complete. + * If this is set to a NULL value the + * function will operate synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific call. + * Will be returned unchanged in the + * callback. + * @param[in] pKeyGenMgfOpDataExt Structure containing all the data needed + * to perform the extended MGF key generation + * operation. The client code allocates the + * memory for this structure. This + * component takes ownership of the memory + * until it is returned in the callback. + * @param[out] pGeneratedMaskBuffer Caller MUST allocate a sufficient buffer + * to hold the generated mask. The data + * pointer SHOULD be aligned on an 8-byte + * boundary. The length field passed in + * represents the size of the buffer in + * bytes. The value that is returned is the + * size of the generated mask in bytes. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. 
+ * @retval CPA_STATUS_RESTARTING API implementation is restarting.
+ * Resubmit the request.
+ *
+ * @pre
+ * The component has been initialized via cpaCyStartInstance function.
+ * @post
+ * None
+ * @note
+ * This function is only used to generate mask keys from seed
+ * material.
+ * @see
+ * CpaCyKeyGenMgfOpData,
+ * CpaCyGenFlatBufCbFunc
+ *
+ *****************************************************************************/
+CpaStatus
+cpaCyKeyGenMgfExt(const CpaInstanceHandle instanceHandle,
+ const CpaCyGenFlatBufCbFunc pKeyGenCb,
+ void *pCallbackTag,
+ const CpaCyKeyGenMgfOpDataExt *pKeyGenMgfOpDataExt,
+ CpaFlatBuffer *pGeneratedMaskBuffer);
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCyKeyGen
+ * Queries the Key and Mask generation statistics specific to
+ * an instance.
+ *
+ * @deprecated
+ * As of v1.3 of the Crypto API, this function has been deprecated,
+ * replaced by @ref cpaCyKeyGenQueryStats64().
+ *
+ * @description
+ * This function will query a specific instance for key and mask
+ * generation statistics. The user MUST allocate the CpaCyKeyGenStats
+ * structure and pass the reference to that into this function call. This
+ * function will write the statistic results into the passed in
+ * CpaCyKeyGenStats structure.
+ *
+ * Note: statistics returned by this function do not interrupt current data
+ * processing and as such can be slightly out of sync with operations that
+ * are in progress during the statistics retrieval process.
+ *
+ * @context
+ * This is a synchronous function and it can sleep. It MUST NOT be
+ * executed in a context that DOES NOT permit sleeping.
+ * @assumptions
+ * None
+ * @sideEffects
+ * None
+ * @blocking
+ * This function is synchronous and blocking.
+ * @reentrant
+ * No
+ * @threadSafe
+ * Yes
+ *
+ * @param[in] instanceHandle Instance handle.
+ * @param[out] pKeyGenStats Pointer to memory into which the statistics
+ * will be written.
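The MGF functions above expand a short seed into an arbitrary-length mask by hashing the seed concatenated with an incrementing counter. The sketch below illustrates only that counter-based construction: it uses a toy stand-in hash (FNV-1a folded to 4 bytes) rather than the SHA family a real MGF1 would use, and none of the helper names belong to the QuickAssist API.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy stand-in "hash": FNV-1a folded to 4 bytes. A real MGF would use a
 * SHA-family digest; this only illustrates the counter-based expansion. */
#define TOY_HASH_LEN 4
static void toy_hash(const uint8_t *msg, size_t len, uint8_t out[TOY_HASH_LEN])
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) {
        h ^= msg[i];
        h *= 16777619u;
    }
    out[0] = (uint8_t)(h >> 24);
    out[1] = (uint8_t)(h >> 16);
    out[2] = (uint8_t)(h >> 8);
    out[3] = (uint8_t)h;
}

/* MGF1-style expansion: mask = H(seed||0) || H(seed||1) || ..., truncated
 * to maskLen bytes; the counter is encoded big-endian in 4 bytes.
 * Assumes seedLen <= 60 so seed plus counter fits the scratch buffer. */
static void mgf1_toy(const uint8_t *seed, size_t seedLen,
                     uint8_t *mask, size_t maskLen)
{
    uint8_t block[TOY_HASH_LEN];
    uint8_t buf[64];
    size_t off = 0;

    for (uint32_t ctr = 0; off < maskLen; ctr++) {
        memcpy(buf, seed, seedLen);
        buf[seedLen + 0] = (uint8_t)(ctr >> 24);
        buf[seedLen + 1] = (uint8_t)(ctr >> 16);
        buf[seedLen + 2] = (uint8_t)(ctr >> 8);
        buf[seedLen + 3] = (uint8_t)ctr;
        toy_hash(buf, seedLen + 4, block);
        /* Copy a whole hash block, or the final partial block. */
        size_t n = maskLen - off < TOY_HASH_LEN ? maskLen - off : TOY_HASH_LEN;
        memcpy(mask + off, block, n);
        off += n;
    }
}
```

As with `pGeneratedMaskBuffer`, the caller owns the output buffer and its length determines how much mask is produced.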
+ * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. + * Resubmit the request. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * + * @see + * CpaCyKeyGenStats + * + *****************************************************************************/ +CpaStatus CPA_DEPRECATED +cpaCyKeyGenQueryStats(const CpaInstanceHandle instanceHandle, + struct _CpaCyKeyGenStats *pKeyGenStats); + +/** + ***************************************************************************** + * @ingroup cpaCyKeyGen + * Queries the Key and Mask generation statistics (64-bit version) + * specific to an instance. + * + * @description + * This function will query a specific instance for key and mask + * generation statistics. The user MUST allocate the CpaCyKeyGenStats64 + * structure and pass the reference to that into this function call. This + * function will write the statistic results into the passed in + * CpaCyKeyGenStats64 structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pKeyGenStats Pointer to memory into which the statistics + * will be written. 
+ * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. + * Resubmit the request. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * + * @see + * CpaCyKeyGenStats64 + *****************************************************************************/ +CpaStatus +cpaCyKeyGenQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyKeyGenStats64 *pKeyGenStats); + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /* CPA_CY_KEY_H */ Index: sys/dev/qat/qat_api/include/lac/cpa_cy_ln.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_ln.h @@ -0,0 +1,519 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_ln.h + * + * @defgroup cpaCyLn Cryptographic Large Number API + * + * @ingroup cpaCy + * + * @description + * These functions specify the Cryptographic API for Large Number + * Operations. + * + * @note + * Large numbers are represented on the QuickAssist API using octet + * strings, stored in structures of type @ref CpaFlatBuffer. These + * octet strings are encoded as described by PKCS#1 v2.1, section 4, + * which is consistent with ASN.1 syntax. The following text + * summarizes this. Any exceptions to this encoding are specified + * on the specific data structure or function to which the exception + * applies. + * + * An n-bit number, N, has a value in the range 2^(n-1) through 2^(n)-1. 
+ * In other words, its most significant bit, bit n-1 (where bit-counting + * starts from zero) MUST be set to 1. We can also state that the + * bit-length n of a number N is defined by n = floor(log2(N))+1. + * + * The buffer, b, in which an n-bit number N is stored, must be "large + * enough". In other words, b.dataLenInBytes must be at least + * minLenInBytes = ceiling(n/8). + * + * The number is stored in a "big endian" format. This means that the + * least significant byte (LSB) is b[b.dataLenInBytes-1], while the + * most significant byte (MSB) is b[b.dataLenInBytes-minLenInBytes]. + * In the case where the buffer is "exactly" the right size, then the + * MSB is b[0]. Otherwise, all bytes from b[0] up to the MSB MUST be + * set to 0x00. + * + * The largest bit-length we support today is 4096 bits. In other + * words, we can deal with numbers up to a value of (2^4096)-1. + * + *****************************************************************************/ + +#ifndef CPA_CY_LN_H +#define CPA_CY_LN_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_cy_common.h" + +/** + ***************************************************************************** + * @ingroup cpaCyLn + * Modular Exponentiation Function Operation Data. + * @description + * This structure lists the different items that are required in the + * cpaCyLnModExp function. The client MUST allocate the memory for + * this structure. When the structure is passed into the function, + * ownership of the memory passes to the function. Ownership of the memory + * returns to the client when this structure is returned in the callback. + * The operation size in bits is equal to the size of whichever of the + * following is largest: the modulus, the base or the exponent. 
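The octet-string convention above (MSB-first storage, minimum length of ceiling(n/8) bytes, leading bytes zero-filled when the buffer is oversized) can be checked with small helpers. This is an illustrative 64-bit sketch; the helper names are not part of the API.

```c
#include <stddef.h>
#include <stdint.h>

/* Bit length n of N: n = floor(log2(N)) + 1, with bit_length(0) == 0. */
static unsigned bit_length(uint64_t n)
{
    unsigned bits = 0;
    while (n != 0) {
        bits++;
        n >>= 1;
    }
    return bits;
}

/* Minimum buffer size for an n-bit number: ceiling(n/8) bytes. */
static size_t min_len_in_bytes(unsigned bits)
{
    return (bits + 7) / 8;
}

/* Store v MSB-first into a buffer of len bytes, zero-padding the leading
 * bytes, matching the CpaFlatBuffer octet-string convention.
 * Returns 0 on success, -1 if the buffer is too small. */
static int store_big_endian(uint64_t v, uint8_t *buf, size_t len)
{
    if (len < min_len_in_bytes(bit_length(v)))
        return -1;
    for (size_t i = 0; i < len; i++)
        buf[len - 1 - i] = (i < 8) ? (uint8_t)(v >> (8 * i)) : 0;
    return 0;
}
```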
+ * + * @note + * If the client modifies or frees the memory referenced in this structure + * after it has been submitted to the cpaCyLnModExp function, and + * before it has been returned in the callback, undefined behavior will + * result. + + * The values of the base, the exponent and the modulus MUST all be less + * than 2^4096, and the modulus must not be equal to zero. + *****************************************************************************/ +typedef struct _CpaCyLnModExpOpData { + CpaFlatBuffer modulus; + /**< Flat buffer containing a pointer to the modulus. + * This number may be up to 4096 bits in length, and MUST be greater + * than zero. + */ + CpaFlatBuffer base; + /**< Flat buffer containing a pointer to the base. + * This number may be up to 4096 bits in length. + */ + CpaFlatBuffer exponent; + /**< Flat buffer containing a pointer to the exponent. + * This number may be up to 4096 bits in length. + */ +} CpaCyLnModExpOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyLn + * Modular Inversion Function Operation Data. + * @description + * This structure lists the different items that are required in the + * function @ref cpaCyLnModInv. The client MUST allocate the memory for + * this structure. When the structure is passed into the function, + * ownership of the memory passes to the function. Ownership of the + * memory returns to the client when this structure is returned in the + * callback. + * @note + * If the client modifies or frees the memory referenced in this structure + * after it has been submitted to the cpaCyLnModInv function, and + * before it has been returned in the callback, undefined behavior will + * result. + * + * Note that the values of A and B MUST NOT both be even numbers, and + * both MUST be less than 2^4096. 
+ *****************************************************************************/ +typedef struct _CpaCyLnModInvOpData { + CpaFlatBuffer A; + /**< Flat buffer containing a pointer to the value that will be + * inverted. + * This number may be up to 4096 bits in length, it MUST NOT be zero, + * and it MUST be co-prime with B. + */ + CpaFlatBuffer B; + /**< Flat buffer containing a pointer to the value that will be used as + * the modulus. + * This number may be up to 4096 bits in length, it MUST NOT be zero, + * and it MUST be co-prime with A. + */ +} CpaCyLnModInvOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyLn + * Look Aside Cryptographic large number Statistics. + * @deprecated + * As of v1.3 of the Crypto API, this structure has been deprecated, + * replaced by @ref CpaCyLnStats64. + * @description + * This structure contains statistics on the Look Aside Cryptographic + * large number operations. Statistics are set to zero when the component + * is initialized, and are collected per instance. + * + ****************************************************************************/ +typedef struct _CpaCyLnStats { + Cpa32U numLnModExpRequests; + /**< Total number of successful large number modular exponentiation + * requests.*/ + Cpa32U numLnModExpRequestErrors; + /**< Total number of large number modular exponentiation requests that + * had an error and could not be processed. */ + Cpa32U numLnModExpCompleted; + /**< Total number of large number modular exponentiation operations + * that completed successfully. */ + Cpa32U numLnModExpCompletedErrors; + /**< Total number of large number modular exponentiation operations + * that could not be completed successfully due to errors. 
*/ + Cpa32U numLnModInvRequests; + /**< Total number of successful large number modular inversion + * requests.*/ + Cpa32U numLnModInvRequestErrors; + /**< Total number of large number modular inversion requests that + * had an error and could not be processed. */ + Cpa32U numLnModInvCompleted; + /**< Total number of large number modular inversion operations + * that completed successfully. */ + Cpa32U numLnModInvCompletedErrors; + /**< Total number of large number modular inversion operations + * that could not be completed successfully due to errors. */ +} CpaCyLnStats CPA_DEPRECATED; + +/** + ***************************************************************************** + * @ingroup cpaCyLn + * Look Aside Cryptographic large number Statistics. + * @description + * This structure contains statistics on the Look Aside Cryptographic + * large number operations. Statistics are set to zero when the component + * is initialized, and are collected per instance. + * + ****************************************************************************/ +typedef struct _CpaCyLnStats64 { + Cpa64U numLnModExpRequests; + /**< Total number of successful large number modular exponentiation + * requests.*/ + Cpa64U numLnModExpRequestErrors; + /**< Total number of large number modular exponentiation requests that + * had an error and could not be processed. */ + Cpa64U numLnModExpCompleted; + /**< Total number of large number modular exponentiation operations + * that completed successfully. */ + Cpa64U numLnModExpCompletedErrors; + /**< Total number of large number modular exponentiation operations + * that could not be completed successfully due to errors. */ + Cpa64U numLnModInvRequests; + /**< Total number of successful large number modular inversion + * requests.*/ + Cpa64U numLnModInvRequestErrors; + /**< Total number of large number modular inversion requests that + * had an error and could not be processed. 
*/ + Cpa64U numLnModInvCompleted; + /**< Total number of large number modular inversion operations + * that completed successfully. */ + Cpa64U numLnModInvCompletedErrors; + /**< Total number of large number modular inversion operations + * that could not be completed successfully due to errors. */ +} CpaCyLnStats64; + +/** + ***************************************************************************** + * @ingroup cpaCyLn + * Perform modular exponentiation operation. + * + * @description + * This function performs modular exponentiation. It computes the + * following result based on the inputs: + * + * result = (base ^ exponent) mod modulus + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pLnModExpCb Pointer to callback function to be + * invoked when the operation is complete. + * @param[in] pCallbackTag Opaque User Data for this specific call. + * Will be returned unchanged in the callback. + * @param[in] pLnModExpOpData Structure containing all the data needed + * to perform the LN modular exponentiation + * operation. The client code allocates + * the memory for this structure. This + * component takes ownership of the memory + * until it is returned in the callback. + * @param[out] pResult Pointer to a flat buffer containing a + * pointer to memory allocated by the client + * into which the result will be written. + * The size of the memory required MUST be + * larger than or equal to the size + * required to store the modulus. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. 
+ * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized. + * @post + * None + * @note + * When pLnModExpCb is non null, an asynchronous callback of type + * CpaCyLnModExpCbFunc is generated in response to this function call. + * Any errors generated during processing are reported in the structure + * returned in the callback. + * + * @see + * CpaCyLnModExpOpData, CpaCyGenFlatBufCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyLnModExp(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pLnModExpCb, + void *pCallbackTag, + const CpaCyLnModExpOpData *pLnModExpOpData, + CpaFlatBuffer *pResult); + +/** + ***************************************************************************** + * @ingroup cpaCyLn + * Perform modular inversion operation. + * + * @description + * This function performs modular inversion. It computes the following + * result based on the inputs: + * + * result = (1/A) mod B. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pLnModInvCb Pointer to callback function to be + * invoked when the operation is complete. 
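cpaCyLnModExp above computes result = (base ^ exponent) mod modulus on operands up to 4096 bits. The same arithmetic can be sketched at 64-bit width with square-and-multiply; this is an illustration of the math, not a substitute for the accelerated path, and it relies on the GCC/Clang `__uint128_t` extension.

```c
#include <stdint.h>

/* Square-and-multiply: (base ^ exp) mod m. 128-bit intermediates keep
 * the 64-bit multiplications from overflowing (GCC/Clang extension). */
static uint64_t modexp64(uint64_t base, uint64_t exp, uint64_t m)
{
    if (m == 1)
        return 0;
    uint64_t result = 1;
    base %= m;
    while (exp != 0) {
        if (exp & 1)
            result = (uint64_t)(((__uint128_t)result * base) % m);
        base = (uint64_t)(((__uint128_t)base * base) % m);
        exp >>= 1;
    }
    return result;
}
```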
+ * @param[in] pCallbackTag Opaque User Data for this specific call. + * Will be returned unchanged in the + * callback. + * @param[in] pLnModInvOpData Structure containing all the data + * needed to perform the LN modular + * inversion operation. The client code + * allocates the memory for this structure. + * This component takes ownership of the + * memory until it is returned in the + * callback. + * @param[out] pResult Pointer to a flat buffer containing a + * pointer to memory allocated by the client + * into which the result will be written. + * The size of the memory required MUST be + * larger than or equal to the size + * required to store the modulus. + * On invocation the callback function + * will contain this parameter in the + * pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized. + * @post + * None + * @note + * When pLnModInvCb is non null, an asynchronous callback of type + * CpaCyLnModInvCbFunc is generated in response to this function call. + * Any errors generated during processing are reported in the structure + * returned in the callback. 
+ * + * @see + * CpaCyLnModInvOpData, + * CpaCyGenFlatBufCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyLnModInv(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pLnModInvCb, + void *pCallbackTag, + const CpaCyLnModInvOpData *pLnModInvOpData, + CpaFlatBuffer *pResult); + +/** + ***************************************************************************** + * @ingroup cpaCyLn + * Query statistics for large number operations + * + * @deprecated + * As of v1.3 of the Crypto API, this function has been deprecated, + * replaced by @ref cpaCyLnStatsQuery64(). + * + * @description + * This function will query a specific instance handle for large number + * statistics. The user MUST allocate the CpaCyLnStats structure and pass + * the reference to that structure into this function call. This function + * writes the statistic results into the passed in CpaCyLnStats structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pLnStats Pointer to memory into which the + * statistics will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. 
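cpaCyLnModInv above computes result = (1/A) mod B, which exists only when A and B are co-prime. A 64-bit extended-Euclid sketch of the same operation (the helper name is illustrative, not part of the API):

```c
#include <stdint.h>

/* Extended Euclidean algorithm: returns x with (a * x) % b == 1, i.e. the
 * (1/A) mod B that cpaCyLnModInv computes, or 0 when gcd(a, b) != 1
 * (no inverse exists; the API requires co-prime inputs). */
static uint64_t modinv64(uint64_t a, uint64_t b)
{
    int64_t t = 0, newt = 1;
    uint64_t r = b, newr = a % b;

    while (newr != 0) {
        uint64_t q = r / newr;
        int64_t tmp_t = t - (int64_t)q * newt;
        t = newt;
        newt = tmp_t;
        uint64_t tmp_r = r - q * newr;
        r = newr;
        newr = tmp_r;
    }
    if (r != 1)
        return 0;                 /* not invertible */
    return t < 0 ? (uint64_t)(t + (int64_t)b) : (uint64_t)t;
}
```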
+ * + * @pre + * Acceleration Services unit has been initialized. + * + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * + * @see + * CpaCyLnStats + * + *****************************************************************************/ +CpaStatus CPA_DEPRECATED +cpaCyLnStatsQuery(const CpaInstanceHandle instanceHandle, + struct _CpaCyLnStats *pLnStats); + +/** + ***************************************************************************** + * @ingroup cpaCyLn + * Query statistics (64-bit version) for large number operations + * + * @description + * This function will query a specific instance handle for the 64-bit + * version of the large number statistics. + * The user MUST allocate the CpaCyLnStats64 structure and pass + * the reference to that structure into this function call. This function + * writes the statistic results into the passed in CpaCyLnStats64 + * structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pLnStats Pointer to memory into which the + * statistics will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. 
+ * + * @pre + * Acceleration Services unit has been initialized. + * + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * + * @see + * CpaCyLnStats + *****************************************************************************/ +CpaStatus +cpaCyLnStatsQuery64(const CpaInstanceHandle instanceHandle, + CpaCyLnStats64 *pLnStats); + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /* CPA_CY_LN_H */ Index: sys/dev/qat/qat_api/include/lac/cpa_cy_prime.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_prime.h @@ -0,0 +1,450 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_prime.h + * + * @defgroup cpaCyPrime Prime Number Test API + * + * @ingroup cpaCy + * + * @description + * These functions specify the API for the prime number test operations. + * + * For prime number generation, this API SHOULD be used in conjunction + * with the Deterministic Random Bit Generation API (@ref cpaCyDrbg). + * + * @note + * Large numbers are represented on the QuickAssist API as described + * in the Large Number API (@ref cpaCyLn). + * + * In addition, the bit length of large numbers passed to the API + * MUST NOT exceed 576 bits for Elliptic Curve operations. + *****************************************************************************/ + +#ifndef CPA_CY_PRIME_H +#define CPA_CY_PRIME_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_cy_common.h" + +/** + ***************************************************************************** + * @ingroup cpaCyPrime + * Prime Test Operation Data. + * @description + * This structure contains the operation data for the cpaCyPrimeTest + * function. 
The client MUST allocate the memory for this structure and the + * items pointed to by this structure. When the structure is passed into + * the function, ownership of the memory passes to the function. Ownership + * of the memory returns to the client when this structure is returned in + * the callback function. + * + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. primeCandidate.pData[0] = MSB. + * + * All numbers MUST be stored in big-endian order. + * + * @note + * If the client modifies or frees the memory referenced in this + * structure after it has been submitted to the cpaCyPrimeTest + * function, and before it has been returned in the callback, undefined + * behavior will result. + * + * @see + * cpaCyPrimeTest() + * + *****************************************************************************/ +typedef struct _CpaCyPrimeTestOpData { + CpaFlatBuffer primeCandidate; + /**< The prime number candidate to test */ + CpaBoolean performGcdTest; + /**< A value of CPA_TRUE means perform a GCD Primality Test */ + CpaBoolean performFermatTest; + /**< A value of CPA_TRUE means perform a Fermat Primality Test */ + Cpa32U numMillerRabinRounds; + /**< Number of Miller Rabin Primality Test rounds. Set to 0 to perform + * zero Miller Rabin tests. The maximum number of rounds supported is 50. + */ + CpaFlatBuffer millerRabinRandomInput; + /**< Flat buffer containing a pointer to an array of n random numbers + * for Miller Rabin Primality Tests. The size of the buffer MUST be + * + * n * (MAX(64,x)) + * + * where: + * + * - n is the requested number of rounds. + * - x is the minimum number of bytes required to represent the prime + * candidate, i.e. x = ceiling((ceiling(log2(p)))/8). + * + * Each random number MUST be greater than 1 and less than the prime + * candidate - 1, with leading zeroes as necessary. 
+ */
+ CpaBoolean performLucasTest;
+ /**< A value of CPA_TRUE means perform a Lucas Primality Test */
+} CpaCyPrimeTestOpData;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCyPrime
+ * Prime Number Test Statistics.
+ * @deprecated
+ * As of v1.3 of the Crypto API, this structure has been deprecated,
+ * replaced by @ref CpaCyPrimeStats64.
+ * @description
+ * This structure contains statistics on the prime number test operations.
+ * Statistics are set to zero when the component is initialized, and are
+ * collected per instance.
+ *
+ ****************************************************************************/
+typedef struct _CpaCyPrimeStats {
+ Cpa32U numPrimeTestRequests;
+ /**< Total number of successful prime number test requests.*/
+ Cpa32U numPrimeTestRequestErrors;
+ /**< Total number of prime number test requests that had an
+ * error and could not be processed. */
+ Cpa32U numPrimeTestCompleted;
+ /**< Total number of prime number test operations that completed
+ * successfully. */
+ Cpa32U numPrimeTestCompletedErrors;
+ /**< Total number of prime number test operations that could not be
+ * completed successfully due to errors. */
+ Cpa32U numPrimeTestFailures;
+ /**< Total number of prime number test operations that executed
+ * successfully but the outcome of the test was that the number was not
+ * prime. */
+} CpaCyPrimeStats CPA_DEPRECATED;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCyPrime
+ * Prime Number Test Statistics (64-bit version).
+ * @description
+ * This structure contains a 64-bit version of the statistics on the
+ * prime number test operations.
+ * Statistics are set to zero when the component is initialized, and are
+ * collected per instance.
+ ****************************************************************************/ +typedef struct _CpaCyPrimeStats64 { + Cpa64U numPrimeTestRequests; + /**< Total number of successful prime number test requests.*/ + Cpa64U numPrimeTestRequestErrors; + /**< Total number of prime number test requests that had an + * error and could not be processed. */ + Cpa64U numPrimeTestCompleted; + /**< Total number of prime number test operations that completed + * successfully. */ + Cpa64U numPrimeTestCompletedErrors; + /**< Total number of prime number test operations that could not be + * completed successfully due to errors. */ + Cpa64U numPrimeTestFailures; + /**< Total number of prime number test operations that executed + * successfully but the outcome of the test was that the number was not + * prime. */ +} CpaCyPrimeStats64; + +/** + ***************************************************************************** + * @ingroup cpaCyPrime + * Definition of callback function invoked for cpaCyPrimeTest + * requests. + * + * @description + * This is the prototype for the cpaCyPrimeTest callback function. + * + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. + * @param[in] pOpData Opaque pointer to the Operation data pointer + * supplied in request. + * @param[in] testPassed A value of CPA_TRUE means the prime candidate + * is probably prime. + * + * @retval + * None + * @pre + * Component has been initialized. 
+ * @post + * None + * @note + * None + * @see + * cpaCyPrimeTest() + * + *****************************************************************************/ +typedef void (*CpaCyPrimeTestCbFunc)(void *pCallbackTag, + CpaStatus status, + void *pOpData, + CpaBoolean testPassed); + +/** + ***************************************************************************** + * @ingroup cpaCyPrime + * Prime Number Test Function. + * + * @description + * This function will test probabilistically if a number is prime. Refer + * to ANSI X9.80 2005 for details. The primality result will be returned + * in the asynchronous callback. + * + * The following combination of GCD, Fermat, Miller-Rabin, and Lucas + * testing is supported: + * (up to 1x GCD) + (up to 1x Fermat) + (up to 50x Miller-Rabin rounds) + + * (up to 1x Lucas) + * For example: + * (1x GCD) + (25x Miller-Rabin) + (1x Lucas); + * (1x GCD) + (1x Fermat); + * (50x Miller-Rabin); + * + * Tests are always performed in order of increasing complexity, for + * example GCD first, then Fermat, then Miller-Rabin, and finally Lucas. + * + * For all of the primality tests, the following prime number "sizes" + * (length in bits) are supported: all sizes up to and including 512 + * bits, as well as sizes 768, 1024, 1536, 2048, 3072 and 4096. + * + * Candidate prime numbers MUST match these sizes accordingly, with + * leading zeroes present where necessary. + * + * When this prime number test is used in conjunction with combined + * Miller-Rabin and Lucas tests, it may be used as a means of performing + * a self test operation on the random data generator. + * + * A response status of ok (pass == CPA_TRUE) means all requested + * primality tests passed, and the prime candidate is probably prime + * (the exact probability depends on the primality tests requested). + * A response status of not ok (pass == CPA_FALSE) means one of the + * requested primality tests failed (the prime candidate has been found + * to be composite).
+ * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCb Callback function pointer. If this is set to + * a NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag User-supplied value to help identify request. + * @param[in] pOpData Structure containing all the data needed to + * perform the operation. The client code + * allocates the memory for this structure. This + * component takes ownership of the memory until + * it is returned in the callback. + * @param[out] pTestPassed A value of CPA_TRUE means the prime candidate + * is probably prime. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pCb is non-NULL an asynchronous callback of type + * CpaCyPrimeTestCbFunc is generated in response to this function call. + * For optimal performance, data pointers SHOULD be 8-byte aligned. 
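+ * + * As an illustrative sketch (not part of the original header), a + * synchronous probable-prime check, assuming cyInstHandle was obtained + * via cpaCyGetInstances() and opData has been populated as described + * for CpaCyPrimeTestOpData: + * @code + * CpaBoolean probablyPrime = CPA_FALSE; + * CpaStatus status = cpaCyPrimeTest(cyInstHandle, + * NULL, // NULL callback => synchronous + * NULL, // no callback tag required + * &opData, + * &probablyPrime); + * // status == CPA_STATUS_SUCCESS && probablyPrime == CPA_TRUE means the + * // candidate passed all requested primality tests. + * @endcode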
+ * + * @see + * CpaCyPrimeTestOpData, CpaCyPrimeTestCbFunc + * + *****************************************************************************/ +CpaStatus +cpaCyPrimeTest(const CpaInstanceHandle instanceHandle, + const CpaCyPrimeTestCbFunc pCb, + void *pCallbackTag, + const CpaCyPrimeTestOpData *pOpData, + CpaBoolean *pTestPassed); + +/****************************************************************************** + * @ingroup cpaCyPrime + * Query prime number statistics specific to an instance. + * + * @deprecated + * As of v1.3 of the Crypto API, this function has been deprecated, + * replaced by @ref cpaCyPrimeQueryStats64(). + * + * @description + * This function will query a specific instance for prime number + * statistics. The user MUST allocate the CpaCyPrimeStats structure + * and pass the reference to that into this function call. This function + * will write the statistic results into the passed in + * CpaCyPrimeStats structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pPrimeStats Pointer to memory into which the statistics + * will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. 
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * + *****************************************************************************/ +CpaStatus CPA_DEPRECATED +cpaCyPrimeQueryStats(const CpaInstanceHandle instanceHandle, + struct _CpaCyPrimeStats *pPrimeStats); + + +/****************************************************************************** + * @ingroup cpaCyPrime + * Query prime number statistics specific to an instance. + * + * @description + * This function will query a specific instance for the 64-bit + * version of the prime number statistics. + * The user MUST allocate the CpaCyPrimeStats64 structure + * and pass the reference to that into this function call. This function + * will write the statistic results into the passed in + * CpaCyPrimeStats64 structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pPrimeStats Pointer to memory into which the statistics + * will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. 
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + *****************************************************************************/ +CpaStatus +cpaCyPrimeQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyPrimeStats64 *pPrimeStats); + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /* CPA_CY_PRIME_H */ Index: sys/dev/qat/qat_api/include/lac/cpa_cy_rsa.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_rsa.h @@ -0,0 +1,907 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_rsa.h + * + * @defgroup cpaCyRsa RSA API + * + * @ingroup cpaCy + * + * @description + * These functions specify the API for Public Key Encryption + * (Cryptography) RSA operations. The PKCS #1 V2.1 specification is + * supported, however the support is limited to "two-prime" mode. RSA + * multi-prime is not supported. + * + * @note + * These functions implement RSA cryptographic primitives. RSA padding + * schemes are not implemented. For padding schemes that require the mgf + * function see @ref cpaCyKeyGen. + * + * @note + * Large numbers are represented on the QuickAssist API as described + * in the Large Number API (@ref cpaCyLn). + *****************************************************************************/ + +#ifndef CPA_CY_RSA_H +#define CPA_CY_RSA_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_cy_common.h" + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * RSA Version. 
+ * @description + * This enumeration lists the version identifier for the PKCS #1 V2.1 + * standard. + * @note + * Multi-prime (more than two primes) is not supported. + * + *****************************************************************************/ +typedef enum _CpaCyRsaVersion +{ + CPA_CY_RSA_VERSION_TWO_PRIME = 1 + /**< The version supported is "two-prime". */ +} CpaCyRsaVersion; + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * RSA Public Key Structure. + * @description + * This structure contains the two components which comprise the RSA + * public key as defined in the PKCS #1 V2.1 standard. + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. modulusN.pData[0] = MSB. + * + *****************************************************************************/ +typedef struct _CpaCyRsaPublicKey { + CpaFlatBuffer modulusN; + /**< The modulus (n). + * For key generation operations, the client MUST allocate the memory + * for this parameter; its value is generated. + * For encrypt operations this parameter is an input. */ + CpaFlatBuffer publicExponentE; + /**< The public exponent (e). + * For key generation operations, this field is unused. It is NOT + * generated by the interface; it is the responsibility of the client + * to set this to the same value as the corresponding parameter on + * the CpaCyRsaKeyGenOpData structure before using the key for + * encryption. + * For encrypt operations this parameter is an input. */ +} CpaCyRsaPublicKey; + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * RSA Private Key Structure For Representation 1. + * @description + * This structure contains the first representation that can be used for + * describing the RSA private key, represented by the tuple of the + * modulus (n) and the private exponent (d). 
+ * All values in this structure are required to be in Most Significant Byte + * first order, e.g. modulusN.pData[0] = MSB. + * + *****************************************************************************/ +typedef struct _CpaCyRsaPrivateKeyRep1 { + CpaFlatBuffer modulusN; + /**< The modulus (n). For key generation operations the memory MUST + * be allocated by the client and the value is generated. For other + * operations this is an input. Permitted lengths are: + * + * - 512 bits (64 bytes), + * - 1024 bits (128 bytes), + * - 1536 bits (192 bytes), + * - 2048 bits (256 bytes), + * - 3072 bits (384 bytes), or + * - 4096 bits (512 bytes). + */ + CpaFlatBuffer privateExponentD; + /**< The private exponent (d). For key generation operations the + * memory MUST be allocated by the client and the value is generated. For + * other operations this is an input. + * NOTE: It is important that the value D is big enough. It is STRONGLY + * recommended that this value is at least half the length of the modulus + * N to protect against the Wiener attack. */ +} CpaCyRsaPrivateKeyRep1; + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * RSA Private Key Structure For Representation 2. + * @description + * This structure contains the second representation that can be used for + * describing the RSA private key. The quintuple of p, q, dP, dQ, and qInv + * (explained below and in the spec) are required for the second + * representation. The optional sequence of triplets are not included. + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. prime1P.pData[0] = MSB. + * + *****************************************************************************/ +typedef struct _CpaCyRsaPrivateKeyRep2 { + CpaFlatBuffer prime1P; + /**< The first large prime (p). + * For key generation operations, this field is unused. */ + CpaFlatBuffer prime2Q; + /**< The second large prime (q). 
+ * For key generation operations, this field is unused. */ + CpaFlatBuffer exponent1Dp; + /**< The first factor CRT exponent (dP). d mod (p-1). */ + CpaFlatBuffer exponent2Dq; + /**< The second factor CRT exponent (dQ). d mod (q-1). */ + CpaFlatBuffer coefficientQInv; + /**< The (first) Chinese Remainder Theorem (CRT) coefficient (qInv). + * (inverse of q) mod p. */ +} CpaCyRsaPrivateKeyRep2; + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * RSA private key representation type. + * @description + * This enumeration lists which PKCS V2.1 representation of the private + * key is being used. + * + *****************************************************************************/ +typedef enum _CpaCyRsaPrivateKeyRepType +{ + CPA_CY_RSA_PRIVATE_KEY_REP_TYPE_1= 1, + /**< The first representation of the RSA private key. */ + CPA_CY_RSA_PRIVATE_KEY_REP_TYPE_2 + /**< The second representation of the RSA private key. */ +} CpaCyRsaPrivateKeyRepType; + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * RSA Private Key Structure. + * @description + * This structure contains the two representations that can be used for + * describing the RSA private key. The privateKeyRepType will be used to + * identify which representation is to be used. Typically, using the + * second representation results in faster decryption operations. + * + *****************************************************************************/ +typedef struct _CpaCyRsaPrivateKey { + CpaCyRsaVersion version; + /**< Indicates the version of the PKCS #1 specification that is + * supported. + * Note that this applies to both representations. */ + CpaCyRsaPrivateKeyRepType privateKeyRepType; + /**< This value is used to identify which of the private key + * representation types in this structure is relevant. 
+ * When performing key generation operations for Type 2 representations, + * memory must also be allocated for the type 1 representations, and values + * for both will be returned. */ + CpaCyRsaPrivateKeyRep1 privateKeyRep1; + /**< This is the first representation of the RSA private key as + * defined in the PKCS #1 V2.1 specification. For key generation operations + * the memory for this structure is allocated by the client and the + * specific values are generated. For other operations this is an input + * parameter. */ + CpaCyRsaPrivateKeyRep2 privateKeyRep2; + /**< This is the second representation of the RSA private key as + * defined in the PKCS #1 V2.1 specification. For key generation operations + * the memory for this structure is allocated by the client and the + * specific values are generated. For other operations this is an input + * parameter. */ +} CpaCyRsaPrivateKey; + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * RSA Key Generation Data. + * @description + * This structure lists the different items that are required in the + * cpaCyRsaGenKey function. The client MUST allocate the memory for this + * structure. When the structure is passed into the function, ownership of + * the memory passes to the function. Ownership of the memory returns to + * the client when this structure is returned in the + * CpaCyRsaKeyGenCbFunc callback function. + * + * @note + * If the client modifies or frees the memory referenced in this structure + * after it has been submitted to the cpaCyRsaGenKey function, and + * before it has been returned in the callback, undefined behavior will + * result. + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. prime1P.pData[0] = MSB. 
+ * + * The following limitations on the permutations of the supported bit + * lengths of p, q and n (written as {p, q, n}) apply: + * + * - {256, 256, 512} or + * - {512, 512, 1024} or + * - {768, 768, 1536} or + * - {1024, 1024, 2048} or + * - {1536, 1536, 3072} or + * - {2048, 2048, 4096}. + * + *****************************************************************************/ +typedef struct _CpaCyRsaKeyGenOpData { + CpaFlatBuffer prime1P; + /**< A large random prime number (p). This MUST be created by the + * client. Permitted bit lengths are: 256, 512, 768, 1024, 1536 or 2048. + * Limitations apply - refer to the description above for details. */ + CpaFlatBuffer prime2Q; + /**< A large random prime number (q). This MUST be created by the + * client. Permitted bit lengths are: 256, 512, 768, 1024, 1536 or 2048. + * Limitations apply - refer to the description above for details. If the + * private key representation type is 2, then this pointer will be assigned + * to the relevant structure member of the representation 2 private key. */ + Cpa32U modulusLenInBytes; + /**< The bit length of the modulus (n). This is the modulus length for + * both the private and public keys. The length of the modulus N parameter + * for the private key representation 1 structure and the public key + * structures will be assigned to this value. References to the strength of + * RSA actually refer to this bit length. Recommended minimum is 1024 bits. + * Permitted lengths are: + * - 512 bits (64 bytes), + * - 1024 bits (128 bytes), + * - 1536 bits (192 bytes), + * - 2048 bits (256 bytes), + * - 3072 bits (384 bytes), or + * - 4096 bits (512 bytes). + * Limitations apply - refer to description above for details. */ + CpaCyRsaVersion version; + /**< Indicates the version of the PKCS #1 specification that is + * supported. + * Note that this applies to both representations. 
*/ + CpaCyRsaPrivateKeyRepType privateKeyRepType; + /**< This value is used to identify which of the private key + * representation types is required to be generated. */ + CpaFlatBuffer publicExponentE; + /**< The public exponent (e). */ +} CpaCyRsaKeyGenOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * RSA Encryption Primitive Operation Data + * @description + * This structure lists the different items that are required in the + * cpaCyRsaEncrypt function. As the RSA encryption primitive and + * verification primitive operations are mathematically identical this + * structure may also be used to perform an RSA verification primitive + * operation. + * When performing an RSA encryption primitive operation, the input data + * is the message and the output data is the cipher text. + * When performing an RSA verification primitive operation, the input data + * is the signature and the output data is the message. + * The client MUST allocate the memory for this structure. When the + * structure is passed into the function, ownership of the memory passes + * to the function. Ownership of the memory returns to the client when + * this structure is returned in the CpaCyRsaEncryptCbFunc + * callback function. + * + * @note + * If the client modifies or frees the memory referenced in this structure + * after it has been submitted to the cpaCyRsaEncrypt function, and + * before it has been returned in the callback, undefined behavior will + * result. + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. inputData.pData[0] = MSB. + * + *****************************************************************************/ +typedef struct _CpaCyRsaEncryptOpData { + CpaCyRsaPublicKey *pPublicKey; + /**< Pointer to the public key. */ + CpaFlatBuffer inputData; + /**< The input data that the RSA encryption primitive operation is + * performed on. 
The data pointed to is an integer that MUST be in big- + * endian order. The value MUST be between 0 and the modulus n - 1. */ +} CpaCyRsaEncryptOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * RSA Decryption Primitive Operation Data + * @description + * This structure lists the different items that are required in the + * cpaCyRsaDecrypt function. As the RSA decryption primitive and + * signature primitive operations are mathematically identical this + * structure may also be used to perform an RSA signature primitive + * operation. + * When performing an RSA decryption primitive operation, the input data + * is the cipher text and the output data is the message text. + * When performing an RSA signature primitive operation, the input data + * is the message and the output data is the signature. + * The client MUST allocate the memory for this structure. When the + * structure is passed into the function, ownership of the memory passes + * to the function. Ownership of the memory returns to the client when + * this structure is returned in the CpaCyRsaDecryptCbFunc + * callback function. + * + * @note + * If the client modifies or frees the memory referenced in this structure + * after it has been submitted to the cpaCyRsaDecrypt function, and + * before it has been returned in the callback, undefined behavior will + * result. + * All values in this structure are required to be in Most Significant Byte + * first order, e.g. inputData.pData[0] = MSB. + * + *****************************************************************************/ +typedef struct _CpaCyRsaDecryptOpData { + CpaCyRsaPrivateKey *pRecipientPrivateKey; + /**< Pointer to the recipient's RSA private key. */ + CpaFlatBuffer inputData; + /**< The input data that the RSA decryption primitive operation is + * performed on. The data pointed to is an integer that MUST be in big- + * endian order.
The value MUST be between 0 and the modulus n - 1. */ +} CpaCyRsaDecryptOpData; + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * RSA Statistics. + * @deprecated + * As of v1.3 of the Crypto API, this structure has been deprecated, + * replaced by @ref CpaCyRsaStats64. + * @description + * This structure contains statistics on the RSA operations. + * Statistics are set to zero when the component is initialized, and are + * collected per instance. + ****************************************************************************/ +typedef struct _CpaCyRsaStats { + Cpa32U numRsaKeyGenRequests; + /**< Total number of successful RSA key generation requests. */ + Cpa32U numRsaKeyGenRequestErrors; + /**< Total number of RSA key generation requests that had an error and + * could not be processed. */ + Cpa32U numRsaKeyGenCompleted; + /**< Total number of RSA key generation operations that completed + * successfully. */ + Cpa32U numRsaKeyGenCompletedErrors; + /**< Total number of RSA key generation operations that could not be + * completed successfully due to errors. */ + Cpa32U numRsaEncryptRequests; + /**< Total number of successful RSA encrypt operation requests. */ + Cpa32U numRsaEncryptRequestErrors; + /**< Total number of RSA encrypt requests that had an error and could + * not be processed. */ + Cpa32U numRsaEncryptCompleted; + /**< Total number of RSA encrypt operations that completed + * successfully. */ + Cpa32U numRsaEncryptCompletedErrors; + /**< Total number of RSA encrypt operations that could not be + * completed successfully due to errors. */ + Cpa32U numRsaDecryptRequests; + /**< Total number of successful RSA decrypt operation requests. */ + Cpa32U numRsaDecryptRequestErrors; + /**< Total number of RSA decrypt requests that had an error and could + * not be processed. */ + Cpa32U numRsaDecryptCompleted; + /**< Total number of RSA decrypt operations that completed + * successfully. 
*/ + Cpa32U numRsaDecryptCompletedErrors; + /**< Total number of RSA decrypt operations that could not be + * completed successfully due to errors. */ +} CpaCyRsaStats CPA_DEPRECATED; + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * RSA Statistics (64-bit version). + * @description + * This structure contains 64-bit version of the statistics on the RSA + * operations. + * Statistics are set to zero when the component is initialized, and are + * collected per instance. + ****************************************************************************/ +typedef struct _CpaCyRsaStats64 { + Cpa64U numRsaKeyGenRequests; + /**< Total number of successful RSA key generation requests. */ + Cpa64U numRsaKeyGenRequestErrors; + /**< Total number of RSA key generation requests that had an error and + * could not be processed. */ + Cpa64U numRsaKeyGenCompleted; + /**< Total number of RSA key generation operations that completed + * successfully. */ + Cpa64U numRsaKeyGenCompletedErrors; + /**< Total number of RSA key generation operations that could not be + * completed successfully due to errors. */ + Cpa64U numRsaEncryptRequests; + /**< Total number of successful RSA encrypt operation requests. */ + Cpa64U numRsaEncryptRequestErrors; + /**< Total number of RSA encrypt requests that had an error and could + * not be processed. */ + Cpa64U numRsaEncryptCompleted; + /**< Total number of RSA encrypt operations that completed + * successfully. */ + Cpa64U numRsaEncryptCompletedErrors; + /**< Total number of RSA encrypt operations that could not be + * completed successfully due to errors. */ + Cpa64U numRsaDecryptRequests; + /**< Total number of successful RSA decrypt operation requests. */ + Cpa64U numRsaDecryptRequestErrors; + /**< Total number of RSA decrypt requests that had an error and could + * not be processed. 
*/ + Cpa64U numRsaDecryptCompleted; + /**< Total number of RSA decrypt operations that completed + * successfully. */ + Cpa64U numRsaDecryptCompletedErrors; + /**< Total number of RSA decrypt operations that could not be + * completed successfully due to errors. */ +} CpaCyRsaStats64; + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * Definition of the RSA key generation callback function. + * + * @description + * This is the prototype for the RSA key generation callback function. The + * callback function pointer is passed in as a parameter to the + * cpaCyRsaGenKey function. It will be invoked once the request has + * completed. + * + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag Opaque value provided by user while making + * individual function calls. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. + * @param[in] pKeyGenOpData Structure with output params for callback. + * @param[in] pPrivateKey Structure which contains pointers to the memory + * into which the generated private key will be + * written. + * @param[in] pPublicKey Structure which contains pointers to the memory + * into which the generated public key will be + * written. The pointer to the public exponent (e) + * that is returned in this structure is equal to + * the input public exponent. + * @retval + * None + * @pre + * Component has been initialized. 
+ * @post + * None + * @note + * None + * @see + * CpaCyRsaPrivateKey, + * CpaCyRsaPublicKey, + * cpaCyRsaGenKey() + * + *****************************************************************************/ +typedef void (*CpaCyRsaKeyGenCbFunc)(void *pCallbackTag, + CpaStatus status, + void *pKeyGenOpData, + CpaCyRsaPrivateKey *pPrivateKey, + CpaCyRsaPublicKey *pPublicKey); + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * Generate RSA keys. + * + * @description + * This function will generate private and public keys for RSA as specified + * in the PKCS #1 V2.1 standard. Both representation types of the private + * key may be generated. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pRsaKeyGenCb Pointer to the callback function to be invoked + * when the operation is complete. If this is + * set to a NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific call. Will + * be returned unchanged in the callback. + * @param[in] pKeyGenOpData Structure containing all the data needed to + * perform the RSA key generation operation. The + * client code allocates the memory for this + * structure. This component takes ownership of + * the memory until it is returned in the + * callback. + * @param[out] pPrivateKey Structure which contains pointers to the memory + * into which the generated private key will be + * written. 
The client MUST allocate memory + * for this structure, and for the pointers + * within it, recursively; on return, these will + * be populated. + * @param[out] pPublicKey Structure which contains pointers to the memory + * into which the generated public key will be + * written. The memory for this structure and + * for the modulusN parameter MUST be allocated + * by the client, and will be populated on return + * from the call. The field publicExponentE + * is not modified or touched in any way; it is + * the responsibility of the client to set this + * to the same value as the corresponding + * parameter on the CpaCyRsaKeyGenOpData + * structure before using the key for encryption. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pRsaKeyGenCb is non-NULL, an asynchronous callback of type + * CpaCyRsaKeyGenCbFunc is generated in response to this function call. + * Any errors generated during processing are reported as part of the + * callback status code. For optimal performance, data pointers SHOULD be + * 8-byte aligned.
+ * @see + * CpaCyRsaKeyGenOpData, + * CpaCyRsaKeyGenCbFunc, + * cpaCyRsaEncrypt(), + * cpaCyRsaDecrypt() + * + *****************************************************************************/ +CpaStatus +cpaCyRsaGenKey(const CpaInstanceHandle instanceHandle, + const CpaCyRsaKeyGenCbFunc pRsaKeyGenCb, + void *pCallbackTag, + const CpaCyRsaKeyGenOpData *pKeyGenOpData, + CpaCyRsaPrivateKey *pPrivateKey, + CpaCyRsaPublicKey *pPublicKey); + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * Perform the RSA encrypt (or verify) primitive operation on the input + * data. + * + * @description + * This function will perform an RSA encryption primitive operation on the + * input data using the specified RSA public key. As the RSA encryption + * primitive and verification primitive operations are mathematically + * identical this function may also be used to perform an RSA verification + * primitive operation. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pRsaEncryptCb Pointer to callback function to be invoked + * when the operation is complete. If this is + * set to a NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific call. Will + * be returned unchanged in the callback. + * @param[in] pEncryptOpData Structure containing all the data needed to + * perform the RSA encryption operation. The + * client code allocates the memory for this + * structure. 
This component takes ownership of + * the memory until it is returned in the + * callback. + * @param[out] pOutputData Pointer to structure into which the result of + * the RSA encryption primitive is written. The + * client MUST allocate this memory. The data + * pointed to is an integer in big-endian order. + * The value will be between 0 and the modulus + * n - 1. + * On invocation the callback function will + * contain this parameter in the pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pRsaEncryptCb is non-NULL an asynchronous callback of type + * CpaCyGenFlatBufCbFunc is generated in response to this function call. + * Any errors generated during processing are reported as part of the + * callback status code. For optimal performance, data pointers SHOULD be + * 8-byte aligned. + * @see + * CpaCyGenFlatBufCbFunc + * CpaCyRsaEncryptOpData + * cpaCyRsaGenKey() + * cpaCyRsaDecrypt() + * + *****************************************************************************/ +CpaStatus +cpaCyRsaEncrypt(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pRsaEncryptCb, + void *pCallbackTag, + const CpaCyRsaEncryptOpData *pEncryptOpData, + CpaFlatBuffer *pOutputData); + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * Perform the RSA decrypt (or sign) primitive operation on the input + * data.
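The encrypt and decrypt primitives both return `pOutputData` as a big-endian integer between 0 and n-1. A hedged helper sketch for interpreting such a buffer as a host integer (for small test values; the `CpaFlatBuffer` field names `pData`/`dataLenInBytes` are assumed from the wider API, not shown in this excerpt):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Interpret a short big-endian buffer (e.g. the contents of an RSA
 * primitive's output CpaFlatBuffer) as a host-order integer. Only
 * suitable for values that fit in 64 bits; real RSA outputs are
 * modulus-sized and would be handled by a bignum library instead. */
static uint64_t beBufToU64(const uint8_t *p, size_t len)
{
    uint64_t v = 0;
    for (size_t i = 0; i < len; i++)
        v = (v << 8) | p[i]; /* most significant byte first */
    return v;
}
```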
+ * + * @description + * This function will perform an RSA decryption primitive operation on the + * input data using the specified RSA private key. As the RSA decryption + * primitive and signing primitive operations are mathematically identical + * this function may also be used to perform an RSA signing primitive + * operation. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pRsaDecryptCb Pointer to callback function to be invoked + * when the operation is complete. If this is + * set to a NULL value the function will operate + * synchronously. + * @param[in] pCallbackTag Opaque User Data for this specific call. + * Will be returned unchanged in the callback. + * @param[in] pDecryptOpData Structure containing all the data needed to + * perform the RSA decrypt operation. The + * client code allocates the memory for this + * structure. This component takes ownership + * of the memory until it is returned in the + * callback. + * @param[out] pOutputData Pointer to structure into which the result of + * the RSA decryption primitive is written. The + * client MUST allocate this memory. The data + * pointed to is an integer in big-endian order. + * The value will be between 0 and the modulus + * n - 1. + * On invocation the callback function will + * contain this parameter in the pOut parameter. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. 
+ * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * When pRsaDecryptCb is non-NULL an asynchronous callback is generated in + * response to this function call. + * Any errors generated during processing are reported as part of the + * callback status code. For optimal performance, data pointers SHOULD be + * 8-byte aligned. + * @see + * CpaCyRsaDecryptOpData, + * CpaCyGenFlatBufCbFunc, + * cpaCyRsaGenKey(), + * cpaCyRsaEncrypt() + * + *****************************************************************************/ +CpaStatus +cpaCyRsaDecrypt(const CpaInstanceHandle instanceHandle, + const CpaCyGenFlatBufCbFunc pRsaDecryptCb, + void *pCallbackTag, + const CpaCyRsaDecryptOpData *pDecryptOpData, + CpaFlatBuffer * pOutputData); + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * Query statistics for a specific RSA instance. + * + * @deprecated + * As of v1.3 of the Crypto API, this function has been deprecated, + * replaced by @ref cpaCyRsaQueryStats64(). + * + * @description + * This function will query a specific instance for RSA statistics. The + * user MUST allocate the CpaCyRsaStats structure and pass the + * reference to that into this function call. This function will write the + * statistic results into the passed in CpaCyRsaStats structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. 
It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pRsaStats Pointer to memory into which the statistics + * will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * @see + * CpaCyRsaStats + * + *****************************************************************************/ +CpaStatus CPA_DEPRECATED +cpaCyRsaQueryStats(const CpaInstanceHandle instanceHandle, + struct _CpaCyRsaStats *pRsaStats); + +/** + ***************************************************************************** + * @ingroup cpaCyRsa + * Query statistics (64-bit version) for a specific RSA instance. + * + * @description + * This function will query a specific instance for RSA statistics. The + * user MUST allocate the CpaCyRsaStats64 structure and pass the + * reference to that into this function call. This function will write the + * statistic results into the passed in CpaCyRsaStats64 structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. 
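Both statistics query functions follow the same contract: the caller allocates the stats structure and the API writes into it. A minimal sketch of that pattern with stand-in types and a hypothetical `mockQueryStats64` in place of the real `cpaCyRsaQueryStats64` (only two of the counters from the structure above are reproduced):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Stand-ins for the real CPA definitions; illustrative only. */
typedef int CpaStatus;
#define CPA_STATUS_SUCCESS 0
typedef void *CpaInstanceHandle;
typedef uint64_t Cpa64U;
typedef struct {
    Cpa64U numRsaKeyGenRequests;
    Cpa64U numRsaEncryptCompleted;
    /* ... remaining counters elided ... */
} CpaCyRsaStats64;

/* Hypothetical mock of cpaCyRsaQueryStats64(): writes statistics into
 * caller-owned memory; returns an error on a NULL pointer, as the
 * real API would via CPA_STATUS_INVALID_PARAM. */
static CpaStatus mockQueryStats64(CpaInstanceHandle h, CpaCyRsaStats64 *p)
{
    (void)h;
    if (p == NULL)
        return -1;
    memset(p, 0, sizeof(*p));
    p->numRsaEncryptCompleted = 42; /* fabricated demo value */
    return CPA_STATUS_SUCCESS;
}

/* Caller allocates the structure (stack storage is fine) and checks
 * the status before reading any counter. */
static Cpa64U demoStatsQuery(void)
{
    CpaCyRsaStats64 stats;
    if (mockQueryStats64(NULL, &stats) != CPA_STATUS_SUCCESS)
        return 0;
    return stats.numRsaEncryptCompleted;
}
```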
It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pRsaStats Pointer to memory into which the statistics + * will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner and no asynchronous + * callback will be generated. + * @see + * CpaCyRsaStats64 + *****************************************************************************/ +CpaStatus +cpaCyRsaQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCyRsaStats64 *pRsaStats); + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /* CPA_CY_RSA_H */ Index: sys/dev/qat/qat_api/include/lac/cpa_cy_sym.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_sym.h @@ -0,0 +1,1844 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. 
+ * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_sym.h + * + * @defgroup cpaCySym Symmetric Cipher and Hash Cryptographic API + * + * @ingroup cpaCy + * + * @description + * These functions specify the Cryptographic API for symmetric cipher, + * hash, and combined cipher and hash operations. 
+ * + *****************************************************************************/ + +#ifndef CPA_CY_SYM_H +#define CPA_CY_SYM_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_cy_common.h" + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Cryptographic component symmetric session context handle. + * @description + * Handle to a cryptographic session context. The memory for this handle + * is allocated by the client. The size of the memory that the client needs + * to allocate is determined by a call to the @ref + * cpaCySymSessionCtxGetSize or @ref cpaCySymSessionCtxGetDynamicSize + * functions. The session context memory is initialized with a call to + * the @ref cpaCySymInitSession function. + * This memory MUST not be freed until a call to @ref + * cpaCySymRemoveSession has completed successfully. + * + *****************************************************************************/ +typedef void * CpaCySymSessionCtx; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Packet type for the cpaCySymPerformOp function + * + * @description + * Enumeration which is used to indicate to the symmetric cryptographic + * perform function on which type of packet the operation is required to + * be invoked. Multi-part cipher and hash operations are useful when + * processing needs to be performed on a message which is available to + * the client in multiple parts (for example due to network fragmentation + * of the packet). + * + * @note + * There are some restrictions regarding the operations on which + * partial packet processing is supported. For details, see the + * function @ref cpaCySymPerformOp. 
+ * + * @see + * cpaCySymPerformOp() + * + *****************************************************************************/ +typedef enum _CpaCySymPacketType +{ + CPA_CY_SYM_PACKET_TYPE_FULL = 1, + /**< Perform an operation on a full packet*/ + CPA_CY_SYM_PACKET_TYPE_PARTIAL, + /**< Perform a partial operation and maintain the state of the partial + * operation within the session. This is used for either the first or + * subsequent packets within a partial packet flow. */ + CPA_CY_SYM_PACKET_TYPE_LAST_PARTIAL + /**< Complete the last part of a multi-part operation */ +} CpaCySymPacketType; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Types of operations supported by the cpaCySymPerformOp function. + * @description + * This enumeration lists different types of operations supported by the + * cpaCySymPerformOp function. The operation type is defined during + * session registration and cannot be changed for a session once it has + * been setup. + * @see + * cpaCySymPerformOp + *****************************************************************************/ +typedef enum _CpaCySymOp +{ + CPA_CY_SYM_OP_NONE=0, + /**< No operation */ + CPA_CY_SYM_OP_CIPHER, + /**< Cipher only operation on the data */ + CPA_CY_SYM_OP_HASH, + /**< Hash only operation on the data */ + CPA_CY_SYM_OP_ALGORITHM_CHAINING + /**< Chain any cipher with any hash operation. The order depends on + * the value in the CpaCySymAlgChainOrder enum. + * + * This value is also used for authenticated ciphers (GCM and CCM), in + * which case the cipherAlgorithm should take one of the values @ref + * CPA_CY_SYM_CIPHER_AES_CCM or @ref CPA_CY_SYM_CIPHER_AES_GCM, while the + * hashAlgorithm should take the corresponding value @ref + * CPA_CY_SYM_HASH_AES_CCM or @ref CPA_CY_SYM_HASH_AES_GCM. + */ +} CpaCySymOp; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Cipher algorithms. 
+ * @description + * This enumeration lists supported cipher algorithms and modes. + * + *****************************************************************************/ +typedef enum _CpaCySymCipherAlgorithm +{ + CPA_CY_SYM_CIPHER_NULL = 1, + /**< NULL cipher algorithm. No mode applies to the NULL algorithm. */ + CPA_CY_SYM_CIPHER_ARC4, + /**< (A)RC4 cipher algorithm */ + CPA_CY_SYM_CIPHER_AES_ECB, + /**< AES algorithm in ECB mode */ + CPA_CY_SYM_CIPHER_AES_CBC, + /**< AES algorithm in CBC mode */ + CPA_CY_SYM_CIPHER_AES_CTR, + /**< AES algorithm in Counter mode */ + CPA_CY_SYM_CIPHER_AES_CCM, + /**< AES algorithm in CCM mode. This authenticated cipher is only supported + * when the hash mode is also set to CPA_CY_SYM_HASH_MODE_AUTH. When this + * cipher algorithm is used the CPA_CY_SYM_HASH_AES_CCM element of the + * CpaCySymHashAlgorithm enum MUST be used to set up the related + * CpaCySymHashSetupData structure in the session context. */ + CPA_CY_SYM_CIPHER_AES_GCM, + /**< AES algorithm in GCM mode. This authenticated cipher is only supported + * when the hash mode is also set to CPA_CY_SYM_HASH_MODE_AUTH. When this + * cipher algorithm is used the CPA_CY_SYM_HASH_AES_GCM element of the + * CpaCySymHashAlgorithm enum MUST be used to set up the related + * CpaCySymHashSetupData structure in the session context. 
*/ + CPA_CY_SYM_CIPHER_DES_ECB, + /**< DES algorithm in ECB mode */ + CPA_CY_SYM_CIPHER_DES_CBC, + /**< DES algorithm in CBC mode */ + CPA_CY_SYM_CIPHER_3DES_ECB, + /**< Triple DES algorithm in ECB mode */ + CPA_CY_SYM_CIPHER_3DES_CBC, + /**< Triple DES algorithm in CBC mode */ + CPA_CY_SYM_CIPHER_3DES_CTR, + /**< Triple DES algorithm in CTR mode */ + CPA_CY_SYM_CIPHER_KASUMI_F8, + /**< Kasumi algorithm in F8 mode */ + CPA_CY_SYM_CIPHER_SNOW3G_UEA2, + /**< SNOW3G algorithm in UEA2 mode */ + CPA_CY_SYM_CIPHER_AES_F8, + /**< AES algorithm in F8 mode */ + CPA_CY_SYM_CIPHER_AES_XTS, + /**< AES algorithm in XTS mode */ + CPA_CY_SYM_CIPHER_ZUC_EEA3, + /**< ZUC algorithm in EEA3 mode */ + CPA_CY_SYM_CIPHER_CHACHA, + /**< ChaCha20 Cipher Algorithm. This cipher is only supported for + * algorithm chaining. When selected, the hash algorithm must be set to + * CPA_CY_SYM_HASH_POLY and the hash mode must be set to + * CPA_CY_SYM_HASH_MODE_AUTH. */ + CPA_CY_SYM_CIPHER_SM4_ECB, + /**< SM4 algorithm in ECB mode. This cipher supports 128 bit keys only and + * does not support partial processing. */ + CPA_CY_SYM_CIPHER_SM4_CBC, + /**< SM4 algorithm in CBC mode. This cipher supports 128 bit keys only and + * does not support partial processing. */ + CPA_CY_SYM_CIPHER_SM4_CTR + /**< SM4 algorithm in CTR mode. This cipher supports 128 bit keys only and + * does not support partial processing. */ +} CpaCySymCipherAlgorithm; + +/** + * @ingroup cpaCySym + * Size of bitmap needed for cipher "capabilities" type. + * + * @description + * Defines the number of bits in the bitmap to represent supported + * ciphers in the type @ref CpaCySymCapabilitiesInfo. Should be set to + * at least one greater than the largest value in the enumerated type + * @ref CpaCySymCipherAlgorithm, so that the value of the enum constant + * can also be used as the bit position in the bitmap.
+ * + * A larger value was chosen to allow for extensibility without the need + * to change the size of the bitmap (to ease backwards compatibility in + * future versions of the API). + */ +#define CPA_CY_SYM_CIPHER_CAP_BITMAP_SIZE (32) + + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Symmetric Cipher Direction + * @description + * This enum indicates the cipher direction (encryption or decryption). + * + *****************************************************************************/ +typedef enum _CpaCySymCipherDirection +{ + CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT = 1, + /**< Encrypt Data */ + CPA_CY_SYM_CIPHER_DIRECTION_DECRYPT + /**< Decrypt Data */ +} CpaCySymCipherDirection; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Symmetric Cipher Setup Data. + * @description + * This structure contains data relating to Cipher (Encryption and + * Decryption) to set up a session. + * + *****************************************************************************/ +typedef struct _CpaCySymCipherSetupData { + CpaCySymCipherAlgorithm cipherAlgorithm; + /**< Cipher algorithm and mode */ + Cpa32U cipherKeyLenInBytes; + /**< Cipher key length in bytes. For AES it can be 128 bits (16 bytes), + * 192 bits (24 bytes) or 256 bits (32 bytes). + * For the CCM mode of operation, the only supported key length is 128 bits + * (16 bytes). + * For the CPA_CY_SYM_CIPHER_AES_F8 mode of operation, cipherKeyLenInBytes + * should be set to the combined length of the encryption key and the + * keymask. Since the keymask and the encryption key are the same size, + * cipherKeyLenInBytes should be set to 2 x the AES encryption key length. + * For the AES-XTS mode of operation: + * - Two keys must be provided and cipherKeyLenInBytes refers to total + * length of the two keys. + * - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes). 
+ * - Both keys must have the same size. */ + Cpa8U *pCipherKey; + /**< Cipher key + * For the CPA_CY_SYM_CIPHER_AES_F8 mode of operation, pCipherKey will + * point to a concatenation of the AES encryption key followed by a + * keymask. As per RFC3711, the keymask should be padded with trailing + * bytes to match the length of the encryption key used. + * For AES-XTS mode of operation, two keys must be provided and pCipherKey + * must point to the two keys concatenated together (Key1 || Key2). + * cipherKeyLenInBytes will contain the total size of both keys. */ + CpaCySymCipherDirection cipherDirection; + /**< This parameter determines if the cipher operation is an encrypt or + * a decrypt operation. + * For the RC4 algorithm and the F8/CTR modes, only encrypt operations + * are valid. */ +} CpaCySymCipherSetupData; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Symmetric Hash mode + * @description + * This enum indicates the Hash Mode. + * + *****************************************************************************/ +typedef enum _CpaCySymHashMode +{ + CPA_CY_SYM_HASH_MODE_PLAIN = 1, + /**< Plain hash. Can be specified for MD5 and the SHA family of + * hash algorithms. */ + CPA_CY_SYM_HASH_MODE_AUTH, + /**< Authenticated hash. This mode may be used in conjunction with the + * MD5 and SHA family of algorithms to specify HMAC. It MUST also be + * specified with all of the remaining algorithms, all of which are in + * fact authentication algorithms. + */ + CPA_CY_SYM_HASH_MODE_NESTED + /**< Nested hash. Can be specified for MD5 and the SHA family of + * hash algorithms. */ +} CpaCySymHashMode; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Hash algorithms. + * @description + * This enumeration lists supported hash algorithms. 
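Populating `CpaCySymCipherSetupData` for a common case such as AES-128-CBC encryption can be sketched as follows. The struct layout is taken from the definition above, but the enum types are simplified stand-ins (the numeric values are illustrative, not guaranteed to match the real header):

```c
#include <assert.h>
#include <stdint.h>

typedef uint8_t Cpa8U;
typedef uint32_t Cpa32U;

/* Stand-in enums mirroring the header; values illustrative only. */
typedef enum { CPA_CY_SYM_CIPHER_AES_CBC = 4 } CpaCySymCipherAlgorithm;
typedef enum { CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT = 1 } CpaCySymCipherDirection;

/* Layout follows the CpaCySymCipherSetupData definition above. */
typedef struct {
    CpaCySymCipherAlgorithm cipherAlgorithm;
    Cpa32U cipherKeyLenInBytes;
    Cpa8U *pCipherKey;
    CpaCySymCipherDirection cipherDirection;
} CpaCySymCipherSetupData;

/* Fill the setup data for AES-128-CBC encryption; the caller keeps
 * ownership of the key buffer. Returns the key length for checking. */
static Cpa32U setupAes128CbcEncrypt(CpaCySymCipherSetupData *sd, Cpa8U *key)
{
    sd->cipherAlgorithm = CPA_CY_SYM_CIPHER_AES_CBC;
    sd->cipherKeyLenInBytes = 16; /* AES-128: 16-byte key */
    sd->pCipherKey = key;
    sd->cipherDirection = CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT;
    return sd->cipherKeyLenInBytes;
}
```

For AES-192 or AES-256 only `cipherKeyLenInBytes` changes (24 or 32); for AES-XTS, as documented above, the field covers both concatenated keys.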
+ * + *****************************************************************************/ +typedef enum _CpaCySymHashAlgorithm +{ + CPA_CY_SYM_HASH_NONE = 0, + /**< No hash algorithm. */ + CPA_CY_SYM_HASH_MD5, + /**< MD5 algorithm. Supported in all 3 hash modes */ + CPA_CY_SYM_HASH_SHA1, + /**< 160 bit SHA algorithm. Supported in all 3 hash modes */ + CPA_CY_SYM_HASH_SHA224, + /**< 224 bit SHA algorithm. Supported in all 3 hash modes */ + CPA_CY_SYM_HASH_SHA256, + /**< 256 bit SHA algorithm. Supported in all 3 hash modes */ + CPA_CY_SYM_HASH_SHA384, + /**< 384 bit SHA algorithm. Supported in all 3 hash modes */ + CPA_CY_SYM_HASH_SHA512, + /**< 512 bit SHA algorithm. Supported in all 3 hash modes */ + CPA_CY_SYM_HASH_AES_XCBC, + /**< AES XCBC algorithm. This is only supported in the hash mode + * CPA_CY_SYM_HASH_MODE_AUTH. */ + CPA_CY_SYM_HASH_AES_CCM, + /**< AES algorithm in CCM mode. This authenticated cipher requires that the + * hash mode is set to CPA_CY_SYM_HASH_MODE_AUTH. When this hash algorithm + * is used, the CPA_CY_SYM_CIPHER_AES_CCM element of the + * CpaCySymCipherAlgorithm enum MUST be used to set up the related + * CpaCySymCipherSetupData structure in the session context. */ + CPA_CY_SYM_HASH_AES_GCM, + /**< AES algorithm in GCM mode. This authenticated cipher requires that the + * hash mode is set to CPA_CY_SYM_HASH_MODE_AUTH. When this hash algorithm + * is used, the CPA_CY_SYM_CIPHER_AES_GCM element of the + * CpaCySymCipherAlgorithm enum MUST be used to set up the related + * CpaCySymCipherSetupData structure in the session context. */ + CPA_CY_SYM_HASH_KASUMI_F9, + /**< Kasumi algorithm in F9 mode. This is only supported in the hash + * mode CPA_CY_SYM_HASH_MODE_AUTH. */ + CPA_CY_SYM_HASH_SNOW3G_UIA2, + /**< SNOW3G algorithm in UIA2 mode. This is only supported in the hash + * mode CPA_CY_SYM_HASH_MODE_AUTH. */ + CPA_CY_SYM_HASH_AES_CMAC, + /**< AES CMAC algorithm. This is only supported in the hash mode + * CPA_CY_SYM_HASH_MODE_AUTH.
*/ + CPA_CY_SYM_HASH_AES_GMAC, + /**< AES GMAC algorithm. This is only supported in the hash mode + * CPA_CY_SYM_HASH_MODE_AUTH. When this hash algorithm + * is used, the CPA_CY_SYM_CIPHER_AES_GCM element of the + * CpaCySymCipherAlgorithm enum MUST be used to set up the related + * CpaCySymCipherSetupData structure in the session context. */ + CPA_CY_SYM_HASH_AES_CBC_MAC, + /**< AES-CBC-MAC algorithm. This is only supported in the hash mode + * CPA_CY_SYM_HASH_MODE_AUTH. Only 128-bit keys are supported. */ + CPA_CY_SYM_HASH_ZUC_EIA3, + /**< ZUC algorithm in EIA3 mode */ + CPA_CY_SYM_HASH_SHA3_224, + /**< 224 bit SHA-3 algorithm. Only CPA_CY_SYM_HASH_MODE_PLAIN and + * CPA_CY_SYM_HASH_MODE_AUTH are supported, that is, the hash + * mode CPA_CY_SYM_HASH_MODE_NESTED is not supported for this algorithm. + */ + CPA_CY_SYM_HASH_SHA3_256, + /**< 256 bit SHA-3 algorithm. Only CPA_CY_SYM_HASH_MODE_PLAIN and + * CPA_CY_SYM_HASH_MODE_AUTH are supported, that is, the hash + * mode CPA_CY_SYM_HASH_MODE_NESTED is not supported for this algorithm. + * Partial requests are not supported, that is, only requests + * of CPA_CY_SYM_PACKET_TYPE_FULL are supported. */ + CPA_CY_SYM_HASH_SHA3_384, + /**< 384 bit SHA-3 algorithm. Only CPA_CY_SYM_HASH_MODE_PLAIN and + * CPA_CY_SYM_HASH_MODE_AUTH are supported, that is, the hash + * mode CPA_CY_SYM_HASH_MODE_NESTED is not supported for this algorithm. + * Partial requests are not supported, that is, only requests + * of CPA_CY_SYM_PACKET_TYPE_FULL are supported. */ + CPA_CY_SYM_HASH_SHA3_512, + /**< 512 bit SHA-3 algorithm. Only CPA_CY_SYM_HASH_MODE_PLAIN and + * CPA_CY_SYM_HASH_MODE_AUTH are supported, that is, the hash + * mode CPA_CY_SYM_HASH_MODE_NESTED is not supported for this algorithm. + * Partial requests are not supported, that is, only requests + * of CPA_CY_SYM_PACKET_TYPE_FULL are supported. */ + CPA_CY_SYM_HASH_SHAKE_128, + /**< 128 bit SHAKE algorithm. This is only supported in the hash + * mode CPA_CY_SYM_HASH_MODE_PLAIN. 
Partial requests are not + * supported, that is, only requests of CPA_CY_SYM_PACKET_TYPE_FULL + * are supported. */ + CPA_CY_SYM_HASH_SHAKE_256, + /**< 256 bit SHAKE algorithm. This is only supported in the hash + * mode CPA_CY_SYM_HASH_MODE_PLAIN. Partial requests are not + * supported, that is, only requests of CPA_CY_SYM_PACKET_TYPE_FULL + * are supported. */ + CPA_CY_SYM_HASH_POLY, + /**< Poly1305 hash algorithm. This is only supported in the hash mode + * CPA_CY_SYM_HASH_MODE_AUTH. This hash algorithm is only supported + * as part of an algorithm chain with CPA_CY_SYM_CIPHER_CHACHA to + * implement the ChaCha20-Poly1305 AEAD algorithm. */ + CPA_CY_SYM_HASH_SM3 + /**< SM3 hash algorithm. Supported in all 3 hash modes. */ + } CpaCySymHashAlgorithm; + +/** + * @ingroup cpaCySym + * Size of bitmap needed for hash "capabilities" type. + * + * @description + * Defines the number of bits in the bitmap to represent supported + * hashes in the type @ref CpaCySymCapabilitiesInfo. Should be set to + * at least one greater than the largest value in the enumerated type + * @ref CpaCySymHashAlgorithm, so that the value of the enum constant + * can also be used as the bit position in the bitmap. + * + * A larger value was chosen to allow for extensibility without the need + * to change the size of the bitmap (to ease backwards compatibility in + * future versions of the API). + */ +#define CPA_CY_SYM_HASH_CAP_BITMAP_SIZE (32) + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Hash Mode Nested Setup Data. + * @description + * This structure contains data relating to a hash session in + * CPA_CY_SYM_HASH_MODE_NESTED mode. + * + *****************************************************************************/ +typedef struct _CpaCySymHashNestedModeSetupData { + Cpa8U *pInnerPrefixData; + /**< A pointer to a buffer holding the Inner Prefix data. For optimal + * performance the prefix data SHOULD be 8-byte aligned.
This data is + * prepended to the data being hashed before the inner hash operation is + * performed. */ + Cpa32U innerPrefixLenInBytes; + /**< The inner prefix length in bytes. The maximum size the prefix data + * can be is 255 bytes. */ + CpaCySymHashAlgorithm outerHashAlgorithm; + /**< The hash algorithm used for the outer hash. Note: The inner hash + * algorithm is provided in the hash context. */ + Cpa8U *pOuterPrefixData; + /**< A pointer to a buffer holding the Outer Prefix data. For optimal + * performance the prefix data SHOULD be 8-byte aligned. This data is + * prepended to the output from the inner hash operation before the outer + * hash operation is performed.*/ + Cpa32U outerPrefixLenInBytes; + /**< The outer prefix length in bytes. The maximum size the prefix data + * can be is 255 bytes. */ +} CpaCySymHashNestedModeSetupData; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Hash Auth Mode Setup Data. + * @description + * This structure contains data relating to a hash session in + * CPA_CY_SYM_HASH_MODE_AUTH mode. + * + *****************************************************************************/ +typedef struct _CpaCySymHashAuthModeSetupData { + Cpa8U *authKey; + /**< Authentication key pointer. + * For the GCM (@ref CPA_CY_SYM_HASH_AES_GCM) and CCM (@ref + * CPA_CY_SYM_HASH_AES_CCM) modes of operation, this field is ignored; + * the authentication key is the same as the cipher key (see + * the field pCipherKey in struct @ref CpaCySymCipherSetupData). + */ + Cpa32U authKeyLenInBytes; + /**< Length of the authentication key in bytes. The key length MUST be + * less than or equal to the block size of the algorithm. It is the client's + * responsibility to ensure that the key length is compliant with the + * standard being used (for example RFC 2104, FIPS 198a). 
+ * + * For the GCM (@ref CPA_CY_SYM_HASH_AES_GCM) and CCM (@ref + * CPA_CY_SYM_HASH_AES_CCM) modes of operation, this field is ignored; + * the authentication key is the same as the cipher key, and so is its + * length (see the field cipherKeyLenInBytes in struct @ref + * CpaCySymCipherSetupData). + */ + Cpa32U aadLenInBytes; + /**< The length of the additional authenticated data (AAD) in bytes. + * The maximum permitted value is 240 bytes, unless otherwise + * specified below. + * + * This field must be specified when the hash algorithm is one of the + * following: + + * - For SNOW3G (@ref CPA_CY_SYM_HASH_SNOW3G_UIA2), this is the + * length of the IV (which should be 16). + * - For GCM (@ref CPA_CY_SYM_HASH_AES_GCM). In this case, this is the + * length of the Additional Authenticated Data (called A, in NIST + * SP800-38D). + * - For CCM (@ref CPA_CY_SYM_HASH_AES_CCM). In this case, this is the + * length of the associated data (called A, in NIST SP800-38C). + * Note that this does NOT include the length of any padding, or the + * 18 bytes reserved at the start of the above field to store the + * block B0 and the encoded length. The maximum permitted value in + * this case is 222 bytes. + * + * @note For AES-GMAC (@ref CPA_CY_SYM_HASH_AES_GMAC) mode of operation + * this field is not used and should be set to 0. Instead the length + * of the AAD data is specified in the messageLenToHashInBytes field of + * the CpaCySymOpData structure. + */ +} CpaCySymHashAuthModeSetupData; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Hash Setup Data. + * @description + * This structure contains data relating to a hash session. The fields + * hashAlgorithm, hashMode and digestResultLenInBytes are common to all + * three hash modes and MUST be set for each mode. 
+ * + *****************************************************************************/ +typedef struct _CpaCySymHashSetupData { + CpaCySymHashAlgorithm hashAlgorithm; + /**< Hash algorithm. For mode CPA_CY_SYM_MODE_HASH_NESTED, this is the + * inner hash algorithm. */ + CpaCySymHashMode hashMode; + /**< Mode of the hash operation. Valid options include plain, auth or + * nested hash mode. */ + Cpa32U digestResultLenInBytes; + /**< Length of the digest to be returned. If the verify option is set, + * this specifies the length of the digest to be compared for the + * session. + * + * For CCM (@ref CPA_CY_SYM_HASH_AES_CCM), this is the octet length + * of the MAC, which can be one of 4, 6, 8, 10, 12, 14 or 16. + * + * For GCM (@ref CPA_CY_SYM_HASH_AES_GCM), this is the length in bytes + * of the authentication tag. + * + * If the value is less than the maximum length allowed by the hash, + * the result shall be truncated. If the value is greater than the + * maximum length allowed by the hash, an error (@ref + * CPA_STATUS_INVALID_PARAM) is returned from the function @ref + * cpaCySymInitSession. + * + * In the case of nested hash, it is the outer hash which determines + * the maximum length allowed. */ + CpaCySymHashAuthModeSetupData authModeSetupData; + /**< Authentication Mode Setup Data. + * Only valid for mode CPA_CY_SYM_MODE_HASH_AUTH */ + CpaCySymHashNestedModeSetupData nestedModeSetupData; + /**< Nested Hash Mode Setup Data + * Only valid for mode CPA_CY_SYM_MODE_HASH_NESTED */ +} CpaCySymHashSetupData; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Algorithm Chaining Operation Ordering + * @description + * This enum defines the ordering of operations for algorithm chaining. 
+ * + ****************************************************************************/ +typedef enum _CpaCySymAlgChainOrder +{ + CPA_CY_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER = 1, + /**< Perform the hash operation followed by the cipher operation. If it is + * required that the result of the hash (i.e. the digest) is going to be + * included in the data to be ciphered, then: + * + *
+ * + * - The digest MUST be placed in the destination buffer at the + * location corresponding to the end of the data region to be hashed + * (hashStartSrcOffsetInBytes + messageLenToHashInBytes), + * i.e. there must be no gaps between the start of the digest and the + * end of the data region to be hashed. + * - The messageLenToCipherInBytes member of the CpaCySymOpData + * structure must be equal to the overall length of the plain text, + * the digest length and any (optional) trailing data that is to be + * included. + * - The messageLenToCipherInBytes must be a multiple of the block + * size if a block cipher is being used.
+ * + * The following is an example of the layout of the buffer before the + * operation, after the hash, and after the cipher: + +@verbatim + ++-------------------------+---------------+ +| Plaintext | Tail | ++-------------------------+---------------+ +<-messageLenToHashInBytes-> + ++-------------------------+--------+------+ +| Plaintext | Digest | Tail | ++-------------------------+--------+------+ +<--------messageLenToCipherInBytes--------> + ++-----------------------------------------+ +| Cipher Text | ++-----------------------------------------+ + +@endverbatim + */ + CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH + /**< Perform the cipher operation followed by the hash operation. + * The hash operation will be performed on the ciphertext resulting from + * the cipher operation. + * + * The following is an example of the layout of the buffer before the + * operation, after the cipher, and after the hash: + +@verbatim + ++--------+---------------------------+---------------+ +| Head | Plaintext | Tail | ++--------+---------------------------+---------------+ + <-messageLenToCipherInBytes-> + ++--------+---------------------------+---------------+ +| Head | Ciphertext | Tail | ++--------+---------------------------+---------------+ +<------messageLenToHashInBytes-------> + ++--------+---------------------------+--------+------+ +| Head | Ciphertext | Digest | Tail | ++--------+---------------------------+--------+------+ + +@endverbatim + * + */ +} CpaCySymAlgChainOrder; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Session Setup Data. + * @description + * This structure contains data relating to setting up a session. The + * client needs to complete the information in this structure in order to + * setup a session. 
+ * + ****************************************************************************/ +typedef struct _CpaCySymSessionSetupData { + CpaCyPriority sessionPriority; + /**< Priority of this session */ + CpaCySymOp symOperation; + /**< Operation to perform */ + CpaCySymCipherSetupData cipherSetupData; + /**< Cipher Setup Data for the session. This member is ignored for the + * CPA_CY_SYM_OP_HASH operation. */ + CpaCySymHashSetupData hashSetupData; + /**< Hash Setup Data for a session. This member is ignored for the + * CPA_CY_SYM_OP_CIPHER operation. */ + CpaCySymAlgChainOrder algChainOrder; + /**< If this operation data structure relates to an algorithm chaining + * session then this parameter determines the order in which the chained + * operations are performed. If this structure does not relate to an + * algorithm chaining session then this parameter will be ignored. + * + * @note In the case of authenticated ciphers (GCM and CCM), which are + * also presented as "algorithm chaining", this value is also ignored. + * The chaining order is defined by the authenticated cipher, in those + * cases. */ + CpaBoolean digestIsAppended; + /**< Flag indicating whether the digest is appended immediately following + * the region over which the digest is computed. This is true for both + * IPsec packets and SSL/TLS records. + * + * If this flag is set, then the value of the pDigestResult field of + * the structure @ref CpaCySymOpData is ignored. + * + * @note The value of this field is ignored for the authenticated cipher + * AES_CCM as the digest must be appended in this case. + * + * @note Setting digestIsAppended for hash only operations when + * verifyDigest is also set is not supported. For hash only operations + * when verifyDigest is set, digestIsAppended should be set to CPA_FALSE. + */ + CpaBoolean verifyDigest; + /**< This flag is relevant only for operations which generate a message + * digest.
If set to true, the computed digest will not be written back + * to the buffer location specified by other parameters, but instead will + * be verified (i.e. compared to the value passed in at that location). + * The number of bytes to be written or compared is indicated by the + * digest output length for the session. + * @note This option is only valid for full packets and for final + * partial packets when using partials without algorithm chaining. + * @note The value of this field is ignored for the authenticated ciphers + * (AES_CCM and AES_GCM). Digest verification is always done for these + * (when the direction is decrypt) and unless the DP API is used, + * the message buffer will be zeroed if verification fails. When using the + * DP API, it is the API client's responsibility to clear the message + * buffer when digest verification fails. + */ + CpaBoolean partialsNotRequired; + /**< This flag indicates if partial packet processing is required for this + * session. If set to true, partial packet processing will not be enabled + * for this session and any calls to cpaCySymPerformOp() with the + * packetType parameter set to a value other than + * CPA_CY_SYM_PACKET_TYPE_FULL will fail. + */ +} CpaCySymSessionSetupData; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Session Update Data. + * @description + * This structure contains data relating to resetting a session. + ****************************************************************************/ +typedef struct _CpaCySymSessionUpdateData { + Cpa32U flags; + /**< Flags indicating which fields to update. + * All bits should be set to 0 except those fields to be updated. + */ +#define CPA_CY_SYM_SESUPD_CIPHER_KEY 1 << 0 +#define CPA_CY_SYM_SESUPD_CIPHER_DIR 1 << 1 +#define CPA_CY_SYM_SESUPD_AUTH_KEY 1 << 2 + Cpa8U *pCipherKey; + /**< Cipher key.
+ * The same restrictions apply as described in the corresponding field + * of the data structure @ref CpaCySymCipherSetupData. + */ + CpaCySymCipherDirection cipherDirection; + /**< This parameter determines if the cipher operation is an encrypt or + * a decrypt operation. + * The same restrictions apply as described in the corresponding field + * of the data structure @ref CpaCySymCipherSetupData. + */ + Cpa8U *authKey; + /**< Authentication key pointer. + * The same restrictions apply as described in the corresponding field + * of the data structure @ref CpaCySymHashAuthModeSetupData. + */ +} CpaCySymSessionUpdateData; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Cryptographic Component Operation Data. + * @description + * This structure contains data relating to performing cryptographic + * processing on a data buffer. This request is used with + * cpaCySymPerformOp() call for performing cipher, hash, auth cipher + * or a combined hash and cipher operation. + * + * @see + * CpaCySymPacketType + * + * @note + * If the client modifies or frees the memory referenced in this structure + * after it has been submitted to the cpaCySymPerformOp function, and + * before it has been returned in the callback, undefined behavior will + * result. + ****************************************************************************/ +typedef struct _CpaCySymOpData { + CpaCySymSessionCtx sessionCtx; + /**< Handle for the initialized session context */ + CpaCySymPacketType packetType; + /**< Selects the packet type */ + Cpa8U *pIv; + /**< Initialization Vector or Counter. + * + * - For block ciphers in CBC or F8 mode, or for Kasumi in F8 mode, or for + * SNOW3G in UEA2 mode, this is the Initialization Vector (IV) + * value. + * - For block ciphers in CTR mode, this is the counter. 
+ * - For GCM mode, this is either the IV (if the length is 96 bits) or J0 + * (for other sizes), where J0 is as defined by NIST SP800-38D. + * Regardless of the IV length, a full 16 bytes needs to be allocated. + * - For CCM mode, the first byte is reserved, and the nonce should be + * written starting at &pIv[1] (to allow space for the implementation + * to write in the flags in the first byte). Note that a full 16 bytes + * should be allocated, even though the ivLenInBytes field will have + * a value less than this. + * The macro @ref CPA_CY_SYM_CCM_SET_NONCE may be used here. + * - For AES-XTS, this is the 128bit tweak, i, from IEEE Std 1619-2007. + * + * For optimum performance, the data pointed to SHOULD be 8-byte + * aligned. + * + * The IV/Counter will be updated after every partial cryptographic + * operation. + */ + Cpa32U ivLenInBytes; + /**< Length of valid IV data pointed to by the pIv parameter. + * + * - For block ciphers in CBC or F8 mode, or for Kasumi in F8 mode, or for + * SNOW3G in UEA2 mode, this is the length of the IV (which + * must be the same as the block length of the cipher). + * - For block ciphers in CTR mode, this is the length of the counter + * (which must be the same as the block length of the cipher). + * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in which + * case pIv points to J0. + * - For CCM mode, this is the length of the nonce, which can be in the + * range 7 to 13 inclusive. + */ + Cpa32U cryptoStartSrcOffsetInBytes; + /**< Starting point for cipher processing, specified as number of bytes + * from start of data in the source buffer. The result of the cipher + * operation will be written back into the output buffer starting + * at this location. + */ + Cpa32U messageLenToCipherInBytes; + /**< The message length, in bytes, of the source buffer on which the + * cryptographic operation will be computed. This must be a multiple of + * the block size if a block cipher is being used. 
This is also the same + * as the result length. + * + * @note In the case of CCM (@ref CPA_CY_SYM_HASH_AES_CCM), this value + * should not include the length of the padding or the length of the + * MAC; the driver will compute the actual number of bytes over which + * the encryption will occur, which will include these values. + * + * @note There are limitations on this length for partial + * operations. Refer to the cpaCySymPerformOp function description for + * details. + * + * @note On some implementations, this length may be limited to a 16-bit + * value (65535 bytes). + * + * @note For AES-GMAC (@ref CPA_CY_SYM_HASH_AES_GMAC), this field + * should be set to 0. + */ + Cpa32U hashStartSrcOffsetInBytes; + /**< Starting point for hash processing, specified as number of bytes + * from start of packet in source buffer. + * + * @note For CCM and GCM modes of operation, this field is ignored. + * The @ref pAdditionalAuthData field should be set instead. + * + * @note For AES-GMAC (@ref CPA_CY_SYM_HASH_AES_GMAC) mode of + * operation, this field specifies the start of the AAD data in + * the source buffer. + */ + Cpa32U messageLenToHashInBytes; + /**< The message length, in bytes, of the source buffer that the hash + * will be computed on. + * + * @note There are limitations on this length for partial operations. + * Refer to the @ref cpaCySymPerformOp function description for details. + * + * @note For CCM and GCM modes of operation, this field is ignored. + * The @ref pAdditionalAuthData field should be set instead. + * + * @note For AES-GMAC (@ref CPA_CY_SYM_HASH_AES_GMAC) mode of + * operation, this field specifies the length of the AAD data in the + * source buffer. The maximum length supported for AAD data for AES-GMAC + * is 16383 bytes. + * + * @note On some implementations, this length may be limited to a 16-bit + * value (65535 bytes).
+ */ + Cpa8U *pDigestResult; + /**< If the digestIsAppended member of the @ref CpaCySymSessionSetupData + * structure is NOT set then this is a pointer to the location where the + * digest result should be inserted (in the case of digest generation) + * or where the purported digest exists (in the case of digest verification). + * + * At session registration time, the client specified the digest result + * length with the digestResultLenInBytes member of the @ref + * CpaCySymHashSetupData structure. The client must allocate at least + * digestResultLenInBytes of physically contiguous memory at this location. + * + * For partial packet processing without algorithm chaining, this pointer + * will be ignored for all but the final partial operation. + * + * For digest generation, the digest result will overwrite any data + * at this location. + * + * @note For GCM (@ref CPA_CY_SYM_HASH_AES_GCM), for "digest result" + * read "authentication tag T". + * + * If the digestIsAppended member of the @ref CpaCySymSessionSetupData + * structure is set then this value is ignored and the digest result + * is understood to be in the destination buffer for digest generation, + * and in the source buffer for digest verification. The location of the + * digest result in this case is immediately following the region over + * which the digest is computed. + * + */ + Cpa8U *pAdditionalAuthData; + /**< Pointer to Additional Authenticated Data (AAD) needed for + * authenticated cipher mechanisms (CCM and GCM), and to the IV for + * SNOW3G authentication (@ref CPA_CY_SYM_HASH_SNOW3G_UIA2). + * For other authentication mechanisms this pointer is ignored. + * + * The length of the data pointed to by this field is set up for + * the session in the @ref CpaCySymHashAuthModeSetupData structure + * as part of the @ref cpaCySymInitSession function call. This length + * must not exceed 240 bytes. 
+ * + * Specifically for CCM (@ref CPA_CY_SYM_HASH_AES_CCM), the caller + * should setup this field as follows: + * + * - the nonce should be written starting at an offset of one byte + * into the array, leaving room for the implementation to write in + * the flags to the first byte. For example, + *
+ * memcpy(&pOpData->pAdditionalAuthData[1], pNonce, nonceLen); + *
+ * The macro @ref CPA_CY_SYM_CCM_SET_NONCE may be used here. + * + * - the additional authentication data itself should be written + * starting at an offset of 18 bytes into the array, leaving room for + * the length encoding in the first two bytes of the second block. + * For example, + *
+ * memcpy(&pOpData->pAdditionalAuthData[18], pAad, aadLen); + *
+ * The macro @ref CPA_CY_SYM_CCM_SET_AAD may be used here. + * + * - the array should be big enough to hold the above fields, plus + * any padding to round this up to the nearest multiple of the + * block size (16 bytes). Padding will be added by the + * implementation. + * + * Finally, for GCM (@ref CPA_CY_SYM_HASH_AES_GCM), the caller + * should setup this field as follows: + * + * - the AAD is written in starting at byte 0 + * - the array must be big enough to hold the AAD, plus any padding + * to round this up to the nearest multiple of the block size (16 + * bytes). Padding will be added by the implementation. + * + * @note For AES-GMAC (@ref CPA_CY_SYM_HASH_AES_GMAC) mode of + * operation, this field is not used and should be set to 0. Instead + * the AAD data should be placed in the source buffer. + */ +} CpaCySymOpData; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Setup the nonce for CCM. + * @description + * This macro sets the nonce in the appropriate locations of the + * @ref CpaCySymOpData struct for the authenticated encryption + * algorithm @ref CPA_CY_SYM_HASH_AES_CCM. + ****************************************************************************/ +#define CPA_CY_SYM_CCM_SET_NONCE(pOpData, pNonce, nonceLen) do { \ + memcpy(&pOpData->pIv[1], pNonce, nonceLen); \ + memcpy(&pOpData->pAdditionalAuthData[1], pNonce, nonceLen); \ + } while (0) + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Setup the additional authentication data for CCM. + * @description + * This macro sets the additional authentication data in the + * appropriate location of the @ref CpaCySymOpData struct for the + * authenticated encryption algorithm @ref CPA_CY_SYM_HASH_AES_CCM.
+ ****************************************************************************/ +#define CPA_CY_SYM_CCM_SET_AAD(pOpData, pAad, aadLen) do { \ + memcpy(&pOpData->pAdditionalAuthData[18], pAad, aadLen); \ + } while (0) + + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Cryptographic Component Statistics. + * @deprecated + * As of v1.3 of the cryptographic API, this structure has been + * deprecated, replaced by @ref CpaCySymStats64. + * @description + * This structure contains statistics on the Symmetric Cryptographic + * operations. Statistics are set to zero when the component is + * initialized. + ****************************************************************************/ +typedef struct _CpaCySymStats { + Cpa32U numSessionsInitialized; + /**< Number of session initialized */ + Cpa32U numSessionsRemoved; + /**< Number of sessions removed */ + Cpa32U numSessionErrors; + /**< Number of session initialized and removed errors. */ + Cpa32U numSymOpRequests; + /**< Number of successful symmetric operation requests. */ + Cpa32U numSymOpRequestErrors; + /**< Number of operation requests that had an error and could + * not be processed. */ + Cpa32U numSymOpCompleted; + /**< Number of operations that completed successfully. */ + Cpa32U numSymOpCompletedErrors; + /**< Number of operations that could not be completed + * successfully due to errors. */ + Cpa32U numSymOpVerifyFailures; + /**< Number of operations that completed successfully, but the + * result of the digest verification test was that it failed. + * Note that this does not indicate an error condition. */ +} CpaCySymStats CPA_DEPRECATED; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Cryptographic Component Statistics (64-bit version). + * @description + * This structure contains a 64-bit version of the statistics on + * the Symmetric Cryptographic operations. 
+ * Statistics are set to zero when the component is initialized. + ****************************************************************************/ +typedef struct _CpaCySymStats64 { + Cpa64U numSessionsInitialized; + /**< Number of session initialized */ + Cpa64U numSessionsRemoved; + /**< Number of sessions removed */ + Cpa64U numSessionErrors; + /**< Number of session initialized and removed errors. */ + Cpa64U numSymOpRequests; + /**< Number of successful symmetric operation requests. */ + Cpa64U numSymOpRequestErrors; + /**< Number of operation requests that had an error and could + * not be processed. */ + Cpa64U numSymOpCompleted; + /**< Number of operations that completed successfully. */ + Cpa64U numSymOpCompletedErrors; + /**< Number of operations that could not be completed + * successfully due to errors. */ + Cpa64U numSymOpVerifyFailures; + /**< Number of operations that completed successfully, but the + * result of the digest verification test was that it failed. + * Note that this does not indicate an error condition. */ +} CpaCySymStats64; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Definition of callback function + * + * @description + * This is the callback function prototype. The callback function is + * registered by the application using the cpaCySymInitSession() + * function call. + * + * @context + * This callback function can be executed in a context that DOES NOT + * permit sleeping to occur. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] pCallbackTag Opaque value provided by user while making + * individual function call. + * @param[in] status Status of the operation. Valid values are + * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and + * CPA_STATUS_UNSUPPORTED. + * @param[in] operationType Identifies the operation type that was + * requested in the cpaCySymPerformOp function. 
+ * @param[in] pOpData Pointer to structure with input parameters. + * @param[in] pDstBuffer Caller MUST allocate a sufficiently sized + * destination buffer to hold the data output. For + * out-of-place processing the data outside the + * cryptographic regions in the source buffer are + * copied into the destination buffer. To perform + * "in-place" processing set the pDstBuffer + * parameter in cpaCySymPerformOp function to point + * at the same location as pSrcBuffer. For optimum + * performance, the data pointed to SHOULD be + * 8-byte aligned. + * @param[in] verifyResult This parameter is valid when the verifyDigest + * option is set in the CpaCySymSessionSetupData + * structure. A value of CPA_TRUE indicates that + * the compare succeeded. A value of CPA_FALSE + * indicates that the compare failed for an + * unspecified reason. + * + * @retval + * None + * @pre + * Component has been initialized. + * @post + * None + * @note + * None + * @see + * cpaCySymInitSession(), + * cpaCySymRemoveSession() + * + *****************************************************************************/ +typedef void (*CpaCySymCbFunc)(void *pCallbackTag, + CpaStatus status, + const CpaCySymOp operationType, + void *pOpData, + CpaBufferList *pDstBuffer, + CpaBoolean verifyResult); + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Gets the size required to store a session context. + * + * @description + * This function is used by the client to determine the size of the memory + * it must allocate in order to store the session context. This MUST be + * called before the client allocates the memory for the session context + * and before the client calls the @ref cpaCySymInitSession function. + * + * For a given implementation of this API, it is safe to assume that + * cpaCySymSessionCtxGetSize() will always return the same size and that + * the size will not be different for different setup data parameters. 
+ * However, it should be noted that the size may change: + * (1) between different implementations of the API (e.g. between software + * and hardware implementations or between different hardware + * implementations) + * (2) between different releases of the same API implementation. + * + * The size returned by this function is the smallest size needed to + * support all possible combinations of setup data parameters. Some + * setup data parameter combinations may fit within a smaller session + * context size. The alternate cpaCySymSessionCtxGetDynamicSize() + * function will return the smallest size needed to fit the + * provided setup data parameters. + * + * @context + * This is a synchronous function that cannot sleep. It can be + * executed in a context that does not permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pSessionSetupData Pointer to session setup data which + * contains parameters which are static + * for a given cryptographic session such + * as operation type, mechanisms, and keys + * for cipher and/or hash operations. + * @param[out] pSessionCtxSizeInBytes The amount of memory in bytes required + * to hold the Session Context. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * This is a synchronous function and has no completion callback + * associated with it. 
+ * @see + * CpaCySymSessionSetupData + * cpaCySymInitSession() + * cpaCySymSessionCtxGetDynamicSize() + * cpaCySymPerformOp() + * + *****************************************************************************/ +CpaStatus +cpaCySymSessionCtxGetSize(const CpaInstanceHandle instanceHandle, + const CpaCySymSessionSetupData *pSessionSetupData, + Cpa32U *pSessionCtxSizeInBytes); + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Gets the minimum size required to store a session context. + * + * @description + * This function is used by the client to determine the smallest size of + * the memory it must allocate in order to store the session context. + * This MUST be called before the client allocates the memory for the + * session context and before the client calls the @ref cpaCySymInitSession + * function. + * + * This function is an alternative to cpaCySymSessionCtxGetSize(). + * cpaCySymSessionCtxGetSize() will return a fixed size which is the + * minimum memory size needed to support all possible setup data parameter + * combinations. cpaCySymSessionCtxGetDynamicSize() will return the + * minimum memory size needed to support the specific session setup + * data parameters provided. This size may be different for different setup + * data parameters. + * + * @context + * This is a synchronous function that cannot sleep. It can be + * executed in a context that does not permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pSessionSetupData Pointer to session setup data which + * contains parameters which are static + * for a given cryptographic session such + * as operation type, mechanisms, and keys + * for cipher and/or hash operations. + * @param[out] pSessionCtxSizeInBytes The amount of memory in bytes required + * to hold the Session Context.
+ * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * This is a synchronous function and has no completion callback + * associated with it. + * @see + * CpaCySymSessionSetupData + * cpaCySymInitSession() + * cpaCySymSessionCtxGetSize() + * cpaCySymPerformOp() + * + *****************************************************************************/ +CpaStatus +cpaCySymSessionCtxGetDynamicSize(const CpaInstanceHandle instanceHandle, + const CpaCySymSessionSetupData *pSessionSetupData, + Cpa32U *pSessionCtxSizeInBytes); + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Initialize a session for symmetric cryptographic API. + * + * @description + * This function is used by the client to initialize an asynchronous + * completion callback function for the symmetric cryptographic + * operations. Clients MAY register multiple callback functions using + * this function. + * The callback function is identified by the combination of userContext, + * pSymCb and session context (sessionCtx). The session context is the + * handle to the session and needs to be passed when processing calls. + * Callbacks on completion of operations within a session are guaranteed + * to be in the same order they were submitted in. + * + * @context + * This is a synchronous function and it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. 
+ * @param[in] pSymCb Pointer to callback function to be + * registered. Set to NULL if the + * cpaCySymPerformOp function is required to + * work in a synchronous manner. + * @param[in] pSessionSetupData Pointer to session setup data which contains + * parameters which are static for a given + * cryptographic session such as operation + * type, mechanisms, and keys for cipher and/or + * hash operations. + * @param[out] sessionCtx Pointer to the memory allocated by the + * client to store the session context. This + * will be initialized with this function. This + * value needs to be passed to subsequent + * processing calls. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * This is a synchronous function and has no completion callback + * associated with it. + * @see + * CpaCySymSessionCtx, + * CpaCySymCbFunc, + * CpaCySymSessionSetupData, + * cpaCySymRemoveSession(), + * cpaCySymPerformOp() + * + *****************************************************************************/ +CpaStatus +cpaCySymInitSession(const CpaInstanceHandle instanceHandle, + const CpaCySymCbFunc pSymCb, + const CpaCySymSessionSetupData *pSessionSetupData, + CpaCySymSessionCtx sessionCtx); + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Remove (delete) a symmetric cryptographic session. 
+ * + * @description + * This function will remove a previously initialized session context + * and the installed callback handler function. Removal will fail if + * outstanding calls still exist for the initialized session handle. + * The client needs to retry the remove function at a later time. + * The memory for the session context MUST NOT be freed until this call + * has completed successfully. + * + * @context + * This is a synchronous function that cannot sleep. It can be + * executed in a context that does not permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in,out] pSessionCtx Session context to be removed. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * This is a synchronous function and has no completion callback + * associated with it. + * + * @see + * CpaCySymSessionCtx, + * cpaCySymInitSession() + * + *****************************************************************************/ +CpaStatus +cpaCySymRemoveSession(const CpaInstanceHandle instanceHandle, + CpaCySymSessionCtx pSessionCtx); + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Update a session. + * + * @description + * This function is used to update certain parameters of a session, as + * specified by the CpaCySymSessionUpdateData data structure.
+ * + * It can be used on sessions created with either the so-called + * Traditional API (@ref cpaCySymInitSession) or the Data Plane API + * (@ref cpaCySymDpInitSession). + * + * In order for this function to operate correctly, two criteria must + * be met: + * + * - In the case of sessions created with the Traditional API, the + * session must be stateless, i.e. the field partialsNotRequired of + * the CpaCySymSessionSetupData data structure must be FALSE. + * (Sessions created using the Data Plane API are always stateless.) + * + * - There must be no outstanding requests in flight for the session. + * The application can call the function @ref cpaCySymSessionInUse + * to test for this. + * + * Note that in the case of multi-threaded applications (which are + * supported using the Traditional API only), this function may fail + * even if a previous invocation of the function @ref + * cpaCySymSessionInUse indicated that there were no outstanding + * requests. + * + * @param[in] sessionCtx Identifies the session to be updated. + * @param[in] pSessionUpdateData Pointer to session data which contains + * the parameters to be updated. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * @post + * None + * @note + * This is a synchronous function and has no completion callback + * associated with it.
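+ *
+ *      As an illustrative sketch only (not itself part of this API), a
+ *      client might drain outstanding requests before updating session
+ *      parameters; here updateData is assumed to be a populated
+ *      CpaCySymSessionUpdateData structure:
+ * @code
+
+CpaBoolean inUse = CPA_TRUE;
+CpaStatus status;
+
+/* Wait until no requests are in flight on the session */
+do {
+    status = cpaCySymSessionInUse(sessionCtx, &inUse);
+} while ((CPA_STATUS_SUCCESS == status) && (CPA_TRUE == inUse));
+
+if (CPA_STATUS_SUCCESS == status)
+{
+    status = cpaCySymUpdateSession(sessionCtx, &updateData);
+}
+ * @endcode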
+ *****************************************************************************/ +CpaStatus +cpaCySymUpdateSession(CpaCySymSessionCtx sessionCtx, + const CpaCySymSessionUpdateData *pSessionUpdateData); + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Indicates whether there are outstanding requests on a given + * session. + * + * @description + * This function is used to test whether there are outstanding + * requests in flight for a specified session. This may be used + * before updating session parameters using the function @ref + * cpaCySymUpdateSession. See the additional notes on + * multi-threaded applications in the description of that function. + * + * @param[in] sessionCtx Identifies the session to be queried. + * @param[out] pSessionInUse Returns CPA_TRUE if there are + * outstanding requests on the session, + * or CPA_FALSE otherwise. + *****************************************************************************/ +CpaStatus +cpaCySymSessionInUse(CpaCySymSessionCtx sessionCtx, + CpaBoolean *pSessionInUse); + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Perform a symmetric cryptographic operation on an existing session. + * + * @description + * Performs a cipher, hash or combined (cipher and hash) operation on + * the source data buffer using supported symmetric key algorithms and + * modes. + * + * This function maintains cryptographic state between calls for + * partial cryptographic operations. If a partial cryptographic + * operation is being performed, then on a per-session basis, the next + * part of the multi-part message can be submitted prior to previous + * parts being completed, the only limitation being that all parts + * must be performed in sequential order.
+ * + * If for any reason a client wishes to terminate the partial packet + * processing on the session (for example if a packet fragment was lost) + * then the client MUST remove the session. + * + * When using partial packet processing with algorithm chaining, only + * the cipher state is maintained between calls. The hash state is + * not maintained between calls. Instead the hash digest will be + * generated/verified for each call. If both the cipher state and + * hash state need to be maintained between calls, algorithm chaining + * cannot be used. + * + * The following restrictions apply to the length: + * + * - When performing block based operations on a partial packet + * (excluding the final partial packet), the data that is to be + * operated on MUST be a multiple of the block size of the algorithm + * being used. This restriction only applies to the cipher state + * when using partial packets with algorithm chaining. + * + * - The final block must not be of length zero (0) if the operation + * being performed is the authentication algorithm @ref + * CPA_CY_SYM_HASH_AES_XCBC. This is because this algorithm requires + * that the final block be XORed with another value internally. + * If the length is zero, then the return code @ref + * CPA_STATUS_INVALID_PARAM will be returned. + * + * - The length of the final block must be greater than or equal to + * 16 bytes when using the @ref CPA_CY_SYM_CIPHER_AES_XTS cipher + * algorithm. + * + * Partial packet processing is supported only when the following + * conditions are true: + * + * - The cipher, hash or authentication operation is "in place" (that is, + * pDstBuffer == pSrcBuffer) + * + * - The cipher or hash algorithm is NOT one of Kasumi or SNOW3G + * + * - The cipher mode is NOT F8 mode. + * + * - The hash algorithm is NOT SHAKE + * + * - The cipher algorithm is not SM4 + * + * - The cipher algorithm is not CPA_CY_SYM_CIPHER_CHACHA and the hash + * algorithm is not CPA_CY_SYM_HASH_POLY.
+ * + * - The instance/implementation supports partial packets as one of + * its capabilities (see @ref CpaCySymCapabilitiesInfo). + * + * The term "in-place" means that the result of the cryptographic + * operation is written into the source buffer. The term "out-of-place" + * means that the result of the cryptographic operation is written into + * the destination buffer. To perform "in-place" processing, set the + * pDstBuffer parameter to point at the same location as the pSrcBuffer + * parameter. + * + * @context + * When called as an asynchronous function it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * When called as a synchronous function it may sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes when configured to operate in synchronous mode. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pCallbackTag Opaque data that will be returned to the client + * in the callback. + * @param[in] pOpData Pointer to a structure containing request + * parameters. The client code allocates the memory + * for this structure. This component takes + * ownership of the memory until it is returned in + * the callback. + * @param[in] pSrcBuffer The source buffer. The caller MUST allocate + * the source buffer and populate it + * with data. For optimum performance, the data + * pointed to SHOULD be 8-byte aligned. For + * block ciphers, the data passed in MUST be + * a multiple of the relevant block size. + * i.e. padding WILL NOT be applied to the data. + * For optimum performance, the buffer should + * only contain the data region that the + * cryptographic operation(s) must be performed on. + * Any additional data in the source buffer may be + * copied to the destination buffer and this copy + * may degrade performance. + * @param[out] pDstBuffer The destination buffer. 
The caller MUST + * allocate a sufficiently sized destination + * buffer to hold the data output (including + * the authentication tag in the case of CCM). + * Furthermore, the destination buffer must be the + * same size as the source buffer (i.e. the sum of + * lengths of the buffers in the buffer list must + * be the same). This effectively means that the + * source buffer must in fact be big enough to hold + * the output data, too. This is because, + * for out-of-place processing, the data outside the + * regions in the source buffer on which + * cryptographic operations are performed is copied + * into the destination buffer. To perform + * "in-place" processing set the pDstBuffer + * parameter in the cpaCySymPerformOp function to point + * at the same location as pSrcBuffer. For optimum + * performance, the data pointed to SHOULD be + * 8-byte aligned. + * @param[out] pVerifyResult In synchronous mode, this parameter is returned + * when the verifyDigest option is set in the + * CpaCySymSessionSetupData structure. A value of + * CPA_TRUE indicates that the compare succeeded. A + * value of CPA_FALSE indicates that the compare + * failed for an unspecified reason. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resource. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized via cpaCyStartInstance function. + * A Cryptographic session has been previously setup using the + * @ref cpaCySymInitSession function call. + * @post + * None + * + * @note + * When in asynchronous mode, a callback of type CpaCySymCbFunc is + * generated in response to this function call.
Any errors generated during + * processing are reported as part of the callback status code. + * + * @see + * CpaCySymOpData, + * cpaCySymInitSession(), + * cpaCySymRemoveSession() + *****************************************************************************/ +CpaStatus +cpaCySymPerformOp(const CpaInstanceHandle instanceHandle, + void *pCallbackTag, + const CpaCySymOpData *pOpData, + const CpaBufferList *pSrcBuffer, + CpaBufferList *pDstBuffer, + CpaBoolean *pVerifyResult); + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Query symmetric cryptographic statistics for a specific instance. + * + * @deprecated + * As of v1.3 of the cryptographic API, this function has been + * deprecated, replaced by @ref cpaCySymQueryStats64(). + * + * @description + * This function will query a specific instance for statistics. The + * user MUST allocate the CpaCySymStats structure and pass the + * reference to that into this function call. This function will write + * the statistic results into the passed in CpaCySymStats + * structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pSymStats Pointer to memory into which the + * statistics will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. 
+ * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner, i.e. no asynchronous + * callback will be generated. + * @see + * CpaCySymStats + *****************************************************************************/ +CpaStatus CPA_DEPRECATED +cpaCySymQueryStats(const CpaInstanceHandle instanceHandle, + struct _CpaCySymStats *pSymStats); + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Query symmetric cryptographic statistics (64-bit version) for a + * specific instance. + * + * @description + * This function will query a specific instance for statistics. The + * user MUST allocate the CpaCySymStats64 structure and pass the + * reference to that into this function call. This function will write + * the statistic results into the passed in CpaCySymStats64 + * structure. + * + * Note: statistics returned by this function do not interrupt current data + * processing and as such can be slightly out of sync with operations that + * are in progress during the statistics retrieval process. + * + * @context + * This is a synchronous function and it can sleep. It MUST NOT be + * executed in a context that DOES NOT permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * Yes + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[out] pSymStats Pointer to memory into which the + * statistics will be written. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. 
+ * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * This function operates in a synchronous manner, i.e. no asynchronous + * callback will be generated. + * @see + * CpaCySymStats64 + *****************************************************************************/ +CpaStatus +cpaCySymQueryStats64(const CpaInstanceHandle instanceHandle, + CpaCySymStats64 *pSymStats); + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Symmetric Capabilities Info + * + * @description + * This structure contains the capabilities that vary across + * implementations of the symmetric sub-API of the cryptographic API. + * This structure is used in conjunction with @ref + * cpaCySymQueryCapabilities() to determine the capabilities supported + * by a particular API implementation. + * + * For example, to see if an implementation supports cipher + * @ref CPA_CY_SYM_CIPHER_AES_CBC, use the code + * + * @code + +if (CPA_BITMAP_BIT_TEST(capInfo.ciphers, CPA_CY_SYM_CIPHER_AES_CBC)) +{ + // algo is supported +} +else +{ + // algo is not supported +} + * @endcode + * + * The client MUST allocate memory for this structure and any members + * that require memory. When the structure is passed into the function + * ownership of the memory passes to the function. Ownership of the + * memory returns to the client when the function returns. + *****************************************************************************/ +typedef struct _CpaCySymCapabilitiesInfo +{ + CPA_BITMAP(ciphers, CPA_CY_SYM_CIPHER_CAP_BITMAP_SIZE); + /**< Bitmap representing which cipher algorithms (and modes) are + * supported by the instance. + * Bits can be tested using the macro @ref CPA_BITMAP_BIT_TEST. 
+ * The bit positions are those specified in the enumerated type + * @ref CpaCySymCipherAlgorithm. */ + CPA_BITMAP(hashes, CPA_CY_SYM_HASH_CAP_BITMAP_SIZE); + /**< Bitmap representing which hash/authentication algorithms are + * supported by the instance. + * Bits can be tested using the macro @ref CPA_BITMAP_BIT_TEST. + * The bit positions are those specified in the enumerated type + * @ref CpaCySymHashAlgorithm. */ + CpaBoolean partialPacketSupported; + /**< CPA_TRUE if instance supports partial packets. + * See @ref CpaCySymPacketType. */ +} CpaCySymCapabilitiesInfo; + +/** + ***************************************************************************** + * @ingroup cpaCySym + * Returns capabilities of the symmetric API group of a Cryptographic + * API instance. + * + * @description + * This function is used to determine which specific capabilities are + * supported within the symmetric sub-group of the Cryptographic API. + * + * @context + * The function shall not be called in an interrupt context. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * This function is synchronous and blocking. + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Handle to an instance of this API. + * @param[out] pCapInfo Pointer to capabilities info structure. + * All fields in the structure + * are populated by the API instance. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The instance has been initialized via the @ref cpaCyStartInstance + * function. 
+ * @post + * None + *****************************************************************************/ +CpaStatus +cpaCySymQueryCapabilities(const CpaInstanceHandle instanceHandle, + CpaCySymCapabilitiesInfo * pCapInfo); + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /* CPA_CY_SYM_H */ Index: sys/dev/qat/qat_api/include/lac/cpa_cy_sym_dp.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/include/lac/cpa_cy_sym_dp.h @@ -0,0 +1,986 @@ +/*************************************************************************** + * + * BSD LICENSE + * + * Copyright(c) 2007-2022 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * + ***************************************************************************/ + +/* + ***************************************************************************** + * Doxygen group definitions + ****************************************************************************/ + +/** + ***************************************************************************** + * @file cpa_cy_sym_dp.h + * + * @defgroup cpaCySymDp Symmetric cryptographic Data Plane API + * + * @ingroup cpaCySym + * + * @description + * These data structures and functions specify the Data Plane API + * for symmetric cipher, hash, and combined cipher and hash + * operations. + * + * This API is recommended for data plane applications, in which the + * cost of offload - that is, the cycles consumed by the driver in + * sending requests to the hardware, and processing responses - needs + * to be minimized. In particular, use of this API is recommended + * if the following constraints are acceptable to your application: + * + * - Thread safety is not guaranteed. Each software thread should + * have access to its own unique instance (CpaInstanceHandle) to + * avoid contention. + * - Polling is used, rather than interrupts (which are expensive). + * Implementations of this API will provide a function (not + * defined as part of this API) to read responses from the hardware + * response queue and dispatch callback functions, as specified on + * this API. 
+ * - Buffers and buffer lists are passed using physical addresses, + * to avoid virtual to physical address translation costs. + * - For GCM and CCM modes of AES, when performing decryption and + * verification, if verification fails, then the message buffer + * will NOT be zeroed. (This is a consequence of using physical + * addresses for the buffers.) + * - The ability to enqueue one or more requests without submitting + * them to the hardware allows for certain costs to be amortized + * across multiple requests. + * - Only asynchronous invocation is supported. + * - There is no support for partial packets. + * - Implementations may provide certain features as optional at + * build time, such as atomic counters. + * - The "default" instance (@ref CPA_INSTANCE_HANDLE_SINGLE) is not + * supported on this API. The specific handle should be obtained + * using the instance discovery functions (@ref cpaCyGetNumInstances, + * @ref cpaCyGetInstances). + * + * @note Performance Trade-Offs + * Different implementations of this API may have different performance + * trade-offs; please refer to the documentation for your implementation + * for details. However, the following concepts informed the definition + * of this API. + * + * The API distinguishes between enqueuing a request and actually + * submitting that request to the cryptographic acceleration + * engine to be performed. This allows multiple requests to be enqueued + * (either individually or in batch), and then for all enqueued requests + * to be submitted in a single operation. The rationale is that in some + * (especially hardware-based) implementations, the submit operation + * is expensive; for example, it may incur an MMIO instruction. The + * API allows this cost to be amortized over a number of requests. The + * precise number of such requests can be tuned for optimal + * performance. 
+ * + * Specifically: + * + * - The function @ref cpaCySymDpEnqueueOp allows one request to be + * enqueued, and optionally for that request (and all previously + * enqueued requests) to be submitted. + * - The function @ref cpaCySymDpEnqueueOpBatch allows multiple + * requests to be enqueued, and optionally for those requests (and all + * previously enqueued requests) to be submitted. + * - The function @ref cpaCySymDpPerformOpNow enqueues no requests, but + * submits all previously enqueued requests. + *****************************************************************************/ + +#ifndef CPA_CY_SYM_DP_H +#define CPA_CY_SYM_DP_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include "cpa_cy_common.h" +#include "cpa_cy_sym.h" + +/** + ***************************************************************************** + * @ingroup cpaCySymDp + * Cryptographic component symmetric session context handle for the + * data plane API. + * @description + * Handle to a cryptographic data plane session context. The memory for + * this handle is allocated by the client. The size of the memory that + * the client needs to allocate is determined by a call to the @ref + * cpaCySymDpSessionCtxGetSize or @ref cpaCySymDpSessionCtxGetDynamicSize + * functions. The session context memory is initialized with a call to + * the @ref cpaCySymInitSession function. + * This memory MUST not be freed until a call to @ref + * cpaCySymDpRemoveSession has completed successfully. + * + *****************************************************************************/ +typedef void * CpaCySymDpSessionCtx; + +/** + ***************************************************************************** + * @ingroup cpaCySymDp + * Operation Data for cryptographic data plane API. + * + * @description + * This structure contains data relating to a request to perform + * symmetric cryptographic processing on one or more data buffers. 
+ * + * The physical memory to which this structure points needs to be + * at least 8-byte aligned. + * + * All reserved fields SHOULD NOT be written or read by the + * calling code. + * + * @see + * cpaCySymDpEnqueueOp, cpaCySymDpEnqueueOpBatch + ****************************************************************************/ +typedef struct _CpaCySymDpOpData { + Cpa64U reserved0; + /**< Reserved for internal usage. */ + Cpa32U cryptoStartSrcOffsetInBytes; + /**< Starting point for cipher processing, specified as number of bytes + * from start of data in the source buffer. The result of the cipher + * operation will be written back into the buffer starting at this + * location in the destination buffer. + */ + Cpa32U messageLenToCipherInBytes; + /**< The message length, in bytes, of the source buffer on which the + * cryptographic operation will be computed. This must be a multiple of + * the block size if a block cipher is being used. This is also the + * same as the result length. + * + * @note In the case of CCM (@ref CPA_CY_SYM_HASH_AES_CCM), this value + * should not include the length of the padding or the length of the + * MAC; the driver will compute the actual number of bytes over which + * the encryption will occur, which will include these values. + * + * @note For AES-GMAC (@ref CPA_CY_SYM_HASH_AES_GMAC), this field + * should be set to 0. + * + * @note On some implementations, this length may be limited to a 16-bit + * value (65535 bytes). + */ + CpaPhysicalAddr iv; + /**< Initialization Vector or Counter. Specifically, this is the + * physical address of one of the following: + * + * - For block ciphers in CBC mode, or for Kasumi in F8 mode, or for + * SNOW3G in UEA2 mode, this is the Initialization Vector (IV) + * value. + * - For ARC4, this is reserved for internal usage. + * - For block ciphers in CTR mode, this is the counter. 
+ * - For GCM mode, this is either the IV (if the length is 96 bits) or J0 + * (for other sizes), where J0 is as defined by NIST SP800-38D. + * Regardless of the IV length, a full 16 bytes needs to be allocated. + * - For CCM mode, the first byte is reserved, and the nonce should be + * written starting at &pIv[1] (to allow space for the implementation + * to write in the flags in the first byte). Note that a full 16 bytes + * should be allocated, even though the ivLenInBytes field will have + * a value less than this. + * The macro @ref CPA_CY_SYM_CCM_SET_NONCE may be used here. + */ + Cpa64U reserved1; + /**< Reserved for internal usage. */ + Cpa32U hashStartSrcOffsetInBytes; + /**< Starting point for hash processing, specified as number of bytes + * from start of packet in source buffer. + * + * @note For CCM and GCM modes of operation, this value in this field + * is ignored, and the field is reserved for internal usage. + * The fields @ref additionalAuthData and @ref pAdditionalAuthData + * should be set instead. + * + * @note For AES-GMAC (@ref CPA_CY_SYM_HASH_AES_GMAC) mode of + * operation, this field specifies the start of the AAD data in + * the source buffer. + */ + Cpa32U messageLenToHashInBytes; + /**< The message length, in bytes, of the source buffer that the hash + * will be computed on. + * + * @note For CCM and GCM modes of operation, this value in this field + * is ignored, and the field is reserved for internal usage. + * The fields @ref additionalAuthData and @ref pAdditionalAuthData + * should be set instead. + * + * @note For AES-GMAC (@ref CPA_CY_SYM_HASH_AES_GMAC) mode of + * operation, this field specifies the length of the AAD data in the + * source buffer. + * + * @note On some implementations, this length may be limited to a 16-bit + * value (65535 bytes). 
+ */
+ CpaPhysicalAddr additionalAuthData;
+ /**< Physical address of the Additional Authenticated Data (AAD),
+ * which is needed for authenticated cipher mechanisms (CCM and
+ * GCM), or the IV for SNOW3G authentication (@ref
+ * CPA_CY_SYM_HASH_SNOW3G_UIA2). For other authentication
+ * mechanisms, this value is ignored, and the field is reserved for
+ * internal usage.
+ *
+ * The length of the data pointed to by this field is set up for
+ * the session in the @ref CpaCySymHashAuthModeSetupData structure
+ * as part of the @ref cpaCySymDpInitSession function call. This length
+ * must not exceed 240 bytes.
+ *
+ * If AAD is not used, this address must be set to zero.
+ *
+ * Specifically for CCM (@ref CPA_CY_SYM_HASH_AES_CCM) and GCM (@ref
+ * CPA_CY_SYM_HASH_AES_GCM), this field should be set up in the same
+ * way as the corresponding field, pAdditionalAuthData, on the
+ * "traditional" API (see the @ref CpaCySymOpData).
+ *
+ * @note For AES-GMAC (@ref CPA_CY_SYM_HASH_AES_GMAC) mode of
+ * operation, this field is not used and should be set to 0. Instead
+ * the AAD data should be placed in the source buffer.
+ *
+ */
+ CpaPhysicalAddr digestResult;
+ /**< If the digestIsAppended member of the @ref CpaCySymSessionSetupData
+ * structure is NOT set then this is the physical address of the location
+ * where the digest result should be inserted (in the case of digest
+ * generation) or where the purported digest exists (in the case of digest
+ * verification).
+ *
+ * At session registration time, the client specified the digest result
+ * length with the digestResultLenInBytes member of the @ref
+ * CpaCySymHashSetupData structure. The client must allocate at least
+ * digestResultLenInBytes of physically contiguous memory at this location.
+ *
+ * For digest generation, the digest result will overwrite any data
+ * at this location.
+ *
+ * @note For GCM (@ref CPA_CY_SYM_HASH_AES_GCM), for "digest result"
+ * read "authentication tag T".
+ * + * If the digestIsAppended member of the @ref CpaCySymSessionSetupData + * structure is set then this value is ignored and the digest result + * is understood to be in the destination buffer for digest generation, + * and in the source buffer for digest verification. The location of the + * digest result in this case is immediately following the region over + * which the digest is computed. + */ + + CpaInstanceHandle instanceHandle; + /**< Instance to which the request is to be enqueued. + * @note A callback function must have been registered on the instance + * using @ref cpaCySymDpRegCbFunc. + */ + CpaCySymDpSessionCtx sessionCtx; + /**< Session context specifying the cryptographic parameters for this + * request. + * @note The session must have been created using @ref + * cpaCySymDpInitSession. + */ + Cpa32U ivLenInBytes; + /**< Length of valid IV data pointed to by the pIv parameter. + * + * - For block ciphers in CBC mode, or for Kasumi in F8 mode, or for + * SNOW3G in UEA2 mode, this is the length of the IV (which + * must be the same as the block length of the cipher). + * - For block ciphers in CTR mode, this is the length of the counter + * (which must be the same as the block length of the cipher). + * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in which + * case pIv points to J0. + * - For CCM mode, this is the length of the nonce, which can be in the + * range 7 to 13 inclusive. + */ + CpaPhysicalAddr srcBuffer; + /**< Physical address of the source buffer on which to operate. + * This is either: + * + * - The location of the data, of length srcBufferLen; or, + * - If srcBufferLen has the special value @ref CPA_DP_BUFLIST, then + * srcBuffer contains the location where a @ref CpaPhysBufferList is + * stored. In this case, the CpaPhysBufferList MUST be aligned + * on an 8-byte boundary. + * - For optimum performance, the buffer should only contain the data + * region that the cryptographic operation(s) must be performed on. 
+ * Any additional data in the source buffer may be copied to the
+ * destination buffer and this copy may degrade performance.
+ */
+ Cpa32U srcBufferLen;
+ /**< Length of source buffer, or @ref CPA_DP_BUFLIST. */
+ CpaPhysicalAddr dstBuffer;
+ /**< Physical address of the destination buffer on which to operate.
+ * This is either:
+ *
+ * - The location of the data, of length dstBufferLen; or,
+ * - If dstBufferLen has the special value @ref CPA_DP_BUFLIST, then
+ * dstBuffer contains the location where a @ref CpaPhysBufferList is
+ * stored. In this case, the CpaPhysBufferList MUST be aligned
+ * on an 8-byte boundary.
+ *
+ * For "in-place" operation, the dstBuffer may be identical to the
+ * srcBuffer.
+ */
+ Cpa32U dstBufferLen;
+ /**< Length of destination buffer, or @ref CPA_DP_BUFLIST. */
+
+ CpaPhysicalAddr thisPhys;
+ /**< Physical address of this data structure. */
+
+ Cpa8U *pIv;
+ /**< Pointer to (and therefore, the virtual address of) the IV field
+ * above.
+ * Needed here because the driver in some cases writes to this field,
+ * in addition to sending it to the accelerator.
+ */
+ Cpa8U *pAdditionalAuthData;
+ /**< Pointer to (and therefore, the virtual address of) the
+ * additionalAuthData field above.
+ * Needed here because the driver in some cases writes to this field,
+ * in addition to sending it to the accelerator.
+ */
+ void *pCallbackTag;
+ /**< Opaque data that will be returned to the client in the function
+ * completion callback.
+ *
+ * This opaque data is not used by the implementation of the API,
+ * but is simply returned as part of the asynchronous response.
+ * It may be used to store information that might be useful when
+ * processing the response later.
+ */
+} CpaCySymDpOpData;
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCySymDp
+ * Definition of callback function for cryptographic data plane API.
+ *
+ * @description
+ * This is the callback function prototype.
The callback function is
+ * registered by the application using the @ref cpaCySymDpRegCbFunc
+ * function call, and called back on completion of asynchronous
+ * requests made via calls to @ref cpaCySymDpEnqueueOp or @ref
+ * cpaCySymDpEnqueueOpBatch.
+ *
+ * @context
+ * This callback function can be executed in a context that DOES NOT
+ * permit sleeping to occur.
+ * @assumptions
+ * None
+ * @sideEffects
+ * None
+ * @reentrant
+ * No
+ * @threadSafe
+ * No
+ *
+ * @param[in] pOpData Pointer to the CpaCySymDpOpData object which
+ * was supplied as part of the original request.
+ * @param[in] status Status of the operation. Valid values are
+ * CPA_STATUS_SUCCESS, CPA_STATUS_FAIL and
+ * CPA_STATUS_UNSUPPORTED.
+ * @param[in] verifyResult This parameter is valid when the verifyDigest
+ * option is set in the CpaCySymSessionSetupData
+ * structure. A value of CPA_TRUE indicates that
+ * the compare succeeded. A value of CPA_FALSE
+ * indicates that the compare failed.
+ *
+ * @return
+ * None
+ * @pre
+ * Component has been initialized.
+ * Callback has been registered with @ref cpaCySymDpRegCbFunc.
+ * @post
+ * None
+ * @note
+ * None
+ * @see
+ * cpaCySymDpRegCbFunc
+ *****************************************************************************/
+typedef void (*CpaCySymDpCbFunc)(CpaCySymDpOpData *pOpData,
+ CpaStatus status,
+ CpaBoolean verifyResult);
+
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCySymDp
+ * Registration of the operation completion callback function.
+ *
+ * @description
+ * This function allows a completion callback function to be registered.
+ * The registered callback function is invoked on completion of
+ * asynchronous requests made via calls to @ref cpaCySymDpEnqueueOp
+ * or @ref cpaCySymDpEnqueueOpBatch.
+ *
+ * If a callback function was previously registered, it is overwritten.
+ *
+ * @context
+ * This is a synchronous function and it cannot sleep.
It can be + * executed in a context that does not permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @reentrant + * No + * @threadSafe + * No + * + * @param[in] instanceHandle Instance on which the callback function is to be + * registered. + * @param[in] pSymNewCb Callback function for this instance. + + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * Component has been initialized. + * @post + * None + * @note + * None + * @see + * CpaCySymDpCbFunc + *****************************************************************************/ +CpaStatus cpaCySymDpRegCbFunc(const CpaInstanceHandle instanceHandle, + const CpaCySymDpCbFunc pSymNewCb); + +/** + ***************************************************************************** + * @ingroup cpaCySymDp + * Gets the size required to store a session context for the data plane + * API. + * + * @description + * This function is used by the client to determine the size of the memory + * it must allocate in order to store the session context. This MUST be + * called before the client allocates the memory for the session context + * and before the client calls the @ref cpaCySymDpInitSession function. + * + * For a given implementation of this API, it is safe to assume that + * cpaCySymDpSessionCtxGetSize() will always return the same size and that + * the size will not be different for different setup data parameters. + * However, it should be noted that the size may change: + * (1) between different implementations of the API (e.g. between software + * and hardware implementations or between different hardware + * implementations) + * (2) between different releases of the same API implementation. 
+ * + * The size returned by this function is the smallest size needed to + * support all possible combinations of setup data parameters. Some + * setup data parameter combinations may fit within a smaller session + * context size. The alternate cpaCySymDpSessionCtxGetDynamicSize() + * function will return the smallest size needed to fit the + * provided setup data parameters. + * + * @context + * This is a synchronous function that cannot sleep. It can be + * executed in a context that does not permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * Yes + * + * @param[in] instanceHandle Instance handle. + * @param[in] pSessionSetupData Pointer to session setup data which + * contains parameters which are static + * for a given cryptographic session such + * as operation type, mechanisms, and keys + * for cipher and/or hash operations. + * @param[out] pSessionCtxSizeInBytes The amount of memory in bytes required + * to hold the Session Context. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized. + * @post + * None + * @note + * This is a synchronous function and has no completion callback + * associated with it. 
+ * @see
+ * CpaCySymSessionSetupData
+ * cpaCySymDpSessionCtxGetDynamicSize()
+ * cpaCySymDpInitSession()
+ *****************************************************************************/
+CpaStatus
+cpaCySymDpSessionCtxGetSize(const CpaInstanceHandle instanceHandle,
+ const CpaCySymSessionSetupData *pSessionSetupData,
+ Cpa32U *pSessionCtxSizeInBytes);
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCySymDp
+ * Gets the minimum size required to store a session context for the data
+ * plane API.
+ *
+ * @description
+ * This function is used by the client to determine the smallest size of
+ * the memory it must allocate in order to store the session context.
+ * This MUST be called before the client allocates the memory for the
+ * session context and before the client calls the
+ * @ref cpaCySymDpInitSession function.
+ *
+ * This function is an alternate to cpaCySymDpSessionCtxGetSize().
+ * cpaCySymDpSessionCtxGetSize() will return a fixed size which is the
+ * minimum memory size needed to support all possible setup data parameter
+ * combinations. cpaCySymDpSessionCtxGetDynamicSize() will return the
+ * minimum memory size needed to support the specific session setup
+ * data parameters provided. This size may be different for different setup
+ * data parameters.
+ *
+ * @context
+ * This is a synchronous function that cannot sleep. It can be
+ * executed in a context that does not permit sleeping.
+ * @assumptions
+ * None
+ * @sideEffects
+ * None
+ * @blocking
+ * No
+ * @reentrant
+ * No
+ * @threadSafe
+ * Yes
+ *
+ * @param[in] instanceHandle Instance handle.
+ * @param[in] pSessionSetupData Pointer to session setup data which
+ * contains parameters which are static
+ * for a given cryptographic session such
+ * as operation type, mechanisms, and keys
+ * for cipher and/or hash operations.
+ * @param[out] pSessionCtxSizeInBytes The amount of memory in bytes required
+ * to hold the Session Context.
+ * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized. + * @post + * None + * @note + * This is a synchronous function and has no completion callback + * associated with it. + * @see + * CpaCySymSessionSetupData + * cpaCySymDpSessionCtxGetSize() + * cpaCySymDpInitSession() + *****************************************************************************/ +CpaStatus +cpaCySymDpSessionCtxGetDynamicSize(const CpaInstanceHandle instanceHandle, + const CpaCySymSessionSetupData *pSessionSetupData, + Cpa32U *pSessionCtxSizeInBytes); + +/** + ***************************************************************************** + * @ingroup cpaCySymDp + * Initialize a session for the symmetric cryptographic data plane API. + * + * @description + * This function is used by the client to initialize an asynchronous + * session context for symmetric cryptographic data plane operations. + * The returned session context is the handle to the session and needs to + * be passed when requesting cryptographic operations to be performed. + * + * Only sessions created using this function may be used when + * invoking functions on this API + * + * The session can be removed using @ref cpaCySymDpRemoveSession. + * + * @context + * This is a synchronous function and it cannot sleep. It can be + * executed in a context that does not permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * No + * + * @param[in] instanceHandle Instance to which the requests will be + * submitted. 
+ * @param[in] pSessionSetupData Pointer to session setup data which + * contains parameters that are static + * for a given cryptographic session such + * as operation type, algorithm, and keys + * for cipher and/or hash operations. + * @param[out] sessionCtx Pointer to the memory allocated by the + * client to store the session context. This + * memory must be physically contiguous, and + * its length (in bytes) must be at least as + * big as specified by a call to @ref + * cpaCySymDpSessionCtxGetSize. This memory + * will be initialized with this function. This + * value needs to be passed to subsequent + * processing calls. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized. + * @post + * None + * @note + * This is a synchronous function and has no completion callback + * associated with it. + * @see + * cpaCySymDpSessionCtxGetSize, cpaCySymDpRemoveSession + *****************************************************************************/ +CpaStatus +cpaCySymDpInitSession(CpaInstanceHandle instanceHandle, + const CpaCySymSessionSetupData *pSessionSetupData, + CpaCySymDpSessionCtx sessionCtx); + +/** + ***************************************************************************** + * @ingroup cpaCySymDp + * Remove (delete) a symmetric cryptographic session for the data plane + * API. + * + * @description + * This function will remove a previously initialized session context + * and the installed callback handler function. Removal will fail if + * outstanding calls still exist for the initialized session handle. 
+ * The client needs to retry the remove function at a later time. + * The memory for the session context MUST not be freed until this call + * has completed successfully. + * + * @context + * This is a synchronous function that cannot sleep. It can be + * executed in a context that does not permit sleeping. + * @assumptions + * None + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * No + * + * @param[in] instanceHandle Instance handle. + * @param[in,out] sessionCtx Session context to be removed. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESOURCE Error related to system resources. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The component has been initialized. + * @post + * None + * @note + * Note that this is a synchronous function and has no completion callback + * associated with it. + * + * @see + * CpaCySymDpSessionCtx, + * cpaCySymDpInitSession() + * + *****************************************************************************/ +CpaStatus +cpaCySymDpRemoveSession(const CpaInstanceHandle instanceHandle, + CpaCySymDpSessionCtx sessionCtx); + + +/** + ***************************************************************************** + * @ingroup cpaCySymDp + * Enqueue a single symmetric cryptographic request. + * + * @description + * This function enqueues a single request to perform a cipher, + * hash or combined (cipher and hash) operation. Optionally, the + * request is also submitted to the cryptographic engine to be + * performed. + * + * See note about performance trade-offs on the @ref cpaCySymDp API. 
+ * + * The function is asynchronous; control is returned to the user once + * the request has been submitted. On completion of the request, the + * application may poll for responses, which will cause a callback + * function (registered via @ref cpaCySymDpRegCbFunc) to be invoked. + * Callbacks within a session are guaranteed to be in the same order + * in which they were submitted. + * + * The following restrictions apply to the pOpData parameter: + * + * - The memory MUST be aligned on an 8-byte boundary. + * - The structure MUST reside in physically contiguous memory. + * - The reserved fields of the structure SHOULD NOT be written + * or read by the calling code. + * + * @context + * This function will not sleep, and hence can be executed in a context + * that does not permit sleeping. + * + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * No + * + * @param[in] pOpData Pointer to a structure containing the + * request parameters. The client code allocates + * the memory for this structure. This component + * takes ownership of the memory until it is + * returned in the callback, which was registered + * on the instance via @ref cpaCySymDpRegCbFunc. + * See the above Description for restrictions + * that apply to this parameter. + * @param[in] performOpNow Flag to specify whether the operation should be + * performed immediately (CPA_TRUE), or simply + * enqueued to be performed later (CPA_FALSE). + * In the latter case, the request is submitted + * to be performed either by calling this function + * again with this flag set to CPA_TRUE, or by + * invoking the function @ref + * cpaCySymDpPerformOpNow. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. 
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The session identified by pOpData->sessionCtx was setup using + * @ref cpaCySymDpInitSession. + * The instance identified by pOpData->instanceHandle has had a + * callback function registered via @ref cpaCySymDpRegCbFunc. + * + * @post + * None + * + * @note + * A callback of type @ref CpaCySymDpCbFunc is generated in response to + * this function call. Any errors generated during processing are + * reported as part of the callback status code. + * + * @see + * cpaCySymDpInitSession, + * cpaCySymDpPerformOpNow + *****************************************************************************/ +CpaStatus +cpaCySymDpEnqueueOp(CpaCySymDpOpData *pOpData, + const CpaBoolean performOpNow); + + +/** + ***************************************************************************** + * @ingroup cpaCySymDp + * Enqueue multiple requests to the symmetric cryptographic data plane + * API. + * + * @description + * This function enqueues multiple requests to perform cipher, hash + * or combined (cipher and hash) operations. + + * See note about performance trade-offs on the @ref cpaCySymDp API. + * + * The function is asynchronous; control is returned to the user once + * the request has been submitted. On completion of the request, the + * application may poll for responses, which will cause a callback + * function (registered via @ref cpaCySymDpRegCbFunc) to be invoked. + * Separate callbacks will be invoked for each request. + * Callbacks within a session are guaranteed to be in the same order + * in which they were submitted. + * + * The following restrictions apply to each element of the pOpData + * array: + * + * - The memory MUST be aligned on an 8-byte boundary. + * - The structure MUST reside in physically contiguous memory. + * - The reserved fields of the structure SHOULD NOT be + * written or read by the calling code. 
+ * + * @context + * This function will not sleep, and hence can be executed in a context + * that does not permit sleeping. + * + * @assumptions + * Client MUST allocate the request parameters to 8 byte alignment. + * Reserved elements of the CpaCySymDpOpData structure MUST be 0. + * The CpaCySymDpOpData structure MUST reside in physically + * contiguous memory. + * + * @sideEffects + * None + * @blocking + * No + * @reentrant + * No + * @threadSafe + * No + * + * @param[in] numberRequests The number of requests in the array of + * CpaCySymDpOpData structures. + * @param[in] pOpData An array of pointers to CpaCySymDpOpData + * structures. Each of the CpaCySymDpOpData + * structure contains the request parameters for + * that request. The client code allocates the + * memory for this structure. This component takes + * ownership of the memory until it is returned in + * the callback, which was registered on the + * instance via @ref cpaCySymDpRegCbFunc. + * See the above Description for restrictions + * that apply to this parameter. + * @param[in] performOpNow Flag to specify whether the operation should be + * performed immediately (CPA_TRUE), or simply + * enqueued to be performed later (CPA_FALSE). + * In the latter case, the request is submitted + * to be performed either by calling this function + * again with this flag set to CPA_TRUE, or by + * invoking the function @ref + * cpaCySymDpPerformOpNow. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_RETRY Resubmit the request. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. + * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit + * the request. + * @retval CPA_STATUS_UNSUPPORTED Function is not supported. + * + * @pre + * The session identified by pOpData[i]->sessionCtx was setup using + * @ref cpaCySymDpInitSession. 
+ * The instance identified by pOpData[i]->instanceHandle has had a
+ * callback function registered via @ref cpaCySymDpRegCbFunc.
+ *
+ * @post
+ * None
+ *
+ * @note
+ * Multiple callbacks of type @ref CpaCySymDpCbFunc are generated in
+ * response to this function call (one per request). Any errors
+ * generated during processing are reported as part of the callback
+ * status code.
+ *
+ * @see
+ * cpaCySymDpInitSession,
+ * cpaCySymDpEnqueueOp
+ *****************************************************************************/
+CpaStatus
+cpaCySymDpEnqueueOpBatch(const Cpa32U numberRequests,
+ CpaCySymDpOpData *pOpData[],
+ const CpaBoolean performOpNow);
+
+
+/**
+ *****************************************************************************
+ * @ingroup cpaCySymDp
+ * Submit any previously enqueued requests to be performed now on the
+ * symmetric cryptographic data plane API.
+ *
+ * @description
+ * If any requests/operations were enqueued via calls to @ref
+ * cpaCySymDpEnqueueOp and/or @ref cpaCySymDpEnqueueOpBatch, but with
+ * the flag performOpNow set to @ref CPA_FALSE, then these operations
+ * will now be submitted to the accelerator to be performed.
+ *
+ * See note about performance trade-offs on the @ref cpaCySymDp API.
+ *
+ * @context
+ * Will not sleep. It can be executed in a context that does not
+ * permit sleeping.
+ *
+ * @sideEffects
+ * None
+ * @blocking
+ * No
+ * @reentrant
+ * No
+ * @threadSafe
+ * No
+ *
+ * @param[in] instanceHandle Instance to which the requests will be
+ * submitted.
+ *
+ * @retval CPA_STATUS_SUCCESS Function executed successfully.
+ * @retval CPA_STATUS_FAIL Function failed.
+ * @retval CPA_STATUS_RETRY Resubmit the request.
+ * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in.
+ * @retval CPA_STATUS_RESTARTING API implementation is restarting. Resubmit
+ * the request.
+ * @retval CPA_STATUS_UNSUPPORTED Function is not supported.
+ *
+ * @pre
+ * The component has been initialized.
+ * A cryptographic session has been previously setup using the + * @ref cpaCySymDpInitSession function call. + * + * @post + * None + * + * @see + * cpaCySymDpEnqueueOp, cpaCySymDpEnqueueOpBatch + *****************************************************************************/ +CpaStatus +cpaCySymDpPerformOpNow(CpaInstanceHandle instanceHandle); + + +#ifdef __cplusplus +} /* close the extern "C" { */ +#endif + +#endif /* CPA_CY_SYM_DP_H */ Index: sys/dev/qat/qat_api/qat_direct/include/adf_kernel_types.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_direct/include/adf_kernel_types.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_KERNEL_TYPES_H +#define ADF_KERNEL_TYPES_H + +#define u64 uint64_t +#define u32 uint32_t +#define u16 uint16_t +#define u8 uint8_t +#define s64 int64_t +#define s32 int32_t +#define s16 int16_t +#define s8 int8_t + +#ifndef __packed +#define __packed __attribute__((__packed__)) +#endif + +#endif Index: sys/dev/qat/qat_api/qat_direct/include/icp_accel_devices.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_direct/include/icp_accel_devices.h @@ -0,0 +1,157 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/***************************************************************************** + * @file icp_accel_devices.h + * + * @defgroup Acceleration Driver Framework + * + * @ingroup icp_Adf + * + * @description + * This is the top level header file that contains the layout of the ADF + * icp_accel_dev_t structure and related macros/definitions. + * It can be used to dereference the icp_accel_dev_t *passed into upper + * layers. 
+ * + *****************************************************************************/ + +#ifndef ICP_ACCEL_DEVICES_H_ +#define ICP_ACCEL_DEVICES_H_ + +#include "cpa.h" +#include "qat_utils.h" +#include "adf_accel_devices.h" + +#define ADF_CFG_NO_INSTANCE 0xFFFFFFFF + +#define ICP_DC_TX_RING_0 6 +#define ICP_DC_TX_RING_1 7 +#define ICP_RX_RINGS_OFFSET 8 +#define ICP_RINGS_PER_BANK 16 + +/* Number of worker threads per AE */ +#define ICP_ARB_WRK_THREAD_TO_SARB 12 +#define MAX_ACCEL_NAME_LEN 16 +#define ADF_DEVICE_NAME_LENGTH 32 +#define ADF_DEVICE_TYPE_LENGTH 8 + +#define ADF_CTL_DEVICE_NAME "/dev/qat_adf_ctl" + +/** + ***************************************************************************** + * @ingroup icp_AdfAccelHandle + * + * @description + * Accelerator capabilities + * + *****************************************************************************/ +typedef enum { + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC = 0x01, + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC = 0x02, + ICP_ACCEL_CAPABILITIES_CIPHER = 0x04, + ICP_ACCEL_CAPABILITIES_AUTHENTICATION = 0x08, + ICP_ACCEL_CAPABILITIES_RESERVED_1 = 0x10, + ICP_ACCEL_CAPABILITIES_COMPRESSION = 0x20, + ICP_ACCEL_CAPABILITIES_DEPRECATED = 0x40, + ICP_ACCEL_CAPABILITIES_RANDOM_NUMBER = 0x80, + ICP_ACCEL_CAPABILITIES_CRYPTO_ZUC = 0x100, + ICP_ACCEL_CAPABILITIES_SHA3 = 0x200, + ICP_ACCEL_CAPABILITIES_KPT = 0x400, + ICP_ACCEL_CAPABILITIES_RL = 0x800, + ICP_ACCEL_CAPABILITIES_HKDF = 0x1000, + ICP_ACCEL_CAPABILITIES_ECEDMONT = 0x2000, + ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN = 0x4000, + ICP_ACCEL_CAPABILITIES_SHA3_EXT = 0x8000, + ICP_ACCEL_CAPABILITIES_AESGCM_SPC = 0x10000, + ICP_ACCEL_CAPABILITIES_CHACHA_POLY = 0x20000, + ICP_ACCEL_CAPABILITIES_SM2 = 0x40000, + ICP_ACCEL_CAPABILITIES_SM3 = 0x80000, + ICP_ACCEL_CAPABILITIES_SM4 = 0x100000, + ICP_ACCEL_CAPABILITIES_INLINE = 0x200000, + ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY = 0x400000, + ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY64 = 0x800000, + ICP_ACCEL_CAPABILITIES_LZ4_COMPRESSION = 
0x1000000, + ICP_ACCEL_CAPABILITIES_LZ4S_COMPRESSION = 0x2000000, + ICP_ACCEL_CAPABILITIES_AES_V2 = 0x4000000, + ICP_ACCEL_CAPABILITIES_KPT2 = 0x8000000, +} icp_accel_capabilities_t; + +/** + ***************************************************************************** + * @ingroup icp_AdfAccelHandle + * + * @description + * Device Configuration Data Structure + * + *****************************************************************************/ + +typedef enum device_type_e { + DEVICE_UNKNOWN = 0, + DEVICE_DH895XCC, + DEVICE_DH895XCCVF, + DEVICE_C62X, + DEVICE_C62XVF, + DEVICE_C3XXX, + DEVICE_C3XXXVF, + DEVICE_200XX, + DEVICE_200XXVF, + DEVICE_C4XXX, + DEVICE_C4XXXVF +} device_type_t; + +/* + * Enumeration on Service Type + */ +typedef enum adf_service_type_s { + ADF_SERVICE_CRYPTO, + ADF_SERVICE_COMPRESS, + ADF_SERVICE_MAX /* this is always the last one */ +} adf_service_type_t; + +typedef struct accel_dev_s { + /* Some generic information */ + Cpa32U accelId; + Cpa8U *pAccelName; /* Name given to accelerator */ + Cpa32U aeMask; /* Acceleration Engine mask */ + device_type_t deviceType; /* Device Type */ + /* Device name for SAL */ + char deviceName[ADF_DEVICE_NAME_LENGTH + 1]; + Cpa32U accelCapabilitiesMask; /* Accelerator's capabilities + mask */ + Cpa32U dcExtendedFeatures; /* bit field of features */ + QatUtilsAtomic usageCounter; /* Usage counter. 
Prevents + shutting down the dev if not 0*/ + Cpa32U deviceMemAvail; /* Device memory for intermediate buffers */ + /* Component specific fields - cast to relevant layer */ + void *pRingInflight; /* For offload optimization */ + void *pSalHandle; /* For SAL*/ + void *pQatStats; /* For QATAL/SAL stats */ + void *ringInfoCallBack; /* Callback for user space + ring enabling */ + void *pShramConstants; /* Virtual address of Shram constants page */ + Cpa64U pShramConstantsDma; /* Bus address of Shram constants page */ + + /* Status of ADF and registered subsystems */ + Cpa32U adfSubsystemStatus; + /* Physical processor to which the dev is connected */ + Cpa8U pkg_id; + enum dev_sku_info sku; + Cpa32U pciDevId; + Cpa8U devFileName[ADF_DEVICE_NAME_LENGTH]; + Cpa32S csrFileHdl; + Cpa32S ringFileHdl; + void *accel; + + Cpa32U maxNumBanks; + Cpa32U maxNumRingsPerBank; + + /* pointer to dynamic instance resource manager */ + void *pInstMgr; + void *banks; /* banks information */ + struct adf_accel_dev *accel_dev; + struct accel_dev_s *pPrev; + struct accel_dev_s *pNext; +} icp_accel_dev_t; + +#endif /* ICP_ACCEL_DEVICES_H_ */ Index: sys/dev/qat/qat_api/qat_direct/include/icp_adf_accel_mgr.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_direct/include/icp_adf_accel_mgr.h @@ -0,0 +1,172 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/***************************************************************************** + * @file icp_adf_accel_mgr.h + * + * @description + * This file contains the function prototypes for accel + * instance management + * + *****************************************************************************/ +#ifndef ICP_ADF_ACCEL_MGR_H +#define ICP_ADF_ACCEL_MGR_H + +/* + * Device reset mode type. + * If device reset is triggered from atomic context + * it needs to be in ICP_ADF_DEV_RESET_ASYNC mode. + * Otherwise it can be either.
+ */ +typedef enum icp_adf_dev_reset_mode_e { + ICP_ADF_DEV_RESET_ASYNC = 0, + ICP_ADF_DEV_RESET_SYNC +} icp_adf_dev_reset_mode_t; + +/* + * icp_adf_reset_dev + * + * Description: + * Function resets the given device. + * If device reset is triggered from atomic context + * it needs to be in ICP_ADF_DEV_RESET_ASYNC mode. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_reset_dev(icp_accel_dev_t *accel_dev, + icp_adf_dev_reset_mode_t mode); + +/* + * icp_adf_is_dev_in_reset + * Check if device is in reset state. + * + * Returns: + * CPA_TRUE device is in reset state + * CPA_FALSE device is not in reset state + */ +CpaBoolean icp_adf_is_dev_in_reset(icp_accel_dev_t *accel_dev); + +/* + * icp_amgr_getNumInstances + * + * Description: + * Returns the number of accel instances in the system. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_amgr_getNumInstances(Cpa16U *pNumInstances); + +/* + * icp_amgr_getInstances + * + * Description: + * Returns a table of accel instances in the system. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_amgr_getInstances(Cpa16U numInstances, + icp_accel_dev_t **pAccel_devs); +/* + * icp_amgr_getAccelDevByName + * + * Description: + * Returns the accel instance by name. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_amgr_getAccelDevByName(unsigned char *instanceName, + icp_accel_dev_t **pAccel_dev); +/* + * icp_amgr_getAccelDevByCapabilities + * + * Description: + * Returns a started accel device that implements the capabilities + * specified in capabilitiesMask.
+ * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_amgr_getAccelDevByCapabilities(Cpa32U capabilitiesMask, + icp_accel_dev_t **pAccel_devs, + Cpa16U *pNumInstances); +/* + * icp_amgr_getAllAccelDevByCapabilities + * + * Description: + * Returns a table of accel devices that are started and implement + * the capabilities specified in capabilitiesMask. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_amgr_getAllAccelDevByCapabilities(Cpa32U capabilitiesMask, + icp_accel_dev_t **pAccel_devs, + Cpa16U *pNumInstances); + +/* + * icp_amgr_getAccelDevCapabilities + * Returns the accel device capabilities specified in capabilitiesMask. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_amgr_getAccelDevCapabilities(icp_accel_dev_t *accel_dev, + Cpa32U *pCapabilitiesMask); + +/* + * icp_amgr_getAllAccelDevByEachCapability + * + * Description: + * Returns a table of accel devices that are started and implement + * each of the capabilities specified in capabilitiesMask. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_amgr_getAllAccelDevByEachCapability(Cpa32U capabilitiesMask, + icp_accel_dev_t **pAccel_devs, + Cpa16U *pNumInstances); + +/* + * icp_qa_dev_get + * + * Description: + * Function increments the device usage counter. + * + * Returns: void + */ +void icp_qa_dev_get(icp_accel_dev_t *pDev); + +/* + * icp_qa_dev_put + * + * Description: + * Function decrements the device usage counter. + * + * Returns: void + */ +void icp_qa_dev_put(icp_accel_dev_t *pDev); + +/* + * icp_adf_getAccelDevByAccelId + * + * Description: + * Gets the accel_dev structure based on accelId + * + * Returns: a pointer to the accelerator structure or NULL if not found.
+ */ +icp_accel_dev_t *icp_adf_getAccelDevByAccelId(Cpa32U accelId); + +#endif /* ICP_ADF_ACCEL_MGR_H */ Index: sys/dev/qat/qat_api/qat_direct/include/icp_adf_cfg.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_direct/include/icp_adf_cfg.h @@ -0,0 +1,127 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/****************************************************************************** + * @file icp_adf_cfg.h + * + * @defgroup icp_AdfCfg Acceleration Driver Framework Configuration Interface. + * + * @ingroup icp_Adf + * + * @description + * This is the top level header file for the run-time system configuration + * parameters. This interface may be used by components of this API to + * access the supported run-time configuration parameters. + * + *****************************************************************************/ + +#ifndef ICP_ADF_CFG_H +#define ICP_ADF_CFG_H + +#include "cpa.h" +#include "icp_accel_devices.h" + +/****************************************************************************** +* Section for #define's & typedef's +******************************************************************************/ +/* Address of the UOF firmware */ +#define ICP_CFG_UOF_ADDRESS_KEY ("Firmware_UofAddress") +/* Size of the UOF firmware */ +#define ICP_CFG_UOF_SIZE_BYTES_KEY ("Firmware_UofSizeInBytes") +/* Address of the MMP firmware */ +#define ICP_CFG_MMP_ADDRESS_KEY ("Firmware_MmpAddress") +/* Size of the MMP firmware */ +#define ICP_CFG_MMP_SIZE_BYTES_KEY ("Firmware_MMpSizeInBytes") +/* MMP firmware version */ +#define ICP_CFG_MMP_VER_KEY ("Firmware_MmpVer") +/* UOF firmware version */ +#define ICP_CFG_UOF_VER_KEY ("Firmware_UofVer") +/* Tools version */ +#define ICP_CFG_TOOLS_VER_KEY ("Firmware_ToolsVer") +/* Hardware rev id */ +#define ICP_CFG_HW_REV_ID_KEY ("HW_RevId") +/* Lowest Compatible Driver Version */ +#define 
ICP_CFG_LO_COMPATIBLE_DRV_KEY ("Lowest_Compat_Drv_Ver") +/* Pke Service Disabled flag */ +#define ICP_CFG_PKE_DISABLED ("PkeServiceDisabled") +/* SRAM Physical Address Key */ +#define ADF_SRAM_PHYSICAL_ADDRESS ("Sram_PhysicalAddress") +/* SRAM Virtual Address Key */ +#define ADF_SRAM_VIRTUAL_ADDRESS ("Sram_VirtualAddress") +/* SRAM Size In Bytes Key */ +#define ADF_SRAM_SIZE_IN_BYTES ("Sram_SizeInBytes") +/* Device node id, tells which die the device is + * connected to */ +#define ADF_DEV_NODE_ID ("Device_NodeId") +/* Device package id, this is accel_dev id */ +#define ADF_DEV_PKG_ID ("Device_PkgId") +/* Device bus address, B.D.F (Bus(8bits),Device(5bits),Function(3bits)) */ +#define ADF_DEV_BUS_ADDRESS ("Device_BusAddress") +/* Number of Acceleration Engines */ +#define ADF_DEV_NUM_AE ("Device_Num_AE") +/* Number of Accelerators */ +#define ADF_DEV_NUM_ACCEL ("Device_Num_Accel") +/* Max Number of Acceleration Engines */ +#define ADF_DEV_MAX_AE ("Device_Max_AE") +/* Max Number of Accelerators */ +#define ADF_DEV_MAX_ACCEL ("Device_Max_Accel") +/* Max number of rings per accelerator */ +#define ADF_DEV_MAX_RING_PER_QAT ("Device_Max_Num_Rings_per_Accel") +/* Accelerator AE mask format */ +#define ADF_DEV_ACCELAE_MASK_FMT ("Device_Accel_AE_Mask_%d") +/* VF ring offset */ +#define ADF_VF_RING_OFFSET_KEY ("VF_RingOffset") +/* Mask of Acceleration Engines */ +#define ADF_DEV_AE_MASK ("Device_AE_Mask") +/* Whether or not arbitration is supported */ +#define ADF_DEV_ARB_SUPPORTED ("ArbitrationSupported") +/* Slice Watch Dog Timer for CySym+Comp */ +#define ADF_DEV_SSM_WDT_BULK ("CySymAndDcWatchDogTimer") +/* Slice Watch Dog Timer for CyAsym */ +#define ADF_DEV_SSM_WDT_PKE ("CyAsymWatchDogTimer") + +/* String names for the exposed sections of config file.
*/ +#define GENERAL_SEC "GENERAL" +#define WIRELESS_SEC "WIRELESS_INT_" +#define DYN_SEC "DYN" +#define DEV_LIMIT_CFG_ACCESS_TMPL "_D_L_ACC" +/*#define WIRELESS_ENABLED "WirelessEnabled"*/ + +/* + * icp_adf_cfgGetParamValue + * + * Description: + * This function is used to determine the value for a given parameter name. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_cfgGetParamValue(icp_accel_dev_t *accel_dev, + const char *section, + const char *param_name, + char *param_value); +/* + * icp_adf_cfgGetRingNumber + * + * Description: + * Function returns the ring number configured for the service. + * NOTE: this function will only be used by QATAL in kernelspace. + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_cfgGetRingNumber(icp_accel_dev_t *accel_dev, + const char *section_name, + const Cpa32U accel_num, + const Cpa32U bank_num, + const char *pServiceName, + Cpa32U *pRingNum); + +/* + * icp_adf_get_busAddress + * Gets the B.D.F. of the physical device + */ +Cpa16U icp_adf_get_busAddress(Cpa16U packageId); + +#endif /* ICP_ADF_CFG_H */ Index: sys/dev/qat/qat_api/qat_direct/include/icp_adf_debug.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_direct/include/icp_adf_debug.h @@ -0,0 +1,136 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/****************************************************************************** + * @file icp_adf_debug.h + * + * @description + * This header file contains the prototypes and definitions required + * for the ADF debug feature. + * +*****************************************************************************/ +#ifndef ICP_ADF_DEBUG_H +#define ICP_ADF_DEBUG_H + +/* + * adf_proc_type_t + * Type of proc file.
Simple for files where the read function + prints less than page size (4kB) and seq type for files + where the read function needs to print more than page size. + */ +typedef enum adf_proc_type_e { + ADF_PROC_SIMPLE = 1, + ADF_PROC_SEQ +} adf_proc_type_t; + +/* + * debug_dir_info_t + * Struct which is used to hold information about a debug directory + * under the proc filesystem. + * Client should only set name and parent fields. + */ +typedef struct debug_dir_info_s { + char *name; + struct debug_dir_info_s *parent; + /* The below fields are used internally by the driver */ + struct debug_dir_info_s *dirChildListHead; + struct debug_dir_info_s *dirChildListTail; + struct debug_dir_info_s *pNext; + struct debug_dir_info_s *pPrev; + struct debug_file_info_s *fileListHead; + struct debug_file_info_s *fileListTail; + void *proc_entry; +} debug_dir_info_t; + +/* + * Read handle type for simple proc file + * Function is called only once and can print up to 4kB (size) + * Function should return number of bytes printed. + */ +typedef int (*file_read)(void *private_data, char *buff, int size); + +/* + * Read handle type for sequential proc file + * Function can be called more than once. It will be called until the + * return value is not 0. The offset should be used to mark the starting + * point for the next step. In one go the function can print up to 4kB (size). + * Function should return 0 (zero) if all info is printed or + * the offset from where to start in the next step. + */ +typedef int (*file_read_seq)(void *private_data, + char *buff, + int size, + int offset); + +/* + * debug_file_info_t + * Struct which is used to hold information about a debug file + * under the proc filesystem. + * Client should only set name, type, private_data, parent fields, + * and read or seq_read pointers depending on type used.
+ */ +typedef struct debug_file_info_s { + char *name; + struct debug_dir_info_s *parent; + adf_proc_type_t type; + file_read read; + file_read_seq seq_read; + void *private_data; + /* The below fields are used internally by the driver */ + struct debug_file_info_s *pNext; + struct debug_file_info_s *pPrev; + void *page; + Cpa32U offset; + void *proc_entry; +} debug_file_info_t; + +/* + * icp_adf_debugAddDir + * + * Description: + * Function used by subsystem to register a new + * directory under the proc filesystem + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_debugAddDir(icp_accel_dev_t *accel_dev, + debug_dir_info_t *dir_info); + +/* + * icp_adf_debugRemoveDir + * + * Description: + * Function used by subsystem to remove an existing + * directory for which debug output may be stored + * in the proc filesystem. + * +*/ +void icp_adf_debugRemoveDir(debug_dir_info_t *dir_info); + +/* + * icp_adf_debugAddFile + * + * Description: + * Function used by subsystem to add a new file under + * the proc file system in which debug output may be written + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_debugAddFile(icp_accel_dev_t *accel_dev, + debug_file_info_t *file_info); + +/* + * icp_adf_debugRemoveFile + * + * Description: + * Function used by subsystem to remove an existing file under + * the proc filesystem in which debug output may be written + * + */ +void icp_adf_debugRemoveFile(debug_file_info_t *file_info); + +#endif /* ICP_ADF_DEBUG_H */ Index: sys/dev/qat/qat_api/qat_direct/include/icp_adf_esram.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_direct/include/icp_adf_esram.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/****************************************************************************** + * 
@file icp_adf_esram.h + * + * @description + * This file contains the ADF interface to retrieve eSRAM information + * + *****************************************************************************/ +#ifndef ICP_ADF_ESRAM_H +#define ICP_ADF_ESRAM_H + +/* + * icp_adf_esramGetAddress + * + * Description: + * Returns the eSRAM's physical and virtual addresses and its size in bytes. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_esramGetAddress(icp_accel_dev_t *accel_dev, + Cpa32U accelNumber, + Cpa64U *pPhysAddr, + Cpa64U *pVirtAddr, + Cpa32U *pSize); + +#endif /* ICP_ADF_ESRAM_H */ Index: sys/dev/qat/qat_api/qat_direct/include/icp_adf_init.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_direct/include/icp_adf_init.h @@ -0,0 +1,215 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/***************************************************************************** + * @file icp_adf_init.h + * + * @description + * This file contains the function prototype used to register a subsystem + * into the Acceleration Driver Framework (ADF). + * + *****************************************************************************/ +#ifndef ICP_ADF_INIT_H +#define ICP_ADF_INIT_H + +#include "icp_accel_devices.h" +#include "adf_kernel_types.h" +#include "adf_cfg_common.h" + +/* + * Events that will be sent to the subsystem. The order of the enum + * declaration matters. It should be defined so that the messages can be + * sent in a loop.
+ */ +typedef enum icp_adf_subsystemEvent_s { + ICP_ADF_EVENT_INIT = 0, + ICP_ADF_EVENT_START, + ICP_ADF_EVENT_STOP, + ICP_ADF_EVENT_SHUTDOWN, + ICP_ADF_EVENT_RESTARING, + ICP_ADF_EVENT_RESTARTED, + ICP_ADF_EVENT_ERROR, + ICP_ADF_EVENT_END +} icp_adf_subsystemEvent_t; + +/* + * Ring info operation used to enable or disable ring polling by ME + */ +typedef enum icp_adf_ringInfoOperation_e { + ICP_ADF_RING_ENABLE = 0, + ICP_ADF_RING_DISABLE +} icp_adf_ringInfoOperation_t; + +/* + * Ring generic service info private data + */ +typedef enum icp_adf_ringInfoService_e { + ICP_ADF_RING_SERVICE_0 = 0, + ICP_ADF_RING_SERVICE_1, + ICP_ADF_RING_SERVICE_2, + ICP_ADF_RING_SERVICE_3, + ICP_ADF_RING_SERVICE_4, + ICP_ADF_RING_SERVICE_5, + ICP_ADF_RING_SERVICE_6, + ICP_ADF_RING_SERVICE_7, + ICP_ADF_RING_SERVICE_8, + ICP_ADF_RING_SERVICE_9, + ICP_ADF_RING_SERVICE_10, +} icp_adf_ringInfoService_t; + +/* + * Ring info callback. Function is used to send operation and ring info + * to enable or disable ring polling by ME + */ +typedef CpaStatus (*ringInfoCb)(icp_accel_dev_t *accel_dev, + Cpa32U ringNumber, + icp_adf_ringInfoOperation_t operation, + icp_adf_ringInfoService_t info); + +/* + * Registration handle structure + * Each subservice has to have an instance of it.
+ */ +typedef struct subservice_registation_handle_s { + CpaStatus (*subserviceEventHandler)(icp_accel_dev_t *accel_dev, + icp_adf_subsystemEvent_t event, + void *param); + struct { + Cpa32U subsystemInitBit : 1; + Cpa32U subsystemStartBit : 1; + Cpa32U subsystemFailedBit : 1; + } subsystemStatus[ADF_MAX_DEVICES]; + char *subsystem_name; + struct subservice_registation_handle_s *pNext; + struct subservice_registation_handle_s *pPrev; +} subservice_registation_handle_t; + +/* + * icp_adf_subsystemRegister + * + * Description: + * Function used by subsystem to register within ADF + * Should be called during insertion of a subsystem + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_subsystemRegister(subservice_registation_handle_t *handle); + +/* + * icp_adf_subsystemUnregister + * + * Description: + * Function used by subsystem to unregister from ADF + * Should be called when the subsystem is removed + * If the subsystem is initialised and/or started + * it will be stopped and shut down by this function + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_subsystemUnregister(subservice_registation_handle_t *handle); + +/* + * icp_adf_accesLayerRingInfoCbRegister + * + * Description: + * Function registers the access layer callback, which sends the ring info + * message + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_accesLayerRingInfoCbRegister(icp_accel_dev_t *accel_dev, + ringInfoCb); + +/* + * icp_adf_accesLayerRingInfoCbUnregister + * + * Description: + * Function unregisters the access layer callback for the ring info message + * + * Returns: void + */ +void icp_adf_accesLayerRingInfoCbUnregister(icp_accel_dev_t *accel_dev); + +/* + * icp_adf_isSubsystemStarted + * + * Description: + * Function returns true if the service is started on a device + * + * Returns: + * 
CPA_TRUE if subsystem is started + * CPA_FALSE if subsystem is not started + */ + +CpaBoolean +icp_adf_isSubsystemStarted(subservice_registation_handle_t *subsystem_hdl); + +/* + * icp_adf_isDevStarted + * + * Description: + * Function returns true if the device is started + * Returns: + * CPA_TRUE if dev is started + * CPA_FALSE if dev is not started + */ +CpaBoolean icp_adf_isDevStarted(icp_accel_dev_t *accel_dev); + +/* + * adf_subsystemRestarting + * + * Description: + * Function sends restarting event to all subsystems. + * This function should be used by the error handling function only + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus adf_subsystemRestarting(icp_accel_dev_t *accel_dev); + +/* + * adf_subsystemRestarted + * + * Description: + * Function sends restarted event to all subsystems. + * This function should be used by the error handling function only + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus adf_subsystemRestarted(icp_accel_dev_t *accel_dev); + +/* + * adf_subsystemError + * + * Description: + * Function sends error event to all subsystems. + * This function should be used by the error handling function only + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus adf_subsystemError(icp_accel_dev_t *accel_dev); + +/* + * reset_adf_subsystemTable + * + * Description: + * Function to reset subsystem table head, the pointer + * to the head of the list and lock.
+ * + * Returns: void + */ +void reset_adf_subsystemTable(void); + +#endif /* ICP_ADF_INIT_H */ Index: sys/dev/qat/qat_api/qat_direct/include/icp_adf_poll.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_direct/include/icp_adf_poll.h @@ -0,0 +1,52 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/***************************************************************************** + * @file icp_adf_poll.h + * + * @description + * File contains Public API Definitions for the polling method. + * + *****************************************************************************/ +#ifndef ICP_ADF_POLL_H +#define ICP_ADF_POLL_H + +#include "cpa.h" +/* + * icp_adf_pollInstance + * + * Description: + * Poll an instance. In order to poll an instance + * sal will pass in a table of trans handles from which + * the ring to be polled can be obtained and subsequently + * polled. + * + * Returns: + * CPA_STATUS_SUCCESS on polling a ring with data + * CPA_STATUS_FAIL on failure + * CPA_STATUS_RETRY if ring has no data on it + * or ring is already being polled. + */ +CpaStatus icp_adf_pollInstance(icp_comms_trans_handle *trans_hnd, + Cpa32U num_transHandles, + Cpa32U response_quota); + +/* + * icp_adf_check_RespInstance + * + * Description: + * Check whether an instance is empty or has remaining responses on it. In + * order to check an instance for the remaining responses, sal will pass in + * a table of trans handles from which the instance to be checked can be + * obtained and subsequently checked. 
+ * + * Returns: + * CPA_STATUS_SUCCESS if response ring is empty + * CPA_STATUS_FAIL on failure + * CPA_STATUS_RETRY if response ring is not empty + * CPA_STATUS_INVALID_PARAM Invalid parameter passed in + */ +CpaStatus icp_adf_check_RespInstance(icp_comms_trans_handle *trans_hnd, + Cpa32U num_transHandles); + +#endif /* ICP_ADF_POLL_H */ Index: sys/dev/qat/qat_api/qat_direct/include/icp_adf_transport.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_direct/include/icp_adf_transport.h @@ -0,0 +1,286 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/***************************************************************************** + * @file icp_adf_transport.h + * + * @description + * File contains Public API Definitions for ADF transport. + * + *****************************************************************************/ +#ifndef ICP_ADF_TRANSPORT_H +#define ICP_ADF_TRANSPORT_H + +#include "cpa.h" + +/* + * Enumeration on Transport Types exposed + */ +typedef enum icp_transport_type_e { + ICP_TRANS_TYPE_NONE = 0, + ICP_TRANS_TYPE_ETR, + ICP_TRANS_TYPE_DP_ETR, + ICP_TRANS_TYPE_ADMINREG, + ICP_TRANS_TYPE_DELIMIT +} icp_transport_type; + +/* + * Enumeration on response delivery method + */ +typedef enum icp_resp_deliv_method_e { + ICP_RESP_TYPE_NONE = 0, + ICP_RESP_TYPE_IRQ, + ICP_RESP_TYPE_POLL, + ICP_RESP_TYPE_DELIMIT +} icp_resp_deliv_method; + +/* + * Unique identifier of a transport handle + */ +typedef Cpa32U icp_trans_identifier; + +/* + * Opaque Transport Handle + */ +typedef void *icp_comms_trans_handle; + +/* + * Function Pointer invoked when a set of messages is received for the given + * transport handle + */ +typedef void (*icp_trans_callback)(void *pMsg); + +/* + * icp_adf_getDynInstance + * + * Description: + * Get an available instance from dynamic instance pool + * + * Returns: + * CPA_STATUS_SUCCESS on success + * 
CPA_STATUS_FAIL on failure + * + */ +CpaStatus icp_adf_getDynInstance(icp_accel_dev_t *accel_dev, + adf_service_type_t stype, + Cpa32U *pinstance_id); + +/* + * icp_adf_putDynInstance + * + * Description: + * Put back an instance to dynamic instance pool + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + * + */ +CpaStatus icp_adf_putDynInstance(icp_accel_dev_t *accel_dev, + adf_service_type_t stype, + Cpa32U instance_id); + +/* + * icp_adf_getNumAvailDynInstance + * + * Description: + * Get the number of the available dynamic instances + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + * + */ +CpaStatus icp_adf_getNumAvailDynInstance(icp_accel_dev_t *accel_dev, + adf_service_type_t stype, + Cpa32U *num); + +/* + * icp_adf_transGetFdForHandle + * + * Description: + * Get a file descriptor for a particular transaction handle. + * If more than one transaction handler + * is ever present, this will need to be refactored to + * return the appropriate fd of the appropriate bank. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + * + * + */ +CpaStatus icp_adf_transGetFdForHandle(icp_comms_trans_handle trans_hnd, + int *fd); + +/* + * icp_adf_transCreateHandle + * + * Description: + * Create a transport handle + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + * + * The message size is variable: requests can be 64 or 128 bytes, responses + * can be 16, 32 or 64 bytes. + * Supported num_msgs: + * 32, 64, 128, 256, 512, 1024, 2048 number of messages.
+ * + */ +CpaStatus icp_adf_transCreateHandle(icp_accel_dev_t *accel_dev, + icp_transport_type trans_type, + const char *section, + const Cpa32U accel_nr, + const Cpa32U bank_nr, + const char *service_name, + const icp_adf_ringInfoService_t info, + icp_trans_callback callback, + icp_resp_deliv_method resp, + const Cpa32U num_msgs, + const Cpa32U msg_size, + icp_comms_trans_handle *trans_handle); + +/* + * icp_adf_transReinitHandle + * + * Description: + * Reinitialize a transport handle + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + * + * The message size is variable: requests can be 64 or 128 bytes, responses + * can be 16, 32 or 64 bytes. + * Supported num_msgs: + * 32, 64, 128, 256, 512, 1024, 2048 number of messages. + * + */ +CpaStatus icp_adf_transReinitHandle(icp_accel_dev_t *accel_dev, + icp_transport_type trans_type, + const char *section, + const Cpa32U accel_nr, + const Cpa32U bank_nr, + const char *service_name, + const icp_adf_ringInfoService_t info, + icp_trans_callback callback, + icp_resp_deliv_method resp, + const Cpa32U num_msgs, + const Cpa32U msg_size, + icp_comms_trans_handle *trans_handle); + +/* + * icp_adf_transGetHandle + * + * Description: + * Gets a pointer to a previously created transport handle + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + * + */ +CpaStatus icp_adf_transGetHandle(icp_accel_dev_t *accel_dev, + icp_transport_type trans_type, + const char *section, + const Cpa32U accel_nr, + const Cpa32U bank_nr, + const char *service_name, + icp_comms_trans_handle *trans_handle); + +/* + * icp_adf_transReleaseHandle + * + * Description: + * Release a transport handle + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_transReleaseHandle(icp_comms_trans_handle trans_handle); + +/* + * icp_adf_transResetHandle + * + * Description: + * Reset a transport handle + * + * Returns: + * CPA_STATUS_SUCCESS on success + * 
CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_transResetHandle(icp_comms_trans_handle trans_handle); + +/* + * icp_adf_transPutMsg + * + * Description: + * Put a message onto the transport handle + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_transPutMsg(icp_comms_trans_handle trans_handle, + Cpa32U *inBufs, + Cpa32U bufLen); + +/* + * icp_adf_getInflightRequests + * + * Description: + * Retrieve in-flight requests from the transport handle. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_getInflightRequests(icp_comms_trans_handle trans_handle, + Cpa32U *maxInflightRequests, + Cpa32U *numInflightRequests); + +/* + * icp_adf_transPutMsgSync + * + * Description: + * Put a message onto the transport handle and wait for a response. + * Note: Not all transports support this method. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_transPutMsgSync(icp_comms_trans_handle trans_handle, + Cpa32U *inBuf, + Cpa32U *outBuf, + Cpa32U bufsLen); + +/* + * icp_adf_transGetRingNum + * + * Description: + * Function returns the ring number of the given trans_handle + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_transGetRingNum(icp_comms_trans_handle trans_handle, + Cpa32U *ringNum); + +/* + * icp_adf_flush_requests + * + * Description: + * Function flushes the enqueued requests on the trans_handle + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus icp_adf_flush_requests(icp_comms_trans_handle trans_handle); + +#endif /* ICP_ADF_TRANSPORT_H */ Index: sys/dev/qat/qat_api/qat_direct/include/icp_adf_transport_dp.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_direct/include/icp_adf_transport_dp.h @@ -0,0 +1,82 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ 
+/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/***************************************************************************** + * @file icp_adf_transport_dp.h + * + * @description + * File contains Public API definitions for ADF transport for data plane. + * + *****************************************************************************/ +#ifndef ICP_ADF_TRANSPORT_DP_H +#define ICP_ADF_TRANSPORT_DP_H + +#include "cpa.h" +#include "icp_adf_transport.h" + +/* + * icp_adf_getQueueMemory + * Data plane support function - returns the pointer to next message on the ring + * or NULL if there is not enough space. + */ +extern void icp_adf_getQueueMemory(icp_comms_trans_handle trans_handle, + Cpa32U numberRequests, + void **pCurrentQatMsg); +/* + * icp_adf_getSingleQueueAddr + * Data plane support function - returns the pointer to next message on the ring + * or NULL if there is not enough space - it also updates the shadow tail copy. + */ +extern void icp_adf_getSingleQueueAddr(icp_comms_trans_handle trans_handle, + void **pCurrentQatMsg); + +/* + * icp_adf_getQueueNext + * Data plane support function - increments the tail pointer and returns + * the pointer to next message on the ring. + */ +extern void icp_adf_getQueueNext(icp_comms_trans_handle trans_handle, + void **pCurrentQatMsg); + +/* + * icp_adf_updateQueueTail + * Data plane support function - Writes the tail shadow copy to the device. + */ +extern void icp_adf_updateQueueTail(icp_comms_trans_handle trans_handle); + +/* + * icp_adf_isRingEmpty + * Data plane support function - check if the ring is empty + */ +extern CpaBoolean icp_adf_isRingEmpty(icp_comms_trans_handle trans_handle); + +/* + * icp_adf_pollQueue + * Data plane support function - Poll messages from the queue. + */ +extern CpaStatus icp_adf_pollQueue(icp_comms_trans_handle trans_handle, + Cpa32U response_quota); + +/* + * icp_adf_queueDataToSend + * LAC lite support function - Indicates if there is data on the ring to be + * sent. 
This should only be called on request rings. If the function returns + * true then it is ok to call icp_adf_updateQueueTail() function on this ring. + */ +extern CpaBoolean icp_adf_queueDataToSend(icp_comms_trans_handle trans_hnd); + +/* + * icp_adf_dp_getInflightRequests + * Retrieve in flight requests from the transport handle. + * Data plane API - no locks. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +extern CpaStatus +icp_adf_dp_getInflightRequests(icp_comms_trans_handle trans_handle, + Cpa32U *maxInflightRequests, + Cpa32U *numInflightRequests); + +#endif /* ICP_ADF_TRANSPORT_DP_H */ Index: sys/dev/qat/qat_api/qat_kernel/src/lac_adf_interface_freebsd.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_kernel/src/lac_adf_interface_freebsd.c @@ -0,0 +1,424 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_cfg.h" +#include "cpa.h" +#include "icp_accel_devices.h" +#include "adf_common_drv.h" +#include "icp_adf_debug.h" +#include "icp_adf_init.h" +#include "lac_sal_ctrl.h" + +static subservice_registation_handle_t *salService = NULL; +static struct service_hndl adfService = { 0 }; +static icp_accel_dev_t *adfDevices = NULL; +static icp_accel_dev_t *adfDevicesHead = NULL; +struct mtx *adfDevicesLock; + +/* + * Need to keep track of what device is currently in reset state + */ +static char accel_dev_reset_stat[ADF_MAX_DEVICES] = { 0 }; + +/* + * Need to keep track of what device is currently in error state + */ +static char accel_dev_error_stat[ADF_MAX_DEVICES] = { 0 }; + +/* + * Need to preserve sal handle during restart + */ +static void *accel_dev_sal_hdl_ptr[ADF_MAX_DEVICES] = { 0 }; + +static icp_accel_dev_t * +create_adf_dev_structure(struct adf_accel_dev *accel_dev) +{ + icp_accel_dev_t *adf = NULL; + + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + + adf = 
malloc(sizeof(*adf), M_QAT, M_WAITOK); + memset(adf, 0, sizeof(*adf)); + adf->accelId = accel_dev->accel_id; + adf->pAccelName = (char *)hw_data->dev_class->name; + adf->deviceType = (device_type_t)hw_data->dev_class->type; + strlcpy(adf->deviceName, + hw_data->dev_class->name, + sizeof(adf->deviceName)); + adf->accelCapabilitiesMask = hw_data->accel_capabilities_mask; + adf->sku = hw_data->get_sku(hw_data); + adf->accel_dev = accel_dev; + accel_dev->lac_dev = adf; + + return adf; +} + +/* + * adf_event_handler + * Handle device init/uninit/start/stop event + */ +static CpaStatus +adf_event_handler(struct adf_accel_dev *accel_dev, enum adf_event event) +{ + CpaStatus status = CPA_STATUS_FAIL; + icp_accel_dev_t *adf = NULL; + + if (!adf_cfg_sec_find(accel_dev, ADF_KERNEL_SAL_SEC)) { + return CPA_STATUS_SUCCESS; + } + + if (event == ADF_EVENT_INIT) { + adf = create_adf_dev_structure(accel_dev); + if (NULL == adf) { + return CPA_STATUS_FAIL; + } + if (accel_dev_sal_hdl_ptr[accel_dev->accel_id]) { + adf->pSalHandle = + accel_dev_sal_hdl_ptr[accel_dev->accel_id]; + accel_dev_sal_hdl_ptr[accel_dev->accel_id] = NULL; + } + + qatUtilsMutexLock(&adfDevicesLock, QAT_UTILS_WAIT_FOREVER); + ICP_ADD_ELEMENT_TO_END_OF_LIST(adf, adfDevices, adfDevicesHead); + qatUtilsMutexUnlock(&adfDevicesLock); + } else { + adf = accel_dev->lac_dev; + } + + if (event == ADF_EVENT_START) { + adf->dcExtendedFeatures = + accel_dev->hw_device->extended_dc_capabilities; + } + + if (event == ADF_EVENT_RESTARTING) { + accel_dev_reset_stat[accel_dev->accel_id] = 1; + accel_dev_sal_hdl_ptr[accel_dev->accel_id] = adf->pSalHandle; + } + + if (event == ADF_EVENT_RESTARTED) { + accel_dev_reset_stat[accel_dev->accel_id] = 0; + accel_dev_error_stat[accel_dev->accel_id] = 0; + } + + status = + salService->subserviceEventHandler(adf, + (icp_adf_subsystemEvent_t)event, + NULL); + + if (event == ADF_EVENT_ERROR) { + accel_dev_error_stat[accel_dev->accel_id] = 1; + } + + if ((status == CPA_STATUS_SUCCESS && event 
== ADF_EVENT_SHUTDOWN) || + (status != CPA_STATUS_SUCCESS && event == ADF_EVENT_INIT)) { + qatUtilsMutexLock(&adfDevicesLock, QAT_UTILS_WAIT_FOREVER); + ICP_REMOVE_ELEMENT_FROM_LIST(adf, adfDevices, adfDevicesHead); + qatUtilsMutexUnlock(&adfDevicesLock); + accel_dev->lac_dev = NULL; + free(adf, M_QAT); + } + + if (status == CPA_STATUS_SUCCESS && event == ADF_EVENT_START) { + qatUtilsMutexLock(&adfDevicesLock, QAT_UTILS_WAIT_FOREVER); + adf->adfSubsystemStatus = 1; + qatUtilsMutexUnlock(&adfDevicesLock); + } + + if ((status == CPA_STATUS_SUCCESS && event == ADF_EVENT_STOP) || + (status == CPA_STATUS_RETRY && event == ADF_EVENT_STOP)) { + qatUtilsMutexLock(&adfDevicesLock, QAT_UTILS_WAIT_FOREVER); + adf->adfSubsystemStatus = 0; + qatUtilsMutexUnlock(&adfDevicesLock); + status = CPA_STATUS_SUCCESS; + } + + return status; +} + +/* + * icp_adf_subsystemRegister + * adapter function from SAL to adf driver + * call adf_service_register from adf driver directly with same + * parameters + */ +CpaStatus +icp_adf_subsystemRegister( + subservice_registation_handle_t *sal_service_reg_handle) +{ + if (salService != NULL) + return CPA_STATUS_FAIL; + + salService = sal_service_reg_handle; + adfService.name = sal_service_reg_handle->subsystem_name; + adfService.event_hld = adf_event_handler; + + if (adf_service_register(&adfService) == 0) { + return CPA_STATUS_SUCCESS; + } else { + salService = NULL; + return CPA_STATUS_FAIL; + } +} + +/* + * icp_adf_subsystemUnregister + * adapter function from SAL to adf driver + */ +CpaStatus +icp_adf_subsystemUnregister( + subservice_registation_handle_t *sal_service_reg_handle) +{ + if (adf_service_unregister(&adfService) == 0) { + salService = NULL; + return CPA_STATUS_SUCCESS; + } else { + return CPA_STATUS_FAIL; + } +} + +/* + * icp_adf_cfgGetParamValue + * get parameter value from section @section with key @param + */ +CpaStatus +icp_adf_cfgGetParamValue(icp_accel_dev_t *adf, + const char *section, + const char *param, + char *value) +{ + 
if (adf_cfg_get_param_value(adf->accel_dev, section, param, value) == + 0) { + return CPA_STATUS_SUCCESS; + } else { + return CPA_STATUS_FAIL; + } +} + +CpaBoolean +icp_adf_is_dev_in_reset(icp_accel_dev_t *accel_dev) +{ + return (CpaBoolean)accel_dev_reset_stat[accel_dev->accelId]; +} + +CpaStatus +icp_adf_debugAddDir(icp_accel_dev_t *adf, debug_dir_info_t *dir_info) +{ + return CPA_STATUS_SUCCESS; +} + +void +icp_adf_debugRemoveDir(debug_dir_info_t *dir_info) +{ +} + +CpaStatus +icp_adf_debugAddFile(icp_accel_dev_t *adf, debug_file_info_t *file_info) +{ + return CPA_STATUS_SUCCESS; +} + +void +icp_adf_debugRemoveFile(debug_file_info_t *file_info) +{ +} + +/* + * icp_adf_getAccelDevByAccelId + * return acceleration device with id @accelId + */ +icp_accel_dev_t * +icp_adf_getAccelDevByAccelId(Cpa32U accelId) +{ + icp_accel_dev_t *adf = NULL; + + qatUtilsMutexLock(&adfDevicesLock, QAT_UTILS_WAIT_FOREVER); + adf = adfDevicesHead; + while (adf != NULL && adf->accelId != accelId) + adf = adf->pNext; + qatUtilsMutexUnlock(&adfDevicesLock); + return adf; +} + +/* + * icp_amgr_getNumInstances + * Return the number of acceleration devices in the system. + */ +CpaStatus +icp_amgr_getNumInstances(Cpa16U *pNumInstances) +{ + icp_accel_dev_t *adf = NULL; + Cpa16U count = 0; + + qatUtilsMutexLock(&adfDevicesLock, QAT_UTILS_WAIT_FOREVER); + for (adf = adfDevicesHead; adf != NULL; adf = adf->pNext) + count++; + qatUtilsMutexUnlock(&adfDevicesLock); + *pNumInstances = count; + return CPA_STATUS_SUCCESS; +} + +/* + * icp_amgr_getAccelDevByCapabilities + * Returns a started accel device that implements + * the capabilities specified in capabilitiesMask. 
+ */ +CpaStatus +icp_amgr_getAccelDevByCapabilities(Cpa32U capabilitiesMask, + icp_accel_dev_t **pAccel_devs, + Cpa16U *pNumInstances) +{ + icp_accel_dev_t *adf = NULL; + *pNumInstances = 0; + + qatUtilsMutexLock(&adfDevicesLock, QAT_UTILS_WAIT_FOREVER); + for (adf = adfDevicesHead; adf != NULL; adf = adf->pNext) { + if (adf->accelCapabilitiesMask & capabilitiesMask) { + if (adf->adfSubsystemStatus) { + pAccel_devs[0] = adf; + *pNumInstances = 1; + qatUtilsMutexUnlock(&adfDevicesLock); + return CPA_STATUS_SUCCESS; + } + } + } + qatUtilsMutexUnlock(&adfDevicesLock); + return CPA_STATUS_FAIL; +} + +/* + * icp_amgr_getAllAccelDevByEachCapability + * Returns a table of accel devices that are started and implement + * each of the capabilities specified in capabilitiesMask. + */ +CpaStatus +icp_amgr_getAllAccelDevByEachCapability(Cpa32U capabilitiesMask, + icp_accel_dev_t **pAccel_devs, + Cpa16U *pNumInstances) +{ + icp_accel_dev_t *adf = NULL; + *pNumInstances = 0; + qatUtilsMutexLock(&adfDevicesLock, QAT_UTILS_WAIT_FOREVER); + for (adf = adfDevicesHead; adf != NULL; adf = adf->pNext) { + Cpa32U enabled_caps = + adf->accelCapabilitiesMask & capabilitiesMask; + if (enabled_caps == capabilitiesMask) { + if (adf->adfSubsystemStatus) { + pAccel_devs[(*pNumInstances)++] = + (icp_accel_dev_t *)adf; + } + } + } + qatUtilsMutexUnlock(&adfDevicesLock); + return CPA_STATUS_SUCCESS; +} + +/* + * icp_amgr_getAllAccelDevByCapabilities + * Fetches accel devices based on the capability + * and returns the count of the devices + */ +CpaStatus +icp_amgr_getAllAccelDevByCapabilities(Cpa32U capabilitiesMask, + icp_accel_dev_t **pAccel_devs, + Cpa16U *pNumInstances) +{ + icp_accel_dev_t *adf = NULL; + Cpa16U i = 0; + + qatUtilsMutexLock(&adfDevicesLock, QAT_UTILS_WAIT_FOREVER); + for (adf = adfDevicesHead; adf != NULL; adf = adf->pNext) { + if (adf->accelCapabilitiesMask & capabilitiesMask) { + if (adf->adfSubsystemStatus) { + pAccel_devs[i++] = adf; + } + } + } + 
qatUtilsMutexUnlock(&adfDevicesLock); + *pNumInstances = i; + return CPA_STATUS_SUCCESS; +} + +/* + * icp_amgr_getAccelDevCapabilities + * Returns accel devices capabilities specified in capabilitiesMask. + * + * Returns: + * CPA_STATUS_SUCCESS on success + * CPA_STATUS_FAIL on failure + */ +CpaStatus +icp_amgr_getAccelDevCapabilities(icp_accel_dev_t *accel_dev, + Cpa32U *pCapabilitiesMask) +{ + ICP_CHECK_FOR_NULL_PARAM(accel_dev); + ICP_CHECK_FOR_NULL_PARAM(pCapabilitiesMask); + + *pCapabilitiesMask = accel_dev->accelCapabilitiesMask; + return CPA_STATUS_SUCCESS; +} + +/* + * icp_qa_dev_get + * + * Description: + * Function increments the device usage counter. + * + * Returns: void + */ +void +icp_qa_dev_get(icp_accel_dev_t *pDev) +{ + ICP_CHECK_FOR_NULL_PARAM_VOID(pDev); + adf_dev_get(pDev->accel_dev); +} + +/* + * icp_qa_dev_put + * + * Description: + * Function decrements the device usage counter. + * + * Returns: void + */ +void +icp_qa_dev_put(icp_accel_dev_t *pDev) +{ + ICP_CHECK_FOR_NULL_PARAM_VOID(pDev); + adf_dev_put(pDev->accel_dev); +} + +Cpa16U +icp_adf_get_busAddress(Cpa16U packageId) +{ + Cpa16U busAddr = 0xFFFF; + icp_accel_dev_t *adf = NULL; + + qatUtilsMutexLock(&adfDevicesLock, QAT_UTILS_WAIT_FOREVER); + for (adf = adfDevicesHead; adf != NULL; adf = adf->pNext) { + if (adf->accelId == packageId) { + busAddr = pci_get_bus(accel_to_pci_dev(adf->accel_dev)) + << 8 | + pci_get_slot(accel_to_pci_dev(adf->accel_dev)) + << 3 | + pci_get_function(accel_to_pci_dev(adf->accel_dev)); + break; + } + } + qatUtilsMutexUnlock(&adfDevicesLock); + return busAddr; +} + +CpaBoolean +icp_adf_isSubsystemStarted(subservice_registation_handle_t *subsystem_hdl) +{ + if (subsystem_hdl == salService) + return CPA_TRUE; + else + return CPA_FALSE; +} + +CpaBoolean +icp_adf_is_dev_in_error(icp_accel_dev_t *accel_dev) +{ + return (CpaBoolean)accel_dev_error_stat[accel_dev->accelId]; +} Index: sys/dev/qat/qat_api/qat_kernel/src/lac_symbols.c 
=================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_kernel/src/lac_symbols.c @@ -0,0 +1,74 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/****************************************************************************** + * @file lac_symbols.c + * + * This file contains all the symbols that are exported by the Look Aside + * kernel Module. + * + *****************************************************************************/ +#include +#include "cpa.h" +#include "cpa_dc.h" +#include "cpa_dc_dp.h" +#include "cpa_dc_bp.h" +#include "icp_adf_init.h" +#include "icp_adf_transport.h" +#include "icp_adf_poll.h" +#include "icp_sal_poll.h" +#include "icp_sal_iommu.h" +#include "icp_sal_versions.h" +#include "lac_common.h" + +/* Symbols for getting version information */ +EXPORT_SYMBOL(icp_sal_getDevVersionInfo); + +/* DC Compression */ +EXPORT_SYMBOL(cpaDcGetNumIntermediateBuffers); +EXPORT_SYMBOL(cpaDcInitSession); +EXPORT_SYMBOL(cpaDcResetSession); +EXPORT_SYMBOL(cpaDcUpdateSession); +EXPORT_SYMBOL(cpaDcRemoveSession); +EXPORT_SYMBOL(cpaDcCompressData); +EXPORT_SYMBOL(cpaDcDecompressData); +EXPORT_SYMBOL(cpaDcGenerateHeader); +EXPORT_SYMBOL(cpaDcGenerateFooter); +EXPORT_SYMBOL(cpaDcGetStats); +EXPORT_SYMBOL(cpaDcGetInstances); +EXPORT_SYMBOL(cpaDcGetNumInstances); +EXPORT_SYMBOL(cpaDcGetSessionSize); +EXPORT_SYMBOL(cpaDcGetStatusText); +EXPORT_SYMBOL(cpaDcBufferListGetMetaSize); +EXPORT_SYMBOL(cpaDcBnpBufferListGetMetaSize); +EXPORT_SYMBOL(cpaDcDeflateCompressBound); +EXPORT_SYMBOL(cpaDcInstanceGetInfo2); +EXPORT_SYMBOL(cpaDcQueryCapabilities); +EXPORT_SYMBOL(cpaDcSetAddressTranslation); +EXPORT_SYMBOL(cpaDcStartInstance); +EXPORT_SYMBOL(cpaDcStopInstance); +EXPORT_SYMBOL(cpaDcBPCompressData); +EXPORT_SYMBOL(cpaDcCompressData2); +EXPORT_SYMBOL(cpaDcDecompressData2); + +/* DcDp Compression */ +EXPORT_SYMBOL(cpaDcDpGetSessionSize); 
+EXPORT_SYMBOL(cpaDcDpInitSession); +EXPORT_SYMBOL(cpaDcDpRemoveSession); +EXPORT_SYMBOL(cpaDcDpUpdateSession); +EXPORT_SYMBOL(cpaDcDpRegCbFunc); +EXPORT_SYMBOL(cpaDcDpEnqueueOp); +EXPORT_SYMBOL(cpaDcDpEnqueueOpBatch); +EXPORT_SYMBOL(cpaDcDpPerformOpNow); + +EXPORT_SYMBOL(icp_sal_DcPollInstance); +EXPORT_SYMBOL(icp_sal_DcPollDpInstance); +EXPORT_SYMBOL(icp_sal_pollBank); +EXPORT_SYMBOL(icp_sal_pollAllBanks); + +/* sal iommu symbols */ +EXPORT_SYMBOL(icp_sal_iommu_get_remap_size); +EXPORT_SYMBOL(icp_sal_iommu_map); +EXPORT_SYMBOL(icp_sal_iommu_unmap); + +EXPORT_SYMBOL(icp_sal_get_dc_error); Index: sys/dev/qat/qat_api/qat_kernel/src/qat_transport.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_kernel/src/qat_transport.c @@ -0,0 +1,426 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" + +#include "cpa.h" +#include "icp_adf_init.h" +#include "icp_adf_transport_dp.h" + +/* + * adf_modulo + * result = data % ( 2 ^ shift ) + */ +static inline Cpa32U +adf_modulo(Cpa32U data, Cpa32U shift) +{ + Cpa32U div = data >> shift; + Cpa32U mult = div << shift; + + return data - mult; +} + +/* + * icp_adf_transCreateHandle + * create transport handle for a service + * call adf_create_ring from adf driver directly with same parameters + */ +CpaStatus +icp_adf_transCreateHandle(icp_accel_dev_t *adf, + icp_transport_type trans_type, + const char *section, + const uint32_t accel_nr, + const uint32_t bank_nr, + const char *service_name, + const icp_adf_ringInfoService_t info, + icp_trans_callback callback, + icp_resp_deliv_method resp, + const uint32_t num_msgs, + const uint32_t msg_size, + icp_comms_trans_handle *trans_handle) +{ + CpaStatus status; + int error; + + ICP_CHECK_FOR_NULL_PARAM(trans_handle); + ICP_CHECK_FOR_NULL_PARAM(adf); + + error = adf_create_ring(adf->accel_dev, + 
section, + bank_nr, + num_msgs, + msg_size, + service_name, + callback, + ((resp == ICP_RESP_TYPE_IRQ) ? 0 : 1), + (struct adf_etr_ring_data **)trans_handle); + if (!error) + status = CPA_STATUS_SUCCESS; + else + status = CPA_STATUS_FAIL; + + return status; +} + +/* + * icp_adf_transReinitHandle + * Reinitialize transport handle for a service + */ +CpaStatus +icp_adf_transReinitHandle(icp_accel_dev_t *adf, + icp_transport_type trans_type, + const char *section, + const uint32_t accel_nr, + const uint32_t bank_nr, + const char *service_name, + const icp_adf_ringInfoService_t info, + icp_trans_callback callback, + icp_resp_deliv_method resp, + const uint32_t num_msgs, + const uint32_t msg_size, + icp_comms_trans_handle *trans_handle) +{ + return CPA_STATUS_SUCCESS; +} +/* + * icp_adf_transReleaseHandle + * destroy a transport handle, call adf_remove_ring from adf driver directly + */ +CpaStatus +icp_adf_transReleaseHandle(icp_comms_trans_handle trans_handle) +{ + struct adf_etr_ring_data *ring = trans_handle; + + ICP_CHECK_FOR_NULL_PARAM(ring); + adf_remove_ring(ring); + + return CPA_STATUS_SUCCESS; +} + +/* + * icp_adf_transResetHandle + * clean a transport handle, call adf_remove_ring from adf driver directly + */ +CpaStatus +icp_adf_transResetHandle(icp_comms_trans_handle trans_handle) +{ + return CPA_STATUS_SUCCESS; +} + +/* + * icp_adf_transGetRingNum + * get ring number from a transport handle + */ +CpaStatus +icp_adf_transGetRingNum(icp_comms_trans_handle trans_handle, uint32_t *ringNum) +{ + struct adf_etr_ring_data *ring = trans_handle; + + ICP_CHECK_FOR_NULL_PARAM(ring); + ICP_CHECK_FOR_NULL_PARAM(ringNum); + *ringNum = (uint32_t)(ring->ring_number); + + return CPA_STATUS_SUCCESS; +} + +/* + * icp_adf_transPutMsg + * send a request to transport handle + * call adf_send_message from adf driver directly + */ +CpaStatus +icp_adf_transPutMsg(icp_comms_trans_handle trans_handle, + uint32_t *inBuf, + uint32_t bufLen) +{ + struct adf_etr_ring_data *ring = 
trans_handle; + CpaStatus status = CPA_STATUS_FAIL; + int error = EFAULT; + + ICP_CHECK_FOR_NULL_PARAM(ring); + + error = adf_send_message(ring, inBuf); + if (EAGAIN == error) + status = CPA_STATUS_RETRY; + else if (0 == error) + status = CPA_STATUS_SUCCESS; + else + status = CPA_STATUS_FAIL; + + return status; +} + +CpaStatus +icp_adf_getInflightRequests(icp_comms_trans_handle trans_handle, + Cpa32U *maxInflightRequests, + Cpa32U *numInflightRequests) +{ + struct adf_etr_ring_data *ring = trans_handle; + ICP_CHECK_FOR_NULL_PARAM(ring); + ICP_CHECK_FOR_NULL_PARAM(maxInflightRequests); + ICP_CHECK_FOR_NULL_PARAM(numInflightRequests); + /* + * XXX: The qat_direct version of this routine returns max - 1, not + * the absolute max. + */ + *numInflightRequests = (*(uint32_t *)ring->inflights); + *maxInflightRequests = + ADF_MAX_INFLIGHTS(ring->ring_size, ring->msg_size); + return CPA_STATUS_SUCCESS; +} + +CpaStatus +icp_adf_dp_getInflightRequests(icp_comms_trans_handle trans_handle, + Cpa32U *maxInflightRequests, + Cpa32U *numInflightRequests) +{ + ICP_CHECK_FOR_NULL_PARAM(trans_handle); + ICP_CHECK_FOR_NULL_PARAM(maxInflightRequests); + ICP_CHECK_FOR_NULL_PARAM(numInflightRequests); + + return icp_adf_getInflightRequests(trans_handle, + maxInflightRequests, + numInflightRequests); +} + +/* + * This function allows the user to poll the response ring. The + * ring number to be polled is supplied by the user via the + * trans handle for that ring. The trans_hnd is a pointer + * to an array of trans handles. This ring is + * only polled if it contains data. + * This method is used as an alternative to the reading messages + * via the ISR method. + * This function will return RETRY if the ring is empty. 
+ */ +CpaStatus +icp_adf_pollInstance(icp_comms_trans_handle *trans_hnd, + Cpa32U num_transHandles, + Cpa32U response_quota) +{ + Cpa32U resp_total = 0; + Cpa32U num_resp; + struct adf_etr_ring_data *ring = NULL; + struct adf_etr_bank_data *bank = NULL; + Cpa32U i; + + ICP_CHECK_FOR_NULL_PARAM(trans_hnd); + + for (i = 0; i < num_transHandles; i++) { + ring = trans_hnd[i]; + if (!ring) + continue; + bank = ring->bank; + + /* If the ring in question is empty try the next ring.*/ + if (!bank || !bank->ring_mask) { + continue; + } + + num_resp = adf_handle_response(ring, response_quota); + resp_total += num_resp; + } + + /* If any of the rings in the instance had data and was polled + * return SUCCESS. */ + if (resp_total) + return CPA_STATUS_SUCCESS; + else + return CPA_STATUS_RETRY; +} + +/* + * This function allows the user to check the response ring. The + * ring number to be polled is supplied by the user via the + * trans handle for that ring. The trans_hnd is a pointer + * to an array of trans handles. + * This function is now an empty function. 
+ */ +CpaStatus +icp_adf_check_RespInstance(icp_comms_trans_handle *trans_hnd, + Cpa32U num_transHandles) +{ + return CPA_STATUS_SUCCESS; +} + +/* + * icp_sal_pollBank + * poll bank with id bank_number inside acceleration device with id @accelId + */ +CpaStatus +icp_sal_pollBank(Cpa32U accelId, Cpa32U bank_number, Cpa32U response_quota) +{ + int ret; + + ret = adf_poll_bank(accelId, bank_number, response_quota); + if (!ret) + return CPA_STATUS_SUCCESS; + else if (EAGAIN == ret) + return CPA_STATUS_RETRY; + + return CPA_STATUS_FAIL; +} + +/* + * icp_sal_pollAllBanks + * poll all banks inside acceleration device with id @accelId + */ +CpaStatus +icp_sal_pollAllBanks(Cpa32U accelId, Cpa32U response_quota) +{ + int ret = 0; + + ret = adf_poll_all_banks(accelId, response_quota); + if (!ret) + return CPA_STATUS_SUCCESS; + else if (ret == EAGAIN) + return CPA_STATUS_RETRY; + + return CPA_STATUS_FAIL; +} + +/* + * icp_adf_getQueueMemory + * Data plane support function - returns the pointer to next message on the ring + * or NULL if there is not enough space. + */ +void +icp_adf_getQueueMemory(icp_comms_trans_handle trans_handle, + Cpa32U numberRequests, + void **pCurrentQatMsg) +{ + struct adf_etr_ring_data *ring = trans_handle; + Cpa64U flight; + + ICP_CHECK_FOR_NULL_PARAM_VOID(ring); + + /* Check if there is enough space in the ring */ + flight = atomic_add_return(numberRequests, ring->inflights); + if (flight > ADF_MAX_INFLIGHTS(ring->ring_size, ring->msg_size)) { + atomic_sub(numberRequests, ring->inflights); + *pCurrentQatMsg = NULL; + return; + } + + /* We have enough space - get the address of next message */ + *pCurrentQatMsg = (void *)((uintptr_t)ring->base_addr + ring->tail); +} + +/* + * icp_adf_getSingleQueueAddr + * Data plane support function - returns the pointer to next message on the ring + * or NULL if there is not enough space - it also updates the shadow tail copy. 
+ */ +void +icp_adf_getSingleQueueAddr(icp_comms_trans_handle trans_handle, + void **pCurrentQatMsg) +{ + struct adf_etr_ring_data *ring = trans_handle; + Cpa64U flight; + + ICP_CHECK_FOR_NULL_PARAM_VOID(ring); + ICP_CHECK_FOR_NULL_PARAM_VOID(pCurrentQatMsg); + + /* Check if there is enough space in the ring */ + flight = atomic_add_return(1, ring->inflights); + if (flight > ADF_MAX_INFLIGHTS(ring->ring_size, ring->msg_size)) { + atomic_dec(ring->inflights); + *pCurrentQatMsg = NULL; + return; + } + + /* We have enough space - get the address of next message */ + *pCurrentQatMsg = (void *)((uintptr_t)ring->base_addr + ring->tail); + + /* Update the shadow tail */ + ring->tail = + adf_modulo(ring->tail + ADF_MSG_SIZE_TO_BYTES(ring->msg_size), + ADF_RING_SIZE_MODULO(ring->ring_size)); +} + +/* + * icp_adf_getQueueNext + * Data plane support function - increments the tail pointer and returns + * the pointer to next message on the ring. + */ +void +icp_adf_getQueueNext(icp_comms_trans_handle trans_handle, void **pCurrentQatMsg) +{ + struct adf_etr_ring_data *ring = trans_handle; + + ICP_CHECK_FOR_NULL_PARAM_VOID(ring); + ICP_CHECK_FOR_NULL_PARAM_VOID(pCurrentQatMsg); + + /* Increment tail to next message */ + ring->tail = + adf_modulo(ring->tail + ADF_MSG_SIZE_TO_BYTES(ring->msg_size), + ADF_RING_SIZE_MODULO(ring->ring_size)); + + /* Get the address of next message */ + *pCurrentQatMsg = (void *)((uintptr_t)ring->base_addr + ring->tail); +} + +/* + * icp_adf_updateQueueTail + * Data plane support function - Writes the tail shadow copy to the device. + */ +void +icp_adf_updateQueueTail(icp_comms_trans_handle trans_handle) +{ + struct adf_etr_ring_data *ring = trans_handle; + + ICP_CHECK_FOR_NULL_PARAM_VOID(ring); + + WRITE_CSR_RING_TAIL(ring->bank->csr_addr, + ring->bank->bank_number, + ring->ring_number, + ring->tail); + ring->csr_tail_offset = ring->tail; +} + +/* + * icp_adf_pollQueue + * Data plane support function - Poll messages from the queue. 
+ */ +CpaStatus +icp_adf_pollQueue(icp_comms_trans_handle trans_handle, Cpa32U response_quota) +{ + Cpa32U num_resp; + struct adf_etr_ring_data *ring = trans_handle; + + ICP_CHECK_FOR_NULL_PARAM(ring); + + num_resp = adf_handle_response(ring, response_quota); + + if (num_resp) + return CPA_STATUS_SUCCESS; + else + return CPA_STATUS_RETRY; +} + +/* + * icp_adf_queueDataToSend + * Data-plane support function - Indicates if there is data on the ring to be + * sent. This should only be called on request rings. If the function returns + * true then it is ok to call icp_adf_updateQueueTail() function on this ring. + */ +CpaBoolean +icp_adf_queueDataToSend(icp_comms_trans_handle trans_handle) +{ + struct adf_etr_ring_data *ring = trans_handle; + + if (ring->tail != ring->csr_tail_offset) + return CPA_TRUE; + else + return CPA_FALSE; +} + +/* + * This icp API won't be supported in kernel space currently + */ +CpaStatus +icp_adf_transGetFdForHandle(icp_comms_trans_handle trans_hnd, int *fd) +{ + return CPA_STATUS_UNSUPPORTED; +} Index: sys/dev/qat/qat_api/qat_utils/include/qat_utils.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_utils/include/qat_utils.h @@ -0,0 +1,851 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef QAT_UTILS_H +#define QAT_UTILS_H + + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#ifdef __x86_64__ +#include +#else +#include +#endif +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +#include + +#include "cpa.h" + +#define QAT_UTILS_LOG(...) 
printf("QAT: "__VA_ARGS__) + +#define QAT_UTILS_WAIT_FOREVER (-1) +#define QAT_UTILS_WAIT_NONE 0 + +#define QAT_UTILS_HOST_TO_NW_16(uData) QAT_UTILS_OS_HOST_TO_NW_16(uData) +#define QAT_UTILS_HOST_TO_NW_32(uData) QAT_UTILS_OS_HOST_TO_NW_32(uData) +#define QAT_UTILS_HOST_TO_NW_64(uData) QAT_UTILS_OS_HOST_TO_NW_64(uData) + +#define QAT_UTILS_NW_TO_HOST_16(uData) QAT_UTILS_OS_NW_TO_HOST_16(uData) +#define QAT_UTILS_NW_TO_HOST_32(uData) QAT_UTILS_OS_NW_TO_HOST_32(uData) +#define QAT_UTILS_NW_TO_HOST_64(uData) QAT_UTILS_OS_NW_TO_HOST_64(uData) + +#define QAT_UTILS_UDIV64_32(dividend, divisor) \ + QAT_UTILS_OS_UDIV64_32(dividend, divisor) + +#define QAT_UTILS_UMOD64_32(dividend, divisor) \ + QAT_UTILS_OS_UMOD64_32(dividend, divisor) + +#define ICP_CHECK_FOR_NULL_PARAM(param) \ + do { \ + if (NULL == param) { \ + QAT_UTILS_LOG("%s(): invalid param: %s\n", \ + __FUNCTION__, \ + #param); \ + return CPA_STATUS_INVALID_PARAM; \ + } \ + } while (0) + +#define ICP_CHECK_FOR_NULL_PARAM_VOID(param) \ + do { \ + if (NULL == param) { \ + QAT_UTILS_LOG("%s(): invalid param: %s\n", \ + __FUNCTION__, \ + #param); \ + return; \ + } \ + } while (0) + +/*Macro for adding an element to the tail of a doubly linked list*/ +/*The currentptr tracks the tail, and the headptr tracks the head.*/ +#define ICP_ADD_ELEMENT_TO_END_OF_LIST(elementtoadd, currentptr, headptr) \ + do { \ + if (NULL == currentptr) { \ + currentptr = elementtoadd; \ + elementtoadd->pNext = NULL; \ + elementtoadd->pPrev = NULL; \ + headptr = currentptr; \ + } else { \ + elementtoadd->pPrev = currentptr; \ + currentptr->pNext = elementtoadd; \ + elementtoadd->pNext = NULL; \ + currentptr = elementtoadd; \ + } \ + } while (0) + +/*currentptr is not used in this case since we don't track the tail. 
*/ +#define ICP_ADD_ELEMENT_TO_HEAD_OF_LIST(elementtoadd, currentptr, headptr) \ + do { \ + if (NULL == headptr) { \ + elementtoadd->pNext = NULL; \ + elementtoadd->pPrev = NULL; \ + headptr = elementtoadd; \ + } else { \ + elementtoadd->pPrev = NULL; \ + elementtoadd->pNext = headptr; \ + headptr->pPrev = elementtoadd; \ + headptr = elementtoadd; \ + } \ + } while (0) + +#define ICP_REMOVE_ELEMENT_FROM_LIST(elementtoremove, currentptr, headptr) \ + do { \ + /*If the previous pointer is not NULL*/ \ + if (NULL != elementtoremove->pPrev) { \ + elementtoremove->pPrev->pNext = \ + elementtoremove->pNext; \ + if (elementtoremove->pNext) { \ + elementtoremove->pNext->pPrev = \ + elementtoremove->pPrev; \ + } else { \ + /* Move the tail pointer backwards */ \ + currentptr = elementtoremove->pPrev; \ + } \ + } else if (NULL != elementtoremove->pNext) { \ + /*Remove the head pointer.*/ \ + elementtoremove->pNext->pPrev = NULL; \ + /*Hence move the head forward.*/ \ + headptr = elementtoremove->pNext; \ + } else { \ + /*Remove the final entry in the list. 
*/ \ + currentptr = NULL; \ + headptr = NULL; \ + } \ + } while (0) + +MALLOC_DECLARE(M_QAT); + +#ifdef __x86_64__ +typedef atomic64_t QatUtilsAtomic; +#else +typedef atomic_t QatUtilsAtomic; +#endif + +#define QAT_UTILS_OS_NW_TO_HOST_16(uData) be16toh(uData) +#define QAT_UTILS_OS_NW_TO_HOST_32(uData) be32toh(uData) +#define QAT_UTILS_OS_NW_TO_HOST_64(uData) be64toh(uData) + +#define QAT_UTILS_OS_HOST_TO_NW_16(uData) htobe16(uData) +#define QAT_UTILS_OS_HOST_TO_NW_32(uData) htobe32(uData) +#define QAT_UTILS_OS_HOST_TO_NW_64(uData) htobe64(uData) + +/** + * @ingroup QatUtils + * + * @brief Atomically read the value of atomic variable + * + * @param pAtomicVar IN - atomic variable + * + * Atomically reads the value of pAtomicVar + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return pAtomicVar value + */ +int64_t qatUtilsAtomicGet(QatUtilsAtomic *pAtomicVar); + +/** + * @ingroup QatUtils + * + * @brief Atomically set the value of atomic variable + * + * @param inValue IN - value to which the atomic variable is set + * + * @param pAtomicVar OUT - atomic variable + * + * Atomically sets the value of pAtomicVar to the value given + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return none + */ +void qatUtilsAtomicSet(int64_t inValue, QatUtilsAtomic *pAtomicVar); + +/** + * @ingroup QatUtils + * + * @brief add the value to atomic variable + * + * @param inValue (in) - value to be added to the atomic variable + * + * @param pAtomicVar (in & out) - atomic variable + * + * Atomically adds the value of inValue to the pAtomicVar + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return pAtomicVar value after the addition + */ +int64_t qatUtilsAtomicAdd(int64_t inValue, QatUtilsAtomic *pAtomicVar); + +/** + * @ingroup QatUtils + * + * @brief subtract the value from atomic variable + * + * @param inValue IN - value to be subtracted from the atomic variable + * + * @param pAtomicVar IN/OUT - atomic variable + * + * Atomically subtracts 
inValue from the value of pAtomicVar + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return pAtomicVar value after the subtraction + */ +int64_t qatUtilsAtomicSub(int64_t inValue, QatUtilsAtomic *pAtomicVar); + +/** + * @ingroup QatUtils + * + * @brief increment value of atomic variable by 1 + * + * @param pAtomicVar IN/OUT - atomic variable + * + * Atomically increments the value of pAtomicVar by 1. + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return pAtomicVar value after the increment + */ +int64_t qatUtilsAtomicInc(QatUtilsAtomic *pAtomicVar); + +/** + * @ingroup QatUtils + * + * @brief decrement value of atomic variable by 1 + * + * @param pAtomicVar IN/OUT - atomic variable + * + * Atomically decrements the value of pAtomicVar by 1. + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return pAtomicVar value after the decrement + */ +int64_t qatUtilsAtomicDec(QatUtilsAtomic *pAtomicVar); + +/** + * @ingroup QatUtils + * + * @brief NUMA aware memory allocation; available on Linux OS only. + * + * @param size - memory size to allocate, in bytes + * @param node - NUMA node + * @param alignment - memory boundary alignment (alignment can not be 0) + * + * Allocates a memory zone of a given size on the specified node. + * The returned memory is guaranteed to be physically contiguous if the + * given size is less than 128KB, and belongs to the node specified + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return Pointer to the allocated zone or NULL if the allocation failed + */ +void *qatUtilsMemAllocContiguousNUMA(uint32_t size, + uint32_t node, + uint32_t alignment); + +/** + * @ingroup QatUtils + * + * @brief Frees memory allocated by qatUtilsMemAllocContiguousNUMA. 
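The contiguous allocator described above over-allocates by the alignment plus a small header, returns a pointer advanced to the next alignment boundary, and stashes the bookkeeping header just below that pointer so the free routine can recover the original allocation. A minimal userspace sketch of the same pointer arithmetic (plain `malloc` stands in for `contigmalloc`; the names here are illustrative, not part of the driver):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct alloc_info {		/* mirrors QatUtilsMemAllocInfoStruct */
	void *base;		/* pointer returned by the underlying allocator */
	uint32_t total;		/* total size handed to the allocator */
};

static void *
aligned_alloc_with_header(uint32_t size, uint32_t alignment)
{
	struct alloc_info info;
	char *base, *user;

	/* alignment must be a nonzero power of two */
	if (size == 0 || alignment == 0 || (alignment & (alignment - 1)))
		return NULL;

	info.total = size + alignment + sizeof(struct alloc_info);
	base = malloc(info.total);
	if (base == NULL)
		return NULL;
	info.base = base;

	/* skip the header, then advance to the next alignment boundary */
	user = base + sizeof(struct alloc_info);
	user += alignment - ((uintptr_t)user % alignment);

	/* stash the header just below the pointer handed to the caller */
	memcpy(user - sizeof(struct alloc_info), &info, sizeof(info));
	return user;
}

static void
aligned_free_with_header(void *ptr)
{
	struct alloc_info info;

	/* recover the header stored just below the user pointer */
	memcpy(&info, (char *)ptr - sizeof(info), sizeof(info));
	free(info.base);
}
```

Note that, like the driver code, this always advances by at least one byte even when the pointer is already aligned; the over-allocation budget covers that.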
+ * + * @param ptr - pointer to the memory zone + * + * Frees a previously allocated memory zone + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - none + */ +void qatUtilsMemFreeNUMA(void *ptr); + +/** + * @ingroup QatUtils + * + * @brief virtual to physical address translation + * + * @param virtAddr - virtual address + * + * Converts a virtual address into its equivalent MMU-mapped physical address + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return Corresponding physical address + */ +#define QAT_UTILS_MMU_VIRT_TO_PHYS(virtAddr) \ + ((uint64_t)((virtAddr) ? vtophys(virtAddr) : 0)) + +/** + * @ingroup QatUtils + * + * @brief Initializes the SpinLock object + * + * @param pLock - Spinlock handle + * + * Initializes the SpinLock object. + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + */ +CpaStatus qatUtilsLockInit(struct mtx *pLock); + +/** + * @ingroup QatUtils + * + * @brief Acquires a spin lock + * + * @param pLock - Spinlock handle + * + * This routine acquires a spin lock so the + * caller can synchronize access to shared data in a + * multiprocessor-safe way by raising IRQL. + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - Returns CPA_STATUS_SUCCESS if the spinlock is acquired, or + * CPA_STATUS_FAIL if the spinlock handle is NULL. If the spinlock is + * already held by another thread of execution, this routine spins in a + * busy loop until it acquires the lock. + */ +CpaStatus qatUtilsLock(struct mtx *pLock); + +/** + * @ingroup QatUtils + * + * @brief Releases the spin lock + * + * @param pLock - Spinlock handle + * + * This routine releases the spin lock which the thread had acquired + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - Returns CPA_STATUS_SUCCESS if the spinlock is released, or + * CPA_STATUS_FAIL if the spinlock handle passed is NULL. 
+ */ +CpaStatus qatUtilsUnlock(struct mtx *pLock); + +/** + * @ingroup QatUtils + * + * @brief Destroy the spin lock object + * + * @param pLock - Spinlock handle + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - Returns CPA_STATUS_SUCCESS if pLock is destroyed, or + * CPA_STATUS_FAIL if pLock is NULL. + */ +CpaStatus qatUtilsLockDestroy(struct mtx *pLock); + +/** + * @ingroup QatUtils + * + * @brief Initializes a semaphore + * + * @param pSid - semaphore handle + * @param start_value - initial semaphore value + * + * Initializes a semaphore object. + * Note: qatUtilsSemaphoreInit must be called before using any other + * QAT Utils semaphore APIs + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + */ +CpaStatus qatUtilsSemaphoreInit(struct sema **pSid, uint32_t start_value); + +/** + * @ingroup QatUtils + * + * @brief Destroys a semaphore object + * + * @param pSid - semaphore handle + * + * Destroys a semaphore object; the caller should ensure that no thread is + * blocked on this semaphore. If this call is made while a thread is + * blocked on the semaphore, the behaviour is unpredictable + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + */ +CpaStatus qatUtilsSemaphoreDestroy(struct sema **pSid); + +/** + * @ingroup QatUtils + * + * @brief Waits on (decrements) a semaphore + * + * @param pSid - semaphore handle + * @param timeout - timeout, in ms; QAT_UTILS_WAIT_FOREVER (-1) if the thread + * is to block indefinitely or QAT_UTILS_WAIT_NONE (0) if the thread is to + * return immediately even if the call fails + * + * Decrements a semaphore, blocking if the semaphore is + * unavailable (value is 0). 
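The millisecond timeout is converted to scheduler ticks before the kernel wait call, and the implementation rejects values large enough to overflow the 32-bit tick arithmetic. A standalone sketch of that validation and conversion, with the tick rate fixed at 1000 for illustration (in the kernel, hz is a runtime variable and the constants carry QAT_UTILS_ prefixes):

```c
#include <stdint.h>

#define HZ 1000			/* illustrative; the kernel's hz is dynamic */
#define WAIT_FOREVER (-1)	/* stands in for QAT_UTILS_WAIT_FOREVER */
#define WAIT_NONE 0		/* stands in for QAT_UTILS_WAIT_NONE */
#define MAX_LONG 0x7FFFFFFF
#define MAX_TIMEOUT_MS (MAX_LONG / HZ)	/* guard against tick overflow */

/*
 * Returns the tick count (>= 0), -1 for "block forever",
 * or -2 for an illegal timeout value.
 */
static long
timeout_ms_to_ticks(int32_t timeout_ms)
{
	if (timeout_ms == WAIT_FOREVER)
		return -1;
	if (timeout_ms < 0 || timeout_ms > MAX_TIMEOUT_MS)
		return -2;	/* reject other negatives and overflow range */
	return (long)timeout_ms * HZ / 1000;	/* ms -> ticks */
}
```

A caller would then dispatch on the result: block forever, try once without blocking (zero ticks), or wait with the computed tick deadline.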
+ * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + */ +CpaStatus qatUtilsSemaphoreWait(struct sema **pSid, int32_t timeout); + +/** + * @ingroup QatUtils + * + * @brief Non-blocking wait on semaphore + * + * @param semaphore - semaphore handle + * + * Decrements a semaphore, not blocking the calling thread if the semaphore + * is unavailable + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + */ +CpaStatus qatUtilsSemaphoreTryWait(struct sema **semaphore); + +/** + * @ingroup QatUtils + * + * @brief Posts to (increments) a semaphore + * + * @param pSid - semaphore handle + * + * Increments a semaphore object + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + */ +CpaStatus qatUtilsSemaphorePost(struct sema **pSid); + +/** + * @ingroup QatUtils + * + * @brief initializes a pMutex + * + * @param pMutex - pMutex handle + * + * Initializes a pMutex object + * @note Mutex initialization qatUtilsMutexInit API must be called + * first before using any QAT Utils Mutex APIs + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + */ +CpaStatus qatUtilsMutexInit(struct mtx **pMutex); + +/** + * @ingroup QatUtils + * + * @brief locks a pMutex + * + * @param pMutex - pMutex handle + * @param timeout - timeout in ms; QAT_UTILS_WAIT_FOREVER (-1) to wait forever + * or QAT_UTILS_WAIT_NONE to return immediately + * + * Locks a pMutex object + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + */ +CpaStatus qatUtilsMutexLock(struct mtx **pMutex, int32_t timeout); + +/** + * @ingroup QatUtils + * + * @brief Unlocks a pMutex + * + * @param pMutex - pMutex handle + * + * Unlocks a pMutex object + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + */ +CpaStatus qatUtilsMutexUnlock(struct mtx 
**pMutex); + +/** + * @ingroup QatUtils + * + * @brief Destroys a pMutex object + * + * @param pMutex - pMutex handle + * + * Destroys a pMutex object; the caller should ensure that no thread is + * blocked on this pMutex. If this call is made while a thread is + * blocked on the pMutex, the behaviour is unpredictable + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + */ +CpaStatus qatUtilsMutexDestroy(struct mtx **pMutex); + +/** + * @ingroup QatUtils + * + * @brief Non-blocking attempt to lock a pMutex + * + * @param pMutex - pMutex handle + * + * Attempts to lock a pMutex object, returning immediately with + * CPA_STATUS_SUCCESS if the lock was successful or CPA_STATUS_FAIL if + * the lock failed + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + */ +CpaStatus qatUtilsMutexTryLock(struct mtx **pMutex); + +/** + * @ingroup QatUtils + * + * @brief Yielding sleep for a number of milliseconds + * + * @param milliseconds - number of milliseconds to sleep + * + * The calling thread will sleep for the specified number of milliseconds. + * This sleep is yielding, hence other tasks will be scheduled by the + * operating system during the sleep period. Calling this function with an + * argument of 0 will place the thread at the end of the current scheduling + * loop. + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + */ +CpaStatus qatUtilsSleep(uint32_t milliseconds); + +/** + * @ingroup QatUtils + * + * @brief Yields execution of current thread + * + * Yields the execution of the current thread + * + * @li Reentrant: yes + * @li IRQ safe: no + * + * @return - none + */ +void qatUtilsYield(void); + +/** + * @ingroup QatUtils + * + * @brief Calculate MD5 transform operation + * + * @param in - pointer to data to be processed. 
+ * The buffer needs to be at least md5 block size long as defined in + * rfc1321 (64 bytes) + * out - output pointer for state data after single md5 transform + * operation. + * The buffer needs to be at least md5 state size long as defined in + * rfc1321 (16 bytes) + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + * + */ +CpaStatus qatUtilsHashMD5(uint8_t *in, uint8_t *out); + +/** + * @ingroup QatUtils + * + * @brief Calculate MD5 transform operation + * + * @param in - pointer to data to be processed. + * The buffer needs to be at least md5 block size long as defined in + * rfc1321 (64 bytes) + * out - output pointer for state data after single md5 transform + * operation. + * The buffer needs to be at least md5 state size long as defined in + * rfc1321 (16 bytes) + * len - Length of the input to be processed. + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + * + */ +CpaStatus qatUtilsHashMD5Full(uint8_t *in, uint8_t *out, uint32_t len); + +/** + * @ingroup QatUtils + * + * @brief Calculate SHA1 transform operation + * + * @param in - pointer to data to be processed. + * The buffer needs to be at least sha1 block size long as defined in + * rfc3174 (64 bytes) + * out - output pointer for state data after single sha1 transform + * operation. + * The buffer needs to be at least sha1 state size long as defined in + * rfc3174 (20 bytes) + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + * + */ +CpaStatus qatUtilsHashSHA1(uint8_t *in, uint8_t *out); + +/** + * @ingroup QatUtils + * + * @brief Calculate SHA1 transform operation + * + * @param in - pointer to data to be processed. + * The buffer needs to be at least sha1 block size long as defined in + * rfc3174 (64 bytes) + * out - output pointer for state data after single sha1 transform + * operation. 
+ * The buffer needs to be at least sha1 state size long as defined in + * rfc3174 (20 bytes) + * len - Length of the input to be processed. + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + * + */ +CpaStatus qatUtilsHashSHA1Full(uint8_t *in, uint8_t *out, uint32_t len); + +/** + * @ingroup QatUtils + * + * @brief Calculate SHA224 transform operation + * + * @param in - pointer to data to be processed. + * The buffer needs to be at least sha224 block size long as defined in + * rfc3874 and rfc4868 (64 bytes) + * out - output pointer for state data after single sha224 transform + * operation. + * The buffer needs to be at least sha224 state size long as defined in + * rfc3874 and rfc4868 (32 bytes) + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + * + */ +CpaStatus qatUtilsHashSHA224(uint8_t *in, uint8_t *out); + +/** + * @ingroup QatUtils + * + * @brief Calculate SHA256 transform operation + * + * + * @param in - pointer to data to be processed. + * The buffer needs to be at least sha256 block size long as defined in + * rfc4868 (64 bytes) + * out - output pointer for state data after single sha256 transform + * operation. + * The buffer needs to be at least sha256 state size long as defined in + * rfc4868 (32 bytes) + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + * + */ +CpaStatus qatUtilsHashSHA256(uint8_t *in, uint8_t *out); + +/** + * @ingroup QatUtils + * + * @brief Calculate SHA256 transform operation + * + * + * @param in - pointer to data to be processed. + * The buffer needs to be at least sha256 block size long as defined in + * rfc4868 (64 bytes) + * out - output pointer for state data after single sha256 transform + * operation. + * The buffer needs to be at least sha256 state size long as defined in + * rfc4868 (32 bytes) + * len - Length of the input to be processed. 
+ * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + * + */ +CpaStatus qatUtilsHashSHA256Full(uint8_t *in, uint8_t *out, uint32_t len); + +/** + * @ingroup QatUtils + * + * @brief Calculate SHA384 transform operation + * + * @param in - pointer to data to be processed. + * The buffer needs to be at least sha384 block size long as defined in + * rfc4868 (128 bytes) + * out - output pointer for state data after single sha384 transform + * operation. + * The buffer needs to be at least sha384 state size long as defined in + * rfc4868 (64 bytes) + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + * + */ +CpaStatus qatUtilsHashSHA384(uint8_t *in, uint8_t *out); + +/** + * @ingroup QatUtils + * + * @brief Calculate SHA384 transform operation + * + * @param in - pointer to data to be processed. + * The buffer needs to be at least sha384 block size long as defined in + * rfc4868 (128 bytes) + * out - output pointer for state data after single sha384 transform + * operation. + * The buffer needs to be at least sha384 state size long as defined in + * rfc4868 (64 bytes) + * len - Length of the input to be processed. + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + * + */ +CpaStatus qatUtilsHashSHA384Full(uint8_t *in, uint8_t *out, uint32_t len); + +/** + * @ingroup QatUtils + * + * @brief Calculate SHA512 transform operation + * + * @param in - pointer to data to be processed. + * The buffer needs to be at least sha512 block size long as defined in + * rfc4868 (128 bytes) + * out - output pointer for state data after single sha512 transform + * operation. 
+ * The buffer needs to be at least sha512 state size long as defined in + * rfc4868 (64 bytes) + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + * + */ +CpaStatus qatUtilsHashSHA512(uint8_t *in, uint8_t *out); + +/** + * @ingroup QatUtils + * + * @brief Calculate SHA512 transform operation + * + * @param in - pointer to data to be processed. + * The buffer needs to be at least sha512 block size long as defined in + * rfc4868 (128 bytes) + * out - output pointer for state data after single sha512 transform + * operation. + * The buffer needs to be at least sha512 state size long as defined in + * rfc4868 (64 bytes) + * len - Length of the input to be processed. + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + * + */ +CpaStatus qatUtilsHashSHA512Full(uint8_t *in, uint8_t *out, uint32_t len); + +/** + * @ingroup QatUtils + * + * @brief Single block AES encrypt + * + * @param key - pointer to symmetric key. 
+ * keyLenInBytes - key length + * in - pointer to data to encrypt + * out - pointer to output buffer for encrypted text + * The in and out buffers need to be at least AES block size long + * as defined in rfc3686 (16 bytes) + * + * @li Reentrant: yes + * @li IRQ safe: yes + * + * @return - CPA_STATUS_SUCCESS/CPA_STATUS_FAIL + * + */ +CpaStatus qatUtilsAESEncrypt(uint8_t *key, + uint32_t keyLenInBytes, + uint8_t *in, + uint8_t *out); +#endif Index: sys/dev/qat/qat_api/qat_utils/src/QatUtilsAtomic.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_utils/src/QatUtilsAtomic.c @@ -0,0 +1,78 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_utils.h" + +#ifdef __x86_64__ +__inline int64_t +qatUtilsAtomicGet(QatUtilsAtomic *pAtomicVar) +{ + return ((int64_t)atomic64_read((QatUtilsAtomic *)pAtomicVar)); +} + +__inline void +qatUtilsAtomicSet(int64_t inValue, QatUtilsAtomic *pAtomicVar) +{ + atomic64_set((QatUtilsAtomic *)pAtomicVar, inValue); +} + +__inline int64_t +qatUtilsAtomicAdd(int64_t inValue, QatUtilsAtomic *pAtomicVar) +{ + return atomic64_add_return((long)inValue, (QatUtilsAtomic *)pAtomicVar); +} + +__inline int64_t +qatUtilsAtomicSub(int64_t inValue, QatUtilsAtomic *pAtomicVar) +{ + return atomic64_sub_return((long)inValue, (QatUtilsAtomic *)pAtomicVar); +} + +__inline int64_t +qatUtilsAtomicInc(QatUtilsAtomic *pAtomicVar) +{ + return atomic64_inc_return((QatUtilsAtomic *)pAtomicVar); +} + +__inline int64_t +qatUtilsAtomicDec(QatUtilsAtomic *pAtomicVar) +{ + return atomic64_dec_return((QatUtilsAtomic *)pAtomicVar); +} +#else +__inline int64_t +qatUtilsAtomicGet(QatUtilsAtomic *pAtomicVar) +{ + return ((int64_t)atomic_read((QatUtilsAtomic *)pAtomicVar)); +} + +__inline void +qatUtilsAtomicSet(int64_t inValue, QatUtilsAtomic *pAtomicVar) +{ + atomic_set((QatUtilsAtomic *)pAtomicVar, inValue); +} + +__inline int64_t 
+qatUtilsAtomicAdd(int64_t inValue, QatUtilsAtomic *pAtomicVar) +{ + return atomic_add_return(inValue, (QatUtilsAtomic *)pAtomicVar); +} + +__inline int64_t +qatUtilsAtomicSub(int64_t inValue, QatUtilsAtomic *pAtomicVar) +{ + return atomic_sub_return(inValue, (QatUtilsAtomic *)pAtomicVar); +} + +__inline int64_t +qatUtilsAtomicInc(QatUtilsAtomic *pAtomicVar) +{ + return atomic_inc_return((QatUtilsAtomic *)pAtomicVar); +} + +__inline int64_t +qatUtilsAtomicDec(QatUtilsAtomic *pAtomicVar) +{ + return atomic_dec_return((QatUtilsAtomic *)pAtomicVar); +} +#endif Index: sys/dev/qat/qat_api/qat_utils/src/QatUtilsCrypto.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_utils/src/QatUtilsCrypto.c @@ -0,0 +1,152 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_utils.h" + +CpaStatus +qatUtilsHashMD5(uint8_t *in, uint8_t *out) +{ + MD5_CTX ctx; + + MD5Init(&ctx); + MD5Update(&ctx, in, MD5_BLOCK_LENGTH); + bcopy(&ctx, out, MD5_DIGEST_LENGTH); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsHashSHA1(uint8_t *in, uint8_t *out) +{ + SHA1_CTX ctx; + + SHA1Init(&ctx); + SHA1Update(&ctx, in, SHA1_BLOCK_LEN); + bcopy(&ctx, out, SHA1_HASH_LEN); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsHashSHA224(uint8_t *in, uint8_t *out) +{ + SHA224_CTX ctx; + + SHA224_Init(&ctx); + SHA224_Update(&ctx, in, SHA224_BLOCK_LENGTH); + bcopy(&ctx, out, SHA256_DIGEST_LENGTH); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsHashSHA256(uint8_t *in, uint8_t *out) +{ + SHA256_CTX ctx; + + SHA256_Init(&ctx); + SHA256_Update(&ctx, in, SHA256_BLOCK_LENGTH); + bcopy(&ctx, out, SHA256_DIGEST_LENGTH); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsHashSHA384(uint8_t *in, uint8_t *out) +{ + SHA384_CTX ctx; + + SHA384_Init(&ctx); + SHA384_Update(&ctx, in, SHA384_BLOCK_LENGTH); + bcopy(&ctx, out, SHA512_DIGEST_LENGTH); + + return 
CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsHashSHA512(uint8_t *in, uint8_t *out) +{ + SHA512_CTX ctx; + + SHA512_Init(&ctx); + SHA512_Update(&ctx, in, SHA512_BLOCK_LENGTH); + bcopy(&ctx, out, SHA512_DIGEST_LENGTH); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsHashMD5Full(uint8_t *in, uint8_t *out, uint32_t len) +{ + MD5_CTX ctx; + + MD5Init(&ctx); + MD5Update(&ctx, in, len); + MD5Final(out, &ctx); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsHashSHA1Full(uint8_t *in, uint8_t *out, uint32_t len) +{ + SHA1_CTX ctx; + + SHA1Init(&ctx); + SHA1Update(&ctx, in, len); + SHA1Final((caddr_t)out, &ctx); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsHashSHA256Full(uint8_t *in, uint8_t *out, uint32_t len) +{ + SHA256_CTX ctx; + + SHA256_Init(&ctx); + SHA256_Update(&ctx, in, len); + SHA256_Final(out, &ctx); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsHashSHA384Full(uint8_t *in, uint8_t *out, uint32_t len) +{ + SHA384_CTX ctx; + + SHA384_Init(&ctx); + SHA384_Update(&ctx, in, len); + SHA384_Final(out, &ctx); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsHashSHA512Full(uint8_t *in, uint8_t *out, uint32_t len) +{ + SHA512_CTX ctx; + + SHA512_Init(&ctx); + SHA512_Update(&ctx, in, len); + SHA512_Final(out, &ctx); + + return CPA_STATUS_SUCCESS; +} + +#define BYTE_TO_BITS_SHIFT 3 + +CpaStatus +qatUtilsAESEncrypt(uint8_t *key, + uint32_t keyLenInBytes, + uint8_t *in, + uint8_t *out) +{ + rijndael_ctx ctx; + + rijndael_set_key(&ctx, key, keyLenInBytes << BYTE_TO_BITS_SHIFT); + rijndael_encrypt(&ctx, in, out); + + return CPA_STATUS_SUCCESS; +} Index: sys/dev/qat/qat_api/qat_utils/src/QatUtilsSemaphore.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_utils/src/QatUtilsSemaphore.c @@ -0,0 +1,169 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_utils.h" + +#include +#include +#include +#include + 
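In the qatUtilsAESEncrypt implementation above, rijndael_set_key expects the key length in bits, so the byte count is shifted left by BYTE_TO_BITS_SHIFT (3), which multiplies by eight. The conversion in isolation, covering the standard AES key sizes (the helper name is illustrative):

```c
#include <stdint.h>

#define BYTE_TO_BITS_SHIFT 3	/* shifting left by 3 multiplies by 8 */

/*
 * Convert an AES key length in bytes to the bit count expected by
 * key-schedule routines such as rijndael_set_key.
 */
static uint32_t
aes_key_bytes_to_bits(uint32_t keyLenInBytes)
{
	return keyLenInBytes << BYTE_TO_BITS_SHIFT;
}
```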
+/* Maximum value of a signed 32-bit integer */ +#define QAT_UTILS_MAX_LONG (0x7FFFFFFF) + +/* Max timeout in MS, used to guard against possible overflow */ +#define QAT_UTILS_MAX_TIMEOUT_MS (QAT_UTILS_MAX_LONG / hz) + +CpaStatus +qatUtilsSemaphoreInit(struct sema **pSid, uint32_t start_value) +{ + if (!pSid) + return CPA_STATUS_FAIL; + + *pSid = malloc(sizeof(struct sema), M_QAT, M_WAITOK); + + sema_init(*pSid, start_value, "qat sema"); + + return CPA_STATUS_SUCCESS; +} + +/** + * DESCRIPTION: If the semaphore is unset, the calling thread is blocked. + * If the semaphore is set, it is taken and control is returned + * to the caller. If the time indicated in 'timeout' is reached, + * the thread will unblock and return an error indication. If the + * timeout is set to 'QAT_UTILS_WAIT_NONE', the thread will never block; + * if it is set to 'QAT_UTILS_WAIT_FOREVER', the thread will block until + * the semaphore is available. + * + * + */ + +CpaStatus +qatUtilsSemaphoreWait(struct sema **pSid, int32_t timeout) +{ + + CpaStatus Status = CPA_STATUS_SUCCESS; + unsigned long timeoutTime; + + if (!pSid) + return CPA_STATUS_FAIL; + /* + * Guard against illegal timeout values + */ + if ((timeout < 0) && (timeout != QAT_UTILS_WAIT_FOREVER)) { + QAT_UTILS_LOG( + "QatUtilsSemaphoreWait(): illegal timeout value\n"); + return CPA_STATUS_FAIL; + } else if (timeout > QAT_UTILS_MAX_TIMEOUT_MS) { + QAT_UTILS_LOG( + "QatUtilsSemaphoreWait(): use a smaller timeout value to avoid overflow.\n"); + return CPA_STATUS_FAIL; + } + + if (timeout == QAT_UTILS_WAIT_FOREVER) { + sema_wait(*pSid); + } else if (timeout == QAT_UTILS_WAIT_NONE) { + if (sema_trywait(*pSid)) { + Status = CPA_STATUS_FAIL; + } + } else { + /* Convert timeout in milliseconds to HZ */ + timeoutTime = timeout * hz / 1000; + if (sema_timedwait(*pSid, timeoutTime)) { + Status = CPA_STATUS_FAIL; + } + } /* End of if */ + + return Status; +} + +CpaStatus +qatUtilsSemaphoreTryWait(struct sema **pSid) +{ + if (!pSid) + return CPA_STATUS_FAIL; + if 
(sema_trywait(*pSid)) { + return CPA_STATUS_FAIL; + } + return CPA_STATUS_SUCCESS; +} + +/** + * + * DESCRIPTION: This function causes the next available thread in the pend queue + * to be unblocked. If no thread is pending on this semaphore, the + * semaphore becomes 'full'. + */ +CpaStatus +qatUtilsSemaphorePost(struct sema **pSid) +{ + if (!pSid) + return CPA_STATUS_FAIL; + sema_post(*pSid); + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsSemaphoreDestroy(struct sema **pSid) +{ + if (!pSid) + return CPA_STATUS_FAIL; + + sema_destroy(*pSid); + free(*pSid, M_QAT); + + return CPA_STATUS_SUCCESS; +} + +/**************************** + * Mutex + ****************************/ + +CpaStatus +qatUtilsMutexInit(struct mtx **pMutex) +{ + if (!pMutex) + return CPA_STATUS_FAIL; + *pMutex = malloc(sizeof(struct mtx), M_QAT, M_WAITOK); + + memset(*pMutex, 0, sizeof(struct mtx)); + + mtx_init(*pMutex, "qat mtx", NULL, MTX_DEF); + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsMutexLock(struct mtx **pMutex, int32_t timeout) +{ + if (!pMutex) + return CPA_STATUS_FAIL; + if (timeout != QAT_UTILS_WAIT_FOREVER) { + QAT_UTILS_LOG("QatUtilsMutexLock(): Illegal timeout value\n"); + return CPA_STATUS_FAIL; + } + + mtx_lock(*pMutex); + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsMutexUnlock(struct mtx **pMutex) +{ + if (!pMutex || !(*pMutex)) + return CPA_STATUS_FAIL; + mtx_unlock(*pMutex); + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsMutexDestroy(struct mtx **pMutex) +{ + if (!pMutex || !(*pMutex)) + return CPA_STATUS_FAIL; + mtx_destroy(*pMutex); + free(*pMutex, M_QAT); + *pMutex = NULL; + + return CPA_STATUS_SUCCESS; +} Index: sys/dev/qat/qat_api/qat_utils/src/QatUtilsServices.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_utils/src/QatUtilsServices.c @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include 
"qat_utils.h" + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/** + * + * @brief Private data structure + * + * Data struct to store the information on the + * memory allocated. This structure is stored at the beginning of + * the allocated chunk of memory + * size is the number of bytes passed to the memory allocation functions + * mSize is the real size of the memory requested from the OS + * + * +----------------------------+--------------------------------+ + * | QatUtilsMemAllocInfoStruct | memory returned to user (size) | + * +----------------------------+--------------------------------+ + * ^ ^ + * mAllocMemPtr Ptr returned to the caller of MemAlloc* + * + */ + +typedef struct _QatUtilsMemAllocInfoStruct { + void *mAllocMemPtr; /* memory addr returned by the kernel */ + uint32_t mSize; /* allocated size */ +} QatUtilsMemAllocInfoStruct; + +/************************************** + * Memory functions + *************************************/ +void * +qatUtilsMemAllocContiguousNUMA(uint32_t size, uint32_t node, uint32_t alignment) +{ + void *ptr = NULL; + void *pRet = NULL; + uint32_t alignment_offset = 0; + + QatUtilsMemAllocInfoStruct memInfo = { 0 }; + if (size == 0 || alignment < 1) { + QAT_UTILS_LOG( + "QatUtilsMemAllocNUMA: size or alignment are zero.\n"); + return NULL; + } + if (alignment & (alignment - 1)) { + QAT_UTILS_LOG( + "QatUtilsMemAllocNUMA: Expecting alignment to be a power of two.\n"); + return NULL; + } + + memInfo.mSize = size + alignment + sizeof(QatUtilsMemAllocInfoStruct); + ptr = contigmalloc(memInfo.mSize, M_QAT, M_WAITOK, 0, ~1UL, 64, 0); + + memInfo.mAllocMemPtr = ptr; + pRet = + (char *)memInfo.mAllocMemPtr + sizeof(QatUtilsMemAllocInfoStruct); +#ifdef __x86_64__ + alignment_offset = (uint64_t)pRet % alignment; +#else + alignment_offset = (uint32_t)pRet % alignment; +#endif + pRet = (char *)pRet + (alignment - alignment_offset); + memcpy(((char *)pRet) - 
sizeof(QatUtilsMemAllocInfoStruct), + &memInfo, + sizeof(QatUtilsMemAllocInfoStruct)); + + return pRet; +} + +void +qatUtilsMemFreeNUMA(void *ptr) +{ + QatUtilsMemAllocInfoStruct *memInfo = NULL; + + memInfo = + (QatUtilsMemAllocInfoStruct *)((int8_t *)ptr - + sizeof(QatUtilsMemAllocInfoStruct)); + if (memInfo->mSize == 0 || memInfo->mAllocMemPtr == NULL) { + QAT_UTILS_LOG( + "QatUtilsMemAlignedFree: Detected corrupted data: memory leak!\n"); + return; + } + contigfree(memInfo->mAllocMemPtr, memInfo->mSize, M_QAT); +} + +CpaStatus +qatUtilsSleep(uint32_t milliseconds) +{ + if (milliseconds != 0) { + pause("qatUtils sleep", milliseconds * hz / (1000)); + } else { + sched_relinquish(curthread); + } + return CPA_STATUS_SUCCESS; +} + +void +qatUtilsYield(void) +{ + sched_relinquish(curthread); +} Index: sys/dev/qat/qat_api/qat_utils/src/QatUtilsSpinLock.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_api/qat_utils/src/QatUtilsSpinLock.c @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_utils.h" +#include + +CpaStatus +qatUtilsLockInit(struct mtx *pLock) +{ + if (!pLock) + return CPA_STATUS_FAIL; + memset(pLock, 0, sizeof(*pLock)); + mtx_init(pLock, "qat spin", NULL, MTX_DEF | MTX_DUPOK); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsLock(struct mtx *pLock) +{ + if (!pLock) + return CPA_STATUS_FAIL; + mtx_lock(pLock); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsUnlock(struct mtx *pLock) +{ + if (!pLock) + return CPA_STATUS_FAIL; + mtx_unlock(pLock); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qatUtilsLockDestroy(struct mtx *pLock) +{ + if (!pLock) + return CPA_STATUS_FAIL; + mtx_destroy(pLock); + return CPA_STATUS_SUCCESS; +} Index: sys/dev/qat/qat_c2xxx.c =================================================================== --- sys/dev/qat/qat_c2xxx.c +++ /dev/null @@ -1,217 +0,0 @@ -/* 
SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat_c2xxx.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2007-2013 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. 
- * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#include <sys/cdefs.h>
-__FBSDID("$FreeBSD$");
-#if 0
-__KERNEL_RCSID(0, "$NetBSD: qat_c2xxx.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $");
-#endif
-
-#include <sys/param.h>
-#include <sys/bus.h>
-#include <sys/systm.h>
-
-#include <machine/bus.h>
-
-#include <dev/pci/pcireg.h>
-#include <dev/pci/pcivar.h>
-
-#include "qatreg.h"
-#include "qat_hw15reg.h"
-#include "qat_c2xxxreg.h"
-#include "qatvar.h"
-#include "qat_hw15var.h"
-
-static uint32_t
-qat_c2xxx_get_accel_mask(struct qat_softc *sc)
-{
-	uint32_t fusectl;
-
-	fusectl = pci_read_config(sc->sc_dev, FUSECTL_REG, 4);
-
-	return ((~fusectl) & ACCEL_MASK_C2XXX);
-}
-
-static uint32_t
-qat_c2xxx_get_ae_mask(struct qat_softc *sc)
-{
-	uint32_t fusectl;
-
-	fusectl = pci_read_config(sc->sc_dev, FUSECTL_REG, 4);
-	if (fusectl & (
-	    FUSECTL_C2XXX_PKE_DISABLE |
-	    FUSECTL_C2XXX_ATH_DISABLE |
-	    FUSECTL_C2XXX_CPH_DISABLE)) {
-		return 0;
-	} else {
-		if ((~fusectl & AE_MASK_C2XXX) == 0x3) {
-			/*
-			 * With both AEs enabled we get spurious completions on
-			 * ETR rings.  Work around that for now by simply
-			 * disabling the second AE.
-			 */
-			device_printf(sc->sc_dev, "disabling second AE\n");
-			fusectl |= 0x2;
-		}
-		return ((~fusectl) & AE_MASK_C2XXX);
-	}
-}
-
-static enum qat_sku
-qat_c2xxx_get_sku(struct qat_softc *sc)
-{
-	uint32_t fusectl;
-
-	fusectl = pci_read_config(sc->sc_dev, FUSECTL_REG, 4);
-
-	switch (sc->sc_ae_num) {
-	case 1:
-		if (fusectl & FUSECTL_C2XXX_LOW_SKU)
-			return QAT_SKU_3;
-		else if (fusectl & FUSECTL_C2XXX_MID_SKU)
-			return QAT_SKU_2;
-		break;
-	case MAX_AE_C2XXX:
-		return QAT_SKU_1;
-	}
-
-	return QAT_SKU_UNKNOWN;
-}
-
-static uint32_t
-qat_c2xxx_get_accel_cap(struct qat_softc *sc)
-{
-	return QAT_ACCEL_CAP_CRYPTO_SYMMETRIC |
-	    QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC |
-	    QAT_ACCEL_CAP_CIPHER |
-	    QAT_ACCEL_CAP_AUTHENTICATION;
-}
-
-static const char *
-qat_c2xxx_get_fw_uof_name(struct qat_softc *sc)
-{
-	if (sc->sc_rev < QAT_REVID_C2XXX_B0)
-		return AE_FW_UOF_NAME_C2XXX_A0;
-
-	/* QAT_REVID_C2XXX_B0 and QAT_REVID_C2XXX_C0 */
-	return AE_FW_UOF_NAME_C2XXX_B0;
-}
-
-static void
-qat_c2xxx_enable_intr(struct
qat_softc *sc) -{ - - qat_misc_write_4(sc, EP_SMIA_C2XXX, EP_SMIA_MASK_C2XXX); -} - -static void -qat_c2xxx_init_etr_intr(struct qat_softc *sc, int bank) -{ - /* - * For now, all rings within the bank are setup such that the generation - * of flag interrupts will be triggered when ring leaves the empty - * state. Note that in order for the ring interrupt to generate an IRQ - * the interrupt must also be enabled for the ring. - */ - qat_etr_bank_write_4(sc, bank, ETR_INT_SRCSEL, - ETR_INT_SRCSEL_MASK_0_C2XXX); - qat_etr_bank_write_4(sc, bank, ETR_INT_SRCSEL_2, - ETR_INT_SRCSEL_MASK_X_C2XXX); -} - -const struct qat_hw qat_hw_c2xxx = { - .qhw_sram_bar_id = BAR_SRAM_ID_C2XXX, - .qhw_misc_bar_id = BAR_PMISC_ID_C2XXX, - .qhw_etr_bar_id = BAR_ETR_ID_C2XXX, - .qhw_cap_global_offset = CAP_GLOBAL_OFFSET_C2XXX, - .qhw_ae_offset = AE_OFFSET_C2XXX, - .qhw_ae_local_offset = AE_LOCAL_OFFSET_C2XXX, - .qhw_etr_bundle_size = ETR_BUNDLE_SIZE_C2XXX, - .qhw_num_banks = ETR_MAX_BANKS_C2XXX, - .qhw_num_ap_banks = ETR_MAX_AP_BANKS_C2XXX, - .qhw_num_rings_per_bank = ETR_MAX_RINGS_PER_BANK, - .qhw_num_accel = MAX_ACCEL_C2XXX, - .qhw_num_engines = MAX_AE_C2XXX, - .qhw_tx_rx_gap = ETR_TX_RX_GAP_C2XXX, - .qhw_tx_rings_mask = ETR_TX_RINGS_MASK_C2XXX, - .qhw_msix_ae_vec_gap = MSIX_AE_VEC_GAP_C2XXX, - .qhw_fw_auth = false, - .qhw_fw_req_size = FW_REQ_DEFAULT_SZ_HW15, - .qhw_fw_resp_size = FW_REQ_DEFAULT_SZ_HW15, - .qhw_ring_asym_tx = 2, - .qhw_ring_asym_rx = 3, - .qhw_ring_sym_tx = 4, - .qhw_ring_sym_rx = 5, - .qhw_mof_fwname = AE_FW_MOF_NAME_C2XXX, - .qhw_mmp_fwname = AE_FW_MMP_NAME_C2XXX, - .qhw_prod_type = AE_FW_PROD_TYPE_C2XXX, - .qhw_get_accel_mask = qat_c2xxx_get_accel_mask, - .qhw_get_ae_mask = qat_c2xxx_get_ae_mask, - .qhw_get_sku = qat_c2xxx_get_sku, - .qhw_get_accel_cap = qat_c2xxx_get_accel_cap, - .qhw_get_fw_uof_name = qat_c2xxx_get_fw_uof_name, - .qhw_enable_intr = qat_c2xxx_enable_intr, - .qhw_init_etr_intr = qat_c2xxx_init_etr_intr, - .qhw_init_admin_comms = qat_adm_ring_init, - 
.qhw_send_admin_init = qat_adm_ring_send_init, - .qhw_crypto_setup_desc = qat_hw15_crypto_setup_desc, - .qhw_crypto_setup_req_params = qat_hw15_crypto_setup_req_params, - .qhw_crypto_opaque_offset = - offsetof(struct fw_la_resp, comn_resp.opaque_data), -}; Index: sys/dev/qat/qat_c2xxxreg.h =================================================================== --- sys/dev/qat/qat_c2xxxreg.h +++ /dev/null @@ -1,177 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat_c2xxxreg.h,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. 
- */ - -/* - * Copyright(c) 2007-2013 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */ - -/* $FreeBSD$ */ - -#ifndef _DEV_PCI_QAT_C2XXXREG_H_ -#define _DEV_PCI_QAT_C2XXXREG_H_ - -/* PCI revision IDs */ -#define QAT_REVID_C2XXX_A0 0x00 -#define QAT_REVID_C2XXX_B0 0x02 -#define QAT_REVID_C2XXX_C0 0x03 - -/* Max number of accelerators and engines */ -#define MAX_ACCEL_C2XXX 1 -#define MAX_AE_C2XXX 2 - -/* PCIe BAR index */ -#define BAR_SRAM_ID_C2XXX NO_PCI_REG -#define BAR_PMISC_ID_C2XXX 0 -#define BAR_ETR_ID_C2XXX 1 - -#define ACCEL_MASK_C2XXX 0x1 -#define AE_MASK_C2XXX 0x3 - -#define MSIX_AE_VEC_GAP_C2XXX 8 - -/* PCIe configuration space registers */ -/* PESRAM: 512K eSRAM */ -#define BAR_PESRAM_C2XXX NO_PCI_REG -#define BAR_PESRAM_SIZE_C2XXX 0 - -/* - * PMISC: 16K CAP, 16K Scratch, 32K SSU(QATs), - * 32K AE CSRs and transfer registers, 8K CHAP/PMU, - * 4K EP CSRs, 4K MSI-X Tables - */ -#define BAR_PMISC_C2XXX 0x18 -#define BAR_PMISC_SIZE_C2XXX 0x20000 /* 128K */ - -/* PETRINGCSR: 8K 16 bundles of ET Ring CSRs */ -#define BAR_PETRINGCSR_C2XXX 0x20 -#define BAR_PETRINGCSR_SIZE_C2XXX 0x4000 /* 16K */ - -/* Fuse Control */ -#define FUSECTL_C2XXX_PKE_DISABLE (1 << 6) -#define FUSECTL_C2XXX_ATH_DISABLE (1 << 5) -#define FUSECTL_C2XXX_CPH_DISABLE (1 << 4) -#define FUSECTL_C2XXX_LOW_SKU (1 << 3) -#define FUSECTL_C2XXX_MID_SKU (1 << 2) -#define FUSECTL_C2XXX_AE1_DISABLE (1 << 1) - -/* SINT: Signal Target Raw Interrupt Register */ -#define EP_SINTPF_C2XXX 0x1A024 - -/* SMIA: Signal Target IA Mask Register */ -#define EP_SMIA_C2XXX 0x1A028 -#define EP_SMIA_BUNDLES_IRQ_MASK_C2XXX 0xFF -#define EP_SMIA_AE_IRQ_MASK_C2XXX 0x10000 -#define EP_SMIA_MASK_C2XXX \ - (EP_SMIA_BUNDLES_IRQ_MASK_C2XXX | EP_SMIA_AE_IRQ_MASK_C2XXX) - -#define EP_RIMISCCTL_C2XXX 0x1A0C4 -#define EP_RIMISCCTL_MASK_C2XXX 0x40000000 - -#define PFCGCIOSFPRIR_REG_C2XXX 0x2C0 -#define PFCGCIOSFPRIR_MASK_C2XXX 0XFFFF7FFF - -/* BAR sub-regions */ -#define PESRAM_BAR_C2XXX NO_PCI_REG -#define PESRAM_OFFSET_C2XXX 0x0 -#define PESRAM_SIZE_C2XXX 0x0 -#define CAP_GLOBAL_BAR_C2XXX BAR_PMISC_C2XXX 
-#define CAP_GLOBAL_OFFSET_C2XXX 0x00000 -#define CAP_GLOBAL_SIZE_C2XXX 0x04000 -#define CAP_HASH_OFFSET 0x900 -#define SCRATCH_BAR_C2XXX NO_PCI_REG -#define SCRATCH_OFFSET_C2XXX NO_REG_OFFSET -#define SCRATCH_SIZE_C2XXX 0x0 -#define SSU_BAR_C2XXX BAR_PMISC_C2XXX -#define SSU_OFFSET_C2XXX 0x08000 -#define SSU_SIZE_C2XXX 0x08000 -#define AE_BAR_C2XXX BAR_PMISC_C2XXX -#define AE_OFFSET_C2XXX 0x10000 -#define AE_LOCAL_OFFSET_C2XXX 0x10800 -#define PMU_BAR_C2XXX NO_PCI_REG -#define PMU_OFFSET_C2XXX NO_REG_OFFSET -#define PMU_SIZE_C2XXX 0x0 -#define EP_BAR_C2XXX BAR_PMISC_C2XXX -#define EP_OFFSET_C2XXX 0x1A000 -#define EP_SIZE_C2XXX 0x01000 -#define MSIX_TAB_BAR_C2XXX NO_PCI_REG /* mapped by pci(9) */ -#define MSIX_TAB_OFFSET_C2XXX 0x1B000 -#define MSIX_TAB_SIZE_C2XXX 0x01000 -#define PETRINGCSR_BAR_C2XXX BAR_PETRINGCSR_C2XXX -#define PETRINGCSR_OFFSET_C2XXX 0x0 -#define PETRINGCSR_SIZE_C2XXX 0x0 /* use size of BAR */ - -/* ETR */ -#define ETR_MAX_BANKS_C2XXX 8 -#define ETR_MAX_ET_RINGS_C2XXX \ - (ETR_MAX_BANKS_C2XXX * ETR_MAX_RINGS_PER_BANK_C2XXX) -#define ETR_MAX_AP_BANKS_C2XXX 4 - -#define ETR_TX_RX_GAP_C2XXX 1 -#define ETR_TX_RINGS_MASK_C2XXX 0x51 - -#define ETR_BUNDLE_SIZE_C2XXX 0x0200 - -/* Initial bank Interrupt Source mask */ -#define ETR_INT_SRCSEL_MASK_0_C2XXX 0x4444444CUL -#define ETR_INT_SRCSEL_MASK_X_C2XXX 0x44444444UL - -/* AE firmware */ -#define AE_FW_PROD_TYPE_C2XXX 0x00800000 -#define AE_FW_MOF_NAME_C2XXX "qat_c2xxxfw" -#define AE_FW_MMP_NAME_C2XXX "mmp_firmware_c2xxx" -#define AE_FW_UOF_NAME_C2XXX_A0 "icp_qat_nae.uof" -#define AE_FW_UOF_NAME_C2XXX_B0 "icp_qat_nae_b0.uof" - -#endif Index: sys/dev/qat/qat_c3xxx.c =================================================================== --- sys/dev/qat/qat_c3xxx.c +++ /dev/null @@ -1,298 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat_c3xxx.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. 
- * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2014 Intel Corporation. - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. 
- * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#include <sys/cdefs.h>
-__FBSDID("$FreeBSD$");
-#if 0
-__KERNEL_RCSID(0, "$NetBSD: qat_c3xxx.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $");
-#endif
-
-#include <sys/param.h>
-#include <sys/bus.h>
-#include <sys/systm.h>
-
-#include <machine/bus.h>
-
-#include <dev/pci/pcireg.h>
-#include <dev/pci/pcivar.h>
-
-#include "qatreg.h"
-#include "qat_hw17reg.h"
-#include "qat_c3xxxreg.h"
-#include "qatvar.h"
-#include "qat_hw17var.h"
-
-static uint32_t
-qat_c3xxx_get_accel_mask(struct qat_softc *sc)
-{
-	uint32_t fusectl, strap;
-
-	fusectl = pci_read_config(sc->sc_dev, FUSECTL_REG, 4);
-	strap = pci_read_config(sc->sc_dev, SOFTSTRAP_REG_C3XXX, 4);
-
-	return (((~(fusectl | strap)) >> ACCEL_REG_OFFSET_C3XXX) &
-	    ACCEL_MASK_C3XXX);
-}
-
-static uint32_t
-qat_c3xxx_get_ae_mask(struct qat_softc *sc)
-{
-	uint32_t fusectl, me_strap, me_disable, ssms_disabled;
-
-	fusectl = pci_read_config(sc->sc_dev, FUSECTL_REG, 4);
-	me_strap = pci_read_config(sc->sc_dev, SOFTSTRAP_REG_C3XXX, 4);
-
-	/* If SSMs are disabled, then disable the corresponding MEs */
-	ssms_disabled = (~qat_c3xxx_get_accel_mask(sc)) & ACCEL_MASK_C3XXX;
-	me_disable = 0x3;
-	while (ssms_disabled) {
-		if (ssms_disabled & 1)
-			me_strap |= me_disable;
-		ssms_disabled >>= 1;
-		me_disable <<= 2;
-	}
-
-	return (~(fusectl | me_strap)) & AE_MASK_C3XXX;
-}
-
-static enum qat_sku
-qat_c3xxx_get_sku(struct qat_softc *sc)
-{
-	switch (sc->sc_ae_num) {
-	case MAX_AE_C3XXX:
-		return QAT_SKU_4;
-	}
-
-	return QAT_SKU_UNKNOWN;
-}
-
-static uint32_t
-qat_c3xxx_get_accel_cap(struct qat_softc *sc)
-{
-	uint32_t cap, legfuse, strap;
-
-	legfuse = pci_read_config(sc->sc_dev, LEGFUSE_REG, 4);
-	strap = pci_read_config(sc->sc_dev, SOFTSTRAP_REG_C3XXX, 4);
-
-	cap = QAT_ACCEL_CAP_CRYPTO_SYMMETRIC +
-	    QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC +
-	    QAT_ACCEL_CAP_CIPHER +
-	    QAT_ACCEL_CAP_AUTHENTICATION +
-	    QAT_ACCEL_CAP_COMPRESSION +
-	    QAT_ACCEL_CAP_ZUC +
-	    QAT_ACCEL_CAP_SHA3;
-
-	if (legfuse & LEGFUSE_ACCEL_MASK_CIPHER_SLICE) {
-		cap &= ~QAT_ACCEL_CAP_CRYPTO_SYMMETRIC;
-		cap &= ~QAT_ACCEL_CAP_CIPHER;
-	}
-	if (legfuse &
LEGFUSE_ACCEL_MASK_AUTH_SLICE) - cap &= ~QAT_ACCEL_CAP_AUTHENTICATION; - if (legfuse & LEGFUSE_ACCEL_MASK_PKE_SLICE) - cap &= ~QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC; - if (legfuse & LEGFUSE_ACCEL_MASK_COMPRESS_SLICE) - cap &= ~QAT_ACCEL_CAP_COMPRESSION; - if (legfuse & LEGFUSE_ACCEL_MASK_EIA3_SLICE) - cap &= ~QAT_ACCEL_CAP_ZUC; - - if ((strap | legfuse) & SOFTSTRAP_SS_POWERGATE_PKE_C3XXX) - cap &= ~QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC; - if ((strap | legfuse) & SOFTSTRAP_SS_POWERGATE_CY_C3XXX) - cap &= ~QAT_ACCEL_CAP_COMPRESSION; - - return cap; -} - -static const char * -qat_c3xxx_get_fw_uof_name(struct qat_softc *sc) -{ - - return AE_FW_UOF_NAME_C3XXX; -} - -static void -qat_c3xxx_enable_intr(struct qat_softc *sc) -{ - - /* Enable bundle and misc interrupts */ - qat_misc_write_4(sc, SMIAPF0_C3XXX, SMIA0_MASK_C3XXX); - qat_misc_write_4(sc, SMIAPF1_C3XXX, SMIA1_MASK_C3XXX); -} - -/* Worker thread to service arbiter mappings */ -static uint32_t thrd_to_arb_map[] = { - 0x12222AAA, 0x11222AAA, 0x12222AAA, - 0x11222AAA, 0x12222AAA, 0x11222AAA -}; - -static void -qat_c3xxx_get_arb_mapping(struct qat_softc *sc, const uint32_t **arb_map_config) -{ - int i; - - for (i = 1; i < MAX_AE_C3XXX; i++) { - if ((~sc->sc_ae_mask) & (1 << i)) - thrd_to_arb_map[i] = 0; - } - *arb_map_config = thrd_to_arb_map; -} - -static void -qat_c3xxx_enable_error_interrupts(struct qat_softc *sc) -{ - qat_misc_write_4(sc, ERRMSK0, ERRMSK0_CERR_C3XXX); /* ME0-ME3 */ - qat_misc_write_4(sc, ERRMSK1, ERRMSK1_CERR_C3XXX); /* ME4-ME5 */ - qat_misc_write_4(sc, ERRMSK5, ERRMSK5_CERR_C3XXX); /* SSM2 */ - - /* Reset everything except VFtoPF1_16. */ - qat_misc_read_write_and_4(sc, ERRMSK3, VF2PF1_16_C3XXX); - - /* RI CPP bus interface error detection and reporting. */ - qat_misc_write_4(sc, RICPPINTCTL_C3XXX, RICPP_EN_C3XXX); - - /* TI CPP bus interface error detection and reporting. */ - qat_misc_write_4(sc, TICPPINTCTL_C3XXX, TICPP_EN_C3XXX); - - /* Enable CFC Error interrupts and logging. 
*/ - qat_misc_write_4(sc, CPP_CFC_ERR_CTRL_C3XXX, CPP_CFC_UE_C3XXX); -} - -static void -qat_c3xxx_disable_error_interrupts(struct qat_softc *sc) -{ - /* ME0-ME3 */ - qat_misc_write_4(sc, ERRMSK0, ERRMSK0_UERR_C3XXX | ERRMSK0_CERR_C3XXX); - /* ME4-ME5 */ - qat_misc_write_4(sc, ERRMSK1, ERRMSK1_UERR_C3XXX | ERRMSK1_CERR_C3XXX); - /* CPP Push Pull, RI, TI, SSM0-SSM1, CFC */ - qat_misc_write_4(sc, ERRMSK3, ERRMSK3_UERR_C3XXX); - /* SSM2 */ - qat_misc_write_4(sc, ERRMSK5, ERRMSK5_UERR_C3XXX); -} - -static void -qat_c3xxx_enable_error_correction(struct qat_softc *sc) -{ - u_int i, mask; - - /* Enable Accel Engine error detection & correction */ - for (i = 0, mask = sc->sc_ae_mask; mask; i++, mask >>= 1) { - if (!(mask & 1)) - continue; - qat_misc_read_write_or_4(sc, AE_CTX_ENABLES_C3XXX(i), - ENABLE_AE_ECC_ERR_C3XXX); - qat_misc_read_write_or_4(sc, AE_MISC_CONTROL_C3XXX(i), - ENABLE_AE_ECC_PARITY_CORR_C3XXX); - } - - /* Enable shared memory error detection & correction */ - for (i = 0, mask = sc->sc_accel_mask; mask; i++, mask >>= 1) { - if (!(mask & 1)) - continue; - - qat_misc_read_write_or_4(sc, UERRSSMSH(i), ERRSSMSH_EN_C3XXX); - qat_misc_read_write_or_4(sc, CERRSSMSH(i), ERRSSMSH_EN_C3XXX); - qat_misc_read_write_or_4(sc, PPERR(i), PPERR_EN_C3XXX); - } - - qat_c3xxx_enable_error_interrupts(sc); -} - -const struct qat_hw qat_hw_c3xxx = { - .qhw_sram_bar_id = BAR_SRAM_ID_C3XXX, - .qhw_misc_bar_id = BAR_PMISC_ID_C3XXX, - .qhw_etr_bar_id = BAR_ETR_ID_C3XXX, - .qhw_cap_global_offset = CAP_GLOBAL_OFFSET_C3XXX, - .qhw_ae_offset = AE_OFFSET_C3XXX, - .qhw_ae_local_offset = AE_LOCAL_OFFSET_C3XXX, - .qhw_etr_bundle_size = ETR_BUNDLE_SIZE_C3XXX, - .qhw_num_banks = ETR_MAX_BANKS_C3XXX, - .qhw_num_rings_per_bank = ETR_MAX_RINGS_PER_BANK, - .qhw_num_accel = MAX_ACCEL_C3XXX, - .qhw_num_engines = MAX_AE_C3XXX, - .qhw_tx_rx_gap = ETR_TX_RX_GAP_C3XXX, - .qhw_tx_rings_mask = ETR_TX_RINGS_MASK_C3XXX, - .qhw_clock_per_sec = CLOCK_PER_SEC_C3XXX, - .qhw_fw_auth = true, - .qhw_fw_req_size = 
FW_REQ_DEFAULT_SZ_HW17, - .qhw_fw_resp_size = FW_RESP_DEFAULT_SZ_HW17, - .qhw_ring_asym_tx = 0, - .qhw_ring_asym_rx = 8, - .qhw_ring_sym_tx = 2, - .qhw_ring_sym_rx = 10, - .qhw_mof_fwname = AE_FW_MOF_NAME_C3XXX, - .qhw_mmp_fwname = AE_FW_MMP_NAME_C3XXX, - .qhw_prod_type = AE_FW_PROD_TYPE_C3XXX, - .qhw_get_accel_mask = qat_c3xxx_get_accel_mask, - .qhw_get_ae_mask = qat_c3xxx_get_ae_mask, - .qhw_get_sku = qat_c3xxx_get_sku, - .qhw_get_accel_cap = qat_c3xxx_get_accel_cap, - .qhw_get_fw_uof_name = qat_c3xxx_get_fw_uof_name, - .qhw_enable_intr = qat_c3xxx_enable_intr, - .qhw_init_admin_comms = qat_adm_mailbox_init, - .qhw_send_admin_init = qat_adm_mailbox_send_init, - .qhw_init_arb = qat_arb_init, - .qhw_get_arb_mapping = qat_c3xxx_get_arb_mapping, - .qhw_enable_error_correction = qat_c3xxx_enable_error_correction, - .qhw_disable_error_interrupts = qat_c3xxx_disable_error_interrupts, - .qhw_set_ssm_wdtimer = qat_set_ssm_wdtimer, - .qhw_check_slice_hang = qat_check_slice_hang, - .qhw_crypto_setup_desc = qat_hw17_crypto_setup_desc, - .qhw_crypto_setup_req_params = qat_hw17_crypto_setup_req_params, - .qhw_crypto_opaque_offset = offsetof(struct fw_la_resp, opaque_data), -}; Index: sys/dev/qat/qat_c3xxxreg.h =================================================================== --- sys/dev/qat/qat_c3xxxreg.h +++ /dev/null @@ -1,178 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat_c3xxxreg.h,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. 
Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2014 Intel Corporation. - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. 
- * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -/* $FreeBSD$ */ - -#ifndef _DEV_PCI_QAT_C3XXXREG_H_ -#define _DEV_PCI_QAT_C3XXXREG_H_ - -/* Max number of accelerators and engines */ -#define MAX_ACCEL_C3XXX 3 -#define MAX_AE_C3XXX 6 - -/* PCIe BAR index */ -#define BAR_SRAM_ID_C3XXX NO_PCI_REG -#define BAR_PMISC_ID_C3XXX 0 -#define BAR_ETR_ID_C3XXX 1 - -/* BAR PMISC sub-regions */ -#define AE_OFFSET_C3XXX 0x20000 -#define AE_LOCAL_OFFSET_C3XXX 0x20800 -#define CAP_GLOBAL_OFFSET_C3XXX 0x30000 - -#define SOFTSTRAP_REG_C3XXX 0x2EC -#define SOFTSTRAP_SS_POWERGATE_CY_C3XXX __BIT(23) -#define SOFTSTRAP_SS_POWERGATE_PKE_C3XXX __BIT(24) - -#define ACCEL_REG_OFFSET_C3XXX 16 -#define ACCEL_MASK_C3XXX 0x7 -#define AE_MASK_C3XXX 0x3F - -#define SMIAPF0_C3XXX 0x3A028 -#define SMIAPF1_C3XXX 0x3A030 -#define SMIA0_MASK_C3XXX 0xFFFF -#define SMIA1_MASK_C3XXX 0x1 - -/* Error detection and correction */ -#define AE_CTX_ENABLES_C3XXX(i) ((i) * 0x1000 + 0x20818) -#define AE_MISC_CONTROL_C3XXX(i) ((i) * 0x1000 + 0x20960) -#define ENABLE_AE_ECC_ERR_C3XXX __BIT(28) -#define ENABLE_AE_ECC_PARITY_CORR_C3XXX (__BIT(24) | __BIT(12)) -#define ERRSSMSH_EN_C3XXX __BIT(3) -/* BIT(2) enables the logging of push/pull data errors. 
*/ -#define PPERR_EN_C3XXX (__BIT(2)) - -/* Mask for VF2PF interrupts */ -#define VF2PF1_16_C3XXX (0xFFFF << 9) -#define ERRSOU3_VF2PF_C3XXX(errsou3) (((errsou3) & 0x01FFFE00) >> 9) -#define ERRMSK3_VF2PF_C3XXX(vf_mask) (((vf_mask) & 0xFFFF) << 9) - -/* Masks for correctable error interrupts. */ -#define ERRMSK0_CERR_C3XXX (__BIT(24) | __BIT(16) | __BIT(8) | __BIT(0)) -#define ERRMSK1_CERR_C3XXX (__BIT(8) | __BIT(0)) -#define ERRMSK5_CERR_C3XXX (0) - -/* Masks for uncorrectable error interrupts. */ -#define ERRMSK0_UERR_C3XXX (__BIT(25) | __BIT(17) | __BIT(9) | __BIT(1)) -#define ERRMSK1_UERR_C3XXX (__BIT(9) | __BIT(1)) -#define ERRMSK3_UERR_C3XXX (__BIT(6) | __BIT(5) | __BIT(4) | __BIT(3) | \ - __BIT(2) | __BIT(0)) -#define ERRMSK5_UERR_C3XXX (__BIT(16)) - -/* RI CPP control */ -#define RICPPINTCTL_C3XXX (0x3A000 + 0x110) -/* - * BIT(2) enables error detection and reporting on the RI Parity Error. - * BIT(1) enables error detection and reporting on the RI CPP Pull interface. - * BIT(0) enables error detection and reporting on the RI CPP Push interface. - */ -#define RICPP_EN_C3XXX (__BIT(2) | __BIT(1) | __BIT(0)) - -/* TI CPP control */ -#define TICPPINTCTL_C3XXX (0x3A400 + 0x138) -/* - * BIT(3) enables error detection and reporting on the ETR Parity Error. - * BIT(2) enables error detection and reporting on the TI Parity Error. - * BIT(1) enables error detection and reporting on the TI CPP Pull interface. - * BIT(0) enables error detection and reporting on the TI CPP Push interface. - */ -#define TICPP_EN_C3XXX \ - (__BIT(3) | __BIT(2) | __BIT(1) | __BIT(0)) - -/* CFC Uncorrectable Errors */ -#define CPP_CFC_ERR_CTRL_C3XXX (0x30000 + 0xC00) -/* - * BIT(1) enables interrupt. - * BIT(0) enables detecting and logging of push/pull data errors. - */ -#define CPP_CFC_UE_C3XXX (__BIT(1) | __BIT(0)) - -#define SLICEPWRDOWN_C3XXX(i) ((i) * 0x4000 + 0x2C) -/* Enabling PKE4-PKE0. 
*/ -#define MMP_PWR_UP_MSK_C3XXX \ - (__BIT(20) | __BIT(19) | __BIT(18) | __BIT(17) | __BIT(16)) - -/* CPM Uncorrectable Errors */ -#define INTMASKSSM_C3XXX(i) ((i) * 0x4000 + 0x0) -/* Disabling interrupts for correctable errors. */ -#define INTMASKSSM_UERR_C3XXX \ - (__BIT(11) | __BIT(9) | __BIT(7) | __BIT(5) | __BIT(3) | __BIT(1)) - -/* MMP */ -/* BIT(3) enables correction. */ -#define CERRSSMMMP_EN_C3XXX (__BIT(3)) - -/* BIT(3) enables logging. */ -#define UERRSSMMMP_EN_C3XXX (__BIT(3)) - -/* ETR */ -#define ETR_MAX_BANKS_C3XXX 16 -#define ETR_TX_RX_GAP_C3XXX 8 -#define ETR_TX_RINGS_MASK_C3XXX 0xFF -#define ETR_BUNDLE_SIZE_C3XXX 0x1000 - -/* AE firmware */ -#define AE_FW_PROD_TYPE_C3XXX 0x02000000 -#define AE_FW_MOF_NAME_C3XXX "qat_c3xxxfw" -#define AE_FW_MMP_NAME_C3XXX "qat_c3xxx_mmp" -#define AE_FW_UOF_NAME_C3XXX "icp_qat_ae.suof" - -/* Clock frequency */ -#define CLOCK_PER_SEC_C3XXX (685 * 1000000 / 16) - -#endif Index: sys/dev/qat/qat_c62x.c =================================================================== --- sys/dev/qat/qat_c62x.c +++ /dev/null @@ -1,314 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat_c62x.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. 
AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2014 Intel Corporation. - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <sys/cdefs.h>
-__FBSDID("$FreeBSD$");
-#if 0
-__KERNEL_RCSID(0, "$NetBSD: qat_c62x.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $");
-#endif
-
-#include <sys/param.h>
-#include <sys/bus.h>
-#include <sys/systm.h>
-
-#include <machine/bus.h>
-
-#include <dev/pci/pcireg.h>
-#include <dev/pci/pcivar.h>
-
-#include "qatreg.h"
-#include "qat_hw17reg.h"
-#include "qat_c62xreg.h"
-#include "qatvar.h"
-#include "qat_hw17var.h"
-
-static uint32_t
-qat_c62x_get_accel_mask(struct qat_softc *sc)
-{
-	uint32_t fusectl, strap;
-
-	fusectl = pci_read_config(sc->sc_dev, FUSECTL_REG, 4);
-	strap = pci_read_config(sc->sc_dev, SOFTSTRAP_REG_C62X, 4);
-
-	return (((~(fusectl | strap)) >> ACCEL_REG_OFFSET_C62X) &
-	    ACCEL_MASK_C62X);
-}
-
-static uint32_t
-qat_c62x_get_ae_mask(struct qat_softc *sc)
-{
-	uint32_t fusectl, me_strap, me_disable, ssms_disabled;
-
-	fusectl = pci_read_config(sc->sc_dev, FUSECTL_REG, 4);
-	me_strap = pci_read_config(sc->sc_dev, SOFTSTRAP_REG_C62X, 4);
-
-	/* If SSMs are disabled, then disable the corresponding MEs */
-	ssms_disabled = (~qat_c62x_get_accel_mask(sc)) & ACCEL_MASK_C62X;
-	me_disable = 0x3;
-	while (ssms_disabled) {
-		if (ssms_disabled & 1)
-			me_strap |= me_disable;
-		ssms_disabled >>= 1;
-		me_disable <<= 2;
-	}
-
-	return (~(fusectl | me_strap)) & AE_MASK_C62X;
-}
-
-static enum qat_sku
-qat_c62x_get_sku(struct qat_softc *sc)
-{
-	switch (sc->sc_ae_num) {
-	case 8:
-		return QAT_SKU_2;
-	case MAX_AE_C62X:
-		return QAT_SKU_4;
-	}
-
-	return QAT_SKU_UNKNOWN;
-}
-
-static uint32_t
-qat_c62x_get_accel_cap(struct
qat_softc *sc) -{ - uint32_t cap, legfuse, strap; - - legfuse = pci_read_config(sc->sc_dev, LEGFUSE_REG, 4); - strap = pci_read_config(sc->sc_dev, SOFTSTRAP_REG_C62X, 4); - - cap = QAT_ACCEL_CAP_CRYPTO_SYMMETRIC + - QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC + - QAT_ACCEL_CAP_CIPHER + - QAT_ACCEL_CAP_AUTHENTICATION + - QAT_ACCEL_CAP_COMPRESSION + - QAT_ACCEL_CAP_ZUC + - QAT_ACCEL_CAP_SHA3; - - if (legfuse & LEGFUSE_ACCEL_MASK_CIPHER_SLICE) { - cap &= ~QAT_ACCEL_CAP_CRYPTO_SYMMETRIC; - cap &= ~QAT_ACCEL_CAP_CIPHER; - } - if (legfuse & LEGFUSE_ACCEL_MASK_AUTH_SLICE) - cap &= ~QAT_ACCEL_CAP_AUTHENTICATION; - if (legfuse & LEGFUSE_ACCEL_MASK_PKE_SLICE) - cap &= ~QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC; - if (legfuse & LEGFUSE_ACCEL_MASK_COMPRESS_SLICE) - cap &= ~QAT_ACCEL_CAP_COMPRESSION; - if (legfuse & LEGFUSE_ACCEL_MASK_EIA3_SLICE) - cap &= ~QAT_ACCEL_CAP_ZUC; - - if ((strap | legfuse) & SOFTSTRAP_SS_POWERGATE_PKE_C62X) - cap &= ~QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC; - if ((strap | legfuse) & SOFTSTRAP_SS_POWERGATE_CY_C62X) - cap &= ~QAT_ACCEL_CAP_COMPRESSION; - - return cap; -} - -static const char * -qat_c62x_get_fw_uof_name(struct qat_softc *sc) -{ - - return AE_FW_UOF_NAME_C62X; -} - -static void -qat_c62x_enable_intr(struct qat_softc *sc) -{ - - /* Enable bundle and misc interrupts */ - qat_misc_write_4(sc, SMIAPF0_C62X, SMIA0_MASK_C62X); - qat_misc_write_4(sc, SMIAPF1_C62X, SMIA1_MASK_C62X); -} - -/* Worker thread to service arbiter mappings */ -static uint32_t thrd_to_arb_map[] = { - 0x12222AAA, 0x11222AAA, 0x12222AAA, 0x11222AAA, 0x12222AAA, - 0x11222AAA, 0x12222AAA, 0x11222AAA, 0x12222AAA, 0x11222AAA -}; - -static void -qat_c62x_get_arb_mapping(struct qat_softc *sc, const uint32_t **arb_map_config) -{ - int i; - - for (i = 1; i < MAX_AE_C62X; i++) { - if ((~sc->sc_ae_mask) & (1 << i)) - thrd_to_arb_map[i] = 0; - } - *arb_map_config = thrd_to_arb_map; -} - -static void -qat_c62x_enable_error_interrupts(struct qat_softc *sc) -{ - qat_misc_write_4(sc, ERRMSK0, ERRMSK0_CERR_C62X); /* 
ME0-ME3 */ - qat_misc_write_4(sc, ERRMSK1, ERRMSK1_CERR_C62X); /* ME4-ME7 */ - qat_misc_write_4(sc, ERRMSK4, ERRMSK4_CERR_C62X); /* ME8-ME9 */ - qat_misc_write_4(sc, ERRMSK5, ERRMSK5_CERR_C62X); /* SSM2-SSM4 */ - - /* Reset everything except VFtoPF1_16. */ - qat_misc_read_write_and_4(sc, ERRMSK3, VF2PF1_16_C62X); - /* Disable Secure RAM correctable error interrupt */ - qat_misc_read_write_or_4(sc, ERRMSK3, ERRMSK3_CERR_C62X); - - /* RI CPP bus interface error detection and reporting. */ - qat_misc_write_4(sc, RICPPINTCTL_C62X, RICPP_EN_C62X); - - /* TI CPP bus interface error detection and reporting. */ - qat_misc_write_4(sc, TICPPINTCTL_C62X, TICPP_EN_C62X); - - /* Enable CFC Error interrupts and logging. */ - qat_misc_write_4(sc, CPP_CFC_ERR_CTRL_C62X, CPP_CFC_UE_C62X); - - /* Enable SecureRAM to fix and log Correctable errors */ - qat_misc_write_4(sc, SECRAMCERR_C62X, SECRAM_CERR_C62X); - - /* Enable SecureRAM Uncorrectable error interrupts and logging */ - qat_misc_write_4(sc, SECRAMUERR, SECRAM_UERR_C62X); - - /* Enable Push/Pull Misc Uncorrectable error interrupts and logging */ - qat_misc_write_4(sc, CPPMEMTGTERR, TGT_UERR_C62X); -} - -static void -qat_c62x_disable_error_interrupts(struct qat_softc *sc) -{ - /* ME0-ME3 */ - qat_misc_write_4(sc, ERRMSK0, ERRMSK0_UERR_C62X | ERRMSK0_CERR_C62X); - /* ME4-ME7 */ - qat_misc_write_4(sc, ERRMSK1, ERRMSK1_UERR_C62X | ERRMSK1_CERR_C62X); - /* Secure RAM, CPP Push Pull, RI, TI, SSM0-SSM1, CFC */ - qat_misc_write_4(sc, ERRMSK3, ERRMSK3_UERR_C62X | ERRMSK3_CERR_C62X); - /* ME8-ME9 */ - qat_misc_write_4(sc, ERRMSK4, ERRMSK4_UERR_C62X | ERRMSK4_CERR_C62X); - /* SSM2-SSM4 */ - qat_misc_write_4(sc, ERRMSK5, ERRMSK5_UERR_C62X | ERRMSK5_CERR_C62X); -} - -static void -qat_c62x_enable_error_correction(struct qat_softc *sc) -{ - u_int i, mask; - - /* Enable Accel Engine error detection & correction */ - for (i = 0, mask = sc->sc_ae_mask; mask; i++, mask >>= 1) { - if (!(mask & 1)) - continue; - qat_misc_read_write_or_4(sc, 
AE_CTX_ENABLES_C62X(i), - ENABLE_AE_ECC_ERR_C62X); - qat_misc_read_write_or_4(sc, AE_MISC_CONTROL_C62X(i), - ENABLE_AE_ECC_PARITY_CORR_C62X); - } - - /* Enable shared memory error detection & correction */ - for (i = 0, mask = sc->sc_accel_mask; mask; i++, mask >>= 1) { - if (!(mask & 1)) - continue; - - qat_misc_read_write_or_4(sc, UERRSSMSH(i), ERRSSMSH_EN_C62X); - qat_misc_read_write_or_4(sc, CERRSSMSH(i), ERRSSMSH_EN_C62X); - qat_misc_read_write_or_4(sc, PPERR(i), PPERR_EN_C62X); - } - - qat_c62x_enable_error_interrupts(sc); -} - -const struct qat_hw qat_hw_c62x = { - .qhw_sram_bar_id = BAR_SRAM_ID_C62X, - .qhw_misc_bar_id = BAR_PMISC_ID_C62X, - .qhw_etr_bar_id = BAR_ETR_ID_C62X, - .qhw_cap_global_offset = CAP_GLOBAL_OFFSET_C62X, - .qhw_ae_offset = AE_OFFSET_C62X, - .qhw_ae_local_offset = AE_LOCAL_OFFSET_C62X, - .qhw_etr_bundle_size = ETR_BUNDLE_SIZE_C62X, - .qhw_num_banks = ETR_MAX_BANKS_C62X, - .qhw_num_rings_per_bank = ETR_MAX_RINGS_PER_BANK, - .qhw_num_accel = MAX_ACCEL_C62X, - .qhw_num_engines = MAX_AE_C62X, - .qhw_tx_rx_gap = ETR_TX_RX_GAP_C62X, - .qhw_tx_rings_mask = ETR_TX_RINGS_MASK_C62X, - .qhw_clock_per_sec = CLOCK_PER_SEC_C62X, - .qhw_fw_auth = true, - .qhw_fw_req_size = FW_REQ_DEFAULT_SZ_HW17, - .qhw_fw_resp_size = FW_RESP_DEFAULT_SZ_HW17, - .qhw_ring_asym_tx = 0, - .qhw_ring_asym_rx = 8, - .qhw_ring_sym_tx = 2, - .qhw_ring_sym_rx = 10, - .qhw_mof_fwname = AE_FW_MOF_NAME_C62X, - .qhw_mmp_fwname = AE_FW_MMP_NAME_C62X, - .qhw_prod_type = AE_FW_PROD_TYPE_C62X, - .qhw_get_accel_mask = qat_c62x_get_accel_mask, - .qhw_get_ae_mask = qat_c62x_get_ae_mask, - .qhw_get_sku = qat_c62x_get_sku, - .qhw_get_accel_cap = qat_c62x_get_accel_cap, - .qhw_get_fw_uof_name = qat_c62x_get_fw_uof_name, - .qhw_enable_intr = qat_c62x_enable_intr, - .qhw_init_admin_comms = qat_adm_mailbox_init, - .qhw_send_admin_init = qat_adm_mailbox_send_init, - .qhw_init_arb = qat_arb_init, - .qhw_get_arb_mapping = qat_c62x_get_arb_mapping, - .qhw_enable_error_correction = 
qat_c62x_enable_error_correction, - .qhw_disable_error_interrupts = qat_c62x_disable_error_interrupts, - .qhw_set_ssm_wdtimer = qat_set_ssm_wdtimer, - .qhw_check_slice_hang = qat_check_slice_hang, - .qhw_crypto_setup_desc = qat_hw17_crypto_setup_desc, - .qhw_crypto_setup_req_params = qat_hw17_crypto_setup_req_params, - .qhw_crypto_opaque_offset = offsetof(struct fw_la_resp, opaque_data), -}; Index: sys/dev/qat/qat_c62xreg.h =================================================================== --- sys/dev/qat/qat_c62xreg.h +++ /dev/null @@ -1,201 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat_c62xreg.h,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2014 Intel Corporation. - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -/* $FreeBSD$ */ - -#ifndef _DEV_PCI_QAT_C62XREG_H_ -#define _DEV_PCI_QAT_C62XREG_H_ - -/* Max number of accelerators and engines */ -#define MAX_ACCEL_C62X 5 -#define MAX_AE_C62X 10 - -/* PCIe BAR index */ -#define BAR_SRAM_ID_C62X 0 -#define BAR_PMISC_ID_C62X 1 -#define BAR_ETR_ID_C62X 2 - -/* BAR PMISC sub-regions */ -#define AE_OFFSET_C62X 0x20000 -#define AE_LOCAL_OFFSET_C62X 0x20800 -#define CAP_GLOBAL_OFFSET_C62X 0x30000 - -#define SOFTSTRAP_REG_C62X 0x2EC -#define SOFTSTRAP_SS_POWERGATE_CY_C62X __BIT(23) -#define SOFTSTRAP_SS_POWERGATE_PKE_C62X __BIT(24) - -#define ACCEL_REG_OFFSET_C62X 16 -#define ACCEL_MASK_C62X 0x1F -#define AE_MASK_C62X 0x3FF - -#define SMIAPF0_C62X 0x3A028 -#define SMIAPF1_C62X 0x3A030 -#define SMIA0_MASK_C62X 0xFFFF -#define SMIA1_MASK_C62X 0x1 - -/* Error detection and correction */ -#define AE_CTX_ENABLES_C62X(i) ((i) * 0x1000 + 0x20818) -#define AE_MISC_CONTROL_C62X(i) ((i) * 0x1000 + 0x20960) -#define ENABLE_AE_ECC_ERR_C62X __BIT(28) -#define ENABLE_AE_ECC_PARITY_CORR_C62X (__BIT(24) | __BIT(12)) -#define ERRSSMSH_EN_C62X __BIT(3) -/* BIT(2) enables the logging of push/pull data errors. */ -#define PPERR_EN_C62X (__BIT(2)) - -/* Mask for VF2PF interrupts */ -#define VF2PF1_16_C62X (0xFFFF << 9) -#define ERRSOU3_VF2PF_C62X(errsou3) (((errsou3) & 0x01FFFE00) >> 9) -#define ERRMSK3_VF2PF_C62X(vf_mask) (((vf_mask) & 0xFFFF) << 9) - -/* Masks for correctable error interrupts. 
*/ -#define ERRMSK0_CERR_C62X (__BIT(24) | __BIT(16) | __BIT(8) | __BIT(0)) -#define ERRMSK1_CERR_C62X (__BIT(24) | __BIT(16) | __BIT(8) | __BIT(0)) -#define ERRMSK3_CERR_C62X (__BIT(7)) -#define ERRMSK4_CERR_C62X (__BIT(8) | __BIT(0)) -#define ERRMSK5_CERR_C62X (0) - -/* Masks for uncorrectable error interrupts. */ -#define ERRMSK0_UERR_C62X (__BIT(25) | __BIT(17) | __BIT(9) | __BIT(1)) -#define ERRMSK1_UERR_C62X (__BIT(25) | __BIT(17) | __BIT(9) | __BIT(1)) -#define ERRMSK3_UERR_C62X (__BIT(8) | __BIT(6) | __BIT(5) | __BIT(4) | \ - __BIT(3) | __BIT(2) | __BIT(0)) -#define ERRMSK4_UERR_C62X (__BIT(9) | __BIT(1)) -#define ERRMSK5_UERR_C62X (__BIT(18) | __BIT(17) | __BIT(16)) - -/* RI CPP control */ -#define RICPPINTCTL_C62X (0x3A000 + 0x110) -/* - * BIT(2) enables error detection and reporting on the RI Parity Error. - * BIT(1) enables error detection and reporting on the RI CPP Pull interface. - * BIT(0) enables error detection and reporting on the RI CPP Push interface. - */ -#define RICPP_EN_C62X (__BIT(2) | __BIT(1) | __BIT(0)) - -/* TI CPP control */ -#define TICPPINTCTL_C62X (0x3A400 + 0x138) -/* - * BIT(3) enables error detection and reporting on the ETR Parity Error. - * BIT(2) enables error detection and reporting on the TI Parity Error. - * BIT(1) enables error detection and reporting on the TI CPP Pull interface. - * BIT(0) enables error detection and reporting on the TI CPP Push interface. - */ -#define TICPP_EN_C62X \ - (__BIT(4) | __BIT(3) | __BIT(2) | __BIT(1) | __BIT(0)) - -/* CFC Uncorrectable Errors */ -#define CPP_CFC_ERR_CTRL_C62X (0x30000 + 0xC00) -/* - * BIT(1) enables interrupt. - * BIT(0) enables detecting and logging of push/pull data errors. - */ -#define CPP_CFC_UE_C62X (__BIT(1) | __BIT(0)) - -/* Correctable SecureRAM Error Reg */ -#define SECRAMCERR_C62X (0x3AC00 + 0x00) -/* BIT(3) enables fixing and logging of correctable errors. 
*/ -#define SECRAM_CERR_C62X (__BIT(3)) - -/* Uncorrectable SecureRAM Error Reg */ -/* - * BIT(17) enables interrupt. - * BIT(3) enables detecting and logging of uncorrectable errors. - */ -#define SECRAM_UERR_C62X (__BIT(17) | __BIT(3)) - -/* Miscellaneous Memory Target Errors Register */ -/* - * BIT(3) enables detecting and logging push/pull data errors. - * BIT(2) enables interrupt. - */ -#define TGT_UERR_C62X (__BIT(3) | __BIT(2)) - - -#define SLICEPWRDOWN_C62X(i) ((i) * 0x4000 + 0x2C) -/* Enabling PKE4-PKE0. */ -#define MMP_PWR_UP_MSK_C62X \ - (__BIT(20) | __BIT(19) | __BIT(18) | __BIT(17) | __BIT(16)) - -/* CPM Uncorrectable Errors */ -#define INTMASKSSM_C62X(i) ((i) * 0x4000 + 0x0) -/* Disabling interrupts for correctable errors. */ -#define INTMASKSSM_UERR_C62X \ - (__BIT(11) | __BIT(9) | __BIT(7) | __BIT(5) | __BIT(3) | __BIT(1)) - -/* MMP */ -/* BIT(3) enables correction. */ -#define CERRSSMMMP_EN_C62X (__BIT(3)) - -/* BIT(3) enables logging. */ -#define UERRSSMMMP_EN_C62X (__BIT(3)) - -/* ETR */ -#define ETR_MAX_BANKS_C62X 16 -#define ETR_TX_RX_GAP_C62X 8 -#define ETR_TX_RINGS_MASK_C62X 0xFF -#define ETR_BUNDLE_SIZE_C62X 0x1000 - -/* AE firmware */ -#define AE_FW_PROD_TYPE_C62X 0x01000000 -#define AE_FW_MOF_NAME_C62X "qat_c62xfw" -#define AE_FW_MMP_NAME_C62X "qat_c62x_mmp" -#define AE_FW_UOF_NAME_C62X "icp_qat_ae.suof" - -/* Clock frequency */ -#define CLOCK_PER_SEC_C62X (685 * 1000000 / 16) - -#endif Index: sys/dev/qat/qat_common/adf_accel_engine.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_accel_engine.c @@ -0,0 +1,267 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "icp_qat_uclo.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_init_admin.h" +#include "adf_cfg_strings.h" +#include 
"adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include +#include +#include "adf_cfg.h" +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "icp_qat_uclo.h" +#include "icp_qat_hw.h" + +#define MMP_VERSION_LEN 4 + +struct adf_mmp_version_s { + u8 ver_val[MMP_VERSION_LEN]; +}; + +static int +request_firmware(const struct firmware **firmware_p, const char *name) +{ + int retval = 0; + if (NULL == firmware_p) { + return -1; + } + *firmware_p = firmware_get(name); + if (NULL == *firmware_p) { + retval = -1; + } + return retval; +} + +int +adf_ae_fw_load(struct adf_accel_dev *accel_dev) +{ + struct adf_fw_loader_data *loader_data = accel_dev->fw_loader; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + const void *fw_addr, *mmp_addr; + u32 fw_size, mmp_size; + s32 i = 0; + u32 max_objs = 1; + const char *obj_name = NULL; + struct adf_mmp_version_s mmp_ver = { { 0 } }; + unsigned int cfg_ae_mask = 0; + + if (!hw_device->fw_name) + return 0; + + if (request_firmware(&loader_data->uof_fw, hw_device->fw_name)) { + device_printf(GET_DEV(accel_dev), + "Failed to load UOF FW %s\n", + hw_device->fw_name); + goto out_err; + } + + if (request_firmware(&loader_data->mmp_fw, hw_device->fw_mmp_name)) { + device_printf(GET_DEV(accel_dev), + "Failed to load MMP FW %s\n", + hw_device->fw_mmp_name); + goto out_err; + } + + fw_size = loader_data->uof_fw->datasize; + fw_addr = loader_data->uof_fw->data; + mmp_size = loader_data->mmp_fw->datasize; + mmp_addr = loader_data->mmp_fw->data; + + memcpy(&mmp_ver, mmp_addr, MMP_VERSION_LEN); + + accel_dev->fw_versions.mmp_version_major = mmp_ver.ver_val[0]; + accel_dev->fw_versions.mmp_version_minor = mmp_ver.ver_val[1]; + accel_dev->fw_versions.mmp_version_patch = mmp_ver.ver_val[2]; + + if (hw_device->accel_capabilities_mask & + ADF_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC) + if (qat_uclo_wr_mimage(loader_data->fw_loader, + mmp_addr, + mmp_size)) { + device_printf(GET_DEV(accel_dev), + "Failed 
to load MMP\n"); + goto out_err; + } + + if (hw_device->get_objs_num) + max_objs = hw_device->get_objs_num(accel_dev); + + for (i = max_objs - 1; i >= 0; i--) { + /* obj_name is used to indicate the firmware name in MOF, + * config unit0 must be loaded at end for authentication + */ + if (hw_device->get_obj_name && hw_device->get_obj_cfg_ae_mask) { + unsigned long service_mask = hw_device->service_mask; + + if (hw_device->service_mask && + !(test_bit(i, &service_mask))) + continue; + obj_name = hw_device->get_obj_name(accel_dev, BIT(i)); + if (!obj_name) { + device_printf( + GET_DEV(accel_dev), + "Invalid object (service = %lx)\n", + BIT(i)); + goto out_err; + } + if (!hw_device->get_obj_cfg_ae_mask(accel_dev, BIT(i))) + continue; + cfg_ae_mask = + hw_device->get_obj_cfg_ae_mask(accel_dev, BIT(i)); + if (qat_uclo_set_cfg_ae_mask(loader_data->fw_loader, + cfg_ae_mask)) { + device_printf(GET_DEV(accel_dev), + "Invalid config AE mask\n"); + goto out_err; + } + } + + if (qat_uclo_map_obj( + loader_data->fw_loader, fw_addr, fw_size, obj_name)) { + device_printf(GET_DEV(accel_dev), + "Failed to map UOF firmware\n"); + goto out_err; + } + if (qat_uclo_wr_all_uimage(loader_data->fw_loader)) { + device_printf(GET_DEV(accel_dev), + "Failed to load UOF firmware\n"); + goto out_err; + } + qat_uclo_del_obj(loader_data->fw_loader); + obj_name = NULL; + } + + return 0; + +out_err: + adf_ae_fw_release(accel_dev); + return EFAULT; +} + +void +adf_ae_fw_release(struct adf_accel_dev *accel_dev) +{ + struct adf_fw_loader_data *loader_data = accel_dev->fw_loader; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + + if (!hw_device->fw_name) + return; + if (loader_data->fw_loader) + qat_uclo_del_obj(loader_data->fw_loader); + if (loader_data->fw_loader && loader_data->fw_loader->mobj_handle) + qat_uclo_del_mof(loader_data->fw_loader); + qat_hal_deinit(loader_data->fw_loader); + if (loader_data->uof_fw) + firmware_put(loader_data->uof_fw, FIRMWARE_UNLOAD); + if 
(loader_data->mmp_fw) + firmware_put(loader_data->mmp_fw, FIRMWARE_UNLOAD); + loader_data->uof_fw = NULL; + loader_data->mmp_fw = NULL; + loader_data->fw_loader = NULL; +} + +int +adf_ae_start(struct adf_accel_dev *accel_dev) +{ + struct adf_fw_loader_data *loader_data = accel_dev->fw_loader; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + uint32_t ae_ctr, ae, max_aes = GET_MAX_ACCELENGINES(accel_dev); + + if (!hw_data->fw_name) + return 0; + + for (ae = 0, ae_ctr = 0; ae < max_aes; ae++) { + if (hw_data->ae_mask & (1 << ae)) { + qat_hal_start(loader_data->fw_loader, ae, 0xFF); + ae_ctr++; + } + } + device_printf(GET_DEV(accel_dev), + "qat_dev%d started %d acceleration engines\n", + accel_dev->accel_id, + ae_ctr); + return 0; +} + +int +adf_ae_stop(struct adf_accel_dev *accel_dev) +{ + struct adf_fw_loader_data *loader_data = accel_dev->fw_loader; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + uint32_t ae_ctr, ae, max_aes = GET_MAX_ACCELENGINES(accel_dev); + + if (!hw_data->fw_name) + return 0; + + for (ae = 0, ae_ctr = 0; ae < max_aes; ae++) { + if (hw_data->ae_mask & (1 << ae)) { + qat_hal_stop(loader_data->fw_loader, ae, 0xFF); + ae_ctr++; + } + } + device_printf(GET_DEV(accel_dev), + "qat_dev%d stopped %d acceleration engines\n", + accel_dev->accel_id, + ae_ctr); + return 0; +} + +static int +adf_ae_reset(struct adf_accel_dev *accel_dev, int ae) +{ + struct adf_fw_loader_data *loader_data = accel_dev->fw_loader; + + qat_hal_reset(loader_data->fw_loader); + if (qat_hal_clr_reset(loader_data->fw_loader)) + return EFAULT; + + return 0; +} + +int +adf_ae_init(struct adf_accel_dev *accel_dev) +{ + struct adf_fw_loader_data *loader_data; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + + if (!hw_device->fw_name) + return 0; + + loader_data = malloc(sizeof(*loader_data), M_QAT, M_WAITOK | M_ZERO); + + accel_dev->fw_loader = loader_data; + if (qat_hal_init(accel_dev)) { + device_printf(GET_DEV(accel_dev), "Failed to init 
the AEs\n"); + free(loader_data, M_QAT); + return EFAULT; + } + if (adf_ae_reset(accel_dev, 0)) { + device_printf(GET_DEV(accel_dev), "Failed to reset the AEs\n"); + qat_hal_deinit(loader_data->fw_loader); + free(loader_data, M_QAT); + return EFAULT; + } + return 0; +} + +int +adf_ae_shutdown(struct adf_accel_dev *accel_dev) +{ + struct adf_fw_loader_data *loader_data = accel_dev->fw_loader; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + + if (!hw_device->fw_name) + return 0; + + qat_hal_deinit(loader_data->fw_loader); + free(accel_dev->fw_loader, M_QAT); + accel_dev->fw_loader = NULL; + return 0; +} Index: sys/dev/qat/qat_common/adf_aer.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_aer.c @@ -0,0 +1,342 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "icp_qat_uclo.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_init_admin.h" +#include "adf_cfg_strings.h" +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include +#include +#include +#include + +#define ADF_PPAERUCM_MASK (BIT(14) | BIT(20) | BIT(22)) + +static struct workqueue_struct *fatal_error_wq; +struct adf_fatal_error_data { + struct adf_accel_dev *accel_dev; + struct work_struct work; +}; + +static struct workqueue_struct *device_reset_wq; + +void +linux_complete_common(struct completion *c, int all) +{ + int wakeup_swapper; + + sleepq_lock(c); + c->done++; + if (all) + wakeup_swapper = sleepq_broadcast(c, SLEEPQ_SLEEP, 0, 0); + else + wakeup_swapper = sleepq_signal(c, SLEEPQ_SLEEP, 0, 0); + sleepq_release(c); + if (wakeup_swapper) + kick_proc0(); +} + +/* reset dev data */ +struct adf_reset_dev_data { + int mode; + struct adf_accel_dev *accel_dev; + struct completion compl; + struct work_struct reset_work; +}; 
+
+int
+adf_aer_store_ppaerucm_reg(device_t dev, struct adf_hw_device_data *hw_data)
+{
+	unsigned int aer_offset, reg_val = 0;
+
+	if (!hw_data)
+		return -EINVAL;
+
+	if (pci_find_extcap(dev, PCIZ_AER, &aer_offset) == 0) {
+		reg_val =
+		    pci_read_config(dev, aer_offset + PCIR_AER_UC_MASK, 4);
+
+		hw_data->aerucm_mask = reg_val;
+	} else {
+		device_printf(dev,
+			      "Unable to find AER capability of the device\n");
+		return -ENODEV;
+	}
+
+	return 0;
+}
+
+void
+adf_reset_sbr(struct adf_accel_dev *accel_dev)
+{
+	device_t pdev = accel_to_pci_dev(accel_dev);
+	device_t parent = device_get_parent(device_get_parent(pdev));
+	uint16_t bridge_ctl = 0;
+
+	if (accel_dev->is_vf)
+		return;
+
+	if (!parent)
+		parent = pdev;
+
+	if (!pcie_wait_for_pending_transactions(pdev, 0))
+		device_printf(GET_DEV(accel_dev),
+			      "Transaction still in progress. Proceeding\n");
+
+	device_printf(GET_DEV(accel_dev), "Secondary bus reset\n");
+
+	pci_save_state(pdev);
+	bridge_ctl = pci_read_config(parent, PCIR_BRIDGECTL_1, 2);
+	bridge_ctl |= PCIB_BCR_SECBUS_RESET;
+	pci_write_config(parent, PCIR_BRIDGECTL_1, bridge_ctl, 2);
+	pause_ms("adfrst", 100);
+	bridge_ctl &= ~PCIB_BCR_SECBUS_RESET;
+	pci_write_config(parent, PCIR_BRIDGECTL_1, bridge_ctl, 2);
+	pause_ms("adfrst", 100);
+	pci_restore_state(pdev);
+}
+
+void
+adf_reset_flr(struct adf_accel_dev *accel_dev)
+{
+	device_t pdev = accel_to_pci_dev(accel_dev);
+
+	pci_save_state(pdev);
+	if (pcie_flr(pdev,
+		     max(pcie_get_max_completion_timeout(pdev) / 1000, 10),
+		     true)) {
+		pci_restore_state(pdev);
+		return;
+	}
+	pci_restore_state(pdev);
+	device_printf(GET_DEV(accel_dev),
+		      "FLR qat_dev%d failed trying secondary bus reset\n",
+		      accel_dev->accel_id);
+	adf_reset_sbr(accel_dev);
+}
+
+void
+adf_dev_pre_reset(struct adf_accel_dev *accel_dev)
+{
+	struct adf_hw_device_data *hw_device = accel_dev->hw_device;
+	device_t pdev = accel_to_pci_dev(accel_dev);
+	u32 aer_offset, reg_val = 0;
+
+	if (pci_find_extcap(pdev, PCIZ_AER, &aer_offset) == 0) {
+		reg_val =
+		    pci_read_config(pdev, aer_offset + PCIR_AER_UC_MASK, 4);
+		reg_val |= ADF_PPAERUCM_MASK;
+		pci_write_config(pdev,
+				 aer_offset + PCIR_AER_UC_MASK,
+				 reg_val,
+				 4);
+	} else {
+		device_printf(pdev,
+			      "Unable to find AER capability of the device\n");
+	}
+
+	if (hw_device->disable_arb) {
+		device_printf(GET_DEV(accel_dev), "Disable arbiter.\n");
+		hw_device->disable_arb(accel_dev);
+	}
+}
+
+void
+adf_dev_post_reset(struct adf_accel_dev *accel_dev)
+{
+	struct adf_hw_device_data *hw_device = accel_dev->hw_device;
+	device_t pdev = accel_to_pci_dev(accel_dev);
+	u32 aer_offset;
+
+	if (pci_find_extcap(pdev, PCIZ_AER, &aer_offset) == 0) {
+		pci_write_config(pdev,
+				 aer_offset + PCIR_AER_UC_MASK,
+				 hw_device->aerucm_mask,
+				 4);
+	} else {
+		device_printf(pdev,
+			      "Unable to find AER capability of the device\n");
+	}
+}
+
+void
+adf_dev_restore(struct adf_accel_dev *accel_dev)
+{
+	struct adf_hw_device_data *hw_device = accel_dev->hw_device;
+	device_t pdev = accel_to_pci_dev(accel_dev);
+
+	if (hw_device->pre_reset) {
+		dev_dbg(GET_DEV(accel_dev), "Performing pre reset save\n");
+		hw_device->pre_reset(accel_dev);
+	}
+
+	if (hw_device->reset_device) {
+		device_printf(GET_DEV(accel_dev),
+			      "Resetting device qat_dev%d\n",
+			      accel_dev->accel_id);
+		hw_device->reset_device(accel_dev);
+		pci_restore_state(pdev);
+		pci_save_state(pdev);
+	}
+
+	if (hw_device->post_reset) {
+		dev_dbg(GET_DEV(accel_dev), "Performing post reset restore\n");
+		hw_device->post_reset(accel_dev);
+	}
+}
+
+static void
+adf_device_reset_worker(struct work_struct *work)
+{
+	struct adf_reset_dev_data *reset_data =
+	    container_of(work, struct adf_reset_dev_data, reset_work);
+	struct adf_accel_dev *accel_dev = reset_data->accel_dev;
+
+	if (adf_dev_restarting_notify(accel_dev)) {
+		device_printf(GET_DEV(accel_dev),
+			      "Unable to send RESTARTING notification.\n");
+		return;
+	}
+
+	if (adf_dev_stop(accel_dev)) {
+		device_printf(GET_DEV(accel_dev), "Stopping device failed.\n");
+		return;
+	}
+
+	adf_dev_shutdown(accel_dev);
+
+	if (adf_dev_init(accel_dev) || adf_dev_start(accel_dev)) {
+		/* The device hanged and we can't restart it */
+		/* so stop here */
+		device_printf(GET_DEV(accel_dev), "Restart device failed\n");
+		if (reset_data->mode == ADF_DEV_RESET_ASYNC)
+			kfree(reset_data);
+		WARN(1, "QAT: device restart failed. Device is unusable\n");
+		return;
+	}
+
+	adf_dev_restarted_notify(accel_dev);
+	clear_bit(ADF_STATUS_RESTARTING, &accel_dev->status);
+
+	/* The dev is back alive. Notify the caller if in sync mode */
+	if (reset_data->mode == ADF_DEV_RESET_SYNC)
+		complete(&reset_data->compl);
+	else
+		kfree(reset_data);
+}
+
+int
+adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev,
+			   enum adf_dev_reset_mode mode)
+{
+	struct adf_reset_dev_data *reset_data;
+	if (!adf_dev_started(accel_dev) ||
+	    test_bit(ADF_STATUS_RESTARTING, &accel_dev->status))
+		return 0;
+	set_bit(ADF_STATUS_RESTARTING, &accel_dev->status);
+	reset_data = kzalloc(sizeof(*reset_data), GFP_ATOMIC);
+	if (!reset_data)
+		return -ENOMEM;
+	reset_data->accel_dev = accel_dev;
+	init_completion(&reset_data->compl);
+	reset_data->mode = mode;
+	INIT_WORK(&reset_data->reset_work, adf_device_reset_worker);
+	queue_work(device_reset_wq, &reset_data->reset_work);
+	/* If in sync mode wait for the result */
+	if (mode == ADF_DEV_RESET_SYNC) {
+		int ret = 0;
+		/* Maximum device reset time is 10 seconds */
+		unsigned long wait_jiffies = msecs_to_jiffies(10000);
+		unsigned long timeout =
+		    wait_for_completion_timeout(&reset_data->compl,
+						wait_jiffies);
+		if (!timeout) {
+			device_printf(GET_DEV(accel_dev),
+				      "Reset device timeout expired\n");
+			ret = -EFAULT;
+		}
+		kfree(reset_data);
+		return ret;
+	}
+	return 0;
+}
+
+int
+adf_dev_autoreset(struct adf_accel_dev *accel_dev)
+{
+	if (accel_dev->autoreset_on_error)
+		return adf_dev_reset(accel_dev, ADF_DEV_RESET_ASYNC);
+	return 0;
+}
+
+static void
+adf_notify_fatal_error_work(struct work_struct *work)
+{
+	struct adf_fatal_error_data *wq_data =
+	    container_of(work, struct adf_fatal_error_data, work);
+	struct adf_accel_dev *accel_dev = wq_data->accel_dev;
+
+	adf_error_notifier((uintptr_t)accel_dev);
+	if (!accel_dev->is_vf) {
+		if (accel_dev->u1.pf.vf_info)
+			adf_pf2vf_notify_fatal_error(accel_dev);
+		adf_dev_autoreset(accel_dev);
+	}
+
+	kfree(wq_data);
+}
+
+int
+adf_notify_fatal_error(struct adf_accel_dev *accel_dev)
+{
+	struct adf_fatal_error_data *wq_data;
+
+	wq_data = kzalloc(sizeof(*wq_data), GFP_ATOMIC);
+	if (!wq_data) {
+		device_printf(GET_DEV(accel_dev),
+			      "Failed to allocate memory\n");
+		return ENOMEM;
+	}
+	wq_data->accel_dev = accel_dev;
+
+	INIT_WORK(&wq_data->work, adf_notify_fatal_error_work);
+	queue_work(fatal_error_wq, &wq_data->work);
+
+	return 0;
+}
+
+int __init
+adf_init_fatal_error_wq(void)
+{
+	fatal_error_wq = create_workqueue("qat_fatal_error_wq");
+	return !fatal_error_wq ? EFAULT : 0;
+}
+
+void
+adf_exit_fatal_error_wq(void)
+{
+	if (fatal_error_wq)
+		destroy_workqueue(fatal_error_wq);
+	fatal_error_wq = NULL;
+}
+
+int
+adf_init_aer(void)
+{
+	device_reset_wq = create_workqueue("qat_device_reset_wq");
+	return !device_reset_wq ? -EFAULT : 0;
+}
+
+void
+adf_exit_aer(void)
+{
+	if (device_reset_wq)
+		destroy_workqueue(device_reset_wq);
+	device_reset_wq = NULL;
+}
Index: sys/dev/qat/qat_common/adf_cfg.c
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_common/adf_cfg.c
@@ -0,0 +1,574 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+#include "adf_accel_devices.h"
+#include "adf_cfg.h"
+#include "adf_common_drv.h"
+#include "adf_cfg_dev_dbg.h"
+#include "adf_heartbeat_dbg.h"
+#include "adf_ver_dbg.h"
+#include "adf_fw_counters.h"
+#include "adf_cnvnr_freq_counters.h"
+
+/**
+ * adf_cfg_dev_add() - Create an acceleration device configuration table.
+ * @accel_dev:  Pointer to acceleration device.
+ *
+ * Function creates a configuration table for the given acceleration device.
+ * The table stores device specific config values.
+ * To be used by QAT device specific drivers.
+ *
+ * Return: 0 on success, error code otherwise.
+ */
+int
+adf_cfg_dev_add(struct adf_accel_dev *accel_dev)
+{
+	struct adf_cfg_device_data *dev_cfg_data;
+
+	dev_cfg_data = malloc(sizeof(*dev_cfg_data), M_QAT, M_WAITOK | M_ZERO);
+	INIT_LIST_HEAD(&dev_cfg_data->sec_list);
+	sx_init(&dev_cfg_data->lock, "qat cfg data");
+	accel_dev->cfg = dev_cfg_data;
+
+	if (adf_cfg_dev_dbg_add(accel_dev))
+		goto err;
+	if (!accel_dev->is_vf) {
+		if (adf_heartbeat_dbg_add(accel_dev))
+			goto err;
+
+		if (adf_ver_dbg_add(accel_dev))
+			goto err;
+
+		if (adf_fw_counters_add(accel_dev))
+			goto err;
+
+		if (adf_cnvnr_freq_counters_add(accel_dev))
+			goto err;
+	}
+	return 0;
+
+err:
+	free(dev_cfg_data, M_QAT);
+	accel_dev->cfg = NULL;
+	return EFAULT;
+}
+
+static void adf_cfg_section_del_all(struct list_head *head);
+
+void
+adf_cfg_del_all(struct adf_accel_dev *accel_dev)
+{
+	struct adf_cfg_device_data *dev_cfg_data = accel_dev->cfg;
+
+	sx_xlock(&dev_cfg_data->lock);
+	adf_cfg_section_del_all(&dev_cfg_data->sec_list);
+	sx_xunlock(&dev_cfg_data->lock);
+	clear_bit(ADF_STATUS_CONFIGURED, &accel_dev->status);
+}
+
+void
+adf_cfg_depot_del_all(struct list_head *head)
+{
+	adf_cfg_section_del_all(head);
+}
+
+/**
+ * adf_cfg_dev_remove() - Clears acceleration device configuration table.
+ * @accel_dev:  Pointer to acceleration device.
+ *
+ * Function removes configuration table from the given acceleration device
+ * and frees all allocated memory.
+ * To be used by QAT device specific drivers.
+ *
+ * Return: void
+ */
+void
+adf_cfg_dev_remove(struct adf_accel_dev *accel_dev)
+{
+	struct adf_cfg_device_data *dev_cfg_data = accel_dev->cfg;
+
+	if (!dev_cfg_data)
+		return;
+
+	sx_xlock(&dev_cfg_data->lock);
+	adf_cfg_section_del_all(&dev_cfg_data->sec_list);
+	sx_xunlock(&dev_cfg_data->lock);
+
+	adf_cfg_dev_dbg_remove(accel_dev);
+	if (!accel_dev->is_vf) {
+		adf_ver_dbg_del(accel_dev);
+		adf_heartbeat_dbg_del(accel_dev);
+		adf_fw_counters_remove(accel_dev);
+		adf_cnvnr_freq_counters_remove(accel_dev);
+	}
+
+	free(dev_cfg_data, M_QAT);
+	accel_dev->cfg = NULL;
+}
+
+static void
+adf_cfg_keyval_add(struct adf_cfg_key_val *new, struct adf_cfg_section *sec)
+{
+	list_add_tail(&new->list, &sec->param_head);
+}
+
+static void
+adf_cfg_keyval_remove(const char *key, struct adf_cfg_section *sec)
+{
+	struct list_head *list_ptr, *tmp;
+	struct list_head *head = &sec->param_head;
+
+	list_for_each_prev_safe(list_ptr, tmp, head)
+	{
+		struct adf_cfg_key_val *ptr =
+		    list_entry(list_ptr, struct adf_cfg_key_val, list);
+
+		if (strncmp(ptr->key, key, sizeof(ptr->key)) != 0)
+			continue;
+
+		list_del(list_ptr);
+		free(ptr, M_QAT);
+		break;
+	}
+}
+
+static int
+adf_cfg_section_restore_all(struct adf_accel_dev *accel_dev,
+			    struct adf_cfg_depot_list *cfg_depot_list)
+{
+	struct adf_cfg_section *ptr_sec, *iter_sec;
+	struct adf_cfg_key_val *ptr_key;
+	struct list_head *list, *tmp;
+	struct list_head *restore_list = &accel_dev->cfg->sec_list;
+	struct list_head *head = &cfg_depot_list[accel_dev->accel_id].sec_list;
+
+	INIT_LIST_HEAD(restore_list);
+
+	list_for_each_prev_safe(list, tmp, head)
+	{
+		ptr_sec = list_entry(list, struct adf_cfg_section, list);
+		iter_sec = malloc(sizeof(*iter_sec), M_QAT, M_WAITOK | M_ZERO);
+
+		strlcpy(iter_sec->name, ptr_sec->name, sizeof(iter_sec->name));
+
+		INIT_LIST_HEAD(&iter_sec->param_head);
+
+		/* now we restore all the parameters */
+		list_for_each_entry(ptr_key, &ptr_sec->param_head, list)
+		{
+			struct adf_cfg_key_val *key_val;
+
key_val = + malloc(sizeof(*key_val), M_QAT, M_WAITOK | M_ZERO); + + memcpy(key_val, ptr_key, sizeof(*key_val)); + list_add_tail(&key_val->list, &iter_sec->param_head); + } + list_add_tail(&iter_sec->list, restore_list); + } + adf_cfg_section_del_all(head); + return 0; +} + +int +adf_cfg_depot_restore_all(struct adf_accel_dev *accel_dev, + struct adf_cfg_depot_list *cfg_depot_list) +{ + struct adf_cfg_device_data *dev_cfg_data = accel_dev->cfg; + int ret = 0; + + sx_xlock(&dev_cfg_data->lock); + ret = adf_cfg_section_restore_all(accel_dev, cfg_depot_list); + sx_xunlock(&dev_cfg_data->lock); + + return ret; +} + +/** + * adf_cfg_section_del() - Delete config section entry from config table. + * @accel_dev: Pointer to acceleration device. + * @name: Name of the section + * + * Function deletes configuration section where key - value entries + * are stored. + * To be used by QAT device specific drivers. + */ +static void +adf_cfg_section_del(struct adf_accel_dev *accel_dev, const char *name) +{ + struct adf_cfg_section *sec = adf_cfg_sec_find(accel_dev, name); + + if (!sec) + return; + adf_cfg_keyval_del_all(&sec->param_head); + list_del(&sec->list); + free(sec, M_QAT); +} + +void +adf_cfg_keyval_del_all(struct list_head *head) +{ + struct list_head *list_ptr, *tmp; + + list_for_each_prev_safe(list_ptr, tmp, head) + { + struct adf_cfg_key_val *ptr = + list_entry(list_ptr, struct adf_cfg_key_val, list); + list_del(list_ptr); + free(ptr, M_QAT); + } +} + +static void +adf_cfg_section_del_all(struct list_head *head) +{ + struct adf_cfg_section *ptr; + struct list_head *list, *tmp; + + list_for_each_prev_safe(list, tmp, head) + { + ptr = list_entry(list, struct adf_cfg_section, list); + adf_cfg_keyval_del_all(&ptr->param_head); + list_del(list); + free(ptr, M_QAT); + } +} + +static struct adf_cfg_key_val * +adf_cfg_key_value_find(struct adf_cfg_section *s, const char *key) +{ + struct list_head *list; + + list_for_each(list, &s->param_head) + { + struct adf_cfg_key_val 
*ptr = + list_entry(list, struct adf_cfg_key_val, list); + if (!strncmp(ptr->key, key, sizeof(ptr->key))) + return ptr; + } + return NULL; +} + +struct adf_cfg_section * +adf_cfg_sec_find(struct adf_accel_dev *accel_dev, const char *sec_name) +{ + struct adf_cfg_device_data *cfg = accel_dev->cfg; + struct list_head *list; + + list_for_each(list, &cfg->sec_list) + { + struct adf_cfg_section *ptr = + list_entry(list, struct adf_cfg_section, list); + if (!strncmp(ptr->name, sec_name, sizeof(ptr->name))) + return ptr; + } + return NULL; +} + +static int +adf_cfg_key_val_get(struct adf_accel_dev *accel_dev, + const char *sec_name, + const char *key_name, + char *val) +{ + struct adf_cfg_section *sec = adf_cfg_sec_find(accel_dev, sec_name); + struct adf_cfg_key_val *keyval = NULL; + + if (sec) + keyval = adf_cfg_key_value_find(sec, key_name); + if (keyval) { + memcpy(val, keyval->val, ADF_CFG_MAX_VAL_LEN_IN_BYTES); + return 0; + } + return -1; +} + +/** + * adf_cfg_add_key_value_param() - Add key-value config entry to config table. + * @accel_dev: Pointer to acceleration device. + * @section_name: Name of the section where the param will be added + * @key: The key string + * @val: Value string for the given @key + * @type: Type - string, int or address + * + * Function adds configuration key - value entry in the appropriate section + * in the given acceleration device. + * To be used by QAT device specific drivers. + * + * Return: 0 on success, error code otherwise. 
+ */ +int +adf_cfg_add_key_value_param(struct adf_accel_dev *accel_dev, + const char *section_name, + const char *key, + const void *val, + enum adf_cfg_val_type type) +{ + char temp_val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + struct adf_cfg_device_data *cfg = accel_dev->cfg; + struct adf_cfg_key_val *key_val; + struct adf_cfg_section *section = + adf_cfg_sec_find(accel_dev, section_name); + if (!section) + return EFAULT; + + key_val = malloc(sizeof(*key_val), M_QAT, M_WAITOK | M_ZERO); + + INIT_LIST_HEAD(&key_val->list); + strlcpy(key_val->key, key, sizeof(key_val->key)); + + if (type == ADF_DEC) { + snprintf(key_val->val, + ADF_CFG_MAX_VAL_LEN_IN_BYTES, + "%ld", + (*((const long *)val))); + } else if (type == ADF_STR) { + strlcpy(key_val->val, (const char *)val, sizeof(key_val->val)); + } else if (type == ADF_HEX) { + snprintf(key_val->val, + ADF_CFG_MAX_VAL_LEN_IN_BYTES, + "0x%lx", + (unsigned long)val); + } else { + device_printf(GET_DEV(accel_dev), "Unknown type given.\n"); + free(key_val, M_QAT); + return -1; + } + key_val->type = type; + + /* Add the key-value pair as below policy: + * 1. If the key doesn't exist, add it, + * 2. If the key already exists with a different value + * then delete it, + * 3. If the key exists with the same value, then return + * without doing anything. 
+ */ + if (adf_cfg_key_val_get(accel_dev, section_name, key, temp_val) == 0) { + if (strncmp(temp_val, key_val->val, sizeof(temp_val)) != 0) { + adf_cfg_keyval_remove(key, section); + } else { + free(key_val, M_QAT); + return 0; + } + } + + sx_xlock(&cfg->lock); + adf_cfg_keyval_add(key_val, section); + sx_xunlock(&cfg->lock); + return 0; +} + +int +adf_cfg_save_section(struct adf_accel_dev *accel_dev, + const char *name, + struct adf_cfg_section *section) +{ + struct adf_cfg_key_val *ptr; + struct adf_cfg_section *sec = adf_cfg_sec_find(accel_dev, name); + + if (!sec) { + device_printf(GET_DEV(accel_dev), + "Couldn't find section %s\n", + name); + return EFAULT; + } + + strlcpy(section->name, name, sizeof(section->name)); + INIT_LIST_HEAD(&section->param_head); + + /* now we save all the parameters */ + list_for_each_entry(ptr, &sec->param_head, list) + { + struct adf_cfg_key_val *key_val; + + key_val = malloc(sizeof(*key_val), M_QAT, M_WAITOK | M_ZERO); + + memcpy(key_val, ptr, sizeof(*key_val)); + list_add_tail(&key_val->list, &section->param_head); + } + return 0; +} + +static int +adf_cfg_section_save_all(struct adf_accel_dev *accel_dev, + struct adf_cfg_depot_list *cfg_depot_list) +{ + struct adf_cfg_section *ptr_sec, *iter_sec; + struct list_head *list, *tmp, *save_list; + struct list_head *head = &accel_dev->cfg->sec_list; + + save_list = &cfg_depot_list[accel_dev->accel_id].sec_list; + + list_for_each_prev_safe(list, tmp, head) + { + ptr_sec = list_entry(list, struct adf_cfg_section, list); + iter_sec = malloc(sizeof(*iter_sec), M_QAT, M_WAITOK | M_ZERO); + + adf_cfg_save_section(accel_dev, ptr_sec->name, iter_sec); + list_add_tail(&iter_sec->list, save_list); + } + return 0; +} + +int +adf_cfg_depot_save_all(struct adf_accel_dev *accel_dev, + struct adf_cfg_depot_list *cfg_depot_list) +{ + struct adf_cfg_device_data *dev_cfg_data = accel_dev->cfg; + int ret = 0; + + sx_xlock(&dev_cfg_data->lock); + ret = adf_cfg_section_save_all(accel_dev, cfg_depot_list); + 
sx_xunlock(&dev_cfg_data->lock); + + return ret; +} + +/** + * adf_cfg_remove_key_param() - Remove config entry from config table. + * @accel_dev: Pointer to acceleration device. + * @section_name: Name of the section where the param will be removed + * @key: The key string + * + * Function removes the configuration key. + * To be used by QAT device specific drivers. + * + * Return: 0 on success, error code otherwise. + */ +int +adf_cfg_remove_key_param(struct adf_accel_dev *accel_dev, + const char *section_name, + const char *key) +{ + struct adf_cfg_device_data *cfg = accel_dev->cfg; + struct adf_cfg_section *section = + adf_cfg_sec_find(accel_dev, section_name); + if (!section) + return EFAULT; + + sx_xlock(&cfg->lock); + adf_cfg_keyval_remove(key, section); + sx_xunlock(&cfg->lock); + return 0; +} + +/** + * adf_cfg_section_add() - Add config section entry to config table. + * @accel_dev: Pointer to acceleration device. + * @name: Name of the section + * + * Function adds configuration section where key - value entries + * will be stored. + * To be used by QAT device specific drivers. + * + * Return: 0 on success, error code otherwise. 
+ */ +int +adf_cfg_section_add(struct adf_accel_dev *accel_dev, const char *name) +{ + struct adf_cfg_device_data *cfg = accel_dev->cfg; + struct adf_cfg_section *sec = adf_cfg_sec_find(accel_dev, name); + + if (sec) + return 0; + + sec = malloc(sizeof(*sec), M_QAT, M_WAITOK | M_ZERO); + + strlcpy(sec->name, name, sizeof(sec->name)); + INIT_LIST_HEAD(&sec->param_head); + sx_xlock(&cfg->lock); + list_add_tail(&sec->list, &cfg->sec_list); + sx_xunlock(&cfg->lock); + return 0; +} + +/* need to differentiate derived section from the original section */ +int +adf_cfg_derived_section_add(struct adf_accel_dev *accel_dev, const char *name) +{ + struct adf_cfg_device_data *cfg = accel_dev->cfg; + struct adf_cfg_section *sec = NULL; + + if (adf_cfg_section_add(accel_dev, name)) + return EFAULT; + + sec = adf_cfg_sec_find(accel_dev, name); + if (!sec) + return EFAULT; + + sx_xlock(&cfg->lock); + sec->is_derived = true; + sx_xunlock(&cfg->lock); + return 0; +} + +static int +adf_cfg_restore_key_value_param(struct adf_accel_dev *accel_dev, + const char *section_name, + const char *key, + const char *val, + enum adf_cfg_val_type type) +{ + struct adf_cfg_device_data *cfg = accel_dev->cfg; + struct adf_cfg_key_val *key_val; + struct adf_cfg_section *section = + adf_cfg_sec_find(accel_dev, section_name); + if (!section) + return EFAULT; + + key_val = malloc(sizeof(*key_val), M_QAT, M_WAITOK | M_ZERO); + + INIT_LIST_HEAD(&key_val->list); + + strlcpy(key_val->key, key, sizeof(key_val->key)); + strlcpy(key_val->val, val, sizeof(key_val->val)); + key_val->type = type; + sx_xlock(&cfg->lock); + adf_cfg_keyval_add(key_val, section); + sx_xunlock(&cfg->lock); + return 0; +} + +int +adf_cfg_restore_section(struct adf_accel_dev *accel_dev, + struct adf_cfg_section *section) +{ + struct adf_cfg_key_val *ptr; + int ret = 0; + + ret = adf_cfg_section_add(accel_dev, section->name); + if (ret) + goto err; + + list_for_each_entry(ptr, &section->param_head, list) + { + ret = 
adf_cfg_restore_key_value_param( + accel_dev, section->name, ptr->key, ptr->val, ptr->type); + if (ret) + goto err_remove_sec; + } + return 0; + +err_remove_sec: + adf_cfg_section_del(accel_dev, section->name); +err: + device_printf(GET_DEV(accel_dev), + "Failed to restore section %s\n", + section->name); + return ret; +} + +int +adf_cfg_get_param_value(struct adf_accel_dev *accel_dev, + const char *section, + const char *name, + char *value) +{ + struct adf_cfg_device_data *cfg = accel_dev->cfg; + int ret; + + sx_slock(&cfg->lock); + ret = adf_cfg_key_val_get(accel_dev, section, name, value); + sx_sunlock(&cfg->lock); + return ret; +} Index: sys/dev/qat/qat_common/adf_cfg_bundle.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_cfg_bundle.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_BUNDLE_H_ +#define ADF_CFG_BUNDLE_H_ + +#include "adf_accel_devices.h" +#include "adf_cfg_common.h" + +#define MAX_SECTIONS_PER_BUNDLE 8 +#define MAX_SECTION_NAME_LEN 64 + +#define TX 0x0 +#define RX 0x1 + +#define ASSIGN_SERV_TO_RINGS(bund, index, base, stype, rng_per_srv) \ + do { \ + int j = 0; \ + typeof(bund) b = (bund); \ + typeof(index) i = (index); \ + typeof(base) s = (base); \ + typeof(stype) t = (stype); \ + typeof(rng_per_srv) rps = (rng_per_srv); \ + for (j = 0; j < rps; j++) { \ + b->rings[i + j]->serv_type = t; \ + b->rings[i + j + s]->serv_type = t; \ + } \ + } while (0) + +bool adf_cfg_is_free(struct adf_cfg_bundle *bundle); + +int adf_cfg_get_ring_pairs_from_bundle(struct adf_cfg_bundle *bundle, + struct adf_cfg_instance *inst, + const char *process_name, + struct adf_cfg_instance *bundle_inst); + +struct adf_cfg_instance * +adf_cfg_get_free_instance(struct adf_cfg_device *device, + struct adf_cfg_bundle *bundle, + struct adf_cfg_instance *inst, + const char *process_name); + +int 
adf_cfg_bundle_init(struct adf_cfg_bundle *bundle, + struct adf_cfg_device *device, + int bank_num, + struct adf_accel_dev *accel_dev); + +void adf_cfg_bundle_clear(struct adf_cfg_bundle *bundle, + struct adf_accel_dev *accel_dev); + +void adf_cfg_init_ring2serv_mapping(struct adf_accel_dev *accel_dev, + struct adf_cfg_bundle *bundle); + +int adf_cfg_rel_ring2serv_mapping(struct adf_cfg_bundle *bundle); +#endif Index: sys/dev/qat/qat_common/adf_cfg_bundle.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_cfg_bundle.c @@ -0,0 +1,377 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_cfg_bundle.h" +#include "adf_cfg_strings.h" +#include "adf_cfg_instance.h" +#include <sys/cpuset.h> + +static bool +adf_cfg_is_interrupt_mode(struct adf_cfg_bundle *bundle) +{ + return (bundle->polling_mode == ADF_CFG_RESP_EPOLL) || + (bundle->type == KERNEL && + (bundle->polling_mode != ADF_CFG_RESP_POLL)); +} + +static bool +adf_cfg_can_be_shared(struct adf_cfg_bundle *bundle, + const char *process_name, + int polling_mode) +{ + if (adf_cfg_is_free(bundle)) + return true; + + if (bundle->polling_mode != polling_mode) + return false; + + return !adf_cfg_is_interrupt_mode(bundle) || + !strncmp(process_name, + bundle->sections[0], + ADF_CFG_MAX_SECTION_LEN_IN_BYTES); +} + +bool +adf_cfg_is_free(struct adf_cfg_bundle *bundle) +{ + return bundle->type == FREE; +} + +struct adf_cfg_instance * +adf_cfg_get_free_instance(struct adf_cfg_device *device, + struct adf_cfg_bundle *bundle, + struct adf_cfg_instance *inst, + const char *process_name) +{ + int i = 0; + struct adf_cfg_instance *ret_instance = NULL; + + if (adf_cfg_can_be_shared(bundle, process_name, inst->polling_mode)) { + for (i = 0; i < device->instance_index; i++) { + /* + * the selected instance must match two criteria + * 1) instance is from the bundle + * 2) instance type is same + */ + if 
(bundle->number == device->instances[i]->bundle && + inst->stype == device->instances[i]->stype) { + ret_instance = device->instances[i]; + break; + } + /* + * no opportunity to match, + * quit the loop as early as possible + */ + if ((bundle->number + 1) == + device->instances[i]->bundle) + break; + } + } + + return ret_instance; +} + +int +adf_cfg_get_ring_pairs_from_bundle(struct adf_cfg_bundle *bundle, + struct adf_cfg_instance *inst, + const char *process_name, + struct adf_cfg_instance *bundle_inst) +{ + if (inst->polling_mode == ADF_CFG_RESP_POLL && + adf_cfg_is_interrupt_mode(bundle)) { + pr_err("Trying to get ring pairs for a non-interrupt"); + pr_err(" bundle from an interrupt bundle\n"); + return EFAULT; + } + + if (inst->stype != bundle_inst->stype) { + pr_err("Got an instance of different type (cy/dc) than the"); + pr_err(" one requested\n"); + return EFAULT; + } + + if (strcmp(ADF_KERNEL_SEC, process_name) && + strcmp(ADF_KERNEL_SAL_SEC, process_name) && + inst->polling_mode != ADF_CFG_RESP_EPOLL && + inst->polling_mode != ADF_CFG_RESP_POLL) { + pr_err("User instance %s needs to be configured", inst->name); + pr_err(" with IsPolled 1 or 2 for poll and epoll mode,"); + pr_err(" respectively\n"); + return EFAULT; + } + + strlcpy(bundle->sections[bundle->section_index], + process_name, + ADF_CFG_MAX_STR_LEN); + bundle->section_index++; + + if (adf_cfg_is_free(bundle)) { + bundle->polling_mode = inst->polling_mode; + bundle->type = (!strcmp(ADF_KERNEL_SEC, process_name) || + !strcmp(ADF_KERNEL_SAL_SEC, process_name)) ? 
+ KERNEL : + USER; + if (adf_cfg_is_interrupt_mode(bundle)) { + CPU_ZERO(&bundle->affinity_mask); + CPU_COPY(&inst->affinity_mask, &bundle->affinity_mask); + } + } + + switch (inst->stype) { + case CRYPTO: + inst->asym_tx = bundle_inst->asym_tx; + inst->asym_rx = bundle_inst->asym_rx; + inst->sym_tx = bundle_inst->sym_tx; + inst->sym_rx = bundle_inst->sym_rx; + break; + case COMP: + inst->dc_tx = bundle_inst->dc_tx; + inst->dc_rx = bundle_inst->dc_rx; + break; + case ASYM: + inst->asym_tx = bundle_inst->asym_tx; + inst->asym_rx = bundle_inst->asym_rx; + break; + case SYM: + inst->sym_tx = bundle_inst->sym_tx; + inst->sym_rx = bundle_inst->sym_rx; + break; + default: + /* unknown service type of instance */ + pr_err("1 Unknown service type %d of instance\n", inst->stype); + } + + /* mark it as used */ + bundle_inst->stype = USED; + + inst->bundle = bundle->number; + + return 0; +} + +static void +adf_cfg_init_and_insert_inst(struct adf_cfg_bundle *bundle, + struct adf_cfg_device *device, + int bank_num, + struct adf_accel_dev *accel_dev) +{ + struct adf_cfg_instance *cfg_instance = NULL; + int ring_pair_index = 0; + int i = 0; + u8 serv_type; + int num_req_rings = bundle->num_of_rings / 2; + int num_rings_per_srv = num_req_rings / ADF_CFG_NUM_SERVICES; + u16 ring_to_svc_map = GET_HW_DATA(accel_dev)->ring_to_svc_map; + + /* init the bundle with instance information */ + for (ring_pair_index = 0; ring_pair_index < ADF_CFG_NUM_SERVICES; + ring_pair_index++) { + serv_type = GET_SRV_TYPE(ring_to_svc_map, ring_pair_index); + for (i = 0; i < num_rings_per_srv; i++) { + cfg_instance = malloc(sizeof(*cfg_instance), + M_QAT, + M_WAITOK | M_ZERO); + + switch (serv_type) { + case CRYPTO: + crypto_instance_init(cfg_instance, bundle); + break; + case COMP: + dc_instance_init(cfg_instance, bundle); + break; + case ASYM: + asym_instance_init(cfg_instance, bundle); + break; + case SYM: + sym_instance_init(cfg_instance, bundle); + break; + case NA: + break; + + default: + /* Unknown 
service type */ + device_printf( + GET_DEV(accel_dev), + "Unknown service type %d of instance, mask is 0x%x\n", + serv_type, + ring_to_svc_map); + } + cfg_instance->bundle = bank_num; + device->instances[device->instance_index++] = + cfg_instance; + cfg_instance = NULL; + } + if (serv_type == CRYPTO) { + ring_pair_index++; + serv_type = + GET_SRV_TYPE(ring_to_svc_map, ring_pair_index); + } + } + + return; +} + +int +adf_cfg_bundle_init(struct adf_cfg_bundle *bundle, + struct adf_cfg_device *device, + int bank_num, + struct adf_accel_dev *accel_dev) +{ + int i = 0; + + /* init ring to service mapping for this bundle */ + adf_cfg_init_ring2serv_mapping(accel_dev, bundle); + + /* init the bundle with instance information */ + adf_cfg_init_and_insert_inst(bundle, device, bank_num, accel_dev); + + CPU_FILL(&bundle->affinity_mask); + bundle->type = FREE; + bundle->polling_mode = -1; + bundle->section_index = 0; + bundle->number = bank_num; + + bundle->sections = malloc(sizeof(char *) * bundle->max_section, + M_QAT, + M_WAITOK | M_ZERO); + + for (i = 0; i < bundle->max_section; i++) { + bundle->sections[i] = + malloc(ADF_CFG_MAX_STR_LEN, M_QAT, M_WAITOK | M_ZERO); + } + return 0; +} + +void +adf_cfg_bundle_clear(struct adf_cfg_bundle *bundle, + struct adf_accel_dev *accel_dev) +{ + int i = 0; + + for (i = 0; i < bundle->max_section; i++) { + if (bundle->sections && bundle->sections[i]) { + free(bundle->sections[i], M_QAT); + bundle->sections[i] = NULL; + } + } + + free(bundle->sections, M_QAT); + bundle->sections = NULL; + + adf_cfg_rel_ring2serv_mapping(bundle); +} + +static void +adf_cfg_assign_serv_to_rings(struct adf_cfg_bundle *bundle, u16 ring_to_svc_map) +{ + int ring_pair_index = 0; + int ring_index = 0; + u8 serv_type = 0; + int num_req_rings = bundle->num_of_rings / 2; + int num_rings_per_srv = num_req_rings / ADF_CFG_NUM_SERVICES; + + for (ring_pair_index = 0; ring_pair_index < ADF_CFG_NUM_SERVICES; + ring_pair_index++) { + serv_type = 
GET_SRV_TYPE(ring_to_svc_map, ring_pair_index); + ring_index = num_rings_per_srv * ring_pair_index; + switch (serv_type) { + case CRYPTO: + ASSIGN_SERV_TO_RINGS(bundle, + ring_index, + num_req_rings, + ADF_ACCEL_SERV_ASYM, + num_rings_per_srv); + ring_pair_index++; + ring_index = num_rings_per_srv * ring_pair_index; + if (ring_pair_index == ADF_CFG_NUM_SERVICES) + break; + ASSIGN_SERV_TO_RINGS(bundle, + ring_index, + num_req_rings, + ADF_ACCEL_SERV_SYM, + num_rings_per_srv); + break; + case COMP: + ASSIGN_SERV_TO_RINGS(bundle, + ring_index, + num_req_rings, + ADF_ACCEL_SERV_DC, + num_rings_per_srv); + break; + case SYM: + ASSIGN_SERV_TO_RINGS(bundle, + ring_index, + num_req_rings, + ADF_ACCEL_SERV_SYM, + num_rings_per_srv); + break; + case ASYM: + ASSIGN_SERV_TO_RINGS(bundle, + ring_index, + num_req_rings, + ADF_ACCEL_SERV_ASYM, + num_rings_per_srv); + break; + case NA: + ASSIGN_SERV_TO_RINGS(bundle, + ring_index, + num_req_rings, + ADF_ACCEL_SERV_NA, + num_rings_per_srv); + break; + + default: + /* unknown service type */ + pr_err("Unknown service type %d, mask 0x%x.\n", + serv_type, + ring_to_svc_map); + } + } + + return; +} + +void +adf_cfg_init_ring2serv_mapping(struct adf_accel_dev *accel_dev, + struct adf_cfg_bundle *bundle) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_cfg_ring *ring_in_bundle; + int ring_num = 0; + + bundle->num_of_rings = hw_data->num_rings_per_bank; + + bundle->rings = + malloc(bundle->num_of_rings * sizeof(struct adf_cfg_ring *), + M_QAT, + M_WAITOK | M_ZERO); + + for (ring_num = 0; ring_num < bundle->num_of_rings; ring_num++) { + ring_in_bundle = malloc(sizeof(struct adf_cfg_ring), + M_QAT, + M_WAITOK | M_ZERO); + ring_in_bundle->mode = + (ring_num < bundle->num_of_rings / 2) ? 
TX : RX; + ring_in_bundle->number = ring_num; + bundle->rings[ring_num] = ring_in_bundle; + } + + adf_cfg_assign_serv_to_rings(bundle, hw_data->ring_to_svc_map); + + return; +} + +int +adf_cfg_rel_ring2serv_mapping(struct adf_cfg_bundle *bundle) +{ + int i = 0; + + if (bundle->rings) { + for (i = 0; i < bundle->num_of_rings; i++) + free(bundle->rings[i], M_QAT); + + free(bundle->rings, M_QAT); + } + + return 0; +} Index: sys/dev/qat/qat_common/adf_cfg_device.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_cfg_device.c @@ -0,0 +1,1102 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_cfg_instance.h" +#include "adf_cfg_section.h" +#include "adf_cfg_device.h" +#include "icp_qat_hw.h" +#include "adf_common_drv.h" + +#define ADF_CFG_SVCS_MAX (25) +#define ADF_CFG_DEPRE_PARAMS_NUM (4) + +#define ADF_CFG_CAP_DC ADF_ACCEL_CAPABILITIES_COMPRESSION +#define ADF_CFG_CAP_ASYM ADF_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC +#define ADF_CFG_CAP_SYM \ + (ADF_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC | \ + ADF_ACCEL_CAPABILITIES_CIPHER | \ + ADF_ACCEL_CAPABILITIES_AUTHENTICATION) +#define ADF_CFG_CAP_CY (ADF_CFG_CAP_ASYM | ADF_CFG_CAP_SYM) + +#define ADF_CFG_FW_CAP_RL ICP_ACCEL_CAPABILITIES_RL +#define ADF_CFG_FW_CAP_HKDF ICP_ACCEL_CAPABILITIES_HKDF +#define ADF_CFG_FW_CAP_ECEDMONT ICP_ACCEL_CAPABILITIES_ECEDMONT +#define ADF_CFG_FW_CAP_EXT_ALGCHAIN ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN + +#define ADF_CFG_CY_RINGS \ + (CRYPTO | CRYPTO << ADF_CFG_SERV_RING_PAIR_1_SHIFT | \ + CRYPTO << ADF_CFG_SERV_RING_PAIR_2_SHIFT | \ + CRYPTO << ADF_CFG_SERV_RING_PAIR_3_SHIFT) + +#define ADF_CFG_SYM_RINGS \ + (SYM | SYM << ADF_CFG_SERV_RING_PAIR_1_SHIFT | \ + SYM << ADF_CFG_SERV_RING_PAIR_2_SHIFT | \ + SYM << ADF_CFG_SERV_RING_PAIR_3_SHIFT) + +#define ADF_CFG_ASYM_RINGS \ + (ASYM | ASYM << ADF_CFG_SERV_RING_PAIR_1_SHIFT | \ + ASYM << ADF_CFG_SERV_RING_PAIR_2_SHIFT | \ 
+ ASYM << ADF_CFG_SERV_RING_PAIR_3_SHIFT) + +#define ADF_CFG_CY_DC_RINGS \ + (CRYPTO | CRYPTO << ADF_CFG_SERV_RING_PAIR_1_SHIFT | \ + NA << ADF_CFG_SERV_RING_PAIR_2_SHIFT | \ + COMP << ADF_CFG_SERV_RING_PAIR_3_SHIFT) + +#define ADF_CFG_ASYM_DC_RINGS \ + (ASYM | ASYM << ADF_CFG_SERV_RING_PAIR_1_SHIFT | \ + COMP << ADF_CFG_SERV_RING_PAIR_2_SHIFT | \ + COMP << ADF_CFG_SERV_RING_PAIR_3_SHIFT) + +#define ADF_CFG_SYM_DC_RINGS \ + (SYM | SYM << ADF_CFG_SERV_RING_PAIR_1_SHIFT | \ + COMP << ADF_CFG_SERV_RING_PAIR_2_SHIFT | \ + COMP << ADF_CFG_SERV_RING_PAIR_3_SHIFT) + +#define ADF_CFG_DC_RINGS \ + (COMP | COMP << ADF_CFG_SERV_RING_PAIR_1_SHIFT | \ + COMP << ADF_CFG_SERV_RING_PAIR_2_SHIFT | \ + COMP << ADF_CFG_SERV_RING_PAIR_3_SHIFT) + +static char adf_cfg_deprecated_params[][ADF_CFG_MAX_KEY_LEN_IN_BYTES] = + { ADF_DEV_KPT_ENABLE, + ADF_STORAGE_FIRMWARE_ENABLED, + ADF_RL_FIRMWARE_ENABLED, + ADF_PKE_DISABLED }; + +struct adf_cfg_enabled_services { + const char svcs_enabled[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + u16 rng_to_svc_msk; + u32 enabled_svc_cap; + u32 enabled_fw_cap; +}; + +struct adf_cfg_profile { + enum adf_cfg_fw_image_type fw_image_type; + struct adf_cfg_enabled_services supported_svcs[ADF_CFG_SVCS_MAX]; +}; + +static struct adf_cfg_profile adf_profiles[] = + { { ADF_FW_IMAGE_DEFAULT, + { + { "cy", + ADF_CFG_CY_RINGS, + ADF_CFG_CAP_CY, + ADF_CFG_FW_CAP_ECEDMONT | ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "dc", ADF_CFG_DC_RINGS, ADF_CFG_CAP_DC, 0 }, + { "sym", + ADF_CFG_SYM_RINGS, + ADF_CFG_CAP_SYM, + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "asym", + ADF_CFG_ASYM_RINGS, + ADF_CFG_CAP_ASYM, + ADF_CFG_FW_CAP_ECEDMONT }, + { "cy;dc", + ADF_CFG_CY_DC_RINGS, + ADF_CFG_CAP_CY | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_ECEDMONT | ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "dc;cy", + ADF_CFG_CY_DC_RINGS, + ADF_CFG_CAP_CY | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_ECEDMONT | ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "asym;dc", + ADF_CFG_ASYM_DC_RINGS, + ADF_CFG_CAP_ASYM | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_ECEDMONT }, + { 
"dc;asym", + ADF_CFG_ASYM_DC_RINGS, + ADF_CFG_CAP_ASYM | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_ECEDMONT }, + { "sym;dc", + ADF_CFG_SYM_DC_RINGS, + ADF_CFG_CAP_SYM | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "dc;sym", + ADF_CFG_SYM_DC_RINGS, + ADF_CFG_CAP_SYM | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "inline;sym", + ADF_CFG_SYM_RINGS, + ADF_CFG_CAP_SYM, + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "sym;inline", + ADF_CFG_SYM_RINGS, + ADF_CFG_CAP_SYM, + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "inline;asym", + ADF_CFG_SYM_RINGS, + ADF_CFG_CAP_SYM, + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "asym;inline", + ADF_CFG_ASYM_RINGS, + ADF_CFG_CAP_ASYM, + ADF_CFG_FW_CAP_ECEDMONT }, + { "inline", 0, 0, 0 }, + { "inline;cy", + ADF_CFG_CY_RINGS, + ADF_CFG_CAP_CY, + ADF_CFG_FW_CAP_ECEDMONT | ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "cy;inline", + ADF_CFG_CY_RINGS, + ADF_CFG_CAP_CY, + ADF_CFG_FW_CAP_ECEDMONT | ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "dc;inline", ADF_CFG_DC_RINGS, ADF_CFG_CAP_DC, 0 }, + { "inline;dc", ADF_CFG_DC_RINGS, ADF_CFG_CAP_DC, 0 }, + { "cy;dc;inline", + ADF_CFG_CY_DC_RINGS, + ADF_CFG_CAP_CY | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_ECEDMONT | ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "cy;inline;dc", + ADF_CFG_CY_DC_RINGS, + ADF_CFG_CAP_CY | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_ECEDMONT | ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "dc;inline;cy", + ADF_CFG_CY_DC_RINGS, + ADF_CFG_CAP_CY | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_ECEDMONT | ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "dc;cy;inline", + ADF_CFG_CY_DC_RINGS, + ADF_CFG_CAP_CY | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_ECEDMONT | ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "inline;cy;dc", + ADF_CFG_CY_DC_RINGS, + ADF_CFG_CAP_CY | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_ECEDMONT | ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "inline;dc;cy", + ADF_CFG_CY_DC_RINGS, + ADF_CFG_CAP_CY | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_ECEDMONT | ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + } }, + { ADF_FW_IMAGE_CRYPTO, + { + { "cy", + ADF_CFG_CY_RINGS, + ADF_CFG_CAP_CY, + ADF_CFG_FW_CAP_RL | ADF_CFG_FW_CAP_HKDF | + 
ADF_CFG_FW_CAP_ECEDMONT | + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "sym", + ADF_CFG_SYM_RINGS, + ADF_CFG_CAP_SYM, + ADF_CFG_FW_CAP_RL | ADF_CFG_FW_CAP_HKDF | + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "asym", + ADF_CFG_ASYM_RINGS, + ADF_CFG_CAP_ASYM, + ADF_CFG_FW_CAP_RL | ADF_CFG_FW_CAP_ECEDMONT }, + } }, + { ADF_FW_IMAGE_COMPRESSION, + { + { "dc", ADF_CFG_DC_RINGS, ADF_CFG_CAP_DC, 0 }, + } }, + { ADF_FW_IMAGE_CUSTOM1, + { + { "cy", + ADF_CFG_CY_RINGS, + ADF_CFG_CAP_CY, + ADF_CFG_FW_CAP_RL | ADF_CFG_FW_CAP_HKDF | + ADF_CFG_FW_CAP_ECEDMONT | + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "dc", ADF_CFG_DC_RINGS, ADF_CFG_CAP_DC, 0 }, + { "sym", + ADF_CFG_SYM_RINGS, + ADF_CFG_CAP_SYM, + ADF_CFG_FW_CAP_RL | ADF_CFG_FW_CAP_HKDF | + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "asym", + ADF_CFG_ASYM_RINGS, + ADF_CFG_CAP_ASYM, + ADF_CFG_FW_CAP_RL | ADF_CFG_FW_CAP_ECEDMONT }, + { "cy;dc", + ADF_CFG_CY_DC_RINGS, + ADF_CFG_CAP_CY | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_RL | ADF_CFG_FW_CAP_HKDF | + ADF_CFG_FW_CAP_ECEDMONT | + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "dc;cy", + ADF_CFG_CY_DC_RINGS, + ADF_CFG_CAP_CY | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_RL | ADF_CFG_FW_CAP_HKDF | + ADF_CFG_FW_CAP_ECEDMONT | + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "asym;dc", + ADF_CFG_ASYM_DC_RINGS, + ADF_CFG_CAP_ASYM | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_RL | ADF_CFG_FW_CAP_ECEDMONT }, + { "dc;asym", + ADF_CFG_ASYM_DC_RINGS, + ADF_CFG_CAP_ASYM | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_RL | ADF_CFG_FW_CAP_ECEDMONT }, + { "sym;dc", + ADF_CFG_SYM_DC_RINGS, + ADF_CFG_CAP_SYM | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_RL | ADF_CFG_FW_CAP_HKDF | + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + { "dc;sym", + ADF_CFG_SYM_DC_RINGS, + ADF_CFG_CAP_SYM | ADF_CFG_CAP_DC, + ADF_CFG_FW_CAP_RL | ADF_CFG_FW_CAP_HKDF | + ADF_CFG_FW_CAP_EXT_ALGCHAIN }, + } } }; + +int +adf_cfg_get_ring_pairs(struct adf_cfg_device *device, + struct adf_cfg_instance *inst, + const char *process_name, + struct adf_accel_dev *accel_dev) +{ + int i = 0; + int ret = EFAULT; + struct adf_cfg_instance *free_inst = 
NULL; + struct adf_cfg_bundle *first_free_bundle = NULL; + enum adf_cfg_bundle_type free_bundle_type; + int first_user_bundle = 0; + + /* Section of user process with poll mode */ + if (strcmp(ADF_KERNEL_SEC, process_name) && + strcmp(ADF_KERNEL_SAL_SEC, process_name) && + inst->polling_mode == ADF_CFG_RESP_POLL) { + first_user_bundle = device->max_kernel_bundle_nr + 1; + for (i = first_user_bundle; i < device->bundle_num; i++) { + free_inst = adf_cfg_get_free_instance( + device, device->bundles[i], inst, process_name); + + if (!free_inst) + continue; + + ret = adf_cfg_get_ring_pairs_from_bundle( + device->bundles[i], inst, process_name, free_inst); + return ret; + } + } else { + /* Section of in-tree, or kernel API or user process + * with epoll mode + */ + if (!strcmp(ADF_KERNEL_SEC, process_name) || + !strcmp(ADF_KERNEL_SAL_SEC, process_name)) + free_bundle_type = KERNEL; + else + free_bundle_type = USER; + + for (i = 0; i < device->bundle_num; i++) { + /* Since both in-tree and kernel API's bundle type + * are kernel, use cpumask_subset to check if the + * ring's affinity mask is a subset of a bundle's + * one. 
+ */ + if (free_bundle_type == device->bundles[i]->type && + CPU_SUBSET(&device->bundles[i]->affinity_mask, + &inst->affinity_mask)) { + free_inst = adf_cfg_get_free_instance( + device, + device->bundles[i], + inst, + process_name); + + if (!free_inst) + continue; + ret = adf_cfg_get_ring_pairs_from_bundle( + device->bundles[i], + inst, + process_name, + free_inst); + + return ret; + + } else if (!first_free_bundle && + adf_cfg_is_free(device->bundles[i])) { + first_free_bundle = device->bundles[i]; + } + } + + if (first_free_bundle) { + free_inst = adf_cfg_get_free_instance(device, + first_free_bundle, + inst, + process_name); + + if (!free_inst) + return ret; + + ret = adf_cfg_get_ring_pairs_from_bundle( + first_free_bundle, inst, process_name, free_inst); + + if (free_bundle_type == KERNEL) { + device->max_kernel_bundle_nr = + first_free_bundle->number; + } + return ret; + } + } + pr_err("Don't have enough rings for instance %s in process %s\n", + inst->name, + process_name); + + return ret; +} + +int +adf_cfg_get_services_enabled(struct adf_accel_dev *accel_dev, + u16 *ring_to_svc_map) +{ + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + u32 i = 0; + struct adf_cfg_enabled_services *svcs = NULL; + enum adf_cfg_fw_image_type fw_image_type = ADF_FW_IMAGE_DEFAULT; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + *ring_to_svc_map = 0; + + /* Get the services enabled by user */ + snprintf(key, sizeof(key), ADF_SERVICES_ENABLED); + if (adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) + return EFAULT; + + if (hw_data->get_fw_image_type) { + if (hw_data->get_fw_image_type(accel_dev, &fw_image_type)) + return EFAULT; + } + + for (i = 0; i < ADF_CFG_SVCS_MAX; i++) { + svcs = &adf_profiles[fw_image_type].supported_svcs[i]; + + if (!strncmp(svcs->svcs_enabled, + "", + ADF_CFG_MAX_VAL_LEN_IN_BYTES)) + break; + + if (!strncmp(val, + svcs->svcs_enabled, + ADF_CFG_MAX_VAL_LEN_IN_BYTES)) { + *ring_to_svc_map = 
svcs->rng_to_svc_msk; + return 0; + } + } + + device_printf(GET_DEV(accel_dev), + "Invalid ServicesEnabled %s for ServicesProfile: %d\n", + val, + fw_image_type); + + return EFAULT; +} + +void +adf_cfg_set_asym_rings_mask(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + + hw_data->asym_rings_mask = 0; +} + +void +adf_cfg_gen_dispatch_arbiter(struct adf_accel_dev *accel_dev, + const u32 *thrd_to_arb_map, + u32 *thrd_to_arb_map_gen, + u32 total_engines) +{ + int engine, thread, service, bits; + u32 thread_ability, ability_map, service_mask, service_type; + u16 ena_srv_mask = GET_HW_DATA(accel_dev)->ring_to_svc_map; + + for (engine = 0; engine < total_engines; engine++) { + if (!(GET_HW_DATA(accel_dev)->ae_mask & (1 << engine))) + continue; + bits = 0; + /* ability_map is used to indicate each thread's ability */ + ability_map = thrd_to_arb_map[engine]; + thrd_to_arb_map_gen[engine] = 0; + /* parse each thread on the engine */ + for (thread = 0; thread < ADF_NUM_THREADS_PER_AE; thread++) { + /* get the ability of this thread */ + thread_ability = ability_map & ADF_THRD_ABILITY_MASK; + ability_map >>= ADF_THRD_ABILITY_BIT_LEN; + /* parse each service */ + for (service = 0; service < ADF_CFG_MAX_SERVICES; + service++) { + service_type = + GET_SRV_TYPE(ena_srv_mask, service); + switch (service_type) { + case CRYPTO: + service_mask = ADF_CFG_ASYM_SRV_MASK; + if (thread_ability & service_mask) + thrd_to_arb_map_gen[engine] |= + (1 << bits); + bits++; + service++; + service_mask = ADF_CFG_SYM_SRV_MASK; + break; + case COMP: + service_mask = ADF_CFG_DC_SRV_MASK; + break; + case SYM: + service_mask = ADF_CFG_SYM_SRV_MASK; + break; + case ASYM: + service_mask = ADF_CFG_ASYM_SRV_MASK; + break; + default: + service_mask = ADF_CFG_UNKNOWN_SRV_MASK; + } + if (thread_ability & service_mask) + thrd_to_arb_map_gen[engine] |= + (1 << bits); + bits++; + } + } + } +} + +int +adf_cfg_get_fw_image_type(struct adf_accel_dev *accel_dev, + enum
adf_cfg_fw_image_type *fw_image_type) +{ + *fw_image_type = ADF_FW_IMAGE_CUSTOM1; + + return 0; +} + +static int +adf_cfg_get_caps_enabled(struct adf_accel_dev *accel_dev, + u32 *enabled_svc_caps, + u32 *enabled_fw_caps) +{ + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + u8 i = 0; + struct adf_cfg_enabled_services *svcs = NULL; + enum adf_cfg_fw_image_type fw_image_type = ADF_FW_IMAGE_DEFAULT; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + + *enabled_svc_caps = 0; + *enabled_fw_caps = 0; + + /* Get the services enabled by user */ + snprintf(key, sizeof(key), ADF_SERVICES_ENABLED); + if (adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) + return EFAULT; + + /* + * Only the PF driver has the hook for get_fw_image_type as the VF's + * enabled service is from PFVF communication. The fw_image_type for + * the VF is set to DEFAULT since this type contains all kinds of + * enabled service. + */ + if (hw_data->get_fw_image_type) { + if (hw_data->get_fw_image_type(accel_dev, &fw_image_type)) + return EFAULT; + } + + for (i = 0; i < ADF_CFG_SVCS_MAX; i++) { + svcs = &adf_profiles[fw_image_type].supported_svcs[i]; + + if (!strncmp(svcs->svcs_enabled, + "", + ADF_CFG_MAX_VAL_LEN_IN_BYTES)) + break; + + if (!strncmp(val, + svcs->svcs_enabled, + ADF_CFG_MAX_VAL_LEN_IN_BYTES)) { + *enabled_svc_caps = svcs->enabled_svc_cap; + *enabled_fw_caps = svcs->enabled_fw_cap; + return 0; + } + } + device_printf(GET_DEV(accel_dev), + "Invalid ServicesEnabled %s for ServicesProfile: %d\n", + val, + fw_image_type); + + return EFAULT; +} + +static void +adf_cfg_check_deprecated_params(struct adf_accel_dev *accel_dev) +{ + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + u8 i = 0; + + for (i = 0; i < ADF_CFG_DEPRE_PARAMS_NUM; i++) { + /* give a warning if the deprecated params are set by user */ + snprintf(key, sizeof(key), "%s", adf_cfg_deprecated_params[i]); + if (!adf_cfg_get_param_value( + 
accel_dev, ADF_GENERAL_SEC, key, val)) { + device_printf(GET_DEV(accel_dev), + "Parameter '%s' has been deprecated\n", + key); + } + } +} + +static int +adf_cfg_check_enabled_services(struct adf_accel_dev *accel_dev, + u32 enabled_svc_caps) +{ + u32 hw_caps = GET_HW_DATA(accel_dev)->accel_capabilities_mask; + + if ((enabled_svc_caps & hw_caps) == enabled_svc_caps) + return 0; + + device_printf(GET_DEV(accel_dev), "Unsupported device configuration\n"); + + return EFAULT; +} + +static int +adf_cfg_update_pf_accel_cap_mask(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 enabled_svc_caps = 0; + u32 enabled_fw_caps = 0; + + if (hw_data->get_accel_cap) { + hw_data->accel_capabilities_mask = + hw_data->get_accel_cap(accel_dev); + } + + if (adf_cfg_get_caps_enabled(accel_dev, + &enabled_svc_caps, + &enabled_fw_caps)) + return EFAULT; + + if (adf_cfg_check_enabled_services(accel_dev, enabled_svc_caps)) + return EFAULT; + + if (!(enabled_svc_caps & ADF_CFG_CAP_ASYM)) + hw_data->accel_capabilities_mask &= ~ADF_CFG_CAP_ASYM; + if (!(enabled_svc_caps & ADF_CFG_CAP_SYM)) + hw_data->accel_capabilities_mask &= ~ADF_CFG_CAP_SYM; + if (!(enabled_svc_caps & ADF_CFG_CAP_DC)) + hw_data->accel_capabilities_mask &= ~ADF_CFG_CAP_DC; + + /* Enable FW defined capabilities*/ + if (enabled_fw_caps) + hw_data->accel_capabilities_mask |= enabled_fw_caps; + + return 0; +} + +static int +adf_cfg_update_vf_accel_cap_mask(struct adf_accel_dev *accel_dev) +{ + u32 enabled_svc_caps = 0; + u32 enabled_fw_caps = 0; + + if (adf_cfg_get_caps_enabled(accel_dev, + &enabled_svc_caps, + &enabled_fw_caps)) + return EFAULT; + + if (adf_cfg_check_enabled_services(accel_dev, enabled_svc_caps)) + return EFAULT; + + return 0; +} + +int +adf_cfg_device_init(struct adf_cfg_device *device, + struct adf_accel_dev *accel_dev) +{ + int i = 0; + /* max_inst indicates the max instance number one bank can hold */ + int max_inst = accel_dev->hw_device->tx_rx_gap; + int 
ret = ENOMEM; + struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev); + + adf_cfg_check_deprecated_params(accel_dev); + + device->bundle_num = 0; + device->bundles = (struct adf_cfg_bundle **)malloc( + sizeof(struct adf_cfg_bundle *) * accel_dev->hw_device->num_banks, + M_QAT, + M_WAITOK | M_ZERO); + + device->bundle_num = accel_dev->hw_device->num_banks; + + device->instances = (struct adf_cfg_instance **)malloc( + sizeof(struct adf_cfg_instance *) * device->bundle_num * max_inst, + M_QAT, + M_WAITOK | M_ZERO); + + device->instance_index = 0; + + device->max_kernel_bundle_nr = -1; + + ret = EFAULT; + + /* Update the acceleration capability mask based on User capability */ + if (!accel_dev->is_vf) { + if (adf_cfg_update_pf_accel_cap_mask(accel_dev)) + goto failed; + } else { + if (adf_cfg_update_vf_accel_cap_mask(accel_dev)) + goto failed; + } + + /* Based on the svc configured, get ring_to_svc_map */ + if (hw_data->get_ring_to_svc_map) { + if (hw_data->get_ring_to_svc_map(accel_dev, + &hw_data->ring_to_svc_map)) + goto failed; + } + + ret = ENOMEM; + /* + * 1) get the config information to generate the ring to service + * mapping table + * 2) init each bundle of this device + */ + for (i = 0; i < device->bundle_num; i++) { + device->bundles[i] = malloc(sizeof(struct adf_cfg_bundle), + M_QAT, + M_WAITOK | M_ZERO); + + device->bundles[i]->max_section = max_inst; + adf_cfg_bundle_init(device->bundles[i], device, i, accel_dev); + } + + return 0; + +failed: + for (i = 0; i < device->bundle_num; i++) { + if (device->bundles[i]) + adf_cfg_bundle_clear(device->bundles[i], accel_dev); + } + + for (i = 0; i < (device->bundle_num * max_inst); i++) { + if (device->instances && device->instances[i]) + free(device->instances[i], M_QAT); + } + + free(device->instances, M_QAT); + device->instances = NULL; + + device_printf(GET_DEV(accel_dev), "Failed to do device init\n"); + return ret; +} + +void +adf_cfg_device_clear(struct adf_cfg_device *device, + struct adf_accel_dev 
*accel_dev) +{ + int i = 0; + + for (i = 0; i < device->bundle_num; i++) { + if (device->bundles && device->bundles[i]) { + adf_cfg_bundle_clear(device->bundles[i], accel_dev); + free(device->bundles[i], M_QAT); + device->bundles[i] = NULL; + } + } + + free(device->bundles, M_QAT); + device->bundles = NULL; + + for (i = 0; i < device->instance_index; i++) { + if (device->instances && device->instances[i]) { + free(device->instances[i], M_QAT); + device->instances[i] = NULL; + } + } + + free(device->instances, M_QAT); + device->instances = NULL; +} + +static int +adf_cfg_static_conf(struct adf_accel_dev *accel_dev) +{ + int ret = 0; + unsigned long val = 0; + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char value[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + int cpus; + int instances = 0; + int cy_poll_instances; + int cy_irq_instances; + int dc_instances; + int i = 0; + + cpus = num_online_cpus(); + instances = + GET_MAX_BANKS(accel_dev) > cpus ? GET_MAX_BANKS(accel_dev) : cpus; + if (!instances) + return EFAULT; + + if (instances >= ADF_CFG_STATIC_CONF_INST_NUM_DC) + dc_instances = ADF_CFG_STATIC_CONF_INST_NUM_DC; + else + return EFAULT; + instances -= dc_instances; + + if (instances >= ADF_CFG_STATIC_CONF_INST_NUM_CY_POLL) + cy_poll_instances = ADF_CFG_STATIC_CONF_INST_NUM_CY_POLL; + else + return EFAULT; + instances -= cy_poll_instances; + + if (instances >= ADF_CFG_STATIC_CONF_INST_NUM_CY_IRQ) + cy_irq_instances = ADF_CFG_STATIC_CONF_INST_NUM_CY_IRQ; + else + return EFAULT; + instances -= cy_irq_instances; + + ret |= adf_cfg_section_add(accel_dev, ADF_GENERAL_SEC); + + ret |= adf_cfg_section_add(accel_dev, ADF_KERNEL_SAL_SEC); + + val = ADF_CFG_STATIC_CONF_VER; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, ADF_CONFIG_VERSION); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_AUTO_RESET; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, ADF_AUTO_RESET_ON_ERROR); + ret |= adf_cfg_add_key_value_param( + 
accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + if (accel_dev->hw_device->get_num_accel_units) { + int cy_au = 0; + int dc_au = 0; + int num_au = accel_dev->hw_device->get_num_accel_units( + accel_dev->hw_device); + + if (num_au > ADF_CFG_STATIC_CONF_NUM_DC_ACCEL_UNITS) { + cy_au = num_au - ADF_CFG_STATIC_CONF_NUM_DC_ACCEL_UNITS; + dc_au = ADF_CFG_STATIC_CONF_NUM_DC_ACCEL_UNITS; + } else if (num_au == ADF_CFG_STATIC_CONF_NUM_DC_ACCEL_UNITS) { + cy_au = 1; + dc_au = 1; + } else { + return EFAULT; + } + + val = cy_au; + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_NUM_CY_ACCEL_UNITS); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = dc_au; + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_NUM_DC_ACCEL_UNITS); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_NUM_INLINE_ACCEL_UNITS; + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_NUM_INLINE_ACCEL_UNITS); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + } + + val = ADF_CFG_STATIC_CONF_CY_ASYM_RING_SIZE; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, ADF_CY ADF_RING_ASYM_SIZE); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_CY_SYM_RING_SIZE; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, ADF_CY ADF_RING_SYM_SIZE); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_DC_INTER_BUF_SIZE; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, ADF_INTER_BUF_SIZE); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, ADF_SERVICES_ENABLED); + if ((cy_poll_instances + cy_irq_instances) == 0 && dc_instances > 0) { + snprintf(value, ADF_CFG_MAX_VAL_LEN_IN_BYTES, ADF_CFG_DC); + } else if 
(((cy_poll_instances + cy_irq_instances)) > 0 && + dc_instances == 0) { + snprintf(value, ADF_CFG_MAX_VAL_LEN_IN_BYTES, ADF_CFG_SYM); + } else { + snprintf(value, + ADF_CFG_MAX_VAL_LEN_IN_BYTES, + "%s;%s", + ADF_CFG_SYM, + ADF_CFG_DC); + } + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)value, ADF_STR); + + val = ADF_CFG_STATIC_CONF_SAL_STATS_CFG_DC; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, SAL_STATS_CFG_DC); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_SAL_STATS_CFG_DH; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, SAL_STATS_CFG_DH); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_SAL_STATS_CFG_DRBG; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, SAL_STATS_CFG_DRBG); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_SAL_STATS_CFG_DSA; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, SAL_STATS_CFG_DSA); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_SAL_STATS_CFG_ECC; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, SAL_STATS_CFG_ECC); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_SAL_STATS_CFG_ENABLED; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, SAL_STATS_CFG_ENABLED); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_SAL_STATS_CFG_KEYGEN; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, SAL_STATS_CFG_KEYGEN); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_SAL_STATS_CFG_LN; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, SAL_STATS_CFG_LN); + ret |= adf_cfg_add_key_value_param( + accel_dev, 
ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_SAL_STATS_CFG_PRIME; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, SAL_STATS_CFG_PRIME); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_SAL_STATS_CFG_RSA; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, SAL_STATS_CFG_RSA); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_SAL_STATS_CFG_SYM; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, SAL_STATS_CFG_SYM); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC); + + val = (cy_poll_instances + cy_irq_instances); + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, ADF_NUM_CY); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_KERNEL_SAL_SEC, key, (void *)&val, ADF_DEC); + + val = dc_instances; + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, ADF_NUM_DC); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_KERNEL_SAL_SEC, key, (void *)&val, ADF_DEC); + + for (i = 0; i < (cy_irq_instances); i++) { + val = i; + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY "%d" ADF_ETRMGR_CORE_AFFINITY, + i); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_KERNEL_SAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_IRQ; + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY "%d" ADF_POLL_MODE, + i); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_KERNEL_SAL_SEC, key, (void *)&val, ADF_DEC); + + snprintf(value, ADF_CFG_MAX_VAL_LEN_IN_BYTES, ADF_CY "%d", i); + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_NAME_FORMAT, + i); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_KERNEL_SAL_SEC, key, (void *)value, ADF_STR); + } + + for (i = cy_irq_instances; i < (cy_poll_instances + cy_irq_instances); + i++) { + val = i; + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY "%d" ADF_ETRMGR_CORE_AFFINITY, + i); + ret |= 
adf_cfg_add_key_value_param( + accel_dev, ADF_KERNEL_SAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_POLL; + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY "%d" ADF_POLL_MODE, + i); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_KERNEL_SAL_SEC, key, (void *)&val, ADF_DEC); + + snprintf(value, ADF_CFG_MAX_VAL_LEN_IN_BYTES, ADF_CY "%d", i); + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_NAME_FORMAT, + i); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_KERNEL_SAL_SEC, key, (void *)value, ADF_STR); + } + + for (i = 0; i < dc_instances; i++) { + val = i; + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_DC "%d" ADF_ETRMGR_CORE_AFFINITY, + i); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_KERNEL_SAL_SEC, key, (void *)&val, ADF_DEC); + + val = ADF_CFG_STATIC_CONF_POLL; + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_DC "%d" ADF_POLL_MODE, + i); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_KERNEL_SAL_SEC, key, (void *)&val, ADF_DEC); + + snprintf(value, ADF_CFG_MAX_VAL_LEN_IN_BYTES, ADF_DC "%d", i); + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_DC_NAME_FORMAT, + i); + ret |= adf_cfg_add_key_value_param( + accel_dev, ADF_KERNEL_SAL_SEC, key, (void *)value, ADF_STR); + } + + if (ret) + ret = EFAULT; + return ret; +} + +int +adf_config_device(struct adf_accel_dev *accel_dev) +{ + struct adf_cfg_device_data *cfg = NULL; + struct adf_cfg_device *cfg_device = NULL; + struct adf_cfg_section *sec; + struct list_head *list; + int ret = ENOMEM; + + if (!accel_dev) + return ret; + + ret = adf_cfg_static_conf(accel_dev); + if (ret) + goto failed; + + cfg = accel_dev->cfg; + cfg->dev = NULL; + cfg_device = (struct adf_cfg_device *)malloc(sizeof(*cfg_device), + M_QAT, + M_WAITOK | M_ZERO); + + ret = EFAULT; + + if (adf_cfg_device_init(cfg_device, accel_dev)) + goto failed; + + cfg->dev = cfg_device; + + /* GENERAL and KERNEL section must be processed before others */ + list_for_each(list, 
&cfg->sec_list) + { + sec = list_entry(list, struct adf_cfg_section, list); + if (!strcmp(sec->name, ADF_GENERAL_SEC)) { + ret = adf_cfg_process_section(accel_dev, + sec->name, + accel_dev->accel_id); + if (ret) + goto failed; + sec->processed = true; + break; + } + } + + list_for_each(list, &cfg->sec_list) + { + sec = list_entry(list, struct adf_cfg_section, list); + if (!strcmp(sec->name, ADF_KERNEL_SEC)) { + ret = adf_cfg_process_section(accel_dev, + sec->name, + accel_dev->accel_id); + if (ret) + goto failed; + sec->processed = true; + break; + } + } + + list_for_each(list, &cfg->sec_list) + { + sec = list_entry(list, struct adf_cfg_section, list); + if (!strcmp(sec->name, ADF_KERNEL_SAL_SEC)) { + ret = adf_cfg_process_section(accel_dev, + sec->name, + accel_dev->accel_id); + if (ret) + goto failed; + sec->processed = true; + break; + } + } + + list_for_each(list, &cfg->sec_list) + { + sec = list_entry(list, struct adf_cfg_section, list); + /* avoid reprocessing one section */ + if (!sec->processed && !sec->is_derived) { + ret = adf_cfg_process_section(accel_dev, + sec->name, + accel_dev->accel_id); + if (ret) + goto failed; + sec->processed = true; + } + } + + /* newly added accel section */ + ret = adf_cfg_process_section(accel_dev, + ADF_ACCEL_SEC, + accel_dev->accel_id); + if (ret) + goto failed; + + /* + * put item-remove task after item-process + * because during process we may fetch values from those items + */ + list_for_each(list, &cfg->sec_list) + { + sec = list_entry(list, struct adf_cfg_section, list); + if (!sec->is_derived) { + ret = adf_cfg_cleanup_section(accel_dev, + sec->name, + accel_dev->accel_id); + if (ret) + goto failed; + } + } + + ret = 0; + set_bit(ADF_STATUS_CONFIGURED, &accel_dev->status); +failed: + if (ret) { + if (cfg_device) { + adf_cfg_device_clear(cfg_device, accel_dev); + free(cfg_device, M_QAT); + cfg->dev = NULL; + } + adf_cfg_del_all(accel_dev); + device_printf(GET_DEV(accel_dev), "Failed to config device\n"); + } + + 
return ret; +} Index: sys/dev/qat/qat_common/adf_cfg_instance.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_cfg_instance.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_INSTANCE_H_ +#define ADF_CFG_INSTANCE_H_ + +#include "adf_accel_devices.h" +#include "adf_cfg_common.h" +#include "adf_cfg_bundle.h" + +void crypto_instance_init(struct adf_cfg_instance *instance, + struct adf_cfg_bundle *bundle); +void dc_instance_init(struct adf_cfg_instance *instance, + struct adf_cfg_bundle *bundle); +void asym_instance_init(struct adf_cfg_instance *instance, + struct adf_cfg_bundle *bundle); +void sym_instance_init(struct adf_cfg_instance *instance, + struct adf_cfg_bundle *bundle); +#endif Index: sys/dev/qat/qat_common/adf_cfg_instance.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_cfg_instance.c @@ -0,0 +1,156 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_cfg_instance.h" + +void +crypto_instance_init(struct adf_cfg_instance *instance, + struct adf_cfg_bundle *bundle) +{ + int i = 0; + + instance->stype = CRYPTO; + for (i = 0; i < bundle->num_of_rings / 2; i++) { + if ((bundle->in_use >> bundle->rings[i]->number) & 0x1) + continue; + + if (bundle->rings[i]->serv_type == ADF_ACCEL_SERV_ASYM && + bundle->rings[i]->mode == TX) { + instance->asym_tx = bundle->rings[i]->number; + bundle->in_use |= 1 << bundle->rings[i]->number; + break; + } + } + + for (i = 0; i < bundle->num_of_rings / 2; i++) { + if ((bundle->in_use >> bundle->rings[i]->number) & 0x1) + continue; + + if (bundle->rings[i]->serv_type == ADF_ACCEL_SERV_SYM && + bundle->rings[i]->mode == TX) { + instance->sym_tx = bundle->rings[i]->number; + bundle->in_use |= 1 << bundle->rings[i]->number; + break; + } + } 
+ + for (i = bundle->num_of_rings / 2; i < bundle->num_of_rings; i++) { + if ((bundle->in_use >> bundle->rings[i]->number) & 0x1) + continue; + + if (bundle->rings[i]->serv_type == ADF_ACCEL_SERV_ASYM && + bundle->rings[i]->mode == RX) { + instance->asym_rx = bundle->rings[i]->number; + bundle->in_use |= 1 << bundle->rings[i]->number; + break; + } + } + + for (i = bundle->num_of_rings / 2; i < bundle->num_of_rings; i++) { + if ((bundle->in_use >> bundle->rings[i]->number) & 0x1) + continue; + + if (bundle->rings[i]->serv_type == ADF_ACCEL_SERV_SYM && + bundle->rings[i]->mode == RX) { + instance->sym_rx = bundle->rings[i]->number; + bundle->in_use |= 1 << bundle->rings[i]->number; + break; + } + } +} + +void +dc_instance_init(struct adf_cfg_instance *instance, + struct adf_cfg_bundle *bundle) +{ + int i = 0; + + instance->stype = COMP; + for (i = 0; i < bundle->num_of_rings / 2; i++) { + if ((bundle->in_use >> bundle->rings[i]->number) & 0x1) + continue; + + if (bundle->rings[i]->serv_type == ADF_ACCEL_SERV_DC && + bundle->rings[i]->mode == TX) { + instance->dc_tx = bundle->rings[i]->number; + bundle->in_use |= 1 << bundle->rings[i]->number; + break; + } + } + + for (i = bundle->num_of_rings / 2; i < bundle->num_of_rings; i++) { + if ((bundle->in_use >> bundle->rings[i]->number) & 0x1) + continue; + + if (bundle->rings[i]->serv_type == ADF_ACCEL_SERV_DC && + bundle->rings[i]->mode == RX) { + instance->dc_rx = bundle->rings[i]->number; + bundle->in_use |= 1 << bundle->rings[i]->number; + break; + } + } +} + +void +asym_instance_init(struct adf_cfg_instance *instance, + struct adf_cfg_bundle *bundle) +{ + int i = 0; + + instance->stype = ASYM; + for (i = 0; i < bundle->num_of_rings / 2; i++) { + if ((bundle->in_use >> bundle->rings[i]->number) & 0x1) + continue; + + if (bundle->rings[i]->serv_type == ADF_ACCEL_SERV_ASYM && + bundle->rings[i]->mode == TX) { + instance->asym_tx = bundle->rings[i]->number; + bundle->in_use |= 1 << bundle->rings[i]->number; + break; + } + 
} + + for (i = bundle->num_of_rings / 2; i < bundle->num_of_rings; i++) { + if ((bundle->in_use >> bundle->rings[i]->number) & 0x1) + continue; + + if (bundle->rings[i]->serv_type == ADF_ACCEL_SERV_ASYM && + bundle->rings[i]->mode == RX) { + instance->asym_rx = bundle->rings[i]->number; + bundle->in_use |= 1 << bundle->rings[i]->number; + break; + } + } +} + +void +sym_instance_init(struct adf_cfg_instance *instance, + struct adf_cfg_bundle *bundle) +{ + int i = 0; + + instance->stype = SYM; + for (i = 0; i < bundle->num_of_rings / 2; i++) { + if ((bundle->in_use >> bundle->rings[i]->number) & 0x1) + continue; + + if (bundle->rings[i]->serv_type == ADF_ACCEL_SERV_SYM && + bundle->rings[i]->mode == TX) { + instance->sym_tx = bundle->rings[i]->number; + bundle->in_use |= 1 << bundle->rings[i]->number; + break; + } + } + + for (i = 0 + bundle->num_of_rings / 2; i < bundle->num_of_rings; i++) { + if ((bundle->in_use >> bundle->rings[i]->number) & 0x1) + continue; + + if (bundle->rings[i]->serv_type == ADF_ACCEL_SERV_SYM && + bundle->rings[i]->mode == RX) { + instance->sym_rx = bundle->rings[i]->number; + bundle->in_use |= 1 << bundle->rings[i]->number; + break; + } + } +} Index: sys/dev/qat/qat_common/adf_cfg_section.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_cfg_section.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_SECTION_H_ +#define ADF_CFG_SECTION_H_ + +#include +#include "adf_accel_devices.h" +#include "adf_cfg_common.h" +#include "adf_cfg_strings.h" + +int adf_cfg_process_section(struct adf_accel_dev *accel_dev, + const char *section_name, + int dev); + +int adf_cfg_cleanup_section(struct adf_accel_dev *accel_dev, + const char *section_name, + int dev); +#endif Index: sys/dev/qat/qat_common/adf_cfg_section.c =================================================================== --- /dev/null +++ 
sys/dev/qat/qat_common/adf_cfg_section.c @@ -0,0 +1,1144 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_cfg_instance.h" +#include "adf_cfg_device.h" +#include "adf_cfg_section.h" + +static bool +adf_cfg_is_svc_enabled(struct adf_accel_dev *accel_dev, const u8 svc) +{ + int ring_pair_index = 0; + u8 serv_type = NA; + struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev); + + for (ring_pair_index = 0; ring_pair_index < ADF_CFG_NUM_SERVICES; + ring_pair_index++) { + serv_type = + GET_SRV_TYPE(hw_data->ring_to_svc_map, ring_pair_index); + if (serv_type == svc) + return true; + } + return false; +} + +static int +adf_cfg_set_core_number_for_instance(struct adf_accel_dev *accel_dev, + const char *sec_name, + const char *inst_name, + int process_num, + unsigned long *core_number) +{ + char *core_val = NULL; + char *pos = NULL; + char **tokens = NULL; + int token_index = 0; + int core_arr_index = 0; + int i = 0; + int ret = EFAULT; + unsigned long *core_num_arr = NULL; + unsigned long core_num; + unsigned long start, end; + + /* do memory allocation */ + core_val = + malloc(ADF_CFG_MAX_VAL_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + tokens = malloc(sizeof(char *) * ADF_CFG_MAX_TOKENS, + M_QAT, + M_WAITOK | M_ZERO); + + for (i = 0; i < ADF_CFG_MAX_TOKENS; i++) { + tokens[i] = + malloc(ADF_CFG_MAX_TOKEN_LEN, M_QAT, M_WAITOK | M_ZERO); + } + + core_num_arr = malloc(sizeof(unsigned long) * ADF_CFG_MAX_CORE_NUM, + M_QAT, + M_WAITOK | M_ZERO); + + /* parse the core_val */ + ret = EFAULT; + if (adf_cfg_get_param_value(accel_dev, sec_name, inst_name, core_val)) + goto failed; + + pos = strchr(core_val, ','); + while (pos) { + pos[0] = '\0'; + strlcpy(tokens[token_index++], core_val, ADF_CFG_MAX_TOKEN_LEN); + strlcpy(core_val, pos + 1, ADF_CFG_MAX_VAL_LEN_IN_BYTES); + pos = strchr(core_val, ','); + if (!pos) + strlcpy(tokens[token_index++], + core_val, + ADF_CFG_MAX_VAL_LEN_IN_BYTES); + } + + /* 
in case there is only N-M */ + if (token_index == 0) + strlcpy(tokens[token_index++], + core_val, + ADF_CFG_MAX_VAL_LEN_IN_BYTES); + + /* parse the tokens such as N-M */ + for (i = 0; i < token_index; i++) { + pos = strchr(tokens[i], '-'); + if (pos) { + pos[0] = '\0'; + ret = compat_strtoul(tokens[i], 10, &start); + if (ret) + goto failed; + ret = compat_strtoul(pos + 1, 10, &end); + if (ret) + goto failed; + if (start > end) { + ret = EFAULT; + goto failed; + } + for (core_num = start; core_num < end + 1; core_num++) + core_num_arr[core_arr_index++] = core_num; + } else { + ret = compat_strtoul(tokens[i], 10, &core_num); + if (ret) + goto failed; + core_num_arr[core_arr_index++] = core_num; + } + } + + if (core_arr_index == 0) { + ret = compat_strtoul(core_val, 10, &core_num); + if (ret) + goto failed; + else + core_num_arr[core_arr_index++] = core_num; + } + + *core_number = core_num_arr[process_num % core_arr_index]; + ret = 0; +failed: + free(core_val, M_QAT); + if (tokens) { + for (i = 0; i < ADF_CFG_MAX_TOKENS; i++) + free(tokens[i], M_QAT); + free(tokens, M_QAT); + } + free(core_num_arr, M_QAT); + + if (ret) + device_printf(GET_DEV(accel_dev), + "Get core number failed with error %d\n", + ret); + return ret; +} + +static int +adf_cfg_set_value(struct adf_accel_dev *accel_dev, + const char *sec, + const char *key, + unsigned long *value) +{ + char *val = NULL; + int ret = EFAULT; + + val = malloc(ADF_CFG_MAX_VAL_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + if (adf_cfg_get_param_value(accel_dev, sec, key, val)) + goto out; + + /* as the key type can be either ADF_DEC or ADF_HEX */ + if (compat_strtoul(val, 10, value) && compat_strtoul(val, 16, value)) + goto out; + + ret = 0; +out: + free(val, M_QAT); + return ret; +} + +static void +adf_cfg_add_cy_inst_info(struct adf_accel_dev *accel_dev, + struct adf_cfg_instance *crypto_inst, + const char *derived_sec, + int inst_index) +{ + char *key = NULL; + unsigned long bank_number = 0; + unsigned long ring_number = 
0; + unsigned long asym_req = 0; + unsigned long sym_req = 0; + + key = malloc(ADF_CFG_MAX_KEY_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_BANK_NUM_FORMAT, + inst_index); + bank_number = crypto_inst->bundle; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&bank_number, ADF_DEC); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_ASYM_TX_FORMAT, + inst_index); + ring_number = crypto_inst->asym_tx; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&ring_number, ADF_DEC); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_SYM_TX_FORMAT, + inst_index); + ring_number = crypto_inst->sym_tx; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&ring_number, ADF_DEC); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_ASYM_RX_FORMAT, + inst_index); + ring_number = crypto_inst->asym_rx; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&ring_number, ADF_DEC); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_SYM_RX_FORMAT, + inst_index); + ring_number = crypto_inst->sym_rx; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&ring_number, ADF_DEC); + + strlcpy(key, ADF_CY_RING_ASYM_SIZE, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value(accel_dev, ADF_GENERAL_SEC, key, &asym_req)) + asym_req = ADF_CFG_DEF_CY_RING_ASYM_SIZE; + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_RING_ASYM_SIZE_FORMAT, + inst_index); + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&asym_req, ADF_DEC); + + strlcpy(key, ADF_CY_RING_SYM_SIZE, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value(accel_dev, ADF_GENERAL_SEC, key, &sym_req)) + sym_req = ADF_CFG_DEF_CY_RING_SYM_SIZE; + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_RING_SYM_SIZE_FORMAT, + inst_index); + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&sym_req, ADF_DEC); + + free(key, 
M_QAT); +} + +static void +adf_cfg_add_dc_inst_info(struct adf_accel_dev *accel_dev, + struct adf_cfg_instance *dc_inst, + const char *derived_sec, + int inst_index) +{ + char *key = NULL; + unsigned long bank_number = 0; + unsigned long ring_number = 0; + unsigned long dc_req = 0; + + key = malloc(ADF_CFG_MAX_KEY_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + snprintf(key, ADF_CFG_MAX_STR_LEN, ADF_DC_BANK_NUM_FORMAT, inst_index); + bank_number = dc_inst->bundle; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&bank_number, ADF_DEC); + + snprintf(key, ADF_CFG_MAX_STR_LEN, ADF_DC_TX_FORMAT, inst_index); + ring_number = dc_inst->dc_tx; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&ring_number, ADF_DEC); + + snprintf(key, ADF_CFG_MAX_STR_LEN, ADF_DC_RX_FORMAT, inst_index); + ring_number = dc_inst->dc_rx; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&ring_number, ADF_DEC); + + strlcpy(key, ADF_DC_RING_SIZE, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value(accel_dev, ADF_GENERAL_SEC, key, &dc_req)) + dc_req = ADF_CFG_DEF_DC_RING_SIZE; + + snprintf(key, ADF_CFG_MAX_STR_LEN, ADF_DC_RING_SIZE_FORMAT, inst_index); + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&dc_req, ADF_DEC); + + free(key, M_QAT); +} + +static void +adf_cfg_add_asym_inst_info(struct adf_accel_dev *accel_dev, + struct adf_cfg_instance *asym_inst, + const char *derived_sec, + int inst_index) +{ + char *key = NULL; + unsigned long bank_number = 0; + unsigned long ring_number = 0; + unsigned long asym_req = 0; + + key = malloc(ADF_CFG_MAX_KEY_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_BANK_NUM_FORMAT, + inst_index); + bank_number = asym_inst->bundle; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&bank_number, ADF_DEC); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_ASYM_TX_FORMAT, + inst_index); + ring_number = 
asym_inst->asym_tx; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&ring_number, ADF_DEC); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_ASYM_RX_FORMAT, + inst_index); + ring_number = asym_inst->asym_rx; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&ring_number, ADF_DEC); + + strlcpy(key, ADF_CY_RING_ASYM_SIZE, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value(accel_dev, ADF_GENERAL_SEC, key, &asym_req)) + asym_req = ADF_CFG_DEF_CY_RING_ASYM_SIZE; + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_RING_ASYM_SIZE_FORMAT, + inst_index); + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&asym_req, ADF_DEC); + + free(key, M_QAT); +} + +static void +adf_cfg_add_sym_inst_info(struct adf_accel_dev *accel_dev, + struct adf_cfg_instance *sym_inst, + const char *derived_sec, + int inst_index) +{ + char *key = NULL; + unsigned long bank_number = 0; + unsigned long ring_number = 0; + unsigned long sym_req = 0; + + key = malloc(ADF_CFG_MAX_KEY_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_BANK_NUM_FORMAT, + inst_index); + bank_number = sym_inst->bundle; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&bank_number, ADF_DEC); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_SYM_TX_FORMAT, + inst_index); + ring_number = sym_inst->sym_tx; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&ring_number, ADF_DEC); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_SYM_RX_FORMAT, + inst_index); + ring_number = sym_inst->sym_rx; + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&ring_number, ADF_DEC); + + strlcpy(key, ADF_CY_RING_SYM_SIZE, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value(accel_dev, ADF_GENERAL_SEC, key, &sym_req)) + sym_req = ADF_CFG_DEF_CY_RING_SYM_SIZE; + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_RING_SYM_SIZE_FORMAT, + 
inst_index); + adf_cfg_add_key_value_param( + accel_dev, derived_sec, key, (void *)&sym_req, ADF_DEC); + + free(key, M_QAT); +} + +static int +adf_cfg_section_copy(struct adf_accel_dev *accel_dev, + const char *processed_sec, + const char *derived_sec) +{ + unsigned long val = 0; + struct list_head *list; + struct adf_cfg_section *sec_process = + adf_cfg_sec_find(accel_dev, processed_sec); + if (!sec_process) + return EFAULT; + + list_for_each(list, &sec_process->param_head) + { + struct adf_cfg_key_val *ptr = + list_entry(list, struct adf_cfg_key_val, list); + + /* + * ignore CoreAffinity since it will be generated later, and + * there is no need to keep NumProcesses and LimitDevAccess. + */ + if (strstr(ptr->key, ADF_ETRMGR_CORE_AFFINITY) || + strstr(ptr->key, ADF_NUM_PROCESSES) || + strstr(ptr->key, ADF_LIMIT_DEV_ACCESS)) + continue; + + if (ptr->type == ADF_DEC) { + if (!compat_strtoul(ptr->val, 10, &val)) + adf_cfg_add_key_value_param(accel_dev, + derived_sec, + ptr->key, + (void *)&val, + ptr->type); + } else if (ptr->type == ADF_STR) { + adf_cfg_add_key_value_param(accel_dev, + derived_sec, + ptr->key, + (void *)ptr->val, + ptr->type); + } else if (ptr->type == ADF_HEX) { + if (!compat_strtoul(ptr->val, 16, &val)) + adf_cfg_add_key_value_param(accel_dev, + derived_sec, + ptr->key, + (void *)val, + ptr->type); + } + } + return 0; +} + +static int +adf_cfg_create_rings_entries_for_cy_inst(struct adf_accel_dev *accel_dev, + const char *processed_sec, + const char *derived_sec, + int process_num, + enum adf_cfg_service_type serv_type) +{ + int i = 0; + int ret = EFAULT; + unsigned long num_inst = 0, num_dc_inst = 0; + unsigned long core_number = 0; + unsigned long polling_mode = 0; + struct adf_cfg_instance *crypto_inst = NULL; + + char *key = NULL; + char *val = NULL; + + key = malloc(ADF_CFG_MAX_KEY_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + val = malloc(ADF_CFG_MAX_VAL_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + snprintf(key, ADF_CFG_MAX_KEY_LEN_IN_BYTES, 
ADF_SERVICES_ENABLED); + if (adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) + goto failed; + if ((!strncmp(val, ADF_CFG_CY, ADF_CFG_MAX_VAL_LEN_IN_BYTES)) || + (!strncmp(val, ADF_CFG_ASYM, ADF_CFG_MAX_VAL_LEN_IN_BYTES)) || + (!strncmp(val, ADF_CFG_SYM, ADF_CFG_MAX_VAL_LEN_IN_BYTES))) { + strlcpy(key, ADF_NUM_DC, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value( + accel_dev, processed_sec, key, &num_dc_inst)) + goto failed; + if (num_dc_inst > 0) { + device_printf( + GET_DEV(accel_dev), + "NumDcInstances > 0, when CY only is enabled\n"); + goto failed; + } + } + ret = EFAULT; + + strlcpy(key, ADF_NUM_CY, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value(accel_dev, processed_sec, key, &num_inst)) + goto failed; + + crypto_inst = malloc(sizeof(*crypto_inst), M_QAT, M_WAITOK | M_ZERO); + + for (i = 0; i < num_inst; i++) { + memset(crypto_inst, 0, sizeof(*crypto_inst)); + crypto_inst->stype = serv_type; + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_CORE_AFFINITY_FORMAT, + i); + if (adf_cfg_set_core_number_for_instance(accel_dev, + processed_sec, + key, + process_num, + &core_number)) + goto failed; + + if (strcmp(processed_sec, ADF_KERNEL_SEC) && + strcmp(processed_sec, ADF_KERNEL_SAL_SEC)) + adf_cfg_add_key_value_param(accel_dev, + derived_sec, + key, + (void *)&core_number, + ADF_DEC); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_NAME_FORMAT, + i); + if (adf_cfg_get_param_value(accel_dev, processed_sec, key, val)) + goto failed; + + strlcpy(crypto_inst->name, val, sizeof(crypto_inst->name)); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_CY_POLL_MODE_FORMAT, + i); + if (adf_cfg_set_value( + accel_dev, processed_sec, key, &polling_mode)) + goto failed; + + crypto_inst->polling_mode = polling_mode; + CPU_ZERO(&crypto_inst->affinity_mask); + CPU_SET(core_number, &crypto_inst->affinity_mask); + + if (adf_cfg_get_ring_pairs(accel_dev->cfg->dev, + crypto_inst, + derived_sec, + accel_dev)) + goto failed; + + switch 
(serv_type) { + case CRYPTO: + adf_cfg_add_cy_inst_info(accel_dev, + crypto_inst, + derived_sec, + i); + break; + case ASYM: + adf_cfg_add_asym_inst_info(accel_dev, + crypto_inst, + derived_sec, + i); + break; + case SYM: + adf_cfg_add_sym_inst_info(accel_dev, + crypto_inst, + derived_sec, + i); + break; + default: + pr_err("unknown crypto instance type %d.\n", serv_type); + goto failed; + } + } + + ret = 0; +failed: + free(crypto_inst, M_QAT); + free(val, M_QAT); + free(key, M_QAT); + + if (ret) + device_printf(GET_DEV(accel_dev), + "Failed to create rings for cy\n"); + + return ret; +} + +static int +adf_cfg_create_rings_entries_for_dc_inst(struct adf_accel_dev *accel_dev, + const char *processed_sec, + const char *derived_sec, + int process_num) +{ + int i = 0; + int ret = EFAULT; + unsigned long num_inst = 0, num_cy_inst = 0; + unsigned long core_number = 0; + unsigned long polling_mode = 0; + struct adf_cfg_instance *dc_inst = NULL; + + char *key = NULL; + char *val = NULL; + + key = malloc(ADF_CFG_MAX_KEY_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + val = malloc(ADF_CFG_MAX_VAL_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + ret = EFAULT; + + snprintf(key, ADF_CFG_MAX_STR_LEN, ADF_SERVICES_ENABLED); + if (adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) + goto failed; + + if (!strncmp(val, ADF_CFG_DC, ADF_CFG_MAX_VAL_LEN_IN_BYTES)) { + strlcpy(key, ADF_NUM_CY, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value( + accel_dev, processed_sec, key, &num_cy_inst)) + goto failed; + if (num_cy_inst > 0) { + device_printf( + GET_DEV(accel_dev), + "NumCyInstances > 0, when DC only is enabled\n"); + goto failed; + } + } + + strlcpy(key, ADF_NUM_DC, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value(accel_dev, processed_sec, key, &num_inst)) + goto failed; + + dc_inst = malloc(sizeof(*dc_inst), M_QAT, M_WAITOK | M_ZERO); + + for (i = 0; i < num_inst; i++) { + memset(dc_inst, 0, sizeof(*dc_inst)); + dc_inst->stype = COMP; + snprintf(key, + 
ADF_CFG_MAX_STR_LEN, + ADF_DC_CORE_AFFINITY_FORMAT, + i); + + if (adf_cfg_set_core_number_for_instance(accel_dev, + processed_sec, + key, + process_num, + &core_number)) + goto failed; + + if (strcmp(processed_sec, ADF_KERNEL_SEC) && + strcmp(processed_sec, ADF_KERNEL_SAL_SEC)) { + adf_cfg_add_key_value_param(accel_dev, + derived_sec, + key, + (void *)&core_number, + ADF_DEC); + } + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_DC_NAME_FORMAT, + i); + if (adf_cfg_get_param_value(accel_dev, processed_sec, key, val)) + goto failed; + + strlcpy(dc_inst->name, val, sizeof(dc_inst->name)); + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_DC_POLL_MODE_FORMAT, + i); + if (adf_cfg_set_value( + accel_dev, processed_sec, key, &polling_mode)) + goto failed; + + dc_inst->polling_mode = polling_mode; + CPU_ZERO(&dc_inst->affinity_mask); + CPU_SET(core_number, &dc_inst->affinity_mask); + + if (adf_cfg_get_ring_pairs( + accel_dev->cfg->dev, dc_inst, derived_sec, accel_dev)) + goto failed; + + adf_cfg_add_dc_inst_info(accel_dev, dc_inst, derived_sec, i); + } + + ret = 0; +failed: + free(dc_inst, M_QAT); + free(val, M_QAT); + free(key, M_QAT); + + if (ret) + device_printf(GET_DEV(accel_dev), + "Failed to create rings for dc\n"); + + return ret; +} + +static int +adf_cfg_process_user_section(struct adf_accel_dev *accel_dev, + const char *sec_name, + int dev) +{ + int i = 0; + int ret = EFAULT; + unsigned long num_processes = 0; + unsigned long limit_dev_acc = 0; + u8 serv_type = 0; + + char *key = NULL; + char *val = NULL; + char *derived_sec_name = NULL; + + key = malloc(ADF_CFG_MAX_KEY_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + val = malloc(ADF_CFG_MAX_VAL_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + derived_sec_name = + malloc(ADF_CFG_MAX_STR_LEN, M_QAT, M_WAITOK | M_ZERO); + + strlcpy(key, ADF_NUM_PROCESSES, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value(accel_dev, sec_name, key, &num_processes)) + num_processes = 0; + + strlcpy(key, 
ADF_LIMIT_DEV_ACCESS, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value(accel_dev, sec_name, key, &limit_dev_acc)) + limit_dev_acc = 0; + + for (i = 0; i < num_processes; i++) { + if (limit_dev_acc) + snprintf(derived_sec_name, + ADF_CFG_MAX_STR_LEN, + ADF_LIMITED_USER_SECTION_NAME_FORMAT, + sec_name, + dev, + i); + else + snprintf(derived_sec_name, + ADF_CFG_MAX_STR_LEN, + ADF_USER_SECTION_NAME_FORMAT, + sec_name, + i); + + if (adf_cfg_derived_section_add(accel_dev, derived_sec_name)) + goto failed; + + /* copy items to the derived section */ + adf_cfg_section_copy(accel_dev, sec_name, derived_sec_name); + + for (serv_type = NA; serv_type <= USED; serv_type++) { + switch (serv_type) { + case NA: + break; + case CRYPTO: + case ASYM: + case SYM: + if (adf_cfg_is_svc_enabled(accel_dev, + serv_type)) + if (adf_cfg_create_rings_entries_for_cy_inst( + accel_dev, + sec_name, + derived_sec_name, + i, + (enum adf_cfg_service_type) + serv_type)) + goto failed; + break; + case COMP: + if (adf_cfg_is_svc_enabled(accel_dev, + serv_type)) + if (adf_cfg_create_rings_entries_for_dc_inst( + accel_dev, + sec_name, + derived_sec_name, + i)) + goto failed; + break; + case USED: + break; + default: + pr_err("Unknown service type %d.\n", serv_type); + } + } + } + + ret = 0; +failed: + + free(val, M_QAT); + free(key, M_QAT); + free(derived_sec_name, M_QAT); + + if (ret) + device_printf(GET_DEV(accel_dev), + "Failed to process user section %s\n", + sec_name); + + return ret; +} + +static int +adf_cfg_cleanup_user_section(struct adf_accel_dev *accel_dev, + const char *sec_name) +{ + struct adf_cfg_section *sec = adf_cfg_sec_find(accel_dev, sec_name); + struct list_head *head; + struct list_head *list_ptr, *tmp; + + if (!sec) + return EFAULT; + + if (sec->is_derived) + return 0; + + head = &sec->param_head; + list_for_each_prev_safe(list_ptr, tmp, head) + { + struct adf_cfg_key_val *ptr = + list_entry(list_ptr, struct adf_cfg_key_val, list); + + if (!strcmp(ptr->key, 
ADF_LIMIT_DEV_ACCESS)) + continue; + + list_del(list_ptr); + free(ptr, M_QAT); + } + return 0; +} + +static int +adf_cfg_process_section_no_op(struct adf_accel_dev *accel_dev, + const char *sec_name) +{ + return 0; +} + +static int +adf_cfg_cleanup_general_section(struct adf_accel_dev *accel_dev, + const char *sec_name) +{ + unsigned long first_used_bundle = 0; + int ret = EFAULT; + char *key = NULL; + char *val = NULL; + + key = malloc(ADF_CFG_MAX_KEY_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + val = malloc(ADF_CFG_MAX_VAL_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + /* Remove keys that are not needed after processing */ + strlcpy(key, ADF_CONFIG_VERSION, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_remove_key_param(accel_dev, sec_name, key)) + goto failed; + + strlcpy(key, ADF_CY ADF_RING_ASYM_SIZE, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_remove_key_param(accel_dev, sec_name, key)) + goto failed; + + strlcpy(key, ADF_CY ADF_RING_SYM_SIZE, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_remove_key_param(accel_dev, sec_name, key)) + goto failed; + + strlcpy(key, ADF_DC ADF_RING_DC_SIZE, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_remove_key_param(accel_dev, sec_name, key)) + goto failed; + + /* After all processing done, set the "FirstUserBundle" value */ + first_used_bundle = accel_dev->cfg->dev->max_kernel_bundle_nr + 1; + strlcpy(key, ADF_FIRST_USER_BUNDLE, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_add_key_value_param( + accel_dev, sec_name, key, (void *)&first_used_bundle, ADF_DEC)) + goto failed; + + ret = 0; +failed: + free(key, M_QAT); + free(val, M_QAT); + + if (ret) + device_printf(GET_DEV(accel_dev), + "Failed to clean up general section\n"); + + return ret; +} + +static int +adf_cfg_process_kernel_section(struct adf_accel_dev *accel_dev, + const char *sec_name) +{ + u8 serv_type = 0; + + for (serv_type = NA; serv_type <= USED; serv_type++) { + switch (serv_type) { + case NA: + break; + case CRYPTO: + case ASYM: + case SYM: + if 
(adf_cfg_is_svc_enabled(accel_dev, serv_type)) + if (adf_cfg_create_rings_entries_for_cy_inst( + accel_dev, + sec_name, + sec_name, + 0, + (enum adf_cfg_service_type)serv_type)) + goto failed; + break; + case COMP: + if (adf_cfg_is_svc_enabled(accel_dev, serv_type)) + if (adf_cfg_create_rings_entries_for_dc_inst( + accel_dev, sec_name, sec_name, 0)) + goto failed; + break; + case USED: + break; + default: + pr_err("Unknown service type of instance %d.\n", + serv_type); + } + } + + return 0; + +failed: + return EFAULT; +} + +static int +adf_cfg_cleanup_kernel_section(struct adf_accel_dev *accel_dev, + const char *sec_name) +{ + return 0; +} + +static int +adf_cfg_create_accel_section(struct adf_accel_dev *accel_dev, + const char *sec_name) +{ + /* Find global settings for coalescing. Use defaults if not found */ + unsigned long accel_coales = 0; + unsigned long accel_coales_timer = 0; + unsigned long accel_coales_num_msg = 0; + unsigned long cpu; + char *key = NULL; + char *val = NULL; + int ret = EFAULT; + int index = 0; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + + if (!hw_device) + goto failed; + + key = malloc(ADF_CFG_MAX_KEY_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + val = malloc(ADF_CFG_MAX_VAL_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + strlcpy(key, + ADF_ETRMGR_COALESCING_ENABLED, + ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value(accel_dev, ADF_GENERAL_SEC, key, &accel_coales)) + accel_coales = ADF_CFG_ACCEL_DEF_COALES; + + strlcpy(key, ADF_ETRMGR_COALESCE_TIMER, ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value( + accel_dev, ADF_GENERAL_SEC, key, &accel_coales_timer)) + accel_coales_timer = ADF_CFG_ACCEL_DEF_COALES_TIMER; + + strlcpy(key, + ADF_ETRMGR_COALESCING_MSG_ENABLED, + ADF_CFG_MAX_KEY_LEN_IN_BYTES); + if (adf_cfg_set_value( + accel_dev, ADF_GENERAL_SEC, key, &accel_coales_num_msg)) + accel_coales_num_msg = ADF_CFG_ACCEL_DEF_COALES_NUM_MSG; + + for (index = 0; index < hw_device->num_banks; index++) { + 
snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_ETRMGR_COALESCING_ENABLED_FORMAT, + index); + ret = adf_cfg_add_key_value_param( + accel_dev, sec_name, key, &accel_coales, ADF_DEC); + if (ret != 0) + goto failed; + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_ETRMGR_COALESCE_TIMER_FORMAT, + index); + ret = adf_cfg_add_key_value_param( + accel_dev, sec_name, key, &accel_coales_timer, ADF_DEC); + if (ret != 0) + goto failed; + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_ETRMGR_COALESCING_MSG_ENABLED_FORMAT, + index); + ret = adf_cfg_add_key_value_param( + accel_dev, sec_name, key, &accel_coales_num_msg, ADF_DEC); + if (ret != 0) + goto failed; + + cpu = ADF_CFG_AFFINITY_WHATEVER; + + snprintf(key, + ADF_CFG_MAX_KEY_LEN_IN_BYTES, + ADF_ETRMGR_CORE_AFFINITY_FORMAT, + index); + ret = adf_cfg_add_key_value_param( + accel_dev, sec_name, key, &cpu, ADF_DEC); + if (ret != 0) + goto failed; + } + + ret = 0; + +failed: + free(key, M_QAT); + free(val, M_QAT); + + if (ret) + device_printf(GET_DEV(accel_dev), + "Failed to create accel section\n"); + + return ret; +} + +static int +adf_cfg_cleanup_accel_section(struct adf_accel_dev *accel_dev, + const char *sec_name) +{ + return 0; +} + +static int +adf_cfg_process_accel_section(struct adf_accel_dev *accel_dev, + const char *sec_name) +{ + int accel_num = 0; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + char *derived_name = NULL; + int ret = EFAULT; + + if (!hw_device) + goto failed; + + if (hw_device->num_logical_accel == 0) + goto failed; + + derived_name = + malloc(ADF_CFG_MAX_SECTION_LEN_IN_BYTES, M_QAT, M_WAITOK | M_ZERO); + + for (accel_num = 0; accel_num < hw_device->num_logical_accel; + accel_num++) { + snprintf(derived_name, + ADF_CFG_MAX_SECTION_LEN_IN_BYTES, + ADF_ACCEL_STR, + accel_num); + ret = adf_cfg_section_add(accel_dev, derived_name); + if (ret != 0) + goto failed; + + ret = adf_cfg_create_accel_section(accel_dev, derived_name); + if (ret != 0) + goto failed; + } + + ret 
= 0; +failed: + free(derived_name, M_QAT); + + if (ret) + device_printf(GET_DEV(accel_dev), + "Failed to process accel section\n"); + + return ret; +} + +int +adf_cfg_process_section(struct adf_accel_dev *accel_dev, + const char *sec_name, + int dev) +{ + if (!strcmp(sec_name, ADF_GENERAL_SEC) || + !strcmp(sec_name, ADF_INLINE_SEC)) + return adf_cfg_process_section_no_op(accel_dev, sec_name); + else if (!strcmp(sec_name, ADF_KERNEL_SEC) || + !strcmp(sec_name, ADF_KERNEL_SAL_SEC)) + return adf_cfg_process_kernel_section(accel_dev, sec_name); + else if (!strcmp(sec_name, ADF_ACCEL_SEC)) + return adf_cfg_process_accel_section(accel_dev, sec_name); + else + return adf_cfg_process_user_section(accel_dev, sec_name, dev); +} + +int +adf_cfg_cleanup_section(struct adf_accel_dev *accel_dev, + const char *sec_name, + int dev) +{ + if (!strcmp(sec_name, ADF_GENERAL_SEC)) + return adf_cfg_cleanup_general_section(accel_dev, sec_name); + else if (!strcmp(sec_name, ADF_INLINE_SEC)) + return adf_cfg_process_section_no_op(accel_dev, sec_name); + else if (!strcmp(sec_name, ADF_KERNEL_SEC) || + !strcmp(sec_name, ADF_KERNEL_SAL_SEC)) + return adf_cfg_cleanup_kernel_section(accel_dev, sec_name); + else if (strstr(sec_name, ADF_ACCEL_SEC)) + return adf_cfg_cleanup_accel_section(accel_dev, sec_name); + else + return adf_cfg_cleanup_user_section(accel_dev, sec_name); +} + +int +adf_cfg_setup_irq(struct adf_accel_dev *accel_dev) +{ + int ret = EFAULT; + struct adf_accel_pci *info_pci_dev = &accel_dev->accel_pci_dev; + struct adf_cfg_device *cfg_dev = NULL; + struct msix_entry *msixe = NULL; + u32 num_msix = 0; + int index = 0; + int computed_core = 0; + + if (!accel_dev || !accel_dev->cfg || !accel_dev->hw_device) + goto failed; + + cfg_dev = accel_dev->cfg->dev; + if (!cfg_dev) + goto failed; + + msixe = + (struct msix_entry *)accel_dev->accel_pci_dev.msix_entries.entries; + num_msix = accel_dev->accel_pci_dev.msix_entries.num_entries; + if (!msixe) + goto cleanup_and_fail; + + /* + * 
Here we want to set the affinity of kernel and epoll-mode + * bundles to the user-defined value. + * Because adf_isr.c sets up core affinity round-robin, + * we need to reset it after the device is up. + */ + for (index = 0; index < accel_dev->hw_device->num_banks; index++) { + struct adf_cfg_bundle *bundle = cfg_dev->bundles[index]; + + if (!bundle) + continue; + + if (bundle->type != KERNEL && + bundle->polling_mode != ADF_CFG_RESP_EPOLL) + continue; + + if (bundle->number >= num_msix) + goto cleanup_and_fail; + + computed_core = CPU_FFS(&bundle->affinity_mask) - 1; + bus_bind_intr(info_pci_dev->pci_dev, + msixe[index].irq, + computed_core); + } + ret = 0; + +cleanup_and_fail: + adf_cfg_device_clear(cfg_dev, accel_dev); + free(cfg_dev, M_QAT); + accel_dev->cfg->dev = NULL; + +failed: + return ret; +} Index: sys/dev/qat/qat_common/adf_clock.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_clock.c @@ -0,0 +1,187 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_accel_devices.h" +#include "adf_common_drv.h" + +#include + +#define MEASURE_CLOCK_RETRIES 10 +#define MEASURE_CLOCK_DELTA_THRESHOLD 100 +#define MEASURE_CLOCK_DELAY 10000 +#define ME_CLK_DIVIDER 16 + +#define CLK_DBGFS_FILE "frequency" +#define HB_SYSCTL_ERR(RC) \ + do { \ + if (!RC) { \ + device_printf(GET_DEV(accel_dev), \ + "Memory allocation failed in \ + adf_heartbeat_dbg_add\n"); \ + return ENOMEM; \ + } \ + } while (0) + +int +adf_clock_debugfs_add(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + + struct sysctl_ctx_list *qat_sysctl_ctx; + struct sysctl_oid *qat_sysctl_tree; + struct sysctl_oid *rc = 0; + + qat_sysctl_ctx = + device_get_sysctl_ctx(accel_dev->accel_pci_dev.pci_dev); + qat_sysctl_tree = + device_get_sysctl_tree(accel_dev->accel_pci_dev.pci_dev); + + rc = SYSCTL_ADD_UINT(qat_sysctl_ctx, + 
SYSCTL_CHILDREN(qat_sysctl_tree), + OID_AUTO, + CLK_DBGFS_FILE, + CTLFLAG_RD, + &hw_data->clock_frequency, + 0, + "clock frequency"); + HB_SYSCTL_ERR(rc); + return 0; +} + +/** + * measure_clock() -- Measure the CPM clock frequency + * @accel_dev: Pointer to acceleration device. + * @frequency: Pointer to returned frequency in Hz. + * + * Return: 0 on success, error code otherwise. + */ +static int +measure_clock(struct adf_accel_dev *accel_dev, u32 *frequency) +{ + struct timespec ts1; + struct timespec ts2; + struct timespec ts3; + struct timespec ts4; + struct timespec delta; + u64 delta_us = 0; + u64 timestamp1 = 0; + u64 timestamp2 = 0; + u64 temp = 0; + int tries = 0; + + if (!accel_dev || !frequency) + return EIO; + do { + nanotime(&ts1); + if (adf_get_fw_timestamp(accel_dev, &timestamp1)) { + device_printf(GET_DEV(accel_dev), + "Failed to get fw timestamp\n"); + return EIO; + } + nanotime(&ts2); + + delta = timespec_sub(ts2, ts1); + temp = delta.tv_nsec; + do_div(temp, NSEC_PER_USEC); + + delta_us = delta.tv_sec * USEC_PER_SEC + temp; + } while (delta_us > MEASURE_CLOCK_DELTA_THRESHOLD && + ++tries < MEASURE_CLOCK_RETRIES); + + if (tries >= MEASURE_CLOCK_RETRIES) { + device_printf(GET_DEV(accel_dev), + "Excessive clock measure delay\n"); + return EIO; + } + + usleep_range(MEASURE_CLOCK_DELAY, MEASURE_CLOCK_DELAY * 2); + tries = 0; + do { + nanotime(&ts3); + if (adf_get_fw_timestamp(accel_dev, &timestamp2)) { + device_printf(GET_DEV(accel_dev), + "Failed to get fw timestamp\n"); + return EIO; + } + nanotime(&ts4); + + delta = timespec_sub(ts4, ts3); + temp = delta.tv_nsec; + do_div(temp, NSEC_PER_USEC); + + delta_us = delta.tv_sec * USEC_PER_SEC + temp; + } while (delta_us > MEASURE_CLOCK_DELTA_THRESHOLD && + ++tries < MEASURE_CLOCK_RETRIES); + + if (tries >= MEASURE_CLOCK_RETRIES) { + device_printf(GET_DEV(accel_dev), + "Excessive clock measure delay\n"); + return EIO; + } + + delta = timespec_sub(ts3, ts1); + temp = + delta.tv_sec * NSEC_PER_SEC + 
delta.tv_nsec + (NSEC_PER_USEC / 2); + do_div(temp, NSEC_PER_USEC); + delta_us = temp; + /* Don't pretend that this gives better than 100KHz resolution */ + temp = (timestamp2 - timestamp1) * ME_CLK_DIVIDER * 10 + (delta_us / 2); + do_div(temp, delta_us); + *frequency = temp * 100000; + + return 0; +} + +/** + * adf_dev_measure_clock() -- Measure the CPM clock frequency + * @accel_dev: Pointer to acceleration device. + * @frequency: Pointer to returned frequency in Hz. + * @min: Minimum expected frequency + * @max: Maximum expected frequency + * + * Return: 0 on success, error code otherwise. + */ +int +adf_dev_measure_clock(struct adf_accel_dev *accel_dev, + u32 *frequency, + u32 min, + u32 max) +{ + int ret; + u32 freq; + + ret = measure_clock(accel_dev, &freq); + if (ret) + return ret; + + if (freq < min) { + device_printf(GET_DEV(accel_dev), + "Slow clock %d MHz measured, assuming %d\n", + freq, + min); + freq = min; + } else if (freq > max) { + device_printf(GET_DEV(accel_dev), + "Fast clock %d MHz measured, assuming %d\n", + freq, + max); + freq = max; + } + *frequency = freq; + return 0; +} + +static inline u64 +timespec_to_ms(const struct timespec *ts) +{ + return (uint64_t)(ts->tv_sec * (1000)) + (ts->tv_nsec / NSEC_PER_MSEC); +} + +u64 +adf_clock_get_current_time(void) +{ + struct timespec ts; + + getnanotime(&ts); + return timespec_to_ms(&ts); +} Index: sys/dev/qat/qat_common/adf_dev_err.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_dev_err.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_dev_err.h" + +struct reg_info { + size_t offs; + char *name; +}; + +static struct reg_info adf_err_regs[] = { + { ADF_ERRSOU0, "ERRSOU0" }, + { ADF_ERRSOU1, "ERRSOU1" }, + { ADF_ERRSOU3, "ERRSOU3" }, + { ADF_ERRSOU4, "ERRSOU4" }, + { ADF_ERRSOU5, "ERRSOU5" }, + { ADF_RICPPINTSTS, "RICPPINTSTS" }, + { 
ADF_RIERRPUSHID, "RIERRPUSHID" }, + { ADF_RIERRPULLID, "RIERRPULLID" }, + { ADF_CPP_CFC_ERR_STATUS, "CPP_CFC_ERR_STATUS" }, + { ADF_CPP_CFC_ERR_PPID, "CPP_CFC_ERR_PPID" }, + { ADF_TICPPINTSTS, "TICPPINTSTS" }, + { ADF_TIERRPUSHID, "TIERRPUSHID" }, + { ADF_TIERRPULLID, "TIERRPULLID" }, + { ADF_SECRAMUERR, "SECRAMUERR" }, + { ADF_SECRAMUERRAD, "SECRAMUERRAD" }, + { ADF_CPPMEMTGTERR, "CPPMEMTGTERR" }, + { ADF_ERRPPID, "ERRPPID" }, +}; + +static u32 +adf_get_intstatsssm(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_INTSTATSSM(dev)); +} + +static u32 +adf_get_pperr(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_PPERR(dev)); +} + +static u32 +adf_get_pperrid(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_PPERRID(dev)); +} + +static u32 +adf_get_uerrssmsh(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_UERRSSMSH(dev)); +} + +static u32 +adf_get_uerrssmshad(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_UERRSSMSHAD(dev)); +} + +static u32 +adf_get_uerrssmmmp0(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_UERRSSMMMP(dev, 0)); +} + +static u32 +adf_get_uerrssmmmp1(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_UERRSSMMMP(dev, 1)); +} + +static u32 +adf_get_uerrssmmmp2(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_UERRSSMMMP(dev, 2)); +} + +static u32 +adf_get_uerrssmmmp3(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_UERRSSMMMP(dev, 3)); +} + +static u32 +adf_get_uerrssmmmp4(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_UERRSSMMMP(dev, 4)); +} + +static u32 +adf_get_uerrssmmmpad0(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, 
ADF_UERRSSMMMPAD(dev, 0)); +} + +static u32 +adf_get_uerrssmmmpad1(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_UERRSSMMMPAD(dev, 1)); +} + +static u32 +adf_get_uerrssmmmpad2(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_UERRSSMMMPAD(dev, 2)); +} + +static u32 +adf_get_uerrssmmmpad3(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_UERRSSMMMPAD(dev, 3)); +} + +static u32 +adf_get_uerrssmmmpad4(struct resource *pmisc_bar_addr, size_t dev) +{ + return ADF_CSR_RD(pmisc_bar_addr, ADF_UERRSSMMMPAD(dev, 4)); +} + +struct reg_array_info { + u32 (*read)(struct resource *pmisc_bar_addr, size_t dev); + char *name; +}; + +static struct reg_array_info adf_accel_err_regs[] = { + { adf_get_intstatsssm, "INTSTATSSM" }, + { adf_get_pperr, "PPERR" }, + { adf_get_pperrid, "PPERRID" }, + { adf_get_uerrssmsh, "UERRSSMSH" }, + { adf_get_uerrssmshad, "UERRSSMSHAD" }, + { adf_get_uerrssmmmp0, "UERRSSMMMP0" }, + { adf_get_uerrssmmmp1, "UERRSSMMMP1" }, + { adf_get_uerrssmmmp2, "UERRSSMMMP2" }, + { adf_get_uerrssmmmp3, "UERRSSMMMP3" }, + { adf_get_uerrssmmmp4, "UERRSSMMMP4" }, + { adf_get_uerrssmmmpad0, "UERRSSMMMPAD0" }, + { adf_get_uerrssmmmpad1, "UERRSSMMMPAD1" }, + { adf_get_uerrssmmmpad2, "UERRSSMMMPAD2" }, + { adf_get_uerrssmmmpad3, "UERRSSMMMPAD3" }, + { adf_get_uerrssmmmpad4, "UERRSSMMMPAD4" }, +}; + +static char adf_printf_buf[128] = { 0 }; +static size_t adf_printf_len; + +static void +adf_print_flush(struct adf_accel_dev *accel_dev) +{ + if (adf_printf_len > 0) { + device_printf(GET_DEV(accel_dev), "%.128s\n", adf_printf_buf); + adf_printf_len = 0; + } +} + +static void +adf_print_reg(struct adf_accel_dev *accel_dev, + const char *name, + size_t idx, + u32 val) +{ + adf_printf_len += snprintf(&adf_printf_buf[adf_printf_len], + sizeof(adf_printf_buf) - adf_printf_len, + "%s[%zu],%.8x,", + name, + idx, + val); + + if (adf_printf_len >= 80) + 
adf_print_flush(accel_dev); +} + +void +adf_print_err_registers(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_bar *misc_bar = + &GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)]; + struct resource *csr = misc_bar->virt_addr; + size_t i; + unsigned int mask; + u32 val; + + for (i = 0; i < ARRAY_SIZE(adf_err_regs); ++i) { + val = ADF_CSR_RD(csr, adf_err_regs[i].offs); + + adf_print_reg(accel_dev, adf_err_regs[i].name, 0, val); + } + + for (i = 0; i < ARRAY_SIZE(adf_accel_err_regs); ++i) { + size_t accel; + + for (accel = 0, mask = hw_data->accel_mask; mask; + accel++, mask >>= 1) { + if (!(mask & 1)) + continue; + val = adf_accel_err_regs[i].read(csr, accel); + + adf_print_reg(accel_dev, + adf_accel_err_regs[i].name, + accel, + val); + } + } + + adf_print_flush(accel_dev); +} + +static void +adf_log_slice_hang(struct adf_accel_dev *accel_dev, + u8 accel_num, + char *unit_name, + u8 unit_number) +{ + device_printf(GET_DEV(accel_dev), + "CPM #%x Slice Hang Detected unit: %s%d.\n", + accel_num, + unit_name, + unit_number); +} + +bool +adf_handle_slice_hang(struct adf_accel_dev *accel_dev, + u8 accel_num, + struct resource *csr, + u32 slice_hang_offset) +{ + u32 slice_hang = ADF_CSR_RD(csr, slice_hang_offset); + + if (!slice_hang) + return false; + + if (slice_hang & ADF_SLICE_HANG_AUTH0_MASK) + adf_log_slice_hang(accel_dev, accel_num, "Auth", 0); + if (slice_hang & ADF_SLICE_HANG_AUTH1_MASK) + adf_log_slice_hang(accel_dev, accel_num, "Auth", 1); + if (slice_hang & ADF_SLICE_HANG_AUTH2_MASK) + adf_log_slice_hang(accel_dev, accel_num, "Auth", 2); + if (slice_hang & ADF_SLICE_HANG_CPHR0_MASK) + adf_log_slice_hang(accel_dev, accel_num, "Cipher", 0); + if (slice_hang & ADF_SLICE_HANG_CPHR1_MASK) + adf_log_slice_hang(accel_dev, accel_num, "Cipher", 1); + if (slice_hang & ADF_SLICE_HANG_CPHR2_MASK) + adf_log_slice_hang(accel_dev, accel_num, "Cipher", 2); + if (slice_hang & ADF_SLICE_HANG_CMP0_MASK) + 
adf_log_slice_hang(accel_dev, accel_num, "Comp", 0);
+	if (slice_hang & ADF_SLICE_HANG_CMP1_MASK)
+		adf_log_slice_hang(accel_dev, accel_num, "Comp", 1);
+	if (slice_hang & ADF_SLICE_HANG_XLT0_MASK)
+		adf_log_slice_hang(accel_dev, accel_num, "Xlator", 0);
+	if (slice_hang & ADF_SLICE_HANG_XLT1_MASK)
+		adf_log_slice_hang(accel_dev, accel_num, "Xlator", 1);
+	if (slice_hang & ADF_SLICE_HANG_MMP0_MASK)
+		adf_log_slice_hang(accel_dev, accel_num, "MMP", 0);
+	if (slice_hang & ADF_SLICE_HANG_MMP1_MASK)
+		adf_log_slice_hang(accel_dev, accel_num, "MMP", 1);
+	if (slice_hang & ADF_SLICE_HANG_MMP2_MASK)
+		adf_log_slice_hang(accel_dev, accel_num, "MMP", 2);
+	if (slice_hang & ADF_SLICE_HANG_MMP3_MASK)
+		adf_log_slice_hang(accel_dev, accel_num, "MMP", 3);
+	if (slice_hang & ADF_SLICE_HANG_MMP4_MASK)
+		adf_log_slice_hang(accel_dev, accel_num, "MMP", 4);
+
+	/* Clear the associated interrupt */
+	ADF_CSR_WR(csr, slice_hang_offset, slice_hang);
+
+	return true;
+}
+
+/**
+ * adf_check_slice_hang() - Check slice hang status
+ *
+ * Return: true if a slice hang interrupt is serviced.
+ */ +bool +adf_check_slice_hang(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_bar *misc_bar = + &GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)]; + struct resource *csr = misc_bar->virt_addr; + u32 errsou3 = ADF_CSR_RD(csr, ADF_ERRSOU3); + u32 errsou5 = ADF_CSR_RD(csr, ADF_ERRSOU5); + u32 offset; + u32 accel_num; + bool handled = false; + u32 errsou[] = { errsou3, errsou3, errsou5, errsou5, errsou5 }; + u32 mask[] = { ADF_EMSK3_CPM0_MASK, + ADF_EMSK3_CPM1_MASK, + ADF_EMSK5_CPM2_MASK, + ADF_EMSK5_CPM3_MASK, + ADF_EMSK5_CPM4_MASK }; + unsigned int accel_mask; + + for (accel_num = 0, accel_mask = hw_data->accel_mask; accel_mask; + accel_num++, accel_mask >>= 1) { + if (!(accel_mask & 1)) + continue; + if (accel_num >= ARRAY_SIZE(errsou)) { + device_printf(GET_DEV(accel_dev), + "Invalid accel_num %d.\n", + accel_num); + break; + } + + if (errsou[accel_num] & mask[accel_num]) { + if (ADF_CSR_RD(csr, ADF_INTSTATSSM(accel_num)) & + ADF_INTSTATSSM_SHANGERR) { + offset = ADF_SLICEHANGSTATUS(accel_num); + handled |= adf_handle_slice_hang(accel_dev, + accel_num, + csr, + offset); + } + } + } + + return handled; +} Index: sys/dev/qat/qat_common/adf_dev_mgr.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_dev_mgr.c @@ -0,0 +1,406 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "icp_qat_uclo.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_init_admin.h" +#include "adf_cfg_strings.h" +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include +#include "adf_cfg.h" +#include "adf_common_drv.h" + +#define ADF_AE_PAIR 2 +#define PKE_SLICES_PER_AE_PAIR 5 + +static LIST_HEAD(accel_table); +static LIST_HEAD(vfs_table); +static 
DEFINE_MUTEX(table_lock);
+static uint32_t num_devices;
+static u8 id_map[ADF_MAX_DEVICES];
+
+struct vf_id_map {
+	u32 bdf;
+	u32 id;
+	u32 fake_id;
+	bool attached;
+	struct list_head list;
+};
+
+/**
+ * adf_get_vf_real_id() - Translate fake to real device id
+ *
+ * The "real" id is assigned to a device when it is initially
+ * bound to the driver.
+ * The "fake" id is usually the same as the real id, but
+ * can change when devices are unbound from the qat driver,
+ * perhaps to assign the device to a guest.
+ */
+static int
+adf_get_vf_real_id(u32 fake)
+{
+	struct list_head *itr;
+
+	list_for_each(itr, &vfs_table)
+	{
+		struct vf_id_map *ptr = list_entry(itr, struct vf_id_map, list);
+		if (ptr->fake_id == fake)
+			return ptr->id;
+	}
+	return -1;
+}
+
+/**
+ * adf_clean_vf_map() - Cleans VF id mappings
+ *
+ * Function cleans internal ids for virtual functions.
+ * @vf: flag indicating whether mappings are cleaned
+ * for vfs only or for vfs and pfs
+ */
+void
+adf_clean_vf_map(bool vf)
+{
+	struct vf_id_map *map;
+	struct list_head *ptr, *tmp;
+
+	mutex_lock(&table_lock);
+	list_for_each_safe(ptr, tmp, &vfs_table)
+	{
+		map = list_entry(ptr, struct vf_id_map, list);
+		if (map->bdf != -1) {
+			id_map[map->id] = 0;
+			num_devices--;
+		}
+
+		if (vf && map->bdf == -1)
+			continue;
+
+		list_del(ptr);
+		free(map, M_QAT);
+	}
+	mutex_unlock(&table_lock);
+}
+
+/**
+ * adf_devmgr_update_class_index() - Update internal index
+ * @hw_data: Pointer to internal device data.
+ * + * Function updates internal dev index for VFs + */ +void +adf_devmgr_update_class_index(struct adf_hw_device_data *hw_data) +{ + struct adf_hw_device_class *class = hw_data->dev_class; + struct list_head *itr; + int i = 0; + + list_for_each(itr, &accel_table) + { + struct adf_accel_dev *ptr = + list_entry(itr, struct adf_accel_dev, list); + + if (ptr->hw_device->dev_class == class) + ptr->hw_device->instance_id = i++; + + if (i == class->instances) + break; + } +} + +static unsigned int +adf_find_free_id(void) +{ + unsigned int i; + + for (i = 0; i < ADF_MAX_DEVICES; i++) { + if (!id_map[i]) { + id_map[i] = 1; + return i; + } + } + return ADF_MAX_DEVICES + 1; +} + +/** + * adf_devmgr_add_dev() - Add accel_dev to the acceleration framework + * @accel_dev: Pointer to acceleration device. + * @pf: Corresponding PF if the accel_dev is a VF + * + * Function adds acceleration device to the acceleration framework. + * To be used by QAT device specific drivers. + * + * Return: 0 on success, error code otherwise. 
+ */ +int +adf_devmgr_add_dev(struct adf_accel_dev *accel_dev, struct adf_accel_dev *pf) +{ + struct list_head *itr; + int ret = 0; + + if (num_devices == ADF_MAX_DEVICES) { + device_printf(GET_DEV(accel_dev), + "Only support up to %d devices\n", + ADF_MAX_DEVICES); + return EFAULT; + } + + mutex_lock(&table_lock); + + /* PF on host or VF on guest */ + if (!accel_dev->is_vf || (accel_dev->is_vf && !pf)) { + struct vf_id_map *map; + + list_for_each(itr, &accel_table) + { + struct adf_accel_dev *ptr = + list_entry(itr, struct adf_accel_dev, list); + + if (ptr == accel_dev) { + ret = EEXIST; + goto unlock; + } + } + + list_add_tail(&accel_dev->list, &accel_table); + accel_dev->accel_id = adf_find_free_id(); + if (accel_dev->accel_id > ADF_MAX_DEVICES) { + ret = EFAULT; + goto unlock; + } + num_devices++; + map = malloc(sizeof(*map), M_QAT, GFP_KERNEL); + if (!map) { + ret = ENOMEM; + goto unlock; + } + map->bdf = ~0; + map->id = accel_dev->accel_id; + map->fake_id = map->id; + map->attached = true; + list_add_tail(&map->list, &vfs_table); + } else if (accel_dev->is_vf && pf) { + ret = ENOTSUP; + goto unlock; + } +unlock: + mutex_unlock(&table_lock); + return ret; +} + +struct list_head * +adf_devmgr_get_head(void) +{ + return &accel_table; +} + +/** + * adf_devmgr_rm_dev() - Remove accel_dev from the acceleration framework. + * @accel_dev: Pointer to acceleration device. + * @pf: Corresponding PF if the accel_dev is a VF + * + * Function removes acceleration device from the acceleration framework. + * To be used by QAT device specific drivers. 
+ * + * Return: void + */ +void +adf_devmgr_rm_dev(struct adf_accel_dev *accel_dev, struct adf_accel_dev *pf) +{ + mutex_lock(&table_lock); + if (!accel_dev->is_vf || (accel_dev->is_vf && !pf)) { + id_map[accel_dev->accel_id] = 0; + num_devices--; + } + list_del(&accel_dev->list); + mutex_unlock(&table_lock); +} + +struct adf_accel_dev * +adf_devmgr_get_first(void) +{ + struct adf_accel_dev *dev = NULL; + + if (!list_empty(&accel_table)) + dev = + list_first_entry(&accel_table, struct adf_accel_dev, list); + return dev; +} + +/** + * adf_devmgr_pci_to_accel_dev() - Get accel_dev associated with the pci_dev. + * @accel_dev: Pointer to pci device. + * + * Function returns acceleration device associated with the given pci device. + * To be used by QAT device specific drivers. + * + * Return: pointer to accel_dev or NULL if not found. + */ +struct adf_accel_dev * +adf_devmgr_pci_to_accel_dev(device_t pci_dev) +{ + struct list_head *itr; + + mutex_lock(&table_lock); + list_for_each(itr, &accel_table) + { + struct adf_accel_dev *ptr = + list_entry(itr, struct adf_accel_dev, list); + + if (ptr->accel_pci_dev.pci_dev == pci_dev) { + mutex_unlock(&table_lock); + return ptr; + } + } + mutex_unlock(&table_lock); + return NULL; +} + +struct adf_accel_dev * +adf_devmgr_get_dev_by_id(uint32_t id) +{ + struct list_head *itr; + int real_id; + + mutex_lock(&table_lock); + real_id = adf_get_vf_real_id(id); + if (real_id < 0) + goto unlock; + + id = real_id; + + list_for_each(itr, &accel_table) + { + struct adf_accel_dev *ptr = + list_entry(itr, struct adf_accel_dev, list); + if (ptr->accel_id == id) { + mutex_unlock(&table_lock); + return ptr; + } + } +unlock: + mutex_unlock(&table_lock); + return NULL; +} + +int +adf_devmgr_verify_id(uint32_t *id) +{ + struct adf_accel_dev *accel_dev; + + if (*id == ADF_CFG_ALL_DEVICES) + return 0; + + accel_dev = adf_devmgr_get_dev_by_id(*id); + if (!accel_dev) + return ENODEV; + + /* Correct the id if real and fake differ */ + *id = 
accel_dev->accel_id;
+	return 0;
+}
+
+static int
+adf_get_num_detached_vfs(void)
+{
+	struct list_head *itr;
+	int vfs = 0;
+
+	mutex_lock(&table_lock);
+	list_for_each(itr, &vfs_table)
+	{
+		struct vf_id_map *ptr = list_entry(itr, struct vf_id_map, list);
+		if (ptr->bdf != ~0 && !ptr->attached)
+			vfs++;
+	}
+	mutex_unlock(&table_lock);
+	return vfs;
+}
+
+void
+adf_devmgr_get_num_dev(uint32_t *num)
+{
+	*num = num_devices - adf_get_num_detached_vfs();
+}
+
+/**
+ * adf_dev_in_use() - Check whether accel_dev is currently in use
+ * @accel_dev: Pointer to acceleration device.
+ *
+ * To be used by QAT device specific drivers.
+ *
+ * Return: 1 when device is in use, 0 otherwise.
+ */
+int
+adf_dev_in_use(struct adf_accel_dev *accel_dev)
+{
+	return atomic_read(&accel_dev->ref_count) != 0;
+}
+
+/**
+ * adf_dev_get() - Increment accel_dev reference count
+ * @accel_dev: Pointer to acceleration device.
+ *
+ * Increment the accel_dev refcount; taking the first reference
+ * also marks the device busy so it cannot be detached while in use.
+ * To be used by QAT device specific drivers.
+ *
+ * Return: void
+ */
+void
+adf_dev_get(struct adf_accel_dev *accel_dev)
+{
+	if (atomic_add_return(1, &accel_dev->ref_count) == 1)
+		device_busy(GET_DEV(accel_dev));
+}
+
+/**
+ * adf_dev_put() - Decrement accel_dev reference count
+ * @accel_dev: Pointer to acceleration device.
+ *
+ * Decrement the accel_dev refcount; dropping the last reference
+ * marks the device unbusy again.
+ * To be used by QAT device specific drivers.
+ *
+ * Return: void
+ */
+void
+adf_dev_put(struct adf_accel_dev *accel_dev)
+{
+	if (atomic_sub_return(1, &accel_dev->ref_count) == 0)
+		device_unbusy(GET_DEV(accel_dev));
+}
+
+/**
+ * adf_devmgr_in_reset() - Check whether device is in reset
+ * @accel_dev: Pointer to acceleration device.
+ *
+ * To be used by QAT device specific drivers.
+ * + * Return: 1 when the device is being reset, 0 otherwise. + */ +int +adf_devmgr_in_reset(struct adf_accel_dev *accel_dev) +{ + return test_bit(ADF_STATUS_RESTARTING, &accel_dev->status); +} + +/** + * adf_dev_started() - Check whether device has started + * @accel_dev: Pointer to acceleration device. + * + * To be used by QAT device specific drivers. + * + * Return: 1 when the device has started, 0 otherwise + */ +int +adf_dev_started(struct adf_accel_dev *accel_dev) +{ + return test_bit(ADF_STATUS_STARTED, &accel_dev->status); +} Index: sys/dev/qat/qat_common/adf_freebsd_admin.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_freebsd_admin.c @@ -0,0 +1,602 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "icp_qat_uclo.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_init_admin.h" +#include "adf_cfg_strings.h" +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include "adf_heartbeat.h" +#include +#include +#include +#include +#include +#include +#include + +#include + +#define ADF_CONST_TABLE_VERSION_BYTE (0) +/* Keep version number in range 0-255 */ +#define ADF_CONST_TABLE_VERSION (1) + +/* Admin Messages Registers */ +#define ADF_DH895XCC_ADMINMSGUR_OFFSET (0x3A000 + 0x574) +#define ADF_DH895XCC_ADMINMSGLR_OFFSET (0x3A000 + 0x578) +#define ADF_DH895XCC_MAILBOX_BASE_OFFSET 0x20970 +#define ADF_DH895XCC_MAILBOX_STRIDE 0x1000 +#define ADF_ADMINMSG_LEN 32 +#define FREEBSD_ALLIGNMENT_SIZE 64 +#define ADF_INIT_CONFIG_SIZE 1024 + +static u8 const_tab[1024] __aligned(1024) = { +ADF_CONST_TABLE_VERSION, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x11, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x11, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x21, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x03, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x01, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x13, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x13, 0x02, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x13, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x13, +0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x23, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x33, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef, 0xfe, 0xdc, 0xba, 0x98, 0x76, +0x54, 0x32, 0x10, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x67, 0x45, 0x23, 0x01, 0xef, 0xcd, 0xab, +0x89, 0x98, 0xba, 0xdc, 0xfe, 0x10, 0x32, 0x54, 0x76, 0xc3, 
0xd2, 0xe1, 0xf0, +0x00, 0x00, 0x00, 0x00, 0x11, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xc1, 0x05, 0x9e, +0xd8, 0x36, 0x7c, 0xd5, 0x07, 0x30, 0x70, 0xdd, 0x17, 0xf7, 0x0e, 0x59, 0x39, +0xff, 0xc0, 0x0b, 0x31, 0x68, 0x58, 0x15, 0x11, 0x64, 0xf9, 0x8f, 0xa7, 0xbe, +0xfa, 0x4f, 0xa4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x6a, 0x09, 0xe6, 0x67, 0xbb, 0x67, 0xae, +0x85, 0x3c, 0x6e, 0xf3, 0x72, 0xa5, 0x4f, 0xf5, 0x3a, 0x51, 0x0e, 0x52, 0x7f, +0x9b, 0x05, 0x68, 0x8c, 0x1f, 0x83, 0xd9, 0xab, 0x5b, 0xe0, 0xcd, 0x19, 0x05, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0xcb, 0xbb, 0x9d, 0x5d, 0xc1, 0x05, 0x9e, 0xd8, 0x62, 0x9a, 0x29, +0x2a, 0x36, 0x7c, 0xd5, 0x07, 0x91, 0x59, 0x01, 0x5a, 0x30, 0x70, 0xdd, 0x17, +0x15, 0x2f, 0xec, 0xd8, 0xf7, 0x0e, 0x59, 0x39, 0x67, 0x33, 0x26, 0x67, 0xff, +0xc0, 0x0b, 0x31, 0x8e, 0xb4, 0x4a, 0x87, 0x68, 0x58, 0x15, 0x11, 0xdb, 0x0c, +0x2e, 0x0d, 0x64, 0xf9, 0x8f, 0xa7, 0x47, 0xb5, 0x48, 0x1d, 0xbe, 0xfa, 0x4f, +0xa4, 0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x6a, 0x09, 0xe6, 0x67, 0xf3, 0xbc, 0xc9, 0x08, 0xbb, +0x67, 0xae, 0x85, 0x84, 0xca, 0xa7, 0x3b, 0x3c, 0x6e, 0xf3, 0x72, 0xfe, 0x94, +0xf8, 0x2b, 0xa5, 0x4f, 0xf5, 0x3a, 0x5f, 0x1d, 0x36, 0xf1, 0x51, 0x0e, 0x52, +0x7f, 0xad, 0xe6, 0x82, 0xd1, 0x9b, 0x05, 0x68, 0x8c, 0x2b, 0x3e, 0x6c, 0x1f, +0x1f, 0x83, 0xd9, 0xab, 0xfb, 0x41, 0xbd, 0x6b, 0x5b, 0xe0, 0xcd, 0x19, 0x13, +0x7e, 0x21, 0x79, 0x14, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x16, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x18, +0x00, 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14, 0x01, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x15, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x15, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14, 0x02, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x14, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x15, 0x02, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x15, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x24, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x25, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x24, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x25, +0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x12, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x43, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x43, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x45, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x45, 0x01, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x44, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x44, 0x01, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x2B, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x2B, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x15, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x17, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00}; + +#define ADF_ADMIN_POLL_INTERVAL_US 20 +#define ADF_ADMIN_POLL_RETRIES 5000 + +static void +dma_callback(void *arg, bus_dma_segment_t *segs, int nseg, int error) +{ + bus_addr_t *addr; + + addr = arg; + if (error == 0 && nseg == 1) + *addr = segs[0].ds_addr; + else + *addr = 0; +} + +int +adf_put_admin_msg_sync(struct adf_accel_dev *accel_dev, + u32 ae, + void *in, + void *out) +{ + struct adf_admin_comms *admin = accel_dev->admin; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct resource *mailbox = admin->mailbox_addr; + struct admin_info admin_csrs_info; + + hw_data->get_admin_info(&admin_csrs_info); + int offset = ae * ADF_ADMINMSG_LEN * 2; + int mb_offset = + ae * ADF_DH895XCC_MAILBOX_STRIDE + admin_csrs_info.mailbox_offset; + + int times, received; + struct icp_qat_fw_init_admin_req *request = in; + + sx_xlock(&admin->lock); + + if (ADF_CSR_RD(mailbox, mb_offset) == 1) { + sx_xunlock(&admin->lock); + return EAGAIN; + } + + memcpy(admin->virt_addr + offset, in, ADF_ADMINMSG_LEN); + ADF_CSR_WR(mailbox, mb_offset, 1); + received = 0; + for (times = 0; times < ADF_ADMIN_POLL_RETRIES; times++) { + usleep_range(ADF_ADMIN_POLL_INTERVAL_US, + ADF_ADMIN_POLL_INTERVAL_US * 2); + if (ADF_CSR_RD(mailbox, mb_offset) == 0) { + received = 1; + break; + } + } + if (received) + memcpy(out, + admin->virt_addr + offset + ADF_ADMINMSG_LEN, + ADF_ADMINMSG_LEN); + else + device_printf(GET_DEV(accel_dev), + "Failed to send admin msg %d to accelerator %d\n", + request->cmd_id, + ae); + + sx_xunlock(&admin->lock); + return received ? 
0 : EFAULT; +} + +static inline int +adf_set_dc_ibuf(struct adf_accel_dev *accel_dev, + struct icp_qat_fw_init_admin_req *req) +{ + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + unsigned long ibuf_size = 0; + + if (!adf_cfg_get_param_value( + accel_dev, ADF_GENERAL_SEC, ADF_INTER_BUF_SIZE, val)) { + if (compat_strtoul(val, 0, &ibuf_size)) + return EFAULT; + } + + if (ibuf_size != 32 && ibuf_size != 64) + ibuf_size = 64; + + req->ibuf_size_in_kb = ibuf_size; + + return 0; +} + +int +adf_send_admin(struct adf_accel_dev *accel_dev, + struct icp_qat_fw_init_admin_req *req, + struct icp_qat_fw_init_admin_resp *resp, + u32 ae_mask) +{ + int i; + unsigned int mask; + + for (i = 0, mask = ae_mask; mask; i++, mask >>= 1) { + if (!(mask & 1)) + continue; + if (adf_put_admin_msg_sync(accel_dev, i, req, resp) || + resp->status) + return EFAULT; + } + + return 0; +} + +static int +adf_init_me(struct adf_accel_dev *accel_dev) +{ + struct icp_qat_fw_init_admin_req req; + struct icp_qat_fw_init_admin_resp resp; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + u32 ae_mask = hw_device->ae_mask; + + explicit_bzero(&req, sizeof(req)); + explicit_bzero(&resp, sizeof(resp)); + req.cmd_id = ICP_QAT_FW_INIT_ME; + + if (adf_set_dc_ibuf(accel_dev, &req)) + return EFAULT; + if (accel_dev->aram_info) { + req.init_cfg_sz = sizeof(*accel_dev->aram_info); + req.init_cfg_ptr = (u64)accel_dev->admin->aram_map_phys_addr; + } + if (adf_send_admin(accel_dev, &req, &resp, ae_mask)) + return EFAULT; + + return 0; +} + +static int +adf_set_heartbeat_timer(struct adf_accel_dev *accel_dev) +{ + struct icp_qat_fw_init_admin_req req; + struct icp_qat_fw_init_admin_resp resp; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + u32 ae_mask = hw_device->ae_mask; + u32 heartbeat_ticks; + + explicit_bzero(&req, sizeof(req)); + req.cmd_id = ICP_QAT_FW_HEARTBEAT_TIMER_SET; + req.hb_cfg_ptr = accel_dev->admin->phy_hb_addr; + if (adf_get_hb_timer(accel_dev, &heartbeat_ticks)) + 
return EINVAL; + req.heartbeat_ticks = heartbeat_ticks; + + if (adf_send_admin(accel_dev, &req, &resp, ae_mask)) + return EFAULT; + + return 0; +} + +static int +adf_get_dc_capabilities(struct adf_accel_dev *accel_dev, u32 *capabilities) +{ + struct icp_qat_fw_init_admin_req req; + struct icp_qat_fw_init_admin_resp resp; + u32 ae_mask = 1; + + explicit_bzero(&req, sizeof(req)); + req.cmd_id = ICP_QAT_FW_COMP_CAPABILITY_GET; + + if (adf_send_admin(accel_dev, &req, &resp, ae_mask)) + return EFAULT; + + *capabilities = resp.extended_features; + + return 0; +} + +static int +adf_set_fw_constants(struct adf_accel_dev *accel_dev) +{ + struct icp_qat_fw_init_admin_req req; + struct icp_qat_fw_init_admin_resp resp; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + u32 ae_mask = hw_device->ae_mask; + + explicit_bzero(&req, sizeof(req)); + req.cmd_id = ICP_QAT_FW_CONSTANTS_CFG; + + req.init_cfg_sz = sizeof(const_tab); + req.init_cfg_ptr = accel_dev->admin->const_tbl_addr; + + if (adf_send_admin(accel_dev, &req, &resp, ae_mask)) + return EFAULT; + + return 0; +} + +static int +adf_get_fw_status(struct adf_accel_dev *accel_dev, + u8 *major, + u8 *minor, + u8 *patch) +{ + struct icp_qat_fw_init_admin_req req; + struct icp_qat_fw_init_admin_resp resp; + u32 ae_mask = 1; + + explicit_bzero(&req, sizeof(req)); + req.cmd_id = ICP_QAT_FW_STATUS_GET; + + if (adf_send_admin(accel_dev, &req, &resp, ae_mask)) + return EFAULT; + + *major = resp.version_major_num; + *minor = resp.version_minor_num; + *patch = resp.version_patch_num; + + return 0; +} + +int +adf_get_fw_timestamp(struct adf_accel_dev *accel_dev, u64 *timestamp) +{ + struct icp_qat_fw_init_admin_req req; + struct icp_qat_fw_init_admin_resp rsp; + unsigned int ae_mask = 1; + + if (!accel_dev || !timestamp) + return EFAULT; + + explicit_bzero(&req, sizeof(req)); + req.cmd_id = ICP_QAT_FW_TIMER_GET; + + if (adf_send_admin(accel_dev, &req, &rsp, ae_mask)) + return EFAULT; + + *timestamp = rsp.timestamp; + return 
0; +} + +int +adf_get_fw_pke_stats(struct adf_accel_dev *accel_dev, + u64 *suc_count, + u64 *unsuc_count) +{ + struct icp_qat_fw_init_admin_req req = { 0 }; + struct icp_qat_fw_init_admin_resp resp = { 0 }; + unsigned long sym_ae_msk = 0; + u8 sym_ae_msk_size = 0; + u8 i = 0; + + if (!suc_count || !unsuc_count) + return EFAULT; + + sym_ae_msk = accel_dev->au_info->sym_ae_msk; + sym_ae_msk_size = + sizeof(accel_dev->au_info->sym_ae_msk) * BITS_PER_BYTE; + + req.cmd_id = ICP_QAT_FW_PKE_REPLAY_STATS_GET; + for_each_set_bit(i, &sym_ae_msk, sym_ae_msk_size) + { + memset(&resp, 0, sizeof(struct icp_qat_fw_init_admin_resp)); + if (adf_put_admin_msg_sync(accel_dev, i, &req, &resp) || + resp.status) { + return EFAULT; + } + *suc_count += resp.successful_count; + *unsuc_count += resp.unsuccessful_count; + } + return 0; +} + +/** + * adf_send_admin_init() - Function sends init message to FW + * @accel_dev: Pointer to acceleration device. + * + * Function sends admin init message to the FW + * + * Return: 0 on success, error code otherwise. 
+ */ +int +adf_send_admin_init(struct adf_accel_dev *accel_dev) +{ + int ret; + u32 dc_capabilities = 0; + unsigned int storage_enabled = 0; + + if (GET_HW_DATA(accel_dev)->query_storage_cap) { + ret = adf_get_dc_capabilities(accel_dev, &dc_capabilities); + if (ret) { + device_printf(GET_DEV(accel_dev), + "Cannot get dc capabilities\n"); + return ret; + } + accel_dev->hw_device->extended_dc_capabilities = + dc_capabilities; + } else { + ret = GET_HW_DATA(accel_dev)->get_storage_enabled( + accel_dev, &storage_enabled); + if (ret) { + device_printf(GET_DEV(accel_dev), + "Cannot get storage enabled\n"); + return ret; + } + } + + ret = adf_set_heartbeat_timer(accel_dev); + if (ret) { + if (ret == EINVAL) { + device_printf(GET_DEV(accel_dev), + "Cannot set heartbeat timer\n"); + return ret; + } + device_printf(GET_DEV(accel_dev), + "Heartbeat is not supported\n"); + } + + ret = adf_get_fw_status(accel_dev, + &accel_dev->fw_versions.fw_version_major, + &accel_dev->fw_versions.fw_version_minor, + &accel_dev->fw_versions.fw_version_patch); + if (ret) { + device_printf(GET_DEV(accel_dev), "Cannot get fw version\n"); + return ret; + } + + device_printf(GET_DEV(accel_dev), + "FW version: %d.%d.%d\n", + accel_dev->fw_versions.fw_version_major, + accel_dev->fw_versions.fw_version_minor, + accel_dev->fw_versions.fw_version_patch); + + ret = adf_set_fw_constants(accel_dev); + if (ret) { + device_printf(GET_DEV(accel_dev), "Cannot set fw constants\n"); + return ret; + } + + ret = adf_init_me(accel_dev); + if (ret) + device_printf(GET_DEV(accel_dev), "Cannot init AE\n"); + + return ret; +} + +int +adf_init_admin_comms(struct adf_accel_dev *accel_dev) +{ + struct adf_admin_comms *admin = NULL; + struct adf_hw_device_data *hw_data = NULL; + struct adf_bar *pmisc = NULL; + struct resource *csr = NULL; + struct admin_info admin_csrs_info; + unsigned int adminmsg_u, adminmsg_l; + u64 reg_val = 0; + int ret = 0; + + admin = kzalloc_node(sizeof(*accel_dev->admin), + M_WAITOK | M_ZERO, + 
dev_to_node(GET_DEV(accel_dev))); + hw_data = accel_dev->hw_device; + pmisc = &GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)]; + csr = pmisc->virt_addr; + ret = bus_dma_mem_create(&admin->dma_mem, + accel_dev->dma_tag, + FREEBSD_ALLIGNMENT_SIZE, + BUS_SPACE_MAXADDR, + PAGE_SIZE, + 0); + if (ret != 0) { + device_printf(GET_DEV(accel_dev), + "Failed to allocate dma buff\n"); + kfree(admin); + return ret; + } + admin->virt_addr = admin->dma_mem.dma_vaddr; + admin->phy_addr = admin->dma_mem.dma_baddr; + bzero(admin->virt_addr, PAGE_SIZE); + + ret = bus_dmamap_create(accel_dev->dma_tag, 0, &admin->const_tbl_map); + if (ret != 0) { + device_printf(GET_DEV(accel_dev), "Failed to create DMA map\n"); + bus_dma_mem_free(&admin->dma_mem); + kfree(admin); + return ret; + } + + ret = bus_dmamap_load(accel_dev->dma_tag, + admin->const_tbl_map, + (void *)const_tab, + 1024, + dma_callback, + &admin->const_tbl_addr, + BUS_DMA_NOWAIT); + if (ret == 0 && admin->const_tbl_addr == 0) + ret = EFBIG; + if (ret != 0) { + device_printf(GET_DEV(accel_dev), + "Failed to map const table for DMA\n"); + bus_dmamap_destroy(accel_dev->dma_tag, admin->const_tbl_map); + bus_dma_mem_free(&admin->dma_mem); + kfree(admin); + return ret; + } + + /* DMA ARAM address map */ + if (accel_dev->aram_info) { + ret = + bus_dmamap_create(accel_dev->dma_tag, 0, &admin->aram_map); + if (ret != 0) { + device_printf(GET_DEV(accel_dev), + "Failed to create DMA map\n"); + bus_dma_mem_free(&admin->dma_mem); + kfree(admin); + return ret; + } + ret = bus_dmamap_load(accel_dev->dma_tag, + admin->aram_map, + (void *)accel_dev->aram_info, + sizeof(*accel_dev->aram_info), + dma_callback, + &admin->aram_map_phys_addr, + BUS_DMA_NOWAIT); + + if (ret == 0 && admin->aram_map_phys_addr == 0) + ret = EFBIG; + if (ret != 0) { + device_printf(GET_DEV(accel_dev), + "Failed to map aram phys addr for DMA\n"); + bus_dmamap_destroy(accel_dev->dma_tag, admin->aram_map); + bus_dma_mem_free(&admin->dma_mem); + kfree(admin); + 
return ret; + } + } + + ret = bus_dma_mem_create(&admin->dma_hb, + accel_dev->dma_tag, + FREEBSD_ALLIGNMENT_SIZE, + BUS_SPACE_MAXADDR, + PAGE_SIZE, + 0); + if (ret != 0) { + device_printf(GET_DEV(accel_dev), + "Failed to allocate dma buff\n"); + bus_dmamap_unload(accel_dev->dma_tag, admin->const_tbl_map); + bus_dmamap_destroy(accel_dev->dma_tag, admin->const_tbl_map); + bus_dma_mem_free(&admin->dma_mem); + kfree(admin); + return ret; + } + + admin->virt_hb_addr = admin->dma_hb.dma_vaddr; + admin->phy_hb_addr = admin->dma_hb.dma_baddr; + bzero(admin->virt_hb_addr, PAGE_SIZE); + + hw_data->get_admin_info(&admin_csrs_info); + + adminmsg_u = admin_csrs_info.admin_msg_ur; + adminmsg_l = admin_csrs_info.admin_msg_lr; + reg_val = (u64)admin->phy_addr; + ADF_CSR_WR(csr, adminmsg_u, reg_val >> 32); + ADF_CSR_WR(csr, adminmsg_l, reg_val); + sx_init(&admin->lock, "qat admin"); + admin->mailbox_addr = csr; + accel_dev->admin = admin; + return 0; +} + +void +adf_exit_admin_comms(struct adf_accel_dev *accel_dev) +{ + struct adf_admin_comms *admin = accel_dev->admin; + + if (!admin) + return; + + if (admin->virt_addr) + bus_dma_mem_free(&admin->dma_mem); + + if (admin->virt_hb_addr) + bus_dma_mem_free(&admin->dma_hb); + + bus_dmamap_unload(accel_dev->dma_tag, admin->const_tbl_map); + bus_dmamap_destroy(accel_dev->dma_tag, admin->const_tbl_map); + sx_destroy(&admin->lock); + kfree(admin); + accel_dev->admin = NULL; +} Index: sys/dev/qat/qat_common/adf_freebsd_cfg_dev_dbg.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_freebsd_cfg_dev_dbg.c @@ -0,0 +1,78 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_common_drv.h" +#include "adf_cfg_device.h" +#include "adf_cfg_dev_dbg.h" +#include +#include +#include +#include +#include +#include +#include +#include + +static int qat_dev_cfg_show(SYSCTL_HANDLER_ARGS) +{ + 
struct adf_cfg_device_data *dev_cfg; + struct adf_cfg_section *sec; + struct adf_cfg_key_val *ptr; + struct sbuf sb; + int error; + + sbuf_new_for_sysctl(&sb, NULL, 128, req); + dev_cfg = arg1; + sx_slock(&dev_cfg->lock); + list_for_each_entry(sec, &dev_cfg->sec_list, list) + { + sbuf_printf(&sb, "[%s]\n", sec->name); + list_for_each_entry(ptr, &sec->param_head, list) + { + sbuf_printf(&sb, "%s = %s\n", ptr->key, ptr->val); + } + } + sx_sunlock(&dev_cfg->lock); + error = sbuf_finish(&sb); + sbuf_delete(&sb); + return error; +} + +int +adf_cfg_dev_dbg_add(struct adf_accel_dev *accel_dev) +{ + struct adf_cfg_device_data *dev_cfg_data = accel_dev->cfg; + device_t dev; + + dev = GET_DEV(accel_dev); + dev_cfg_data->debug = + SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev), + SYSCTL_CHILDREN(device_get_sysctl_tree(dev)), + OID_AUTO, + "dev_cfg", + CTLFLAG_RD | CTLTYPE_STRING, + dev_cfg_data, + 0, + qat_dev_cfg_show, + "A", + "Device configuration"); + + if (!dev_cfg_data->debug) { + device_printf(dev, "Failed to create qat cfg sysctl.\n"); + return ENXIO; + } + return 0; +} + +void +adf_cfg_dev_dbg_remove(struct adf_accel_dev *accel_dev) +{ + struct adf_cfg_device_data *dev_cfg_data = accel_dev->cfg; + + if (dev_cfg_data->dev) { + adf_cfg_device_clear(dev_cfg_data->dev, accel_dev); + free(dev_cfg_data->dev, M_QAT); + dev_cfg_data->dev = NULL; + } +} Index: sys/dev/qat/qat_common/adf_freebsd_cnvnr_ctrs_dbg.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_freebsd_cnvnr_ctrs_dbg.c @@ -0,0 +1,179 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include +#include +#include +#include "adf_cnvnr_freq_counters.h" +#include "adf_common_drv.h" +#include "adf_cfg.h" +#include "icp_qat_fw_init_admin.h" + +#define ADF_CNVNR_ERR_MASK 0xFFF + +#define LINE \ + "+-----------------------------------------------------------------+\n" +#define BANNER \ + "| CNV 
Error Freq Statistics for Qat Device |\n" +#define NEW_LINE "\n" +#define REPORT_ENTRY_FORMAT \ + "|[AE %2d]: TotalErrors: %5d : LastError: %s [%5d] |\n" +#define MAX_LINE_LENGTH 128 +#define MAX_REPORT_SIZE ((ADF_MAX_ACCELENGINES + 3) * MAX_LINE_LENGTH) + +#define PRINT_LINE(line) \ + (snprintf( \ + report_ptr, MAX_REPORT_SIZE - (report_ptr - report), "%s", line)) + +const char *cnvnr_err_str[] = {"No Error ", + "Checksum Error", + "Length Error-P", + "Decomp Error ", + "Xlat Error ", + "Length Error-C", + "Unknown Error "}; + +/* Handler for HB status check */ +static int qat_cnvnr_ctrs_dbg_read(SYSCTL_HANDLER_ARGS) +{ + struct adf_accel_dev *accel_dev = arg1; + struct adf_hw_device_data *hw_device; + struct icp_qat_fw_init_admin_req request; + struct icp_qat_fw_init_admin_resp response; + unsigned long dc_ae_msk = 0; + u8 num_aes = 0, ae = 0, error_type = 0, bytes_written = 0; + s16 latest_error = 0; + char report[MAX_REPORT_SIZE]; + char *report_ptr = report; + + /* Defensive check */ + if (!accel_dev || accel_dev->accel_id > ADF_MAX_DEVICES) + return EINVAL; + + if (!adf_dev_started(accel_dev)) { + device_printf(GET_DEV(accel_dev), "QAT Device not started\n"); + return EINVAL; + } + + hw_device = accel_dev->hw_device; + if (!hw_device) { + device_printf(GET_DEV(accel_dev), "Failed to get hw_device.\n"); + return EFAULT; + } + + /* Clean report memory */ + explicit_bzero(report, sizeof(report)); + + /* Adding banner to report */ + bytes_written = PRINT_LINE(NEW_LINE); + if (bytes_written <= 0) + return EINVAL; + report_ptr += bytes_written; + + bytes_written = PRINT_LINE(LINE); + if (bytes_written <= 0) + return EINVAL; + report_ptr += bytes_written; + + bytes_written = PRINT_LINE(BANNER); + if (bytes_written <= 0) + return EINVAL; + report_ptr += bytes_written; + + bytes_written = PRINT_LINE(LINE); + if (bytes_written <= 0) + return EINVAL; + report_ptr += bytes_written; + + if (accel_dev->au_info) + dc_ae_msk = accel_dev->au_info->dc_ae_msk; + + /* Extracting 
number of Acceleration Engines */ + num_aes = hw_device->get_num_aes(hw_device); + for (ae = 0; ae < num_aes; ae++) { + if (accel_dev->au_info && !test_bit(ae, &dc_ae_msk)) + continue; + explicit_bzero(&response, + sizeof(struct icp_qat_fw_init_admin_resp)); + request.cmd_id = ICP_QAT_FW_CNV_STATS_GET; + if (adf_put_admin_msg_sync( + accel_dev, ae, &request, &response) || + response.status) { + return EFAULT; + } + error_type = CNV_ERROR_TYPE_GET(response.latest_error); + if (error_type == CNV_ERR_TYPE_DECOMP_PRODUCED_LENGTH_ERROR || + error_type == CNV_ERR_TYPE_DECOMP_CONSUMED_LENGTH_ERROR) { + latest_error = + CNV_ERROR_LENGTH_DELTA_GET(response.latest_error); + } else if (error_type == CNV_ERR_TYPE_DECOMPRESSION_ERROR || + error_type == CNV_ERR_TYPE_TRANSLATION_ERROR) { + latest_error = + CNV_ERROR_DECOMP_STATUS_GET(response.latest_error); + } else { + latest_error = + response.latest_error & ADF_CNVNR_ERR_MASK; + } + + bytes_written = + snprintf(report_ptr, + MAX_REPORT_SIZE - (report_ptr - report), + REPORT_ENTRY_FORMAT, + ae, + response.error_count, + cnvnr_err_str[error_type], + latest_error); + if (bytes_written <= 0) { + printf("ERROR: No space left in CnV ctrs line buffer\n" + "\tAcceleration ID: %d, Engine: %d\n", + accel_dev->accel_id, + ae); + break; + } + report_ptr += bytes_written; + } + + sysctl_handle_string(oidp, report, sizeof(report), req); + return 0; +} + +int +adf_cnvnr_freq_counters_add(struct adf_accel_dev *accel_dev) +{ + struct sysctl_ctx_list *qat_sysctl_ctx; + struct sysctl_oid *qat_cnvnr_ctrs_sysctl_tree; + struct sysctl_oid *oid_rc; + + /* Defensive checks */ + if (!accel_dev) + return EINVAL; + + /* Creating context and tree */ + qat_sysctl_ctx = + device_get_sysctl_ctx(accel_dev->accel_pci_dev.pci_dev); + qat_cnvnr_ctrs_sysctl_tree = + device_get_sysctl_tree(accel_dev->accel_pci_dev.pci_dev); + + /* Create "cnv_error" string type leaf - with callback */ + oid_rc = SYSCTL_ADD_PROC(qat_sysctl_ctx, + 
SYSCTL_CHILDREN(qat_cnvnr_ctrs_sysctl_tree), + OID_AUTO, + "cnv_error", + CTLTYPE_STRING | CTLFLAG_RD, + accel_dev, + 0, + qat_cnvnr_ctrs_dbg_read, + "IU", + "QAT CnVnR status"); + + if (!oid_rc) { + printf("ERROR: Memory allocation failed\n"); + return ENOMEM; + } + return 0; +} + +void +adf_cnvnr_freq_counters_remove(struct adf_accel_dev *accel_dev) +{ +} Index: sys/dev/qat/qat_common/adf_freebsd_heartbeat_dbg.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_freebsd_heartbeat_dbg.c @@ -0,0 +1,106 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include +#include +#include +#include "adf_heartbeat_dbg.h" +#include "adf_common_drv.h" +#include "adf_cfg.h" +#include "adf_heartbeat.h" + +#define HB_SYSCTL_ERR(RC) \ + do { \ + if (RC == NULL) { \ + printf( \ + "Memory allocation failed in adf_heartbeat_dbg_add\n"); \ + return ENOMEM; \ + } \ + } while (0) + +/* Handler for HB status check */ +static int qat_dev_hb_read(SYSCTL_HANDLER_ARGS) +{ + enum adf_device_heartbeat_status hb_status = DEV_HB_UNRESPONSIVE; + struct adf_accel_dev *accel_dev = arg1; + struct adf_heartbeat *hb; + int ret = 0; + if (accel_dev == NULL) { + return EINVAL; + } + hb = accel_dev->heartbeat; + + /* if FW is loaded, proceed else set heartbeat down */ + if (test_bit(ADF_STATUS_AE_UCODE_LOADED, &accel_dev->status)) { + adf_heartbeat_status(accel_dev, &hb_status); + } + if (hb_status == DEV_HB_ALIVE) { + hb->heartbeat.hb_sysctlvar = 1; + } else { + hb->heartbeat.hb_sysctlvar = 0; + } + ret = sysctl_handle_int(oidp, &hb->heartbeat.hb_sysctlvar, 0, req); + return ret; +} + +int +adf_heartbeat_dbg_add(struct adf_accel_dev *accel_dev) +{ + struct sysctl_ctx_list *qat_hb_sysctl_ctx; + struct sysctl_oid *qat_hb_sysctl_tree; + struct adf_heartbeat *hb; + struct sysctl_oid *rc = 0; + + if (accel_dev == NULL) { + return EINVAL; + } + + if (adf_heartbeat_init(accel_dev)) + 
return EINVAL; + + hb = accel_dev->heartbeat; + qat_hb_sysctl_ctx = + device_get_sysctl_ctx(accel_dev->accel_pci_dev.pci_dev); + qat_hb_sysctl_tree = + device_get_sysctl_tree(accel_dev->accel_pci_dev.pci_dev); + + rc = SYSCTL_ADD_UINT(qat_hb_sysctl_ctx, + SYSCTL_CHILDREN(qat_hb_sysctl_tree), + OID_AUTO, + "heartbeat_sent", + CTLFLAG_RD, + &hb->hb_sent_counter, + 0, + "HB sent count"); + HB_SYSCTL_ERR(rc); + + rc = SYSCTL_ADD_UINT(qat_hb_sysctl_ctx, + SYSCTL_CHILDREN(qat_hb_sysctl_tree), + OID_AUTO, + "heartbeat_failed", + CTLFLAG_RD, + &hb->hb_failed_counter, + 0, + "HB failed count"); + HB_SYSCTL_ERR(rc); + + rc = SYSCTL_ADD_PROC(qat_hb_sysctl_ctx, + SYSCTL_CHILDREN(qat_hb_sysctl_tree), + OID_AUTO, + "heartbeat", + CTLTYPE_INT | CTLFLAG_RD, + accel_dev, + 0, + qat_dev_hb_read, + "IU", + "QAT device status"); + HB_SYSCTL_ERR(rc); + return 0; +} + +int +adf_heartbeat_dbg_del(struct adf_accel_dev *accel_dev) +{ + adf_heartbeat_clean(accel_dev); + return 0; +} Index: sys/dev/qat/qat_common/adf_freebsd_pfvf_ctrs_dbg.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_freebsd_pfvf_ctrs_dbg.c @@ -0,0 +1,137 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "adf_dev_err.h" +#include "adf_freebsd_pfvf_ctrs_dbg.h" + +#define MAX_REPORT_LINES (14) +#define MAX_REPORT_LINE_LEN (64) +#define MAX_REPORT_SIZE (MAX_REPORT_LINES * MAX_REPORT_LINE_LEN) + +static void +adf_pfvf_ctrs_prepare_report(char *rep, struct pfvf_stats *pfvf_counters) +{ + unsigned int value = 0; + char *string = "unknown"; + unsigned int pos = 0; + char *ptr = rep; + + for (pos = 0; pos < MAX_REPORT_LINES; pos++) { + switch (pos) { + case 0: + string = "Messages written to CSR"; + value = pfvf_counters->tx; + break; + case 1: + string = "Messages read from CSR"; + value = pfvf_counters->rx; + break; + case 2: + 
string = "Spurious Interrupt"; + value = pfvf_counters->spurious; + break; + case 3: + string = "Block messages sent"; + value = pfvf_counters->blk_tx; + break; + case 4: + string = "Block messages received"; + value = pfvf_counters->blk_rx; + break; + case 5: + string = "Blocks received with CRC errors"; + value = pfvf_counters->crc_err; + break; + case 6: + string = "CSR in use"; + value = pfvf_counters->busy; + break; + case 7: + string = "No acknowledgment"; + value = pfvf_counters->no_ack; + break; + case 8: + string = "Collisions"; + value = pfvf_counters->collision; + break; + case 9: + string = "Put msg timeout"; + value = pfvf_counters->tx_timeout; + break; + case 10: + string = "No response received"; + value = pfvf_counters->rx_timeout; + break; + case 11: + string = "Responses received"; + value = pfvf_counters->rx_rsp; + break; + case 12: + string = "Messages re-transmitted"; + value = pfvf_counters->retry; + break; + case 13: + string = "Put event timeout"; + value = pfvf_counters->event_timeout; + break; + default: + value = 0; + } + if (value) + ptr += snprintf(ptr, + (MAX_REPORT_SIZE - (ptr - rep)), + "%s %u\n", + string, + value); + } +} + +static int adf_pfvf_ctrs_show(SYSCTL_HANDLER_ARGS) +{ + struct pfvf_stats *pfvf_counters = arg1; + char report[MAX_REPORT_SIZE]; + + if (!pfvf_counters) + return EINVAL; + + explicit_bzero(report, sizeof(report)); + adf_pfvf_ctrs_prepare_report(report, pfvf_counters); + sysctl_handle_string(oidp, report, sizeof(report), req); + return 0; +} + +int +adf_pfvf_ctrs_dbg_add(struct adf_accel_dev *accel_dev) +{ + struct sysctl_ctx_list *qat_sysctl_ctx; + struct sysctl_oid *qat_pfvf_ctrs_sysctl_tree; + struct sysctl_oid *oid_pfvf; + device_t dev; + + if (!accel_dev || accel_dev->accel_id > ADF_MAX_DEVICES) + return EINVAL; + + dev = GET_DEV(accel_dev); + + qat_sysctl_ctx = device_get_sysctl_ctx(dev); + qat_pfvf_ctrs_sysctl_tree = device_get_sysctl_tree(dev); + + oid_pfvf = SYSCTL_ADD_PROC(qat_sysctl_ctx, + 
SYSCTL_CHILDREN(qat_pfvf_ctrs_sysctl_tree), + OID_AUTO, + "pfvf_counters", + CTLTYPE_STRING | CTLFLAG_RD, + &accel_dev->u1.vf.pfvf_counters, + 0, + adf_pfvf_ctrs_show, + "A", + "QAT PFVF counters"); + + if (!oid_pfvf) { + device_printf(dev, "Failure creating PFVF counters sysctl\n"); + return ENOMEM; + } + return 0; +} Index: sys/dev/qat/qat_common/adf_freebsd_transport_debug.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_freebsd_transport_debug.c @@ -0,0 +1,209 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "icp_qat_uclo.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_init_admin.h" +#include "adf_cfg_strings.h" +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include +#include +#include +#include + +static int adf_ring_show(SYSCTL_HANDLER_ARGS) +{ + struct adf_etr_ring_data *ring = arg1; + struct adf_etr_bank_data *bank = ring->bank; + struct resource *csr = ring->bank->csr_addr; + struct sbuf sb; + int error, word; + uint32_t *wp, *end; + + sbuf_new_for_sysctl(&sb, NULL, 128, req); + { + int head, tail, empty; + + head = READ_CSR_RING_HEAD(csr, + bank->bank_number, + ring->ring_number); + tail = READ_CSR_RING_TAIL(csr, + bank->bank_number, + ring->ring_number); + empty = READ_CSR_E_STAT(csr, bank->bank_number); + + sbuf_cat(&sb, "\n------- Ring configuration -------\n"); + sbuf_printf(&sb, + "ring name: %s\n", + ring->ring_debug->ring_name); + sbuf_printf(&sb, + "ring num %d, bank num %d\n", + ring->ring_number, + ring->bank->bank_number); + sbuf_printf(&sb, + "head %x, tail %x, empty: %d\n", + head, + tail, + (empty & 1 << ring->ring_number) >> + ring->ring_number); + sbuf_printf(&sb, + "ring size %d, msg size %d\n", + ADF_SIZE_TO_RING_SIZE_IN_BYTES(ring->ring_size), + 
ADF_MSG_SIZE_TO_BYTES(ring->msg_size)); + sbuf_cat(&sb, "----------- Ring data ------------\n"); + } + wp = ring->base_addr; + end = (uint32_t *)((char *)ring->base_addr + + ADF_SIZE_TO_RING_SIZE_IN_BYTES(ring->ring_size)); + while (wp < end) { + sbuf_printf(&sb, "%p:", wp); + for (word = 0; word < 32 / 4; word++, wp++) + sbuf_printf(&sb, " %08x", *wp); + sbuf_printf(&sb, "\n"); + } + error = sbuf_finish(&sb); + sbuf_delete(&sb); + return (error); +} + +int +adf_ring_debugfs_add(struct adf_etr_ring_data *ring, const char *name) +{ + struct adf_etr_ring_debug_entry *ring_debug; + char entry_name[8]; + + ring_debug = malloc(sizeof(*ring_debug), M_QAT, M_WAITOK | M_ZERO); + + strlcpy(ring_debug->ring_name, name, sizeof(ring_debug->ring_name)); + snprintf(entry_name, + sizeof(entry_name), + "ring_%02d", + ring->ring_number); + + ring_debug->debug = + SYSCTL_ADD_PROC(&ring->bank->accel_dev->sysctl_ctx, + SYSCTL_CHILDREN(ring->bank->bank_debug_dir), + OID_AUTO, + entry_name, + CTLFLAG_RD | CTLTYPE_STRING, + ring, + 0, + adf_ring_show, + "A", + "Ring configuration"); + + if (!ring_debug->debug) { + printf("QAT: Failed to create ring debug entry.\n"); + free(ring_debug, M_QAT); + return EFAULT; + } + ring->ring_debug = ring_debug; + return 0; +} + +void +adf_ring_debugfs_rm(struct adf_etr_ring_data *ring) +{ + if (ring->ring_debug) { + free(ring->ring_debug, M_QAT); + ring->ring_debug = NULL; + } +} + +static int adf_bank_show(SYSCTL_HANDLER_ARGS) +{ + struct adf_etr_bank_data *bank; + struct adf_accel_dev *accel_dev = NULL; + struct adf_hw_device_data *hw_data = NULL; + u8 num_rings_per_bank = 0; + struct sbuf sb; + int error, ring_id; + + sbuf_new_for_sysctl(&sb, NULL, 128, req); + bank = arg1; + accel_dev = bank->accel_dev; + hw_data = accel_dev->hw_device; + num_rings_per_bank = hw_data->num_rings_per_bank; + sbuf_printf(&sb, + "\n------- Bank %d configuration -------\n", + bank->bank_number); + for (ring_id = 0; ring_id < num_rings_per_bank; ring_id++) { + struct 
adf_etr_ring_data *ring = &bank->rings[ring_id]; + struct resource *csr = bank->csr_addr; + int head, tail, empty; + + if (!(bank->ring_mask & 1 << ring_id)) + continue; + + head = READ_CSR_RING_HEAD(csr, + bank->bank_number, + ring->ring_number); + tail = READ_CSR_RING_TAIL(csr, + bank->bank_number, + ring->ring_number); + empty = READ_CSR_E_STAT(csr, bank->bank_number); + + sbuf_printf(&sb, + "ring num %02d, head %04x, tail %04x, empty: %d\n", + ring->ring_number, + head, + tail, + (empty & 1 << ring->ring_number) >> + ring->ring_number); + } + error = sbuf_finish(&sb); + sbuf_delete(&sb); + return (error); +} + +int +adf_bank_debugfs_add(struct adf_etr_bank_data *bank) +{ + struct adf_accel_dev *accel_dev = bank->accel_dev; + struct sysctl_oid *parent = accel_dev->transport->debug; + char name[9]; + + snprintf(name, sizeof(name), "bank_%03d", bank->bank_number); + + bank->bank_debug_dir = SYSCTL_ADD_NODE(&accel_dev->sysctl_ctx, + SYSCTL_CHILDREN(parent), + OID_AUTO, + name, + CTLFLAG_RD | CTLFLAG_SKIP, + NULL, + ""); + + if (!bank->bank_debug_dir) { + printf("QAT: Failed to create bank debug dir.\n"); + return EFAULT; + } + + bank->bank_debug_cfg = + SYSCTL_ADD_PROC(&accel_dev->sysctl_ctx, + SYSCTL_CHILDREN(bank->bank_debug_dir), + OID_AUTO, + "config", + CTLFLAG_RD | CTLTYPE_STRING, + bank, + 0, + adf_bank_show, + "A", + "Bank configuration"); + + if (!bank->bank_debug_cfg) { + printf("QAT: Failed to create bank debug entry.\n"); + return EFAULT; + } + + return 0; +} + +void +adf_bank_debugfs_rm(struct adf_etr_bank_data *bank) +{ +} Index: sys/dev/qat/qat_common/adf_freebsd_ver_dbg.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_freebsd_ver_dbg.c @@ -0,0 +1,149 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include 
"adf_ver_dbg.h" + +static int adf_sysctl_read_fw_versions(SYSCTL_HANDLER_ARGS) +{ + struct adf_accel_dev *accel_dev = arg1; + char fw_version[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + + if (!accel_dev) + return -EINVAL; + + if (adf_dev_started(accel_dev)) + snprintf(fw_version, + ADF_CFG_MAX_VAL_LEN_IN_BYTES, + "%d.%d.%d", + accel_dev->fw_versions.fw_version_major, + accel_dev->fw_versions.fw_version_minor, + accel_dev->fw_versions.fw_version_patch); + else + snprintf(fw_version, ADF_CFG_MAX_VAL_LEN_IN_BYTES, ""); + + return SYSCTL_OUT(req, + fw_version, + strnlen(fw_version, ADF_CFG_MAX_VAL_LEN_IN_BYTES)); +} + +static int adf_sysctl_read_hw_versions(SYSCTL_HANDLER_ARGS) +{ + struct adf_accel_dev *accel_dev = arg1; + char hw_version[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + + if (!accel_dev) + return -EINVAL; + + if (adf_dev_started(accel_dev)) + snprintf(hw_version, + ADF_CFG_MAX_VAL_LEN_IN_BYTES, + "%d", + accel_dev->accel_pci_dev.revid); + else + snprintf(hw_version, ADF_CFG_MAX_VAL_LEN_IN_BYTES, ""); + + return SYSCTL_OUT(req, + hw_version, + strnlen(hw_version, ADF_CFG_MAX_VAL_LEN_IN_BYTES)); +} + +static int adf_sysctl_read_mmp_versions(SYSCTL_HANDLER_ARGS) +{ + struct adf_accel_dev *accel_dev = arg1; + char mmp_version[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + + if (!accel_dev) + return -EINVAL; + + if (adf_dev_started(accel_dev)) + snprintf(mmp_version, + ADF_CFG_MAX_VAL_LEN_IN_BYTES, + "%d.%d.%d", + accel_dev->fw_versions.mmp_version_major, + accel_dev->fw_versions.mmp_version_minor, + accel_dev->fw_versions.mmp_version_patch); + + if (adf_dev_started(accel_dev)) + snprintf(mmp_version, + ADF_CFG_MAX_VAL_LEN_IN_BYTES, + "%d.%d.%d", + accel_dev->fw_versions.mmp_version_major, + accel_dev->fw_versions.mmp_version_minor, + accel_dev->fw_versions.mmp_version_patch); + else + snprintf(mmp_version, ADF_CFG_MAX_VAL_LEN_IN_BYTES, ""); + + return SYSCTL_OUT(req, + mmp_version, + strnlen(mmp_version, ADF_CFG_MAX_VAL_LEN_IN_BYTES)); +} + +int +adf_ver_dbg_add(struct adf_accel_dev *accel_dev) 
+{ + struct sysctl_ctx_list *qat_sysctl_ctx; + struct sysctl_oid *qat_sysctl_tree; + struct sysctl_oid *rc = 0; + + if (!accel_dev) + return -EINVAL; + + qat_sysctl_ctx = + device_get_sysctl_ctx(accel_dev->accel_pci_dev.pci_dev); + qat_sysctl_tree = + device_get_sysctl_tree(accel_dev->accel_pci_dev.pci_dev); + + rc = SYSCTL_ADD_OID(qat_sysctl_ctx, + SYSCTL_CHILDREN(qat_sysctl_tree), + OID_AUTO, + "fw_version", + CTLTYPE_STRING | CTLFLAG_RD, + accel_dev, + 0, + adf_sysctl_read_fw_versions, + "A", + "QAT FW version"); + if (!rc) + goto err; + + rc = SYSCTL_ADD_OID(qat_sysctl_ctx, + SYSCTL_CHILDREN(qat_sysctl_tree), + OID_AUTO, + "hw_version", + CTLTYPE_STRING | CTLFLAG_RD, + accel_dev, + 0, + adf_sysctl_read_hw_versions, + "A", + "QAT HW version"); + if (!rc) + goto err; + + rc = SYSCTL_ADD_OID(qat_sysctl_ctx, + SYSCTL_CHILDREN(qat_sysctl_tree), + OID_AUTO, + "mmp_version", + CTLTYPE_STRING | CTLFLAG_RD, + accel_dev, + 0, + adf_sysctl_read_mmp_versions, + "A", + "QAT MMP version"); + if (!rc) + goto err; + + return 0; +err: + device_printf(GET_DEV(accel_dev), + "Failed to add firmware versions to sysctl\n"); + return -EINVAL; +} + +void +adf_ver_dbg_del(struct adf_accel_dev *accel_dev) +{ +} Index: sys/dev/qat/qat_common/adf_fw_counters.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_fw_counters.c @@ -0,0 +1,411 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include +#include +#include +#include "adf_accel_devices.h" +#include "adf_fw_counters.h" +#include "adf_common_drv.h" +#include "icp_qat_fw_init_admin.h" +#include +#include +#define ADF_FW_COUNTERS_BUF_SZ 4096 + +#define ADF_RAS_EVENT_STR "RAS events" +#define ADF_FW_REQ_STR "Firmware Requests" +#define ADF_FW_RESP_STR "Firmware Responses" + +static void adf_fw_counters_section_del_all(struct list_head *head); +static void adf_fw_counters_del_all(struct adf_accel_dev 
*accel_dev); +static int +adf_fw_counters_add_key_value_param(struct adf_accel_dev *accel_dev, + const char *section_name, + const unsigned long sec_name_max_size, + const char *key, + const void *val); +static int adf_fw_counters_section_add(struct adf_accel_dev *accel_dev, + const char *name, + const unsigned long name_max_size); +int adf_get_fw_counters(struct adf_accel_dev *accel_dev); +int adf_read_fw_counters(SYSCTL_HANDLER_ARGS); + +int +adf_get_fw_counters(struct adf_accel_dev *accel_dev) +{ + struct icp_qat_fw_init_admin_req req; + struct icp_qat_fw_init_admin_resp resp; + unsigned long ae_mask; + int i; + int ret = 0; + char aeidstr[16] = { 0 }; + struct adf_hw_device_data *hw_device; + + if (!accel_dev) { + ret = EFAULT; + goto fail_clean; + } + if (!adf_dev_started(accel_dev)) { + device_printf(GET_DEV(accel_dev), "Qat Device not started\n"); + ret = EFAULT; + goto fail_clean; + } + + hw_device = accel_dev->hw_device; + if (!hw_device) { + ret = EFAULT; + goto fail_clean; + } + + adf_fw_counters_del_all(accel_dev); + explicit_bzero(&req, sizeof(struct icp_qat_fw_init_admin_req)); + req.cmd_id = ICP_QAT_FW_COUNTERS_GET; + ae_mask = hw_device->ae_mask; + for_each_set_bit(i, &ae_mask, GET_MAX_ACCELENGINES(accel_dev)) + { + explicit_bzero(&resp, + sizeof(struct icp_qat_fw_init_admin_resp)); + if (adf_put_admin_msg_sync(accel_dev, i, &req, &resp) || + resp.status) { + resp.req_rec_count = ADF_FW_COUNTERS_NO_RESPONSE; + resp.resp_sent_count = ADF_FW_COUNTERS_NO_RESPONSE; + resp.ras_event_count = ADF_FW_COUNTERS_NO_RESPONSE; + } + explicit_bzero(aeidstr, sizeof(aeidstr)); + snprintf(aeidstr, sizeof(aeidstr), "AE %2d", i); + + if (adf_fw_counters_section_add(accel_dev, + aeidstr, + sizeof(aeidstr))) { + ret = ENOMEM; + goto fail_clean; + } + + if (adf_fw_counters_add_key_value_param( + accel_dev, + aeidstr, + sizeof(aeidstr), + ADF_FW_REQ_STR, + (void *)&resp.req_rec_count)) { + adf_fw_counters_del_all(accel_dev); + ret = ENOMEM; + goto fail_clean; + } + + if 
(adf_fw_counters_add_key_value_param( + accel_dev, + aeidstr, + sizeof(aeidstr), + ADF_FW_RESP_STR, + (void *)&resp.resp_sent_count)) { + adf_fw_counters_del_all(accel_dev); + ret = ENOMEM; + goto fail_clean; + } + + if (hw_device->count_ras_event && + hw_device->count_ras_event(accel_dev, + (void *)&resp.ras_event_count, + aeidstr)) { + adf_fw_counters_del_all(accel_dev); + ret = ENOMEM; + goto fail_clean; + } + } + +fail_clean: + return ret; +} + +int adf_read_fw_counters(SYSCTL_HANDLER_ARGS) +{ + struct adf_accel_dev *accel_dev = arg1; + struct adf_fw_counters_section *ptr = NULL; + struct list_head *list = NULL, *list_ptr = NULL; + struct list_head *tmp = NULL, *tmp_val = NULL; + int ret = 0; + struct sbuf *sbuf = NULL; + char *cbuf = NULL; + + if (accel_dev == NULL) { + return EINVAL; + } + cbuf = malloc(ADF_FW_COUNTERS_BUF_SZ, M_QAT, M_WAITOK | M_ZERO); + + sbuf = sbuf_new(NULL, cbuf, ADF_FW_COUNTERS_BUF_SZ, SBUF_FIXEDLEN); + if (sbuf == NULL) { + free(cbuf, M_QAT); + return ENOMEM; + } + ret = adf_get_fw_counters(accel_dev); + + if (ret) { + sbuf_delete(sbuf); + free(cbuf, M_QAT); + return ret; + } + + sbuf_printf(sbuf, + "\n+------------------------------------------------+\n"); + sbuf_printf( + sbuf, + "| FW Statistics for Qat Device |\n"); + sbuf_printf(sbuf, + "+------------------------------------------------+\n"); + + list_for_each_prev_safe(list, + tmp, + &accel_dev->fw_counters_data->ae_sec_list) + { + ptr = list_entry(list, struct adf_fw_counters_section, list); + sbuf_printf(sbuf, "%s\n", ptr->name); + list_for_each_prev_safe(list_ptr, tmp_val, &ptr->param_head) + { + struct adf_fw_counters_val *count = + list_entry(list_ptr, + struct adf_fw_counters_val, + list); + sbuf_printf(sbuf, "%s:%s\n", count->key, count->val); + } + } + + sbuf_finish(sbuf); + ret = SYSCTL_OUT(req, sbuf_data(sbuf), sbuf_len(sbuf)); + sbuf_delete(sbuf); + free(cbuf, M_QAT); + return ret; +} + +int +adf_fw_count_ras_event(struct adf_accel_dev *accel_dev, + u32 *ras_event, + 
char *aeidstr) +{ + unsigned long count = 0; + + if (!accel_dev || !ras_event || !aeidstr) + return EINVAL; + + count = (*ras_event == ADF_FW_COUNTERS_NO_RESPONSE ? + ADF_FW_COUNTERS_NO_RESPONSE : + (unsigned long)*ras_event); + + return adf_fw_counters_add_key_value_param( + accel_dev, aeidstr, 16, ADF_RAS_EVENT_STR, (void *)&count); +} + +/** + * adf_fw_counters_add() - Create an acceleration device FW counters table. + * @accel_dev: Pointer to acceleration device. + * + * Function creates a FW counters statistics table for the given + * acceleration device. + * The table stores device specific values of FW Requests sent to the FW and + * FW Responses received from the FW. + * To be used by QAT device specific drivers. + * + * Return: 0 on success, error code otherwise. + */ +int +adf_fw_counters_add(struct adf_accel_dev *accel_dev) +{ + struct adf_fw_counters_data *fw_counters_data; + struct sysctl_ctx_list *qat_sysctl_ctx; + struct sysctl_oid *qat_sysctl_tree; + struct sysctl_oid *rc = 0; + + fw_counters_data = + malloc(sizeof(*fw_counters_data), M_QAT, M_WAITOK | M_ZERO); + + INIT_LIST_HEAD(&fw_counters_data->ae_sec_list); + + init_rwsem(&fw_counters_data->lock); + accel_dev->fw_counters_data = fw_counters_data; + + qat_sysctl_ctx = + device_get_sysctl_ctx(accel_dev->accel_pci_dev.pci_dev); + qat_sysctl_tree = + device_get_sysctl_tree(accel_dev->accel_pci_dev.pci_dev); + rc = SYSCTL_ADD_OID(qat_sysctl_ctx, + SYSCTL_CHILDREN(qat_sysctl_tree), + OID_AUTO, + "fw_counters", + CTLTYPE_STRING | CTLFLAG_RD, + accel_dev, + 0, + adf_read_fw_counters, + "A", + "QAT FW counters"); + if (!rc) + return ENOMEM; + else + return 0; +} + +static void +adf_fw_counters_del_all(struct adf_accel_dev *accel_dev) +{ + struct adf_fw_counters_data *fw_counters_data = + accel_dev->fw_counters_data; + + down_write(&fw_counters_data->lock); + adf_fw_counters_section_del_all(&fw_counters_data->ae_sec_list); + up_write(&fw_counters_data->lock); +} + +static void 
+adf_fw_counters_keyval_add(struct adf_fw_counters_val *new, + struct adf_fw_counters_section *sec) +{ + list_add_tail(&new->list, &sec->param_head); +} + +static void +adf_fw_counters_keyval_del_all(struct list_head *head) +{ + struct list_head *list_ptr = NULL, *tmp = NULL; + + list_for_each_prev_safe(list_ptr, tmp, head) + { + struct adf_fw_counters_val *ptr = + list_entry(list_ptr, struct adf_fw_counters_val, list); + list_del(list_ptr); + free(ptr, M_QAT); + } +} + +static void +adf_fw_counters_section_del_all(struct list_head *head) +{ + struct adf_fw_counters_section *ptr = NULL; + struct list_head *list = NULL, *tmp = NULL; + + list_for_each_prev_safe(list, tmp, head) + { + ptr = list_entry(list, struct adf_fw_counters_section, list); + adf_fw_counters_keyval_del_all(&ptr->param_head); + list_del(list); + free(ptr, M_QAT); + } +} + +static struct adf_fw_counters_section * +adf_fw_counters_sec_find(struct adf_accel_dev *accel_dev, + const char *sec_name, + const unsigned long sec_name_max_size) +{ + struct adf_fw_counters_data *fw_counters_data = + accel_dev->fw_counters_data; + struct list_head *list = NULL; + + list_for_each(list, &fw_counters_data->ae_sec_list) + { + struct adf_fw_counters_section *ptr = + list_entry(list, struct adf_fw_counters_section, list); + if (!strncmp(ptr->name, sec_name, sec_name_max_size)) + return ptr; + } + return NULL; +} + +static int +adf_fw_counters_add_key_value_param(struct adf_accel_dev *accel_dev, + const char *section_name, + const unsigned long sec_name_max_size, + const char *key, + const void *val) +{ + struct adf_fw_counters_data *fw_counters_data = + accel_dev->fw_counters_data; + struct adf_fw_counters_val *key_val; + struct adf_fw_counters_section *section = + adf_fw_counters_sec_find(accel_dev, + section_name, + sec_name_max_size); + long tmp = *((const long *)val); + + if (!section) + return EFAULT; + key_val = malloc(sizeof(*key_val), M_QAT, M_WAITOK | M_ZERO); + + INIT_LIST_HEAD(&key_val->list); + + if (tmp 
== ADF_FW_COUNTERS_NO_RESPONSE) { + snprintf(key_val->val, + FW_COUNTERS_MAX_VAL_LEN_IN_BYTES, + "No Response"); + } else { + snprintf(key_val->val, + FW_COUNTERS_MAX_VAL_LEN_IN_BYTES, + "%ld", + tmp); + } + + strlcpy(key_val->key, key, sizeof(key_val->key)); + down_write(&fw_counters_data->lock); + adf_fw_counters_keyval_add(key_val, section); + up_write(&fw_counters_data->lock); + return 0; +} + +/** + * adf_fw_counters_section_add() - Add AE section entry to FW counters table. + * @accel_dev: Pointer to acceleration device. + * @name: Name of the section + * + * Function adds a section for each AE where FW Requests/Responses and their + * values will be stored. + * To be used by QAT device specific drivers. + * + * Return: 0 on success, error code otherwise. + */ +static int +adf_fw_counters_section_add(struct adf_accel_dev *accel_dev, + const char *name, + const unsigned long name_max_size) +{ + struct adf_fw_counters_data *fw_counters_data = + accel_dev->fw_counters_data; + struct adf_fw_counters_section *sec = + adf_fw_counters_sec_find(accel_dev, name, name_max_size); + + if (sec) + return 0; + + sec = malloc(sizeof(*sec), M_QAT, M_WAITOK | M_ZERO); + + strlcpy(sec->name, name, sizeof(sec->name)); + INIT_LIST_HEAD(&sec->param_head); + + down_write(&fw_counters_data->lock); + + list_add_tail(&sec->list, &fw_counters_data->ae_sec_list); + up_write(&fw_counters_data->lock); + return 0; +} + +/** + * adf_fw_counters_remove() - Clears acceleration device FW counters table. + * @accel_dev: Pointer to acceleration device. + * + * Function removes FW counters table from the given acceleration device + * and frees all allocated memory. + * To be used by QAT device specific drivers. 
+ * + * Return: void + */ +void +adf_fw_counters_remove(struct adf_accel_dev *accel_dev) +{ + struct adf_fw_counters_data *fw_counters_data = + accel_dev->fw_counters_data; + + if (!fw_counters_data) + return; + + down_write(&fw_counters_data->lock); + adf_fw_counters_section_del_all(&fw_counters_data->ae_sec_list); + up_write(&fw_counters_data->lock); + free(fw_counters_data, M_QAT); + accel_dev->fw_counters_data = NULL; +} Index: sys/dev/qat/qat_common/adf_heartbeat.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_heartbeat.c @@ -0,0 +1,213 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include +#include +#include "qat_freebsd.h" + +#include "adf_heartbeat.h" +#include "adf_common_drv.h" +#include "adf_cfg.h" +#include "adf_cfg_strings.h" +#include "icp_qat_fw_init_admin.h" +#include "adf_transport_internal.h" + +#define MAX_HB_TICKS 0xFFFFFFFF + +static int +adf_check_hb_poll_freq(struct adf_accel_dev *accel_dev) +{ + u64 curr_hb_check_time = 0; + char timer_str[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + unsigned int timer_val = ADF_CFG_HB_DEFAULT_VALUE; + + curr_hb_check_time = adf_clock_get_current_time(); + + if (!adf_cfg_get_param_value(accel_dev, + ADF_GENERAL_SEC, + ADF_HEARTBEAT_TIMER, + (char *)timer_str)) { + if (compat_strtouint((char *)timer_str, + ADF_CFG_BASE_DEC, + &timer_val)) + timer_val = ADF_CFG_HB_DEFAULT_VALUE; + } + if ((curr_hb_check_time - accel_dev->heartbeat->last_hb_check_time) < + timer_val) { + return EINVAL; + } + accel_dev->heartbeat->last_hb_check_time = curr_hb_check_time; + + return 0; +} + +int +adf_heartbeat_init(struct adf_accel_dev *accel_dev) +{ + if (accel_dev->heartbeat) + adf_heartbeat_clean(accel_dev); + + accel_dev->heartbeat = + malloc(sizeof(*accel_dev->heartbeat), M_QAT, M_WAITOK | M_ZERO); + + return 0; +} + +void +adf_heartbeat_clean(struct adf_accel_dev *accel_dev) +{ + 
free(accel_dev->heartbeat, M_QAT); + accel_dev->heartbeat = NULL; +} + +int +adf_get_hb_timer(struct adf_accel_dev *accel_dev, unsigned int *value) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + char timer_str[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + unsigned int timer_val = ADF_CFG_HB_DEFAULT_VALUE; + u32 clk_per_sec = 0; + + if (!hw_data->get_ae_clock) + return EINVAL; + + clk_per_sec = (u32)hw_data->get_ae_clock(hw_data); + + /* Get Heartbeat Timer value from the configuration */ + if (!adf_cfg_get_param_value(accel_dev, + ADF_GENERAL_SEC, + ADF_HEARTBEAT_TIMER, + (char *)timer_str)) { + if (compat_strtouint((char *)timer_str, + ADF_CFG_BASE_DEC, + &timer_val)) + timer_val = ADF_CFG_HB_DEFAULT_VALUE; + } + + if (timer_val < ADF_MIN_HB_TIMER_MS) { + device_printf(GET_DEV(accel_dev), + "%s value cannot be lesser than %u\n", + ADF_HEARTBEAT_TIMER, + ADF_MIN_HB_TIMER_MS); + return EINVAL; + } + + /* Convert msec to clocks */ + clk_per_sec = clk_per_sec / 1000; + *value = timer_val * clk_per_sec; + + return 0; +} + +struct adf_hb_count { + u16 ae_thread[ADF_NUM_HB_CNT_PER_AE]; +}; + +int +adf_get_heartbeat_status(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + struct icp_qat_fw_init_admin_hb_stats *live_s = + (struct icp_qat_fw_init_admin_hb_stats *) + accel_dev->admin->virt_hb_addr; + const size_t max_aes = hw_device->get_num_aes(hw_device); + const size_t stats_size = + max_aes * sizeof(struct icp_qat_fw_init_admin_hb_stats); + int ret = 0; + size_t ae, thr; + unsigned long ae_mask = 0; + int num_threads_per_ae = ADF_NUM_HB_CNT_PER_AE; + + /* + * Memory layout of Heartbeat + * + * +----------------+----------------+---------+ + * | Live value | Last value | Count | + * +----------------+----------------+---------+ + * \_______________/\_______________/\________/ + * ^ ^ ^ + * | | | + * | | max_aes * sizeof(adf_hb_count) + * | max_aes * sizeof(icp_qat_fw_init_admin_hb_stats) + * max_aes * 
sizeof(icp_qat_fw_init_admin_hb_stats) + */ + struct icp_qat_fw_init_admin_hb_stats *curr_s; + struct icp_qat_fw_init_admin_hb_stats *last_s = live_s + max_aes; + struct adf_hb_count *count = (struct adf_hb_count *)(last_s + max_aes); + + curr_s = malloc(stats_size, M_QAT, M_WAITOK | M_ZERO); + + memcpy(curr_s, live_s, stats_size); + ae_mask = hw_device->ae_mask; + + for_each_set_bit(ae, &ae_mask, max_aes) + { + for (thr = 0; thr < num_threads_per_ae; ++thr) { + struct icp_qat_fw_init_admin_hb_cnt *curr = + &curr_s[ae].stats[thr]; + struct icp_qat_fw_init_admin_hb_cnt *prev = + &last_s[ae].stats[thr]; + u16 req = curr->req_heartbeat_cnt; + u16 resp = curr->resp_heartbeat_cnt; + u16 last = prev->resp_heartbeat_cnt; + + if ((thr == ADF_AE_ADMIN_THREAD || req != resp) && + resp == last) { + u16 retry = ++count[ae].ae_thread[thr]; + + if (retry >= ADF_CFG_HB_COUNT_THRESHOLD) + ret = EIO; + } else { + count[ae].ae_thread[thr] = 0; + } + } + } + + /* Copy current stats for the next iteration */ + memcpy(last_s, curr_s, stats_size); + free(curr_s, M_QAT); + + return ret; +} + +int +adf_heartbeat_status(struct adf_accel_dev *accel_dev, + enum adf_device_heartbeat_status *hb_status) +{ + /* Heartbeat is not implemented in VFs at the moment so they do not + * set get_heartbeat_status. 
Also, in case the device is not up, + * unsupported should be returned */ + if (!accel_dev || !accel_dev->hw_device || + !accel_dev->hw_device->get_heartbeat_status || + !accel_dev->heartbeat) { + *hb_status = DEV_HB_UNSUPPORTED; + return 0; + } + + if (!adf_dev_started(accel_dev) || + test_bit(ADF_STATUS_RESTARTING, &accel_dev->status)) { + *hb_status = DEV_HB_UNRESPONSIVE; + accel_dev->heartbeat->last_hb_status = DEV_HB_UNRESPONSIVE; + return 0; + } + + if (adf_check_hb_poll_freq(accel_dev) == EINVAL) { + *hb_status = accel_dev->heartbeat->last_hb_status; + return 0; + } + + accel_dev->heartbeat->hb_sent_counter++; + if (unlikely(accel_dev->hw_device->get_heartbeat_status(accel_dev))) { + device_printf(GET_DEV(accel_dev), + "ERROR: QAT is not responding.\n"); + *hb_status = DEV_HB_UNRESPONSIVE; + accel_dev->heartbeat->last_hb_status = DEV_HB_UNRESPONSIVE; + accel_dev->heartbeat->hb_failed_counter++; + return adf_notify_fatal_error(accel_dev); + } + + *hb_status = DEV_HB_ALIVE; + accel_dev->heartbeat->last_hb_status = DEV_HB_ALIVE; + + return 0; +} Index: sys/dev/qat/qat_common/adf_hw_arbiter.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_hw_arbiter.c @@ -0,0 +1,186 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "icp_qat_uclo.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_init_admin.h" +#include "adf_cfg_strings.h" +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "adf_transport_internal.h" + +#define ADF_ARB_NUM 4 +#define ADF_ARB_REG_SIZE 0x4 +#define ADF_ARB_WTR_SIZE 0x20 +#define ADF_ARB_OFFSET 0x30000 +#define ADF_ARB_REG_SLOT 0x1000 +#define ADF_ARB_WTR_OFFSET 0x010 +#define ADF_ARB_RO_EN_OFFSET 0x090 
+#define ADF_ARB_WQCFG_OFFSET 0x100 +#define ADF_ARB_WRK_2_SER_MAP_OFFSET 0x180 +#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C + +#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \ + ADF_CSR_WR(csr_addr, \ + ADF_ARB_RINGSRVARBEN_OFFSET + (ADF_ARB_REG_SLOT * (index)), \ + value) + +#define WRITE_CSR_ARB_SARCONFIG(csr_addr, csr_offset, index, value) \ + ADF_CSR_WR(csr_addr, (csr_offset) + (ADF_ARB_REG_SIZE * (index)), value) +#define READ_CSR_ARB_RINGSRVARBEN(csr_addr, index) \ + ADF_CSR_RD(csr_addr, \ + ADF_ARB_RINGSRVARBEN_OFFSET + (ADF_ARB_REG_SLOT * (index))) + +static DEFINE_MUTEX(csr_arb_lock); + +#define WRITE_CSR_ARB_WRK_2_SER_MAP( \ + csr_addr, csr_offset, wrk_to_ser_map_offset, index, value) \ + ADF_CSR_WR(csr_addr, \ + ((csr_offset) + (wrk_to_ser_map_offset)) + \ + (ADF_ARB_REG_SIZE * (index)), \ + value) + +int +adf_init_arb(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct arb_info info; + struct resource *csr = accel_dev->transport->banks[0].csr_addr; + u32 arb_cfg = 0x1 << 31 | 0x4 << 4 | 0x1; + u32 arb; + + hw_data->get_arb_info(&info); + + /* Service arb configured for 32 bytes responses and + * ring flow control check enabled. 
+ */ + for (arb = 0; arb < ADF_ARB_NUM; arb++) + WRITE_CSR_ARB_SARCONFIG(csr, info.arbiter_offset, arb, arb_cfg); + + return 0; +} + +int +adf_init_gen2_arb(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct arb_info info; + struct resource *csr = accel_dev->transport->banks[0].csr_addr; + u32 i; + const u32 *thd_2_arb_cfg; + + /* invoke common adf_init_arb */ + adf_init_arb(accel_dev); + + hw_data->get_arb_info(&info); + + /* Map worker threads to service arbiters */ + hw_data->get_arb_mapping(accel_dev, &thd_2_arb_cfg); + if (!thd_2_arb_cfg) + return EFAULT; + + for (i = 0; i < hw_data->num_engines; i++) + WRITE_CSR_ARB_WRK_2_SER_MAP(csr, + info.arbiter_offset, + info.wrk_thd_2_srv_arb_map, + i, + *(thd_2_arb_cfg + i)); + return 0; +} + +void +adf_update_ring_arb(struct adf_etr_ring_data *ring) +{ + WRITE_CSR_ARB_RINGSRVARBEN(ring->bank->csr_addr, + ring->bank->bank_number, + ring->bank->ring_mask & 0xFF); +} + +void +adf_enable_ring_arb(void *csr_addr, unsigned int bank_nr, unsigned int mask) +{ + struct resource *csr = csr_addr; + u32 arbenable; + + if (!csr) + return; + + mutex_lock(&csr_arb_lock); + arbenable = READ_CSR_ARB_RINGSRVARBEN(csr, bank_nr); + arbenable |= mask & 0xFF; + WRITE_CSR_ARB_RINGSRVARBEN(csr, bank_nr, arbenable); + + mutex_unlock(&csr_arb_lock); +} + +void +adf_disable_ring_arb(void *csr_addr, unsigned int bank_nr, unsigned int mask) +{ + struct resource *csr = csr_addr; + u32 arbenable; + + if (!csr_addr) + return; + + mutex_lock(&csr_arb_lock); + arbenable = READ_CSR_ARB_RINGSRVARBEN(csr, bank_nr); + arbenable &= ~mask & 0xFF; + WRITE_CSR_ARB_RINGSRVARBEN(csr, bank_nr, arbenable); + mutex_unlock(&csr_arb_lock); +} + +void +adf_exit_arb(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct arb_info info; + struct resource *csr; + unsigned int i; + + if (!accel_dev->transport) + return; + + csr = 
accel_dev->transport->banks[0].csr_addr; + + hw_data->get_arb_info(&info); + + /* Reset arbiter configuration */ + for (i = 0; i < ADF_ARB_NUM; i++) + WRITE_CSR_ARB_SARCONFIG(csr, info.arbiter_offset, i, 0); + + /* Unmap worker threads to service arbiters */ + if (hw_data->get_arb_mapping) { + for (i = 0; i < hw_data->num_engines; i++) + WRITE_CSR_ARB_WRK_2_SER_MAP(csr, + info.arbiter_offset, + info.wrk_thd_2_srv_arb_map, + i, + 0); + } + + /* Disable arbitration on all rings */ + for (i = 0; i < GET_MAX_BANKS(accel_dev); i++) + WRITE_CSR_ARB_RINGSRVARBEN(csr, i, 0); +} + +void +adf_disable_arb(struct adf_accel_dev *accel_dev) +{ + struct resource *csr; + unsigned int i; + + if (!accel_dev || !accel_dev->transport) + return; + + csr = accel_dev->transport->banks[0].csr_addr; + + /* Disable arbitration on all rings */ + for (i = 0; i < GET_MAX_BANKS(accel_dev); i++) + WRITE_CSR_ARB_RINGSRVARBEN(csr, i, 0); +} Index: sys/dev/qat/qat_common/adf_init.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_init.c @@ -0,0 +1,730 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "icp_qat_uclo.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_init_admin.h" +#include "adf_cfg_strings.h" +#include "adf_dev_err.h" +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include +#include +#include "adf_accel_devices.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "icp_qat_fw.h" + +/* Mask used to check the CompressAndVerify capability bit */ +#define DC_CNV_EXTENDED_CAPABILITY (0x01) + +/* Mask used to check the CompressAndVerifyAndRecover capability bit */ +#define DC_CNVNR_EXTENDED_CAPABILITY (0x100) + +static LIST_HEAD(service_table); +static DEFINE_MUTEX(service_lock); + +static void 
+adf_service_add(struct service_hndl *service) +{ + mutex_lock(&service_lock); + list_add(&service->list, &service_table); + mutex_unlock(&service_lock); +} + +int +adf_service_register(struct service_hndl *service) +{ + memset(service->init_status, 0, sizeof(service->init_status)); + memset(service->start_status, 0, sizeof(service->start_status)); + adf_service_add(service); + return 0; +} + +static void +adf_service_remove(struct service_hndl *service) +{ + mutex_lock(&service_lock); + list_del(&service->list); + mutex_unlock(&service_lock); +} + +int +adf_service_unregister(struct service_hndl *service) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(service->init_status); i++) { + if (service->init_status[i] || service->start_status[i]) { + pr_err("QAT: Could not remove active service [%d]\n", + i); + return EFAULT; + } + } + adf_service_remove(service); + return 0; +} + +static int +adf_cfg_add_device_params(struct adf_accel_dev *accel_dev) +{ + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char hw_version[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + char mmp_version[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + struct adf_hw_device_data *hw_data = NULL; + unsigned long val; + + if (!accel_dev) + return -EINVAL; + + hw_data = accel_dev->hw_device; + + if (adf_cfg_section_add(accel_dev, ADF_GENERAL_SEC)) + goto err; + + snprintf(key, sizeof(key), ADF_DEV_MAX_BANKS); + val = GET_MAX_BANKS(accel_dev); + if (adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC)) + goto err; + + snprintf(key, sizeof(key), ADF_DEV_CAPABILITIES_MASK); + val = hw_data->accel_capabilities_mask; + if (adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)val, ADF_HEX)) + goto err; + + snprintf(key, sizeof(key), ADF_DEV_PKG_ID); + val = accel_dev->accel_id; + if (adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC)) + goto err; + + snprintf(key, sizeof(key), ADF_DEV_NODE_ID); + val = dev_to_node(GET_DEV(accel_dev)); + if 
(adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC)) + goto err; + + snprintf(key, sizeof(key), ADF_DEV_MAX_RINGS_PER_BANK); + val = hw_data->num_rings_per_bank; + if (adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC)) + goto err; + + snprintf(key, sizeof(key), ADF_HW_REV_ID_KEY); + snprintf(hw_version, + ADF_CFG_MAX_VAL_LEN_IN_BYTES, + "%d", + accel_dev->accel_pci_dev.revid); + if (adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)hw_version, ADF_STR)) + goto err; + + snprintf(key, sizeof(key), ADF_MMP_VER_KEY); + snprintf(mmp_version, + ADF_CFG_MAX_VAL_LEN_IN_BYTES, + "%d.%d.%d", + accel_dev->fw_versions.mmp_version_major, + accel_dev->fw_versions.mmp_version_minor, + accel_dev->fw_versions.mmp_version_patch); + if (adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)mmp_version, ADF_STR)) + goto err; + + return 0; +err: + device_printf(GET_DEV(accel_dev), + "Failed to add internal values to accel_dev cfg\n"); + return -EINVAL; +} + +static int +adf_cfg_add_fw_version(struct adf_accel_dev *accel_dev) +{ + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char fw_version[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + + snprintf(key, sizeof(key), ADF_UOF_VER_KEY); + snprintf(fw_version, + ADF_CFG_MAX_VAL_LEN_IN_BYTES, + "%d.%d.%d", + accel_dev->fw_versions.fw_version_major, + accel_dev->fw_versions.fw_version_minor, + accel_dev->fw_versions.fw_version_patch); + if (adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)fw_version, ADF_STR)) + return EFAULT; + + return 0; +} + +static int +adf_cfg_add_ext_params(struct adf_accel_dev *accel_dev) +{ + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + unsigned long val; + + snprintf(key, sizeof(key), ADF_DC_EXTENDED_FEATURES); + + val = hw_data->extended_dc_capabilities; + if (adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)val, 
ADF_HEX)) + return -EINVAL; + + return 0; +} + +void +adf_error_notifier(uintptr_t arg) +{ + struct adf_accel_dev *accel_dev = (struct adf_accel_dev *)arg; + struct service_hndl *service; + struct list_head *list_itr; + + list_for_each(list_itr, &service_table) + { + service = list_entry(list_itr, struct service_hndl, list); + if (service->event_hld(accel_dev, ADF_EVENT_ERROR)) + device_printf(GET_DEV(accel_dev), + "Failed to send error event to %s.\n", + service->name); + } +} + +/** + * adf_set_ssm_wdtimer() - Initialize the slice hang watchdog timer. + * + * Return: 0 on success, error code otherwise. + */ +int +adf_set_ssm_wdtimer(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_bar *misc_bar = + &GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)]; + struct resource *csr = misc_bar->virt_addr; + u32 i; + unsigned int mask; + u32 clk_per_sec = hw_data->get_clock_speed(hw_data); + u32 timer_val = ADF_WDT_TIMER_SYM_COMP_MS * (clk_per_sec / 1000); + u32 timer_val_pke = ADF_SSM_WDT_PKE_DEFAULT_VALUE; + char timer_str[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + + /* Get Watch Dog Timer for CySym+Comp from the configuration */ + if (!adf_cfg_get_param_value(accel_dev, + ADF_GENERAL_SEC, + ADF_DEV_SSM_WDT_BULK, + (char *)timer_str)) { + if (!compat_strtouint((char *)timer_str, + ADF_CFG_BASE_DEC, + &timer_val)) + /* Convert msec to CPP clocks */ + timer_val = timer_val * (clk_per_sec / 1000); + } + /* Get Watch Dog Timer for CyAsym from the configuration */ + if (!adf_cfg_get_param_value(accel_dev, + ADF_GENERAL_SEC, + ADF_DEV_SSM_WDT_PKE, + (char *)timer_str)) { + if (!compat_strtouint((char *)timer_str, + ADF_CFG_BASE_DEC, + &timer_val_pke)) + /* Convert msec to CPP clocks */ + timer_val_pke = timer_val_pke * (clk_per_sec / 1000); + } + + for (i = 0, mask = hw_data->accel_mask; mask; i++, mask >>= 1) { + if (!(mask & 1)) + continue; + /* Enable Watch Dog Timer for CySym + Comp */ + ADF_CSR_WR(csr, 
ADF_SSMWDT(i), timer_val); + /* Enable Watch Dog Timer for CyAsym */ + ADF_CSR_WR(csr, ADF_SSMWDTPKE(i), timer_val_pke); + } + return 0; +} + +/** + * adf_dev_init() - Init data structures and services for the given accel device + * @accel_dev: Pointer to acceleration device. + * + * Initialize the ring data structures and the admin comms and arbitration + * services. + * + * Return: 0 on success, error code otherwise. + */ +int +adf_dev_init(struct adf_accel_dev *accel_dev) +{ + struct service_hndl *service; + struct list_head *list_itr; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + char value[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + int ret = 0; + sysctl_ctx_init(&accel_dev->sysctl_ctx); + set_bit(ADF_STATUS_SYSCTL_CTX_INITIALISED, &accel_dev->status); + + if (!hw_data) { + device_printf(GET_DEV(accel_dev), + "Failed to init device - hw_data not set\n"); + return EFAULT; + } + if (hw_data->reset_hw_units) + hw_data->reset_hw_units(accel_dev); + + if (!test_bit(ADF_STATUS_CONFIGURED, &accel_dev->status) && + !accel_dev->is_vf) { + device_printf(GET_DEV(accel_dev), "Device not configured\n"); + return EFAULT; + } + + if (adf_init_etr_data(accel_dev)) { + device_printf(GET_DEV(accel_dev), "Failed to initialize etr\n"); + return EFAULT; + } + + if (hw_data->init_accel_units && hw_data->init_accel_units(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to initialize accel_units\n"); + return EFAULT; + } + + if (hw_data->init_admin_comms && hw_data->init_admin_comms(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to initialize admin comms\n"); + return EFAULT; + } + + if (hw_data->init_arb && hw_data->init_arb(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to initialize hw arbiter\n"); + return EFAULT; + } + + if (hw_data->set_asym_rings_mask) + hw_data->set_asym_rings_mask(accel_dev); + + hw_data->enable_ints(accel_dev); + + if (adf_ae_init(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to initialise Acceleration 
Engine\n"); + return EFAULT; + } + + set_bit(ADF_STATUS_AE_INITIALISED, &accel_dev->status); + + if (adf_ae_fw_load(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to load acceleration FW\n"); + return EFAULT; + } + set_bit(ADF_STATUS_AE_UCODE_LOADED, &accel_dev->status); + + if (hw_data->alloc_irq(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to allocate interrupts\n"); + return EFAULT; + } + set_bit(ADF_STATUS_IRQ_ALLOCATED, &accel_dev->status); + + if (hw_data->init_ras && hw_data->init_ras(accel_dev)) { + device_printf(GET_DEV(accel_dev), "Failed to init RAS\n"); + return EFAULT; + } + + hw_data->enable_ints(accel_dev); + + hw_data->enable_error_correction(accel_dev); + + if (hw_data->enable_vf2pf_comms(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "QAT: Failed to enable vf2pf comms\n"); + return EFAULT; + } + + if (adf_pf_vf_capabilities_init(accel_dev)) + return EFAULT; + + if (adf_pf_vf_ring_to_svc_init(accel_dev)) + return EFAULT; + + if (adf_cfg_add_device_params(accel_dev)) + return EFAULT; + + if (hw_data->add_pke_stats && hw_data->add_pke_stats(accel_dev)) + return EFAULT; + + if (hw_data->add_misc_error && hw_data->add_misc_error(accel_dev)) + return EFAULT; + /* + * Subservice initialisation is divided into two stages: init and start. + * This is to facilitate any ordering dependencies between services + * prior to starting any of the accelerators. 
+ */ + list_for_each(list_itr, &service_table) + { + service = list_entry(list_itr, struct service_hndl, list); + if (service->event_hld(accel_dev, ADF_EVENT_INIT)) { + device_printf(GET_DEV(accel_dev), + "Failed to initialise service %s\n", + service->name); + return EFAULT; + } + set_bit(accel_dev->accel_id, service->init_status); + } + + /* Read autoreset on error parameter */ + ret = adf_cfg_get_param_value(accel_dev, + ADF_GENERAL_SEC, + ADF_AUTO_RESET_ON_ERROR, + value); + if (!ret) { + if (compat_strtouint(value, + 10, + &accel_dev->autoreset_on_error)) { + device_printf( + GET_DEV(accel_dev), + "Failed converting %s to a decimal value\n", + ADF_AUTO_RESET_ON_ERROR); + return EFAULT; + } + } + + return 0; +} + +/** + * adf_dev_start() - Start acceleration service for the given accel device + * @accel_dev: Pointer to acceleration device. + * + * Function notifies all the registered services that the acceleration device + * is ready to be used. + * To be used by QAT device specific drivers. + * + * Return: 0 on success, error code otherwise. 
+ */ +int +adf_dev_start(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct service_hndl *service; + struct list_head *list_itr; + + set_bit(ADF_STATUS_STARTING, &accel_dev->status); + if (adf_devmgr_verify_id(&accel_dev->accel_id)) { + device_printf(GET_DEV(accel_dev), + "QAT: Device %d not found\n", + accel_dev->accel_id); + return ENODEV; + } + if (adf_ae_start(accel_dev)) { + device_printf(GET_DEV(accel_dev), "AE Start Failed\n"); + return EFAULT; + } + + set_bit(ADF_STATUS_AE_STARTED, &accel_dev->status); + if (hw_data->send_admin_init(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to send init message\n"); + return EFAULT; + } + + if (adf_cfg_add_fw_version(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to update configuration FW version\n"); + return EFAULT; + } + + if (hw_data->measure_clock) + hw_data->measure_clock(accel_dev); + + /* + * Set ssm watch dog timer for slice hang detection + * Note! Not supported on devices older than C62x + */ + if (hw_data->set_ssm_wdtimer && hw_data->set_ssm_wdtimer(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "QAT: Failed to set ssm watch dog timer\n"); + return EFAULT; + } + + list_for_each(list_itr, &service_table) + { + service = list_entry(list_itr, struct service_hndl, list); + if (service->event_hld(accel_dev, ADF_EVENT_START)) { + device_printf(GET_DEV(accel_dev), + "Failed to start service %s\n", + service->name); + return EFAULT; + } + set_bit(accel_dev->accel_id, service->start_status); + } + + if (!test_bit(ADF_STATUS_RESTARTING, &accel_dev->status) && + adf_cfg_add_ext_params(accel_dev)) + return EFAULT; + + clear_bit(ADF_STATUS_STARTING, &accel_dev->status); + set_bit(ADF_STATUS_STARTED, &accel_dev->status); + + return 0; +} + +/** + * adf_dev_stop() - Stop acceleration service for the given accel device + * @accel_dev: Pointer to acceleration device. 
+ * + * Function notifies all the registered services that the acceleration device + * is shutting down. + * To be used by QAT device specific drivers. + * + * Return: 0 on success, error code otherwise. + */ +int +adf_dev_stop(struct adf_accel_dev *accel_dev) +{ + struct service_hndl *service; + struct list_head *list_itr; + + if (adf_devmgr_verify_id(&accel_dev->accel_id)) { + device_printf(GET_DEV(accel_dev), + "QAT: Device %d not found\n", + accel_dev->accel_id); + return ENODEV; + } + if (!adf_dev_started(accel_dev) && + !test_bit(ADF_STATUS_STARTING, &accel_dev->status)) { + return 0; + } + + if (adf_dev_stop_notify_sync(accel_dev)) { + device_printf( + GET_DEV(accel_dev), + "Waiting for device un-busy failed. Retries limit reached\n"); + return EBUSY; + } + + clear_bit(ADF_STATUS_STARTING, &accel_dev->status); + clear_bit(ADF_STATUS_STARTED, &accel_dev->status); + + list_for_each(list_itr, &service_table) + { + service = list_entry(list_itr, struct service_hndl, list); + if (!test_bit(accel_dev->accel_id, service->start_status)) + continue; + clear_bit(accel_dev->accel_id, service->start_status); + } + + if (test_bit(ADF_STATUS_AE_STARTED, &accel_dev->status)) { + if (adf_ae_stop(accel_dev)) + device_printf(GET_DEV(accel_dev), + "failed to stop AE\n"); + else + clear_bit(ADF_STATUS_AE_STARTED, &accel_dev->status); + } + + return 0; +} + +/** + * adf_dev_shutdown() - shutdown acceleration services and data structures + * @accel_dev: Pointer to acceleration device + * + * Cleanup the ring data structures and the admin comms and arbitration + * services. 
+ */ +void +adf_dev_shutdown(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct service_hndl *service; + struct list_head *list_itr; + + if (test_bit(ADF_STATUS_SYSCTL_CTX_INITIALISED, &accel_dev->status)) { + sysctl_ctx_free(&accel_dev->sysctl_ctx); + clear_bit(ADF_STATUS_SYSCTL_CTX_INITIALISED, + &accel_dev->status); + } + + if (!hw_data) { + device_printf( + GET_DEV(accel_dev), + "QAT: Failed to shutdown device - hw_data not set\n"); + return; + } + + if (test_bit(ADF_STATUS_AE_UCODE_LOADED, &accel_dev->status)) { + adf_ae_fw_release(accel_dev); + clear_bit(ADF_STATUS_AE_UCODE_LOADED, &accel_dev->status); + } + + if (test_bit(ADF_STATUS_AE_INITIALISED, &accel_dev->status)) { + if (adf_ae_shutdown(accel_dev)) + device_printf(GET_DEV(accel_dev), + "Failed to shutdown Accel Engine\n"); + else + clear_bit(ADF_STATUS_AE_INITIALISED, + &accel_dev->status); + } + + list_for_each(list_itr, &service_table) + { + service = list_entry(list_itr, struct service_hndl, list); + if (!test_bit(accel_dev->accel_id, service->init_status)) + continue; + if (service->event_hld(accel_dev, ADF_EVENT_SHUTDOWN)) + device_printf(GET_DEV(accel_dev), + "Failed to shutdown service %s\n", + service->name); + else + clear_bit(accel_dev->accel_id, service->init_status); + } + + hw_data->disable_iov(accel_dev); + + if (hw_data->disable_vf2pf_comms) + hw_data->disable_vf2pf_comms(accel_dev); + + if (test_bit(ADF_STATUS_IRQ_ALLOCATED, &accel_dev->status)) { + hw_data->free_irq(accel_dev); + clear_bit(ADF_STATUS_IRQ_ALLOCATED, &accel_dev->status); + } + + /* Delete configuration only if not restarting */ + if (!test_bit(ADF_STATUS_RESTARTING, &accel_dev->status)) + adf_cfg_del_all(accel_dev); + + if (hw_data->remove_pke_stats) + hw_data->remove_pke_stats(accel_dev); + + if (hw_data->remove_misc_error) + hw_data->remove_misc_error(accel_dev); + + if (hw_data->exit_ras) + hw_data->exit_ras(accel_dev); + + if (hw_data->exit_arb) + 
hw_data->exit_arb(accel_dev); + + if (hw_data->exit_admin_comms) + hw_data->exit_admin_comms(accel_dev); + + if (hw_data->exit_accel_units) + hw_data->exit_accel_units(accel_dev); + + adf_cleanup_etr_data(accel_dev); + if (hw_data->restore_device) + hw_data->restore_device(accel_dev); +} + +/** + * adf_dev_reset() - Reset acceleration service for the given accel device + * @accel_dev: Pointer to acceleration device. + * @mode: Specifies reset mode - synchronous or asynchronous. + * Function notifies all the registered services that the acceleration device + * is resetting. + * To be used by QAT device specific drivers. + * + * Return: 0 on success, error code otherwise. + */ +int +adf_dev_reset(struct adf_accel_dev *accel_dev, enum adf_dev_reset_mode mode) +{ + return adf_dev_aer_schedule_reset(accel_dev, mode); +} + +int +adf_dev_restarting_notify(struct adf_accel_dev *accel_dev) +{ + struct service_hndl *service; + struct list_head *list_itr; + + list_for_each(list_itr, &service_table) + { + service = list_entry(list_itr, struct service_hndl, list); + if (service->event_hld(accel_dev, ADF_EVENT_RESTARTING)) + device_printf(GET_DEV(accel_dev), + "Failed to restart service %s.\n", + service->name); + } + return 0; +} + +int +adf_dev_restarting_notify_sync(struct adf_accel_dev *accel_dev) +{ + int times; + + adf_dev_restarting_notify(accel_dev); + for (times = 0; times < ADF_STOP_RETRY; times++) { + if (!adf_dev_in_use(accel_dev)) + break; + dev_dbg(GET_DEV(accel_dev), "retry times=%d\n", times); + pause_ms("adfstop", 100); + } + if (adf_dev_in_use(accel_dev)) { + clear_bit(ADF_STATUS_RESTARTING, &accel_dev->status); + device_printf(GET_DEV(accel_dev), + "Device still in use during reset sequence.\n"); + return EBUSY; + } + + return 0; +} + +int +adf_dev_stop_notify_sync(struct adf_accel_dev *accel_dev) +{ + int times; + + struct service_hndl *service; + struct list_head *list_itr; + + list_for_each(list_itr, &service_table) + { + service = list_entry(list_itr, 
struct service_hndl, list); + if (service->event_hld(accel_dev, ADF_EVENT_STOP)) + device_printf(GET_DEV(accel_dev), + "Failed to stop service %s.\n", + service->name); + } + + for (times = 0; times < ADF_STOP_RETRY; times++) { + if (!adf_dev_in_use(accel_dev)) + break; + dev_dbg(GET_DEV(accel_dev), "retry times=%d\n", times); + pause_ms("adfstop", 100); + } + if (adf_dev_in_use(accel_dev)) { + clear_bit(ADF_STATUS_RESTARTING, &accel_dev->status); + device_printf(GET_DEV(accel_dev), + "Device still in use during stop sequence.\n"); + return EBUSY; + } + + return 0; +} + +int +adf_dev_restarted_notify(struct adf_accel_dev *accel_dev) +{ + struct service_hndl *service; + struct list_head *list_itr; + + list_for_each(list_itr, &service_table) + { + service = list_entry(list_itr, struct service_hndl, list); + if (service->event_hld(accel_dev, ADF_EVENT_RESTARTED)) + device_printf(GET_DEV(accel_dev), + "Failed to restart service %s.\n", + service->name); + } + return 0; +} Index: sys/dev/qat/qat_common/adf_isr.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_isr.c @@ -0,0 +1,345 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "icp_qat_uclo.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_init_admin.h" +#include "adf_cfg_strings.h" +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include +#include +#include +#include +#include +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "adf_cfg.h" +#include "adf_cfg_strings.h" +#include "adf_cfg_common.h" +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include "adf_dev_err.h" + +TASKQUEUE_DEFINE_THREAD(qat_pf); + +static int +adf_enable_msix(struct adf_accel_dev *accel_dev) +{ + struct 
adf_accel_pci *info_pci_dev = &accel_dev->accel_pci_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + int msix_num_entries = 1; + int count = 0; + int error = 0; + int num_vectors = 0; + u_int *vectors; + + /* If SR-IOV is disabled, add entries for each bank */ + if (!accel_dev->u1.pf.vf_info) { + msix_num_entries += hw_data->num_banks; + num_vectors = 0; + vectors = NULL; + } else { + num_vectors = hw_data->num_banks + 1; + vectors = malloc(num_vectors * sizeof(u_int), + M_QAT, + M_WAITOK | M_ZERO); + vectors[hw_data->num_banks] = 1; + } + + count = msix_num_entries; + error = pci_alloc_msix(info_pci_dev->pci_dev, &count); + if (error == 0 && count != msix_num_entries) { + pci_release_msi(info_pci_dev->pci_dev); + error = EFBIG; + } + if (error) { + device_printf(GET_DEV(accel_dev), + "Failed to enable MSI-X IRQ(s)\n"); + free(vectors, M_QAT); + return error; + } + + if (vectors != NULL) { + error = + pci_remap_msix(info_pci_dev->pci_dev, num_vectors, vectors); + free(vectors, M_QAT); + if (error) { + device_printf(GET_DEV(accel_dev), + "Failed to remap MSI-X IRQ(s)\n"); + pci_release_msi(info_pci_dev->pci_dev); + return error; + } + } + + return 0; +} + +static void +adf_disable_msix(struct adf_accel_pci *info_pci_dev) +{ + pci_release_msi(info_pci_dev->pci_dev); +} + +static void +adf_msix_isr_bundle(void *bank_ptr) +{ + struct adf_etr_bank_data *bank = bank_ptr; + struct adf_etr_data *priv_data = bank->accel_dev->transport; + + WRITE_CSR_INT_FLAG_AND_COL(bank->csr_addr, bank->bank_number, 0); + adf_response_handler((uintptr_t)&priv_data->banks[bank->bank_number]); + return; +} + +static void +adf_msix_isr_ae(void *dev_ptr) +{ + struct adf_accel_dev *accel_dev = dev_ptr; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_bar *pmisc = + &GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)]; + struct resource *pmisc_bar_addr = pmisc->virt_addr; + u32 errsou3; + u32 errsou5; + bool reset_required = false; + + if 
(hw_data->ras_interrupts && + hw_data->ras_interrupts(accel_dev, &reset_required)) + if (reset_required) { + adf_notify_fatal_error(accel_dev); + goto exit; + } + + if (hw_data->check_slice_hang && hw_data->check_slice_hang(accel_dev)) { + } + +exit: + errsou3 = ADF_CSR_RD(pmisc_bar_addr, ADF_ERRSOU3); + errsou5 = ADF_CSR_RD(pmisc_bar_addr, ADF_ERRSOU5); + if (errsou3 | errsou5) + adf_print_err_registers(accel_dev); + else + device_printf(GET_DEV(accel_dev), "spurious AE interrupt\n"); + + return; +} + +static int +adf_get_irq_affinity(struct adf_accel_dev *accel_dev, int bank) +{ + int core = CPU_FIRST(); + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + char bankName[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + + snprintf(bankName, + ADF_CFG_MAX_KEY_LEN_IN_BYTES - 1, + ADF_ETRMGR_CORE_AFFINITY_FORMAT, + bank); + bankName[ADF_CFG_MAX_KEY_LEN_IN_BYTES - 1] = '\0'; + + if (adf_cfg_get_param_value(accel_dev, "Accelerator0", bankName, val)) { + device_printf(GET_DEV(accel_dev), + "No CoreAffinity Set - using default core: %d\n", + core); + } else { + if (compat_strtouint(val, 10, &core)) { + device_printf(GET_DEV(accel_dev), + "Can't get cpu core ID\n"); + } + } + return (core); +} + +static int +adf_request_irqs(struct adf_accel_dev *accel_dev) +{ + struct adf_accel_pci *info_pci_dev = &accel_dev->accel_pci_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct msix_entry *msixe = info_pci_dev->msix_entries.entries; + int ret = 0, rid = 0, i = 0; + struct adf_etr_data *etr_data = accel_dev->transport; + int computed_core = 0; + + /* Request msix irq for all banks unless SR-IOV enabled */ + if (!accel_dev->u1.pf.vf_info) { + for (i = 0; i < hw_data->num_banks; i++) { + struct adf_etr_bank_data *bank = &etr_data->banks[i]; + + rid = i + 1; + msixe[i].irq = + bus_alloc_resource_any(info_pci_dev->pci_dev, + SYS_RES_IRQ, + &rid, + RF_ACTIVE); + if (msixe[i].irq == NULL) { + device_printf( + GET_DEV(accel_dev), + "failed to allocate IRQ for bundle %d\n", + i); + return 
ENXIO; + } + + ret = bus_setup_intr(info_pci_dev->pci_dev, + msixe[i].irq, + INTR_TYPE_MISC | INTR_MPSAFE, + NULL, + adf_msix_isr_bundle, + bank, + &msixe[i].cookie); + if (ret) { + device_printf( + GET_DEV(accel_dev), + "failed to enable IRQ for bundle %d\n", + i); + bus_release_resource(info_pci_dev->pci_dev, + SYS_RES_IRQ, + rid, + msixe[i].irq); + msixe[i].irq = NULL; + return ret; + } + + computed_core = adf_get_irq_affinity(accel_dev, i); + bus_describe_intr(info_pci_dev->pci_dev, + msixe[i].irq, + msixe[i].cookie, + "b%d", + i); + bus_bind_intr(info_pci_dev->pci_dev, + msixe[i].irq, + computed_core); + } + } + + /* Request msix irq for AE */ + rid = hw_data->num_banks + 1; + msixe[i].irq = bus_alloc_resource_any(info_pci_dev->pci_dev, + SYS_RES_IRQ, + &rid, + RF_ACTIVE); + if (msixe[i].irq == NULL) { + device_printf(GET_DEV(accel_dev), + "failed to allocate IRQ for ae-cluster\n"); + return ENXIO; + } + + ret = bus_setup_intr(info_pci_dev->pci_dev, + msixe[i].irq, + INTR_TYPE_MISC | INTR_MPSAFE, + NULL, + adf_msix_isr_ae, + accel_dev, + &msixe[i].cookie); + if (ret) { + device_printf(GET_DEV(accel_dev), + "failed to enable IRQ for ae-cluster\n"); + bus_release_resource(info_pci_dev->pci_dev, + SYS_RES_IRQ, + rid, + msixe[i].irq); + msixe[i].irq = NULL; + return ret; + } + + bus_describe_intr(info_pci_dev->pci_dev, + msixe[i].irq, + msixe[i].cookie, + "ae"); + return ret; +} + +static void +adf_free_irqs(struct adf_accel_dev *accel_dev) +{ + struct adf_accel_pci *info_pci_dev = &accel_dev->accel_pci_dev; + struct msix_entry *msixe = info_pci_dev->msix_entries.entries; + int i = 0; + + if (info_pci_dev->msix_entries.num_entries > 0) { + for (i = 0; i < info_pci_dev->msix_entries.num_entries; i++) { + if (msixe[i].irq != NULL && msixe[i].cookie != NULL) { + bus_teardown_intr(info_pci_dev->pci_dev, + msixe[i].irq, + msixe[i].cookie); + bus_free_resource(info_pci_dev->pci_dev, + SYS_RES_IRQ, + msixe[i].irq); + } + } + } +} + +static int 
+adf_isr_alloc_msix_entry_table(struct adf_accel_dev *accel_dev) +{ + struct msix_entry *entries; + u32 msix_num_entries = 1; + + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + /* If SR-IOV is disabled (vf_info is NULL), add entries for each bank */ + if (!accel_dev->u1.pf.vf_info) + msix_num_entries += hw_data->num_banks; + + entries = malloc(msix_num_entries * sizeof(struct msix_entry), + M_QAT, + M_WAITOK | M_ZERO); + + accel_dev->accel_pci_dev.msix_entries.num_entries = msix_num_entries; + accel_dev->accel_pci_dev.msix_entries.entries = entries; + return 0; +} + +static void +adf_isr_free_msix_entry_table(struct adf_accel_dev *accel_dev) +{ + + free(accel_dev->accel_pci_dev.msix_entries.entries, M_QAT); + accel_dev->accel_pci_dev.msix_entries.entries = NULL; +} + +/** + * adf_isr_resource_free() - Free IRQ for acceleration device + * @accel_dev: Pointer to acceleration device. + * + * Function frees interrupts for acceleration device. + */ +void +adf_isr_resource_free(struct adf_accel_dev *accel_dev) +{ + adf_free_irqs(accel_dev); + adf_disable_msix(&accel_dev->accel_pci_dev); + adf_isr_free_msix_entry_table(accel_dev); +} + +/** + * adf_isr_resource_alloc() - Allocate IRQ for acceleration device + * @accel_dev: Pointer to acceleration device. + * + * Function allocates interrupts for acceleration device. + * + * Return: 0 on success, error code otherwise.
+ */ +int +adf_isr_resource_alloc(struct adf_accel_dev *accel_dev) +{ + int ret; + + ret = adf_isr_alloc_msix_entry_table(accel_dev); + if (ret) + return ret; + if (adf_enable_msix(accel_dev)) + goto err_out; + + if (adf_request_irqs(accel_dev)) + goto err_out; + + return 0; +err_out: + adf_isr_resource_free(accel_dev); + return EFAULT; +} Index: sys/dev/qat/qat_common/adf_pf2vf_capabilities.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_pf2vf_capabilities.c @@ -0,0 +1,147 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "adf_pf2vf_msg.h" +#include "adf_cfg.h" + +#define ADF_VF2PF_CAPABILITIES_V1_VERSION 1 +#define ADF_VF2PF_CAPABILITIES_V1_LENGTH 4 +#define ADF_VF2PF_CAPABILITIES_V2_VERSION 2 +#define ADF_VF2PF_CAPABILITIES_CAP_OFFSET 4 +#define ADF_VF2PF_CAPABILITIES_V2_LENGTH 8 +#define ADF_VF2PF_CAPABILITIES_V3_VERSION 3 +#define ADF_VF2PF_CAPABILITIES_FREQ_OFFSET 8 +#define ADF_VF2PF_CAPABILITIES_V3_LENGTH 12 + +static int +adf_pf_capabilities_msg_provider(struct adf_accel_dev *accel_dev, + u8 **buffer, + u8 *length, + u8 *block_version, + u8 compatibility, + u8 byte_num) +{ + static u8 data[ADF_VF2PF_CAPABILITIES_V3_LENGTH] = { 0 }; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 ext_dc_caps = hw_data->extended_dc_capabilities; + u32 capabilities = hw_data->accel_capabilities_mask; + u32 frequency = hw_data->clock_frequency; + u16 byte = 0; + u16 index = 0; + + for (byte = 0; byte < sizeof(ext_dc_caps); byte++) { + data[byte] = (ext_dc_caps >> (byte * ADF_PFVF_DATA_SHIFT)) & + ADF_PFVF_DATA_MASK; + } + + for (byte = 0, index = ADF_VF2PF_CAPABILITIES_CAP_OFFSET; + byte < sizeof(capabilities); + byte++, index++) { + data[index] = (capabilities >> (byte * ADF_PFVF_DATA_SHIFT)) & + ADF_PFVF_DATA_MASK; + } + + if (frequency) { + for 
(byte = 0, index = ADF_VF2PF_CAPABILITIES_FREQ_OFFSET; + byte < sizeof(frequency); + byte++, index++) { + data[index] = + (frequency >> (byte * ADF_PFVF_DATA_SHIFT)) & + ADF_PFVF_DATA_MASK; + } + *length = ADF_VF2PF_CAPABILITIES_V3_LENGTH; + *block_version = ADF_VF2PF_CAPABILITIES_V3_VERSION; + } else { + *length = ADF_VF2PF_CAPABILITIES_V2_LENGTH; + *block_version = ADF_VF2PF_CAPABILITIES_V2_VERSION; + } + + *buffer = data; + return 0; +} + +int +adf_pf_vf_capabilities_init(struct adf_accel_dev *accel_dev) +{ + u8 data[ADF_VF2PF_CAPABILITIES_V3_LENGTH] = { 0 }; + u8 len = ADF_VF2PF_CAPABILITIES_V3_LENGTH; + u8 version = ADF_VF2PF_CAPABILITIES_V2_VERSION; + u32 ex_dc_cap = 0; + u32 capabilities = 0; + u32 frequency = 0; + u16 byte = 0; + u16 index = 0; + + if (!accel_dev->is_vf) { + /* on the pf */ + if (!adf_iov_is_block_provider_registered( + ADF_VF2PF_BLOCK_MSG_CAP_SUMMARY)) + adf_iov_block_provider_register( + ADF_VF2PF_BLOCK_MSG_CAP_SUMMARY, + adf_pf_capabilities_msg_provider); + } else if (accel_dev->u1.vf.pf_version >= + ADF_PFVF_COMPATIBILITY_CAPABILITIES) { + /* on the vf */ + if (adf_iov_block_get(accel_dev, + ADF_VF2PF_BLOCK_MSG_CAP_SUMMARY, + &version, + data, + &len)) { + device_printf(GET_DEV(accel_dev), + "QAT: Failed adf_iov_block_get\n"); + return EFAULT; + } + + if (len < ADF_VF2PF_CAPABILITIES_V1_LENGTH) { + device_printf( + GET_DEV(accel_dev), + "Capabilities message truncated to %d bytes\n", + len); + return EFAULT; + } + + for (byte = 0; byte < sizeof(ex_dc_cap); byte++) { + ex_dc_cap |= data[byte] << (byte * ADF_PFVF_DATA_SHIFT); + } + accel_dev->hw_device->extended_dc_capabilities = ex_dc_cap; + + /* Get capabilities if provided by PF */ + if (len >= ADF_VF2PF_CAPABILITIES_V2_LENGTH) { + for (byte = 0, + index = ADF_VF2PF_CAPABILITIES_CAP_OFFSET; + byte < sizeof(capabilities); + byte++, index++) { + capabilities |= data[index] + << (byte * ADF_PFVF_DATA_SHIFT); + } + accel_dev->hw_device->accel_capabilities_mask = + capabilities; + } else { 
+ device_printf(GET_DEV(accel_dev), + "PF did not communicate capabilities\n"); + } + + /* Get frequency if provided by the PF */ + if (len >= ADF_VF2PF_CAPABILITIES_V3_LENGTH) { + for (byte = 0, + index = ADF_VF2PF_CAPABILITIES_FREQ_OFFSET; + byte < sizeof(frequency); + byte++, index++) { + frequency |= data[index] + << (byte * ADF_PFVF_DATA_SHIFT); + } + accel_dev->hw_device->clock_frequency = frequency; + } else { + device_printf(GET_DEV(accel_dev), + "PF did not communicate frequency\n"); + } + + } else { + /* The PF is too old to support the extended capabilities */ + accel_dev->hw_device->extended_dc_capabilities = 0; + } + return 0; +} Index: sys/dev/qat/qat_common/adf_pf2vf_msg.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_pf2vf_msg.c @@ -0,0 +1,896 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "adf_pf2vf_msg.h" + +adf_iov_block_provider + pf2vf_message_providers[ADF_VF2PF_MAX_LARGE_MESSAGE_TYPE + 1]; +unsigned char pfvf_crc8_table[] = + { 0x00, 0x97, 0xB9, 0x2E, 0xE5, 0x72, 0x5C, 0xCB, 0x5D, 0xCA, 0xE4, 0x73, + 0xB8, 0x2F, 0x01, 0x96, 0xBA, 0x2D, 0x03, 0x94, 0x5F, 0xC8, 0xE6, 0x71, + 0xE7, 0x70, 0x5E, 0xC9, 0x02, 0x95, 0xBB, 0x2C, 0xE3, 0x74, 0x5A, 0xCD, + 0x06, 0x91, 0xBF, 0x28, 0xBE, 0x29, 0x07, 0x90, 0x5B, 0xCC, 0xE2, 0x75, + 0x59, 0xCE, 0xE0, 0x77, 0xBC, 0x2B, 0x05, 0x92, 0x04, 0x93, 0xBD, 0x2A, + 0xE1, 0x76, 0x58, 0xCF, 0x51, 0xC6, 0xE8, 0x7F, 0xB4, 0x23, 0x0D, 0x9A, + 0x0C, 0x9B, 0xB5, 0x22, 0xE9, 0x7E, 0x50, 0xC7, 0xEB, 0x7C, 0x52, 0xC5, + 0x0E, 0x99, 0xB7, 0x20, 0xB6, 0x21, 0x0F, 0x98, 0x53, 0xC4, 0xEA, 0x7D, + 0xB2, 0x25, 0x0B, 0x9C, 0x57, 0xC0, 0xEE, 0x79, 0xEF, 0x78, 0x56, 0xC1, + 0x0A, 0x9D, 0xB3, 0x24, 0x08, 0x9F, 0xB1, 0x26, 0xED, 0x7A, 0x54, 0xC3, + 0x55, 0xC2, 0xEC, 0x7B, 0xB0, 0x27, 0x09, 0x9E, 0xA2, 0x35, 0x1B, 0x8C, + 0x47, 
0xD0, 0xFE, 0x69, 0xFF, 0x68, 0x46, 0xD1, 0x1A, 0x8D, 0xA3, 0x34, + 0x18, 0x8F, 0xA1, 0x36, 0xFD, 0x6A, 0x44, 0xD3, 0x45, 0xD2, 0xFC, 0x6B, + 0xA0, 0x37, 0x19, 0x8E, 0x41, 0xD6, 0xF8, 0x6F, 0xA4, 0x33, 0x1D, 0x8A, + 0x1C, 0x8B, 0xA5, 0x32, 0xF9, 0x6E, 0x40, 0xD7, 0xFB, 0x6C, 0x42, 0xD5, + 0x1E, 0x89, 0xA7, 0x30, 0xA6, 0x31, 0x1F, 0x88, 0x43, 0xD4, 0xFA, 0x6D, + 0xF3, 0x64, 0x4A, 0xDD, 0x16, 0x81, 0xAF, 0x38, 0xAE, 0x39, 0x17, 0x80, + 0x4B, 0xDC, 0xF2, 0x65, 0x49, 0xDE, 0xF0, 0x67, 0xAC, 0x3B, 0x15, 0x82, + 0x14, 0x83, 0xAD, 0x3A, 0xF1, 0x66, 0x48, 0xDF, 0x10, 0x87, 0xA9, 0x3E, + 0xF5, 0x62, 0x4C, 0xDB, 0x4D, 0xDA, 0xF4, 0x63, 0xA8, 0x3F, 0x11, 0x86, + 0xAA, 0x3D, 0x13, 0x84, 0x4F, 0xD8, 0xF6, 0x61, 0xF7, 0x60, 0x4E, 0xD9, + 0x12, 0x85, 0xAB, 0x3C }; + +void +adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev) +{ + struct adf_accel_pci *pci_info = &accel_dev->accel_pci_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct resource *pmisc_bar_addr = + pci_info->pci_bars[hw_data->get_misc_bar_id(hw_data)].virt_addr; + + ADF_CSR_WR(pmisc_bar_addr, hw_data->get_vintmsk_offset(0), 0x0); +} + +void +adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev) +{ + struct adf_accel_pci *pci_info = &accel_dev->accel_pci_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct resource *pmisc_bar_addr = + pci_info->pci_bars[hw_data->get_misc_bar_id(hw_data)].virt_addr; + + ADF_CSR_WR(pmisc_bar_addr, hw_data->get_vintmsk_offset(0), 0x2); +} + +static int +__adf_iov_putmsg(struct adf_accel_dev *accel_dev, + u32 msg, + u8 vf_nr, + bool is_notification) +{ + struct adf_accel_pci *pci_info = &accel_dev->accel_pci_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct resource *pmisc_bar_addr = + pci_info->pci_bars[hw_data->get_misc_bar_id(hw_data)].virt_addr; + u32 val, pf2vf_offset; + u32 total_delay = 0, mdelay = ADF_IOV_MSG_ACK_DELAY_MS, + udelay = ADF_IOV_MSG_ACK_DELAY_US; + u32 local_in_use_mask, 
local_in_use_pattern; + u32 remote_in_use_mask, remote_in_use_pattern; + struct mutex *lock; /* lock preventing concurrent access of CSR */ + u32 int_bit; + int ret = 0; + struct pfvf_stats *pfvf_counters = NULL; + + if (accel_dev->is_vf) { + pf2vf_offset = hw_data->get_pf2vf_offset(0); + lock = &accel_dev->u1.vf.vf2pf_lock; + local_in_use_mask = ADF_VF2PF_IN_USE_BY_VF_MASK; + local_in_use_pattern = ADF_VF2PF_IN_USE_BY_VF; + remote_in_use_mask = ADF_PF2VF_IN_USE_BY_PF_MASK; + remote_in_use_pattern = ADF_PF2VF_IN_USE_BY_PF; + int_bit = ADF_VF2PF_INT; + pfvf_counters = &accel_dev->u1.vf.pfvf_counters; + } else { + pf2vf_offset = hw_data->get_pf2vf_offset(vf_nr); + lock = &accel_dev->u1.pf.vf_info[vf_nr].pf2vf_lock; + local_in_use_mask = ADF_PF2VF_IN_USE_BY_PF_MASK; + local_in_use_pattern = ADF_PF2VF_IN_USE_BY_PF; + remote_in_use_mask = ADF_VF2PF_IN_USE_BY_VF_MASK; + remote_in_use_pattern = ADF_VF2PF_IN_USE_BY_VF; + int_bit = ADF_PF2VF_INT; + pfvf_counters = &accel_dev->u1.pf.vf_info[vf_nr].pfvf_counters; + } + + mutex_lock(lock); + + /* Check if PF2VF CSR is in use by remote function */ + val = ADF_CSR_RD(pmisc_bar_addr, pf2vf_offset); + if ((val & remote_in_use_mask) == remote_in_use_pattern) { + device_printf(GET_DEV(accel_dev), + "PF2VF CSR in use by remote function\n"); + ret = EAGAIN; + pfvf_counters->busy++; + goto out; + } + + /* Attempt to get ownership of PF2VF CSR */ + msg &= ~local_in_use_mask; + msg |= local_in_use_pattern; + ADF_CSR_WR(pmisc_bar_addr, pf2vf_offset, msg | int_bit); + pfvf_counters->tx++; + + /* Wait for confirmation from remote function that it received the message */ + do { + if (udelay < ADF_IOV_MSG_ACK_EXP_MAX_DELAY_US) { + usleep_range(udelay, udelay * 2); + udelay = udelay * 2; + total_delay = total_delay + udelay; + } else { + pause_ms("adfstop", mdelay); + total_delay = total_delay + (mdelay * 1000); + } + val = ADF_CSR_RD(pmisc_bar_addr, pf2vf_offset); + } while ((val & int_bit) && + (total_delay < ADF_IOV_MSG_ACK_LIN_MAX_DELAY_US)); + + if
(val & int_bit) { + device_printf(GET_DEV(accel_dev), + "ACK not received from remote\n"); + pfvf_counters->no_ack++; + val &= ~int_bit; + ret = EIO; + } + + /* For fire-and-forget notifications, the receiver does not clear + * the in-use pattern. This is used to detect collisions. + */ + if (is_notification && (val & ~int_bit) != msg) { + /* Collision must have overwritten the message */ + device_printf(GET_DEV(accel_dev), + "Collision on notification\n"); + pfvf_counters->collision++; + ret = EAGAIN; + goto out; + } + + /* + * If the far side did not clear the in-use pattern it is either + * 1) Notification - message left intact to detect collision + * 2) Older protocol (compatibility version < 3) on the far side + * where the sender is responsible for clearing the in-use + * pattern after the receiver has acknowledged receipt. + * In either case, clear the in-use pattern now. + */ + if ((val & local_in_use_mask) == local_in_use_pattern) + ADF_CSR_WR(pmisc_bar_addr, + pf2vf_offset, + val & ~local_in_use_mask); + +out: + mutex_unlock(lock); + return ret; +} + +static int +adf_iov_put(struct adf_accel_dev *accel_dev, + u32 msg, + u8 vf_nr, + bool is_notification) +{ + u32 count = 0, delay = ADF_IOV_MSG_RETRY_DELAY; + int ret; + struct pfvf_stats *pfvf_counters = NULL; + + if (accel_dev->is_vf) + pfvf_counters = &accel_dev->u1.vf.pfvf_counters; + else + pfvf_counters = &accel_dev->u1.pf.vf_info[vf_nr].pfvf_counters; + + do { + ret = __adf_iov_putmsg(accel_dev, msg, vf_nr, is_notification); + if (ret == EAGAIN) + pause_ms("adfstop", delay); + delay = delay * 2; + } while (ret == EAGAIN && ++count < ADF_IOV_MSG_MAX_RETRIES); + if (ret == EAGAIN) { + if (is_notification) + pfvf_counters->event_timeout++; + else + pfvf_counters->tx_timeout++; + } + + return ret; +} + +/** + * adf_iov_putmsg() - send PF2VF message + * @accel_dev: Pointer to acceleration device.
+ * @msg: Message to send + * @vf_nr: VF number to which the message will be sent + * + * Function sends a message from the PF to a VF + * + * Return: 0 on success, error code otherwise. + */ +int +adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr) +{ + return adf_iov_put(accel_dev, msg, vf_nr, false); +} + +/** + * adf_iov_notify() - send PF2VF notification message + * @accel_dev: Pointer to acceleration device. + * @msg: Message to send + * @vf_nr: VF number to which the message will be sent + * + * Function sends a notification message from the PF to a VF + * + * Return: 0 on success, error code otherwise. + */ +int +adf_iov_notify(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr) +{ + return adf_iov_put(accel_dev, msg, vf_nr, true); +} + +u8 +adf_pfvf_crc(u8 start_crc, u8 *buf, u8 len) +{ + u8 crc = start_crc; + + while (len-- > 0) + crc = pfvf_crc8_table[(crc ^ *buf++) & 0xff]; + + return crc; +} + +int +adf_iov_block_provider_register(u8 msg_type, + const adf_iov_block_provider provider) +{ + if (msg_type >= ARRAY_SIZE(pf2vf_message_providers)) { + pr_err("QAT: invalid message type %d for PF2VF provider\n", + msg_type); + return -EINVAL; + } + if (pf2vf_message_providers[msg_type]) { + pr_err("QAT: Provider %ps already registered for message %d\n", + pf2vf_message_providers[msg_type], + msg_type); + return -EINVAL; + } + + pf2vf_message_providers[msg_type] = provider; + return 0; +} + +u8 +adf_iov_is_block_provider_registered(u8 msg_type) +{ + if (pf2vf_message_providers[msg_type]) + return 1; + else + return 0; +} + +int +adf_iov_block_provider_unregister(u8 msg_type, + const adf_iov_block_provider provider) +{ + if (msg_type >= ARRAY_SIZE(pf2vf_message_providers)) { + pr_err("QAT: invalid message type %d for PF2VF provider\n", + msg_type); + return -EINVAL; + } + if (pf2vf_message_providers[msg_type] != provider) { + pr_err("QAT: Provider %ps not registered for message %d\n", + provider, + msg_type); + return -EINVAL; + } + +
pf2vf_message_providers[msg_type] = NULL; + return 0; +} + +static int +adf_iov_block_get_data(struct adf_accel_dev *accel_dev, + u8 msg_type, + u8 byte_num, + u8 *data, + u8 compatibility, + bool crc) +{ + u8 *buffer; + u8 size; + u8 msg_ver; + u8 crc8; + + if (msg_type >= ARRAY_SIZE(pf2vf_message_providers)) { + pr_err("QAT: invalid message type %d for PF2VF provider\n", + msg_type); + *data = ADF_PF2VF_INVALID_BLOCK_TYPE; + return -EINVAL; + } + + if (!pf2vf_message_providers[msg_type]) { + pr_err("QAT: No registered provider for message %d\n", + msg_type); + *data = ADF_PF2VF_INVALID_BLOCK_TYPE; + return -EINVAL; + } + + if ((*pf2vf_message_providers[msg_type])( + accel_dev, &buffer, &size, &msg_ver, compatibility, byte_num)) { + pr_err("QAT: unknown error from provider for message %d\n", + msg_type); + *data = ADF_PF2VF_UNSPECIFIED_ERROR; + return -EINVAL; + } + + if ((msg_type <= ADF_VF2PF_MAX_SMALL_MESSAGE_TYPE && + size > ADF_VF2PF_SMALL_PAYLOAD_SIZE) || + (msg_type <= ADF_VF2PF_MAX_MEDIUM_MESSAGE_TYPE && + size > ADF_VF2PF_MEDIUM_PAYLOAD_SIZE) || + size > ADF_VF2PF_LARGE_PAYLOAD_SIZE) { + pr_err("QAT: Invalid size %d provided for message type %d\n", + size, + msg_type); + *data = ADF_PF2VF_PAYLOAD_TRUNCATED; + return -EINVAL; + } + + if ((!byte_num && crc) || byte_num >= size + ADF_VF2PF_BLOCK_DATA) { + pr_err("QAT: Invalid byte number %d for message %d\n", + byte_num, + msg_type); + *data = ADF_PF2VF_INVALID_BYTE_NUM_REQ; + return -EINVAL; + } + + if (crc) { + crc8 = adf_pfvf_crc(ADF_CRC8_INIT_VALUE, &msg_ver, 1); + crc8 = adf_pfvf_crc(crc8, &size, 1); + *data = adf_pfvf_crc(crc8, buffer, byte_num - 1); + } else { + if (byte_num == 0) + *data = msg_ver; + else if (byte_num == 1) + *data = size; + else + *data = buffer[byte_num - 2]; + } + + return 0; +} + +static int +adf_iov_block_get_byte(struct adf_accel_dev *accel_dev, + u8 msg_type, + u8 byte_num, + u8 *data, + u8 compatibility) +{ + return adf_iov_block_get_data( + accel_dev, msg_type, byte_num, 
data, compatibility, false); +} + +static int +adf_iov_block_get_crc(struct adf_accel_dev *accel_dev, + u8 msg_type, + u8 byte_num, + u8 *data, + u8 compatibility) +{ + return adf_iov_block_get_data( + accel_dev, msg_type, byte_num, data, compatibility, true); +} + +int adf_iov_compatibility_check(struct adf_accel_dev *accel_dev, u8 compat_ver); + +void +adf_vf2pf_req_hndl(struct adf_accel_vf_info *vf_info) +{ + struct adf_accel_dev *accel_dev = vf_info->accel_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + int bar_id = hw_data->get_misc_bar_id(hw_data); + struct adf_bar *pmisc = &GET_BARS(accel_dev)[bar_id]; + struct resource *pmisc_addr = pmisc->virt_addr; + u32 msg, resp = 0, vf_nr = vf_info->vf_nr; + u8 byte_num = 0; + u8 msg_type = 0; + u8 resp_type; + int res; + u8 data; + u8 compat = 0x0; + int vf_compat_ver = 0; + bool is_notification = false; + + /* Read message from the VF */ + msg = ADF_CSR_RD(pmisc_addr, hw_data->get_pf2vf_offset(vf_nr)); + if (!(msg & ADF_VF2PF_INT)) { + device_printf(GET_DEV(accel_dev), + "Spurious VF2PF interrupt. msg %X. Ignored\n", + msg); + vf_info->pfvf_counters.spurious++; + goto out; + } + vf_info->pfvf_counters.rx++; + + if (!(msg & ADF_VF2PF_MSGORIGIN_SYSTEM)) { + /* Ignore legacy non-system (non-kernel) VF2PF messages */ + device_printf(GET_DEV(accel_dev), + "Ignored non-system message from VF%d (0x%x);\n", + vf_nr + 1, + msg); + /* + * To ack, clear the VF2PFINT bit. + * Because this must be a legacy message, the far side + * must clear the in-use pattern. 
+ */ + msg &= ~(ADF_VF2PF_INT); + ADF_CSR_WR(pmisc_addr, hw_data->get_pf2vf_offset(vf_nr), msg); + + goto out; + } + + switch ((msg & ADF_VF2PF_MSGTYPE_MASK) >> ADF_VF2PF_MSGTYPE_SHIFT) { + case ADF_VF2PF_MSGTYPE_COMPAT_VER_REQ: + + { + is_notification = false; + vf_compat_ver = msg >> ADF_VF2PF_COMPAT_VER_REQ_SHIFT; + vf_info->compat_ver = vf_compat_ver; + + resp = (ADF_PF2VF_MSGORIGIN_SYSTEM | + (ADF_PF2VF_MSGTYPE_VERSION_RESP + << ADF_PF2VF_MSGTYPE_SHIFT) | + (ADF_PFVF_COMPATIBILITY_VERSION + << ADF_PF2VF_VERSION_RESP_VERS_SHIFT)); + + device_printf( + GET_DEV(accel_dev), + "Compatibility Version Request from VF%d vers=%u\n", + vf_nr + 1, + vf_info->compat_ver); + + if (vf_compat_ver < ADF_PFVF_COMPATIBILITY_VERSION) + compat = adf_iov_compatibility_check(accel_dev, + vf_compat_ver); + else if (vf_compat_ver == ADF_PFVF_COMPATIBILITY_VERSION) + compat = ADF_PF2VF_VF_COMPATIBLE; + else + compat = ADF_PF2VF_VF_COMPAT_UNKNOWN; + + resp |= compat << ADF_PF2VF_VERSION_RESP_RESULT_SHIFT; + + if (compat == ADF_PF2VF_VF_INCOMPATIBLE) + device_printf(GET_DEV(accel_dev), + "VF%d and PF are incompatible.\n", + vf_nr + 1); + } break; + case ADF_VF2PF_MSGTYPE_VERSION_REQ: + device_printf(GET_DEV(accel_dev), + "Legacy VersionRequest received from VF%d 0x%x\n", + vf_nr + 1, + msg); + is_notification = false; + + /* legacy driver, VF compat_ver is 0 */ + vf_info->compat_ver = 0; + + resp = (ADF_PF2VF_MSGORIGIN_SYSTEM | + (ADF_PF2VF_MSGTYPE_VERSION_RESP + << ADF_PF2VF_MSGTYPE_SHIFT)); + + /* PF always newer than legacy VF */ + compat = + adf_iov_compatibility_check(accel_dev, vf_info->compat_ver); + resp |= compat << ADF_PF2VF_VERSION_RESP_RESULT_SHIFT; + + /* Set legacy major and minor version num */ + resp |= 1 << ADF_PF2VF_MAJORVERSION_SHIFT | + 1 << ADF_PF2VF_MINORVERSION_SHIFT; + + if (compat == ADF_PF2VF_VF_INCOMPATIBLE) + device_printf(GET_DEV(accel_dev), + "VF%d and PF are incompatible.\n", + vf_nr + 1); + break; + case ADF_VF2PF_MSGTYPE_INIT: { + 
device_printf(GET_DEV(accel_dev), + "Init message received from VF%d 0x%x\n", + vf_nr + 1, + msg); + is_notification = true; + vf_info->init = true; + } break; + case ADF_VF2PF_MSGTYPE_SHUTDOWN: { + device_printf(GET_DEV(accel_dev), + "Shutdown message received from VF%d 0x%x\n", + vf_nr + 1, + msg); + is_notification = true; + vf_info->init = false; + } break; + case ADF_VF2PF_MSGTYPE_GET_LARGE_BLOCK_REQ: + case ADF_VF2PF_MSGTYPE_GET_MEDIUM_BLOCK_REQ: + case ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ: { + is_notification = false; + switch ((msg & ADF_VF2PF_MSGTYPE_MASK) >> + ADF_VF2PF_MSGTYPE_SHIFT) { + case ADF_VF2PF_MSGTYPE_GET_LARGE_BLOCK_REQ: + byte_num = + ((msg & ADF_VF2PF_LARGE_BLOCK_BYTE_NUM_MASK) >> + ADF_VF2PF_LARGE_BLOCK_BYTE_NUM_SHIFT); + msg_type = + ((msg & ADF_VF2PF_LARGE_BLOCK_REQ_TYPE_MASK) >> + ADF_VF2PF_BLOCK_REQ_TYPE_SHIFT); + msg_type += ADF_VF2PF_MIN_LARGE_MESSAGE_TYPE; + break; + case ADF_VF2PF_MSGTYPE_GET_MEDIUM_BLOCK_REQ: + byte_num = + ((msg & ADF_VF2PF_MEDIUM_BLOCK_BYTE_NUM_MASK) >> + ADF_VF2PF_MEDIUM_BLOCK_BYTE_NUM_SHIFT); + msg_type = + ((msg & ADF_VF2PF_MEDIUM_BLOCK_REQ_TYPE_MASK) >> + ADF_VF2PF_BLOCK_REQ_TYPE_SHIFT); + msg_type += ADF_VF2PF_MIN_MEDIUM_MESSAGE_TYPE; + break; + case ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ: + byte_num = + ((msg & ADF_VF2PF_SMALL_BLOCK_BYTE_NUM_MASK) >> + ADF_VF2PF_SMALL_BLOCK_BYTE_NUM_SHIFT); + msg_type = + ((msg & ADF_VF2PF_SMALL_BLOCK_REQ_TYPE_MASK) >> + ADF_VF2PF_BLOCK_REQ_TYPE_SHIFT); + msg_type += ADF_VF2PF_MIN_SMALL_MESSAGE_TYPE; + break; + } + + if (msg >> ADF_VF2PF_BLOCK_REQ_CRC_SHIFT) { + res = adf_iov_block_get_crc(accel_dev, + msg_type, + byte_num, + &data, + vf_info->compat_ver); + if (res) + resp_type = ADF_PF2VF_BLOCK_RESP_TYPE_ERROR; + else + resp_type = ADF_PF2VF_BLOCK_RESP_TYPE_CRC; + } else { + if (!byte_num) + vf_info->pfvf_counters.blk_tx++; + + res = adf_iov_block_get_byte(accel_dev, + msg_type, + byte_num, + &data, + vf_info->compat_ver); + if (res) + resp_type = 
ADF_PF2VF_BLOCK_RESP_TYPE_ERROR; + else + resp_type = ADF_PF2VF_BLOCK_RESP_TYPE_DATA; + } + resp = + (ADF_PF2VF_MSGORIGIN_SYSTEM | + (ADF_PF2VF_MSGTYPE_BLOCK_RESP << ADF_PF2VF_MSGTYPE_SHIFT) | + (resp_type << ADF_PF2VF_BLOCK_RESP_TYPE_SHIFT) | + (data << ADF_PF2VF_BLOCK_RESP_DATA_SHIFT)); + } break; + default: + device_printf(GET_DEV(accel_dev), + "Unknown message from VF%d (0x%x);\n", + vf_nr + 1, + msg); + } + + /* To ack, clear the VF2PFINT bit and the in-use-by */ + msg &= ~ADF_VF2PF_INT; + /* + * Clear the in-use pattern if the sender won't do it. + * Because the compatibility version must be the first message + * exchanged between the VF and PF, the vf_info->compat_ver must be + * set at this time. + * The in-use pattern is not cleared for notifications so that + * it can be used for collision detection. + */ + if (vf_info->compat_ver >= ADF_PFVF_COMPATIBILITY_FAST_ACK && + !is_notification) + msg &= ~ADF_VF2PF_IN_USE_BY_VF_MASK; + ADF_CSR_WR(pmisc_addr, hw_data->get_pf2vf_offset(vf_nr), msg); + + if (resp && adf_iov_putmsg(accel_dev, resp, vf_nr)) + device_printf(GET_DEV(accel_dev), + "Failed to send response to VF\n"); + +out: + return; +} + +void +adf_pf2vf_notify_restarting(struct adf_accel_dev *accel_dev) +{ + struct adf_accel_vf_info *vf; + u32 msg = (ADF_PF2VF_MSGORIGIN_SYSTEM | + (ADF_PF2VF_MSGTYPE_RESTARTING << ADF_PF2VF_MSGTYPE_SHIFT)); + + int i, num_vfs = accel_dev->u1.pf.num_vfs; + for (i = 0, vf = accel_dev->u1.pf.vf_info; i < num_vfs; i++, vf++) { + if (vf->init && adf_iov_notify(accel_dev, msg, i)) + device_printf(GET_DEV(accel_dev), + "Failed to send restarting msg to VF%d\n", + i); + } +} + +void +adf_pf2vf_notify_fatal_error(struct adf_accel_dev *accel_dev) +{ + struct adf_accel_vf_info *vf; + int i, num_vfs = accel_dev->u1.pf.num_vfs; + u32 msg = (ADF_PF2VF_MSGORIGIN_SYSTEM | + (ADF_PF2VF_MSGTYPE_FATAL_ERROR << ADF_PF2VF_MSGTYPE_SHIFT)); + + for (i = 0, vf = accel_dev->u1.pf.vf_info; i < num_vfs; i++, vf++) { + if (vf->init && 
adf_iov_notify(accel_dev, msg, i)) + device_printf( + GET_DEV(accel_dev), + "Failed to send fatal error msg 0x%x to VF%d\n", + msg, + i); + } +} + +int +adf_iov_register_compat_checker(struct adf_accel_dev *accel_dev, + const adf_iov_compat_checker_t cc) +{ + struct adf_accel_compat_manager *cm = accel_dev->cm; + int num = 0; + + if (!cm) { + device_printf(GET_DEV(accel_dev), + "QAT: compatibility manager not initialized\n"); + return ENOMEM; + } + + for (num = 0; num < ADF_COMPAT_CHECKER_MAX; num++) { + if (cm->iov_compat_checkers[num]) { + if (cc == cm->iov_compat_checkers[num]) { + device_printf(GET_DEV(accel_dev), + "QAT: already registered\n"); + return EFAULT; + } + } else { + /* registering the new checker */ + cm->iov_compat_checkers[num] = cc; + break; + } + } + + if (num >= ADF_COMPAT_CHECKER_MAX) { + device_printf(GET_DEV(accel_dev), + "QAT: compatibility checker table overflow\n"); + return EFAULT; + } + + cm->num_chker = num; + return 0; +} + +int +adf_iov_unregister_compat_checker(struct adf_accel_dev *accel_dev, + const adf_iov_compat_checker_t cc) +{ + struct adf_accel_compat_manager *cm = accel_dev->cm; + int num = 0; + + if (!cm) { + device_printf(GET_DEV(accel_dev), + "QAT: compatibility manager not initialized\n"); + return ENOMEM; + } + num = cm->num_chker - 1; + + if (num < 0) { + device_printf( + GET_DEV(accel_dev), + "QAT: no compatibility checkers registered\n"); + return EFAULT; + } + if (cc == cm->iov_compat_checkers[num]) { + /* unregistering the given checker */ + cm->iov_compat_checkers[num] = NULL; + } else { + device_printf( + GET_DEV(accel_dev), + "QAT: checkers must be unregistered in reverse registration order\n"); + return EFAULT; + } + + cm->num_chker--; + return 0; +} + +int +adf_iov_init_compat_manager(struct adf_accel_dev *accel_dev, + struct adf_accel_compat_manager **cm) +{ + if (!(*cm)) { + *cm = malloc(sizeof(**cm), M_QAT, M_WAITOK | M_ZERO); + } else { + /* zero the struct */ + explicit_bzero(*cm, sizeof(**cm)); + } + +
return 0; +} + +int +adf_iov_shutdown_compat_manager(struct adf_accel_dev *accel_dev, + struct adf_accel_compat_manager **cm) +{ + if (*cm) { + free(*cm, M_QAT); + *cm = NULL; + } + return 0; +} + +int +adf_iov_compatibility_check(struct adf_accel_dev *accel_dev, u8 compat_ver) +{ + int compatible = ADF_PF2VF_VF_COMPATIBLE; + int i = 0; + struct adf_accel_compat_manager *cm = accel_dev->cm; + + if (!cm) { + device_printf(GET_DEV(accel_dev), + "QAT: compatibility manager not initialized\n"); + return ADF_PF2VF_VF_INCOMPATIBLE; + } + for (i = 0; i < cm->num_chker; i++) { + compatible = cm->iov_compat_checkers[i](accel_dev, compat_ver); + if (compatible == ADF_PF2VF_VF_INCOMPATIBLE) { + device_printf( + GET_DEV(accel_dev), + "QAT: PF and VF are incompatible [checker%d]\n", + i); + break; + } + } + return compatible; +} + +static int +adf_vf2pf_request_version(struct adf_accel_dev *accel_dev) +{ + unsigned long timeout = msecs_to_jiffies(ADF_IOV_MSG_RESP_TIMEOUT); + u32 msg = 0; + int ret = 0; + int comp = 0; + int response_received = 0; + int retry_count = 0; + struct pfvf_stats *pfvf_counters = NULL; + + pfvf_counters = &accel_dev->u1.vf.pfvf_counters; + + msg = ADF_VF2PF_MSGORIGIN_SYSTEM; + msg |= ADF_VF2PF_MSGTYPE_COMPAT_VER_REQ << ADF_VF2PF_MSGTYPE_SHIFT; + msg |= ADF_PFVF_COMPATIBILITY_VERSION << ADF_VF2PF_COMPAT_VER_REQ_SHIFT; + BUILD_BUG_ON(ADF_PFVF_COMPATIBILITY_VERSION > 255); + /* Clear communication flag - without that VF will not be waiting for + * the response from host driver, and start sending init. 
+ */ + accel_dev->u1.vf.iov_msg_completion = 0; + do { + /* Send request from VF to PF */ + if (retry_count) + pfvf_counters->retry++; + if (adf_iov_putmsg(accel_dev, msg, 0)) { + device_printf( + GET_DEV(accel_dev), + "Failed to send Compat Version Request.\n"); + return EIO; + } + mutex_lock(&accel_dev->u1.vf.vf2pf_lock); + if (accel_dev->u1.vf.iov_msg_completion == 0 && + sx_sleep(&accel_dev->u1.vf.iov_msg_completion, + &accel_dev->u1.vf.vf2pf_lock.sx, + 0, + "pfver", + timeout) == EWOULDBLOCK) { + /* It's possible that wakeup could be missed */ + if (accel_dev->u1.vf.iov_msg_completion) { + response_received = 1; + } else { + device_printf( + GET_DEV(accel_dev), + "IOV request/response message timeout expired\n"); + } + } else { + response_received = 1; + } + mutex_unlock(&accel_dev->u1.vf.vf2pf_lock); + } while (!response_received && + ++retry_count < ADF_IOV_MSG_RESP_RETRIES); + + if (!response_received) + pfvf_counters->rx_timeout++; + else + pfvf_counters->rx_rsp++; + if (!response_received) + return EIO; + + if (accel_dev->u1.vf.compatible == ADF_PF2VF_VF_COMPAT_UNKNOWN) + /* Response from PF received, check compatibility */ + comp = adf_iov_compatibility_check(accel_dev, + accel_dev->u1.vf.pf_version); + else + comp = accel_dev->u1.vf.compatible; + + ret = (comp == ADF_PF2VF_VF_COMPATIBLE) ? 0 : EFAULT; + if (ret) + device_printf( + GET_DEV(accel_dev), + "VF is not compatible with PF, reason %d\n", + comp); + + return ret; +} + +/** + * adf_enable_vf2pf_comms() - Function enables communication from vf to pf + * + * @accel_dev: Pointer to acceleration device virtual function. + * + * Return: 0 on success, error code otherwise.
+ */ +int +adf_enable_vf2pf_comms(struct adf_accel_dev *accel_dev) +{ + int ret = 0; + + /* init workqueue for VF */ + ret = adf_init_vf_wq(); + if (ret) + return ret; + + adf_enable_pf2vf_interrupts(accel_dev); + adf_iov_init_compat_manager(accel_dev, &accel_dev->cm); + return adf_vf2pf_request_version(accel_dev); +} +/** + * adf_disable_vf2pf_comms() - Function disables communication from vf to pf + * + * @accel_dev: Pointer to acceleration device virtual function. + * + * Return: 0 on success, error code otherwise. + */ +int +adf_disable_vf2pf_comms(struct adf_accel_dev *accel_dev) +{ + return adf_iov_shutdown_compat_manager(accel_dev, &accel_dev->cm); +} + +/** + * adf_pf_enable_vf2pf_comms() - Function enables communication from pf + * + * @accel_dev: Pointer to acceleration device physical function. + * + * Return: 0 on success, error code otherwise. + */ +int +adf_pf_enable_vf2pf_comms(struct adf_accel_dev *accel_dev) +{ + adf_iov_init_compat_manager(accel_dev, &accel_dev->cm); + return 0; +} + +/** + * adf_pf_disable_vf2pf_comms() - Function disables communication from pf + * + * @accel_dev: Pointer to acceleration device physical function. + * + * Return: 0 on success, error code otherwise. 
+ */ +int +adf_pf_disable_vf2pf_comms(struct adf_accel_dev *accel_dev) +{ + return adf_iov_shutdown_compat_manager(accel_dev, &accel_dev->cm); +} Index: sys/dev/qat/qat_common/adf_pf2vf_ring_to_svc_map.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_pf2vf_ring_to_svc_map.c @@ -0,0 +1,74 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "adf_pf2vf_msg.h" +#include "adf_cfg.h" + +#define ADF_VF2PF_RING_TO_SVC_VERSION 1 +#define ADF_VF2PF_RING_TO_SVC_LENGTH 2 + +int +adf_pf_ring_to_svc_msg_provider(struct adf_accel_dev *accel_dev, + u8 **buffer, + u8 *length, + u8 *block_version, + u8 compatibility, + u8 byte_num) +{ + static u8 data[ADF_VF2PF_RING_TO_SVC_LENGTH] = { 0 }; + struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev); + u16 ring_to_svc_map = hw_data->ring_to_svc_map; + u16 byte = 0; + + for (byte = 0; byte < ADF_VF2PF_RING_TO_SVC_LENGTH; byte++) { + data[byte] = (ring_to_svc_map >> (byte * ADF_PFVF_DATA_SHIFT)) & + ADF_PFVF_DATA_MASK; + } + + *length = ADF_VF2PF_RING_TO_SVC_LENGTH; + *block_version = ADF_VF2PF_RING_TO_SVC_VERSION; + *buffer = data; + + return 0; +} + +int +adf_pf_vf_ring_to_svc_init(struct adf_accel_dev *accel_dev) +{ + u8 data[ADF_VF2PF_RING_TO_SVC_LENGTH] = { 0 }; + u8 len = ADF_VF2PF_RING_TO_SVC_LENGTH; + u8 version = ADF_VF2PF_RING_TO_SVC_VERSION; + u16 ring_to_svc_map = 0; + u16 byte = 0; + + if (!accel_dev->is_vf) { + /* on the pf */ + if (!adf_iov_is_block_provider_registered( + ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ)) + adf_iov_block_provider_register( + ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ, + adf_pf_ring_to_svc_msg_provider); + } else if (accel_dev->u1.vf.pf_version >= + ADF_PFVF_COMPATIBILITY_RING_TO_SVC_MAP) { + /* on the vf */ + if (adf_iov_block_get(accel_dev, + ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ, + 
&version, + data, + &len)) { + device_printf(GET_DEV(accel_dev), + "QAT: Failed adf_iov_block_get\n"); + return EFAULT; + } + for (byte = 0; byte < ADF_VF2PF_RING_TO_SVC_LENGTH; byte++) { + ring_to_svc_map |= data[byte] + << (byte * ADF_PFVF_DATA_SHIFT); + } + GET_HW_DATA(accel_dev)->ring_to_svc_map = ring_to_svc_map; + } + + return 0; +} Index: sys/dev/qat/qat_common/adf_transport.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_transport.c @@ -0,0 +1,747 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "icp_qat_uclo.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_init_admin.h" +#include "adf_cfg_strings.h" +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include +#include "adf_accel_devices.h" +#include "adf_transport_internal.h" +#include "adf_transport_access_macros.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" + +#define QAT_RING_ALIGNMENT 64 + +static inline u32 +adf_modulo(u32 data, u32 shift) +{ + u32 div = data >> shift; + u32 mult = div << shift; + + return data - mult; +} + +static inline int +adf_check_ring_alignment(u64 addr, u64 size) +{ + if (((size - 1) & addr) != 0) + return EFAULT; + return 0; +} + +static int +adf_verify_ring_size(u32 msg_size, u32 msg_num) +{ + int i = ADF_MIN_RING_SIZE; + + for (; i <= ADF_MAX_RING_SIZE; i++) + if ((msg_size * msg_num) == ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) + return i; + + return ADF_DEFAULT_RING_SIZE; +} + +static int +adf_reserve_ring(struct adf_etr_bank_data *bank, u32 ring) +{ + mtx_lock(&bank->lock); + if (bank->ring_mask & (1 << ring)) { + mtx_unlock(&bank->lock); + return EFAULT; + } + bank->ring_mask |= (1 << ring); + mtx_unlock(&bank->lock); + return 0; +} + +static void +adf_unreserve_ring(struct adf_etr_bank_data 
*bank, u32 ring) +{ + mtx_lock(&bank->lock); + bank->ring_mask &= ~(1 << ring); + mtx_unlock(&bank->lock); +} + +static void +adf_enable_ring_irq(struct adf_etr_bank_data *bank, u32 ring) +{ + mtx_lock(&bank->lock); + bank->irq_mask |= (1 << ring); + mtx_unlock(&bank->lock); + WRITE_CSR_INT_COL_EN(bank->csr_addr, bank->bank_number, bank->irq_mask); + WRITE_CSR_INT_COL_CTL(bank->csr_addr, + bank->bank_number, + bank->irq_coalesc_timer); +} + +static void +adf_disable_ring_irq(struct adf_etr_bank_data *bank, u32 ring) +{ + mtx_lock(&bank->lock); + bank->irq_mask &= ~(1 << ring); + mtx_unlock(&bank->lock); + WRITE_CSR_INT_COL_EN(bank->csr_addr, bank->bank_number, bank->irq_mask); +} + +int +adf_send_message(struct adf_etr_ring_data *ring, u32 *msg) +{ + u32 msg_size = 0; + + if (atomic_add_return(1, ring->inflights) > ring->max_inflights) { + atomic_dec(ring->inflights); + return EAGAIN; + } + + msg_size = ADF_MSG_SIZE_TO_BYTES(ring->msg_size); + mtx_lock(&ring->lock); + memcpy((void *)((uintptr_t)ring->base_addr + ring->tail), + msg, + msg_size); + + ring->tail = adf_modulo(ring->tail + msg_size, + ADF_RING_SIZE_MODULO(ring->ring_size)); + + WRITE_CSR_RING_TAIL(ring->bank->csr_addr, + ring->bank->bank_number, + ring->ring_number, + ring->tail); + ring->csr_tail_offset = ring->tail; + mtx_unlock(&ring->lock); + return 0; +} + +int +adf_handle_response(struct adf_etr_ring_data *ring, u32 quota) +{ + u32 msg_counter = 0; + u32 *msg = (u32 *)((uintptr_t)ring->base_addr + ring->head); + + if (!quota) + quota = ADF_NO_RESPONSE_QUOTA; + + while ((*msg != ADF_RING_EMPTY_SIG) && (msg_counter < quota)) { + ring->callback((u32 *)msg); + atomic_dec(ring->inflights); + *msg = ADF_RING_EMPTY_SIG; + ring->head = adf_modulo(ring->head + ADF_MSG_SIZE_TO_BYTES( + ring->msg_size), + ADF_RING_SIZE_MODULO(ring->ring_size)); + msg_counter++; + msg = (u32 *)((uintptr_t)ring->base_addr + ring->head); + } + if (msg_counter > 0) + WRITE_CSR_RING_HEAD(ring->bank->csr_addr, + 
ring->bank->bank_number, + ring->ring_number, + ring->head); + return msg_counter; +} + +int +adf_poll_bank(u32 accel_id, u32 bank_num, u32 quota) +{ + int num_resp; + struct adf_accel_dev *accel_dev; + struct adf_etr_data *trans_data; + struct adf_etr_bank_data *bank; + struct adf_etr_ring_data *ring; + u32 rings_not_empty; + u32 ring_num; + u32 resp_total = 0; + u32 num_rings_per_bank; + + /* Find the accel device associated with the accelId + * passed in. + */ + accel_dev = adf_devmgr_get_dev_by_id(accel_id); + if (!accel_dev) { + pr_err("There is no device with id: %d\n", accel_id); + return EINVAL; + } + + trans_data = accel_dev->transport; + bank = &trans_data->banks[bank_num]; + mtx_lock(&bank->lock); + + /* Read the ring status CSR to determine which rings are empty. */ + rings_not_empty = READ_CSR_E_STAT(bank->csr_addr, bank->bank_number); + /* Complement to find which rings have data to be processed. */ + rings_not_empty = (~rings_not_empty) & bank->ring_mask; + + /* Return RETRY if the bank polling rings + * are all empty. + */ + if (!(rings_not_empty & bank->ring_mask)) { + mtx_unlock(&bank->lock); + return EAGAIN; + } + + /* + * Loop over all rings within this bank. + * The ring structure is global to all + * rings, so while we loop over all rings in the + * bank we use ring_number to get the global ring. + */ + num_rings_per_bank = accel_dev->hw_device->num_rings_per_bank; + for (ring_num = 0; ring_num < num_rings_per_bank; ring_num++) { + ring = &bank->rings[ring_num]; + + /* AND with the polling ring mask. + * If there is no data on this ring + * move to the next one. + */ + if (!(rings_not_empty & (1 << ring->ring_number))) + continue; + + /* Poll the ring. */ + num_resp = adf_handle_response(ring, quota); + resp_total += num_resp; + } + + mtx_unlock(&bank->lock); + /* Return SUCCESS if there's any response message + * returned.
+ */ + if (resp_total) + return 0; + return EAGAIN; +} + +int +adf_poll_all_banks(u32 accel_id, u32 quota) +{ + int status = EAGAIN; + struct adf_accel_dev *accel_dev; + struct adf_etr_data *trans_data; + struct adf_etr_bank_data *bank; + u32 bank_num; + u32 stat_total = 0; + + /* Find the accel device associated with the accelId + * passed in. + */ + accel_dev = adf_devmgr_get_dev_by_id(accel_id); + if (!accel_dev) { + pr_err("There is no device with id: %d\n", accel_id); + return EINVAL; + } + + /* Loop over banks and call adf_poll_bank */ + trans_data = accel_dev->transport; + for (bank_num = 0; bank_num < GET_MAX_BANKS(accel_dev); bank_num++) { + bank = &trans_data->banks[bank_num]; + /* if there are no polling rings on this bank + * continue to the next bank number. + */ + if (bank->ring_mask == 0) + continue; + status = adf_poll_bank(accel_id, bank_num, quota); + /* The successful status should be AGAIN or 0 */ + if (status == 0) + stat_total++; + else if (status != EAGAIN) + return status; + } + + /* Return SUCCESS if adf_poll_bank returned SUCCESS + * at any stage. adf_poll_bank cannot + * return fail in the above case. 
+ */ + if (stat_total) + return 0; + + return EAGAIN; +} + +static void +adf_configure_tx_ring(struct adf_etr_ring_data *ring) +{ + u32 ring_config = BUILD_RING_CONFIG(ring->ring_size); + + WRITE_CSR_RING_CONFIG(ring->bank->csr_addr, + ring->bank->bank_number, + ring->ring_number, + ring_config); +} + +static void +adf_configure_rx_ring(struct adf_etr_ring_data *ring) +{ + u32 ring_config = BUILD_RESP_RING_CONFIG(ring->ring_size, + ADF_RING_NEAR_WATERMARK_512, + ADF_RING_NEAR_WATERMARK_0); + + WRITE_CSR_RING_CONFIG(ring->bank->csr_addr, + ring->bank->bank_number, + ring->ring_number, + ring_config); +} + +static int +adf_init_ring(struct adf_etr_ring_data *ring) +{ + struct adf_etr_bank_data *bank = ring->bank; + struct adf_accel_dev *accel_dev = bank->accel_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u64 ring_base; + u32 ring_size_bytes = ADF_SIZE_TO_RING_SIZE_IN_BYTES(ring->ring_size); + + ring_size_bytes = ADF_RING_SIZE_BYTES_MIN(ring_size_bytes); + int ret; + + ret = bus_dma_mem_create(&ring->dma_mem, + accel_dev->dma_tag, + ring_size_bytes, + BUS_SPACE_MAXADDR, + ring_size_bytes, + M_WAITOK | M_ZERO); + if (ret) + return ret; + ring->base_addr = ring->dma_mem.dma_vaddr; + ring->dma_addr = ring->dma_mem.dma_baddr; + + memset(ring->base_addr, 0x7F, ring_size_bytes); + /* The base_addr has to be aligned to the size of the buffer */ + if (adf_check_ring_alignment(ring->dma_addr, ring_size_bytes)) { + device_printf(GET_DEV(accel_dev), "Ring address not aligned\n"); + bus_dma_mem_free(&ring->dma_mem); + ring->base_addr = NULL; + return EFAULT; + } + + if (hw_data->tx_rings_mask & (1 << ring->ring_number)) + adf_configure_tx_ring(ring); + else + adf_configure_rx_ring(ring); + + ring_base = BUILD_RING_BASE_ADDR(ring->dma_addr, ring->ring_size); + WRITE_CSR_RING_BASE(ring->bank->csr_addr, + ring->bank->bank_number, + ring->ring_number, + ring_base); + mtx_init(&ring->lock, "adf bank", NULL, MTX_DEF); + return 0; +} + +static void 
+adf_cleanup_ring(struct adf_etr_ring_data *ring) +{ + u32 ring_size_bytes = ADF_SIZE_TO_RING_SIZE_IN_BYTES(ring->ring_size); + ring_size_bytes = ADF_RING_SIZE_BYTES_MIN(ring_size_bytes); + + if (ring->base_addr) { + explicit_bzero(ring->base_addr, ring_size_bytes); + bus_dma_mem_free(&ring->dma_mem); + } + mtx_destroy(&ring->lock); +} + +int +adf_create_ring(struct adf_accel_dev *accel_dev, + const char *section, + u32 bank_num, + u32 num_msgs, + u32 msg_size, + const char *ring_name, + adf_callback_fn callback, + int poll_mode, + struct adf_etr_ring_data **ring_ptr) +{ + struct adf_etr_data *transport_data = accel_dev->transport; + struct adf_etr_bank_data *bank; + struct adf_etr_ring_data *ring; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + u32 ring_num; + int ret; + u8 num_rings_per_bank = accel_dev->hw_device->num_rings_per_bank; + + if (bank_num >= GET_MAX_BANKS(accel_dev)) { + device_printf(GET_DEV(accel_dev), "Invalid bank number\n"); + return EFAULT; + } + if (msg_size > ADF_MSG_SIZE_TO_BYTES(ADF_MAX_MSG_SIZE)) { + device_printf(GET_DEV(accel_dev), "Invalid msg size\n"); + return EFAULT; + } + if (ADF_MAX_INFLIGHTS(adf_verify_ring_size(msg_size, num_msgs), + ADF_BYTES_TO_MSG_SIZE(msg_size)) < 2) { + device_printf(GET_DEV(accel_dev), + "Invalid ring size for given msg size\n"); + return EFAULT; + } + if (adf_cfg_get_param_value(accel_dev, section, ring_name, val)) { + device_printf(GET_DEV(accel_dev), + "Section %s, no such entry : %s\n", + section, + ring_name); + return EFAULT; + } + if (compat_strtouint(val, 10, &ring_num)) { + device_printf(GET_DEV(accel_dev), "Can't get ring number\n"); + return EFAULT; + } + if (ring_num >= num_rings_per_bank) { + device_printf(GET_DEV(accel_dev), "Invalid ring number\n"); + return EFAULT; + } + + bank = &transport_data->banks[bank_num]; + if (adf_reserve_ring(bank, ring_num)) { + device_printf(GET_DEV(accel_dev), + "Ring %d, %s already exists.\n", + ring_num, + ring_name); + return EFAULT; + } + ring = 
&bank->rings[ring_num]; + ring->ring_number = ring_num; + ring->bank = bank; + ring->callback = callback; + ring->msg_size = ADF_BYTES_TO_MSG_SIZE(msg_size); + ring->ring_size = adf_verify_ring_size(msg_size, num_msgs); + ring->max_inflights = + ADF_MAX_INFLIGHTS(ring->ring_size, ring->msg_size); + ring->head = 0; + ring->tail = 0; + ring->csr_tail_offset = 0; + ret = adf_init_ring(ring); + if (ret) + goto err; + + /* Enable HW arbitration for the given ring */ + adf_update_ring_arb(ring); + + if (adf_ring_debugfs_add(ring, ring_name)) { + device_printf(GET_DEV(accel_dev), + "Couldn't add ring debugfs entry\n"); + ret = EFAULT; + goto err; + } + + /* Enable interrupts if needed */ + if (callback && !poll_mode) + adf_enable_ring_irq(bank, ring->ring_number); + *ring_ptr = ring; + return 0; +err: + adf_cleanup_ring(ring); + adf_unreserve_ring(bank, ring_num); + adf_update_ring_arb(ring); + return ret; +} + +void +adf_remove_ring(struct adf_etr_ring_data *ring) +{ + struct adf_etr_bank_data *bank = ring->bank; + + /* Disable interrupts for the given ring */ + adf_disable_ring_irq(bank, ring->ring_number); + + /* Clear PCI config space */ + WRITE_CSR_RING_CONFIG(bank->csr_addr, + bank->bank_number, + ring->ring_number, + 0); + WRITE_CSR_RING_BASE(bank->csr_addr, + bank->bank_number, + ring->ring_number, + 0); + adf_ring_debugfs_rm(ring); + adf_unreserve_ring(bank, ring->ring_number); + /* Disable HW arbitration for the given ring */ + adf_update_ring_arb(ring); + adf_cleanup_ring(ring); +} + +static void +adf_ring_response_handler(struct adf_etr_bank_data *bank) +{ + struct adf_accel_dev *accel_dev = bank->accel_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u8 num_rings_per_bank = hw_data->num_rings_per_bank; + u32 empty_rings, i; + + empty_rings = READ_CSR_E_STAT(bank->csr_addr, bank->bank_number); + empty_rings = ~empty_rings & bank->irq_mask; + + for (i = 0; i < num_rings_per_bank; ++i) { + if (empty_rings & (1 << i)) + 
adf_handle_response(&bank->rings[i], 0); + } +} + +void +adf_response_handler(uintptr_t bank_addr) +{ + struct adf_etr_bank_data *bank = (void *)bank_addr; + + /* Handle all the responses and re-enable IRQs */ + adf_ring_response_handler(bank); + WRITE_CSR_INT_FLAG_AND_COL(bank->csr_addr, + bank->bank_number, + bank->irq_mask); +} + +static inline int +adf_get_cfg_int(struct adf_accel_dev *accel_dev, + const char *section, + const char *format, + u32 key, + u32 *value) +{ + char key_buf[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val_buf[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + + snprintf(key_buf, ADF_CFG_MAX_KEY_LEN_IN_BYTES, format, key); + + if (adf_cfg_get_param_value(accel_dev, section, key_buf, val_buf)) + return EFAULT; + + if (compat_strtouint(val_buf, 10, value)) + return EFAULT; + return 0; +} + +static void +adf_get_coalesc_timer(struct adf_etr_bank_data *bank, + const char *section, + u32 bank_num_in_accel) +{ + struct adf_accel_dev *accel_dev = bank->accel_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 coalesc_timer = ADF_COALESCING_DEF_TIME; + + adf_get_cfg_int(accel_dev, + section, + ADF_ETRMGR_COALESCE_TIMER_FORMAT, + bank_num_in_accel, + &coalesc_timer); + + if (hw_data->get_clock_speed) + bank->irq_coalesc_timer = + (coalesc_timer * + (hw_data->get_clock_speed(hw_data) / USEC_PER_SEC)) / + NSEC_PER_USEC; + else + bank->irq_coalesc_timer = coalesc_timer; + + if (bank->irq_coalesc_timer > ADF_COALESCING_MAX_TIME) + bank->irq_coalesc_timer = ADF_COALESCING_MAX_TIME; + else if (bank->irq_coalesc_timer < ADF_COALESCING_MIN_TIME) + bank->irq_coalesc_timer = ADF_COALESCING_MIN_TIME; +} + +static int +adf_init_bank(struct adf_accel_dev *accel_dev, + struct adf_etr_bank_data *bank, + u32 bank_num, + struct resource *csr_addr) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_etr_ring_data *ring; + struct adf_etr_ring_data *tx_ring; + u32 i, coalesc_enabled = 0; + u8 num_rings_per_bank = hw_data->num_rings_per_bank; + 
u32 size = 0; + + explicit_bzero(bank, sizeof(*bank)); + bank->bank_number = bank_num; + bank->csr_addr = csr_addr; + bank->accel_dev = accel_dev; + mtx_init(&bank->lock, "adf bank", NULL, MTX_DEF); + + /* Allocate the rings in the bank */ + size = num_rings_per_bank * sizeof(struct adf_etr_ring_data); + bank->rings = kzalloc_node(size, + M_WAITOK | M_ZERO, + dev_to_node(GET_DEV(accel_dev))); + + /* Always enable IRQ coalescing. This allows use of the + * optimised flag and the coalescing register. + * If it is disabled in the config file, just use the min time value. */ + if ((adf_get_cfg_int(accel_dev, + "Accelerator0", + ADF_ETRMGR_COALESCING_ENABLED_FORMAT, + bank_num, + &coalesc_enabled) == 0) && + coalesc_enabled) + adf_get_coalesc_timer(bank, "Accelerator0", bank_num); + else + bank->irq_coalesc_timer = ADF_COALESCING_MIN_TIME; + + for (i = 0; i < num_rings_per_bank; i++) { + WRITE_CSR_RING_CONFIG(csr_addr, bank_num, i, 0); + WRITE_CSR_RING_BASE(csr_addr, bank_num, i, 0); + ring = &bank->rings[i]; + if (hw_data->tx_rings_mask & (1 << i)) { + ring->inflights = + kzalloc_node(sizeof(atomic_t), + M_WAITOK | M_ZERO, + dev_to_node(GET_DEV(accel_dev))); + } else { + if (i < hw_data->tx_rx_gap) { + device_printf(GET_DEV(accel_dev), + "Invalid tx rings mask config\n"); + goto err; + } + tx_ring = &bank->rings[i - hw_data->tx_rx_gap]; + ring->inflights = tx_ring->inflights; + } + } + + if (adf_bank_debugfs_add(bank)) { + device_printf(GET_DEV(accel_dev), + "Failed to add bank debugfs entry\n"); + goto err; + } + + WRITE_CSR_INT_FLAG(csr_addr, bank_num, ADF_BANK_INT_FLAG_CLEAR_MASK); + WRITE_CSR_INT_SRCSEL(csr_addr, bank_num); + return 0; +err: + for (i = 0; i < num_rings_per_bank; i++) { + ring = &bank->rings[i]; + if (hw_data->tx_rings_mask & (1 << i)) { + kfree(ring->inflights); + ring->inflights = NULL; + } + } + kfree(bank->rings); + return ENOMEM; +} + +/** + * adf_init_etr_data() - Initialize transport rings for acceleration device + * @accel_dev: Pointer to
acceleration device. + * + * Function initializes the communications channels (rings) to the + * acceleration device accel_dev. + * To be used by QAT device specific drivers. + * + * Return: 0 on success, error code otherwise. + */ +int +adf_init_etr_data(struct adf_accel_dev *accel_dev) +{ + struct adf_etr_data *etr_data; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct resource *csr_addr; + u32 size; + u32 num_banks = 0; + int i, ret; + + etr_data = kzalloc_node(sizeof(*etr_data), + M_WAITOK | M_ZERO, + dev_to_node(GET_DEV(accel_dev))); + + num_banks = GET_MAX_BANKS(accel_dev); + size = num_banks * sizeof(struct adf_etr_bank_data); + etr_data->banks = kzalloc_node(size, + M_WAITOK | M_ZERO, + dev_to_node(GET_DEV(accel_dev))); + + accel_dev->transport = etr_data; + i = hw_data->get_etr_bar_id(hw_data); + csr_addr = accel_dev->accel_pci_dev.pci_bars[i].virt_addr; + + etr_data->debug = + SYSCTL_ADD_NODE(&accel_dev->sysctl_ctx, + SYSCTL_CHILDREN( + device_get_sysctl_tree(GET_DEV(accel_dev))), + OID_AUTO, + "transport", + CTLFLAG_RD, + NULL, + "Transport parameters"); + if (!etr_data->debug) { + device_printf(GET_DEV(accel_dev), + "Unable to create transport debugfs entry\n"); + ret = ENOENT; + goto err_bank_all; + } + + for (i = 0; i < num_banks; i++) { + ret = + adf_init_bank(accel_dev, &etr_data->banks[i], i, csr_addr); + if (ret) + goto err_bank_all; + } + + return 0; + +err_bank_all: + kfree(etr_data->banks); + kfree(etr_data); + accel_dev->transport = NULL; + return ret; +} + +static void +cleanup_bank(struct adf_etr_bank_data *bank) +{ + u32 i; + struct adf_accel_dev *accel_dev = bank->accel_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u8 num_rings_per_bank = hw_data->num_rings_per_bank; + + for (i = 0; i < num_rings_per_bank; i++) { + struct adf_accel_dev *accel_dev = bank->accel_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_etr_ring_data *ring = &bank->rings[i]; + + if 
(bank->ring_mask & (1 << i)) + adf_cleanup_ring(ring); + + if (hw_data->tx_rings_mask & (1 << i)) { + kfree(ring->inflights); + ring->inflights = NULL; + } + } + kfree(bank->rings); + adf_bank_debugfs_rm(bank); + mtx_destroy(&bank->lock); + explicit_bzero(bank, sizeof(*bank)); +} + +static void +adf_cleanup_etr_handles(struct adf_accel_dev *accel_dev) +{ + struct adf_etr_data *etr_data = accel_dev->transport; + u32 i, num_banks = GET_MAX_BANKS(accel_dev); + + for (i = 0; i < num_banks; i++) + cleanup_bank(&etr_data->banks[i]); +} + +/** + * adf_cleanup_etr_data() - Clear transport rings for acceleration device + * @accel_dev: Pointer to acceleration device. + * + * Function clears the communication channels (rings) of the + * acceleration device accel_dev. + * To be used by QAT device specific drivers. + * + * Return: void + */ +void +adf_cleanup_etr_data(struct adf_accel_dev *accel_dev) +{ + struct adf_etr_data *etr_data = accel_dev->transport; + + if (etr_data) { + adf_cleanup_etr_handles(accel_dev); + kfree(etr_data->banks); + kfree(etr_data); + accel_dev->transport = NULL; + } +} Index: sys/dev/qat/qat_common/adf_vf2pf_msg.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_vf2pf_msg.c @@ -0,0 +1,275 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "adf_pf2vf_msg.h" + +/** + * adf_vf2pf_init() - send init msg to PF + * @accel_dev: Pointer to acceleration VF device. + * + * Function sends an init message from the VF to the PF. + * + * Return: 0 on success, error code otherwise.
+ */ +int +adf_vf2pf_init(struct adf_accel_dev *accel_dev) +{ + u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM | + (ADF_VF2PF_MSGTYPE_INIT << ADF_VF2PF_MSGTYPE_SHIFT)); + if (adf_iov_notify(accel_dev, msg, 0)) { + device_printf(GET_DEV(accel_dev), + "Failed to send Init event to PF\n"); + return -EFAULT; + } + set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status); + return 0; +} + +/** + * adf_vf2pf_shutdown() - send shutdown msg to PF + * @accel_dev: Pointer to acceleration VF device. + * + * Function sends a shutdown message from the VF to the PF. + * + * Return: void + */ +void +adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev) +{ + u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM | + (ADF_VF2PF_MSGTYPE_SHUTDOWN << ADF_VF2PF_MSGTYPE_SHIFT)); + mutex_init(&accel_dev->u1.vf.vf2pf_lock); + if (test_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status)) + if (adf_iov_notify(accel_dev, msg, 0)) + device_printf(GET_DEV(accel_dev), + "Failed to send Shutdown event to PF\n"); + mutex_destroy(&accel_dev->u1.vf.vf2pf_lock); +} + +static int +adf_iov_block_get_bc(struct adf_accel_dev *accel_dev, + u8 msg_type, + u8 msg_index, + u8 *data, + int get_crc) +{ + u8 blk_type; + u32 msg; + unsigned long timeout = msecs_to_jiffies(ADF_IOV_MSG_RESP_TIMEOUT); + int response_received = 0; + int retry_count = 0; + + msg = ADF_VF2PF_MSGORIGIN_SYSTEM; + if (get_crc) + msg |= 1 << ADF_VF2PF_BLOCK_REQ_CRC_SHIFT; + + if (msg_type <= ADF_VF2PF_MAX_SMALL_MESSAGE_TYPE) { + if (msg_index >= + ADF_VF2PF_SMALL_PAYLOAD_SIZE + ADF_VF2PF_BLOCK_DATA) { + device_printf( + GET_DEV(accel_dev), + "Invalid byte index %d for message type %d\n", + msg_index, + msg_type); + return -EINVAL; + } + msg |= ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ + << ADF_VF2PF_MSGTYPE_SHIFT; + blk_type = msg_type; + msg |= blk_type << ADF_VF2PF_BLOCK_REQ_TYPE_SHIFT; + msg |= msg_index << ADF_VF2PF_SMALL_BLOCK_BYTE_NUM_SHIFT; + } else if (msg_type <= ADF_VF2PF_MAX_MEDIUM_MESSAGE_TYPE) { + if (msg_index >= + ADF_VF2PF_MEDIUM_PAYLOAD_SIZE + ADF_VF2PF_BLOCK_DATA) {
+ device_printf( + GET_DEV(accel_dev), + "Invalid byte index %d for message type %d\n", + msg_index, + msg_type); + return -EINVAL; + } + msg |= ADF_VF2PF_MSGTYPE_GET_MEDIUM_BLOCK_REQ + << ADF_VF2PF_MSGTYPE_SHIFT; + blk_type = msg_type - ADF_VF2PF_MIN_MEDIUM_MESSAGE_TYPE; + msg |= blk_type << ADF_VF2PF_BLOCK_REQ_TYPE_SHIFT; + msg |= msg_index << ADF_VF2PF_MEDIUM_BLOCK_BYTE_NUM_SHIFT; + } else if (msg_type <= ADF_VF2PF_MAX_LARGE_MESSAGE_TYPE) { + if (msg_index >= + ADF_VF2PF_LARGE_PAYLOAD_SIZE + ADF_VF2PF_BLOCK_DATA) { + device_printf( + GET_DEV(accel_dev), + "Invalid byte index %d for message type %d\n", + msg_index, + msg_type); + return -EINVAL; + } + msg |= ADF_VF2PF_MSGTYPE_GET_LARGE_BLOCK_REQ + << ADF_VF2PF_MSGTYPE_SHIFT; + blk_type = msg_type - ADF_VF2PF_MIN_LARGE_MESSAGE_TYPE; + msg |= blk_type << ADF_VF2PF_BLOCK_REQ_TYPE_SHIFT; + msg |= msg_index << ADF_VF2PF_LARGE_BLOCK_BYTE_NUM_SHIFT; + } else { + device_printf(GET_DEV(accel_dev), + "Invalid message type %d\n", + msg_type); + } + accel_dev->u1.vf.iov_msg_completion = 0; + do { + /* Send request from VF to PF */ + if (retry_count) + accel_dev->u1.vf.pfvf_counters.retry++; + if (adf_iov_putmsg(accel_dev, msg, 0)) { + device_printf(GET_DEV(accel_dev), + "Failed to send block request to PF\n"); + return EIO; + } + + /* Wait for response */ + mutex_lock(&accel_dev->u1.vf.vf2pf_lock); + if (accel_dev->u1.vf.iov_msg_completion == 0 && + sx_sleep(&accel_dev->u1.vf.iov_msg_completion, + &accel_dev->u1.vf.vf2pf_lock.sx, + 0, + "pfver", + timeout) == EWOULDBLOCK) { + /* It's possible that wakeup could be missed */ + if (accel_dev->u1.vf.iov_msg_completion) { + response_received = 1; + } else { + device_printf( + GET_DEV(accel_dev), + "IOV request/response message timeout expired\n"); + } + } else { + response_received = 1; + } + mutex_unlock(&accel_dev->u1.vf.vf2pf_lock); + } while (!response_received && + ++retry_count < ADF_IOV_MSG_RESP_RETRIES); + + if (!response_received) + 
accel_dev->u1.vf.pfvf_counters.rx_timeout++; + else + accel_dev->u1.vf.pfvf_counters.rx_rsp++; + + if (!response_received) + return EIO; + + if (accel_dev->u1.vf.pf2vf_block_resp_type != + (get_crc ? ADF_PF2VF_BLOCK_RESP_TYPE_CRC : + ADF_PF2VF_BLOCK_RESP_TYPE_DATA)) { + device_printf( + GET_DEV(accel_dev), + "%sBlock response type %d, data %d, msg %d, index %d\n", + get_crc ? "CRC " : "", + accel_dev->u1.vf.pf2vf_block_resp_type, + accel_dev->u1.vf.pf2vf_block_byte, + msg_type, + msg_index); + return -EIO; + } + *data = accel_dev->u1.vf.pf2vf_block_byte; + return 0; +} + +static int +adf_iov_block_get_byte(struct adf_accel_dev *accel_dev, + u8 msg_type, + u8 msg_index, + u8 *data) +{ + return adf_iov_block_get_bc(accel_dev, msg_type, msg_index, data, 0); +} + +static int +adf_iov_block_get_crc(struct adf_accel_dev *accel_dev, + u8 msg_type, + u8 msg_index, + u8 *crc) +{ + return adf_iov_block_get_bc(accel_dev, msg_type, msg_index - 1, crc, 1); +} + +int +adf_iov_block_get(struct adf_accel_dev *accel_dev, + u8 msg_type, + u8 *block_version, + u8 *buffer, + u8 *length) +{ + u8 buf_size = *length; + u8 payload_len; + u8 remote_crc; + u8 local_crc; + u8 buf_index; + int ret; + + if (msg_type > ADF_VF2PF_MAX_LARGE_MESSAGE_TYPE) { + device_printf(GET_DEV(accel_dev), + "Invalid message type %d\n", + msg_type); + return -EINVAL; + } + + ret = adf_iov_block_get_byte(accel_dev, + msg_type, + ADF_VF2PF_BLOCK_VERSION_BYTE, + block_version); + if (ret) + return ret; + ret = adf_iov_block_get_byte(accel_dev, + msg_type, + ADF_VF2PF_BLOCK_LEN_BYTE, + length); + + if (ret) + return ret; + + payload_len = *length; + + if (buf_size < payload_len) { + device_printf( + GET_DEV(accel_dev), + "Truncating block type %d response from %d to %d bytes\n", + msg_type, + payload_len, + buf_size); + payload_len = buf_size; + } + + /* Get the data */ + for (buf_index = 0; buf_index < payload_len; buf_index++) { + ret = adf_iov_block_get_byte(accel_dev, + msg_type, + buf_index + 
ADF_VF2PF_BLOCK_DATA, + buffer + buf_index); + if (ret) + return ret; + } + + ret = adf_iov_block_get_crc(accel_dev, + msg_type, + payload_len + ADF_VF2PF_BLOCK_DATA, + &remote_crc); + if (ret) + return ret; + local_crc = adf_pfvf_crc(ADF_CRC8_INIT_VALUE, block_version, 1); + local_crc = adf_pfvf_crc(local_crc, length, 1); + local_crc = adf_pfvf_crc(local_crc, buffer, payload_len); + if (local_crc != remote_crc) { + device_printf( + GET_DEV(accel_dev), + "CRC error on msg type %d. Local %02X, remote %02X\n", + msg_type, + local_crc, + remote_crc); + accel_dev->u1.vf.pfvf_counters.crc_err++; + return EIO; + } + + accel_dev->u1.vf.pfvf_counters.blk_rx++; + *length = payload_len; + return 0; +} Index: sys/dev/qat/qat_common/adf_vf_isr.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/adf_vf_isr.c @@ -0,0 +1,393 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include +#include +#include +#include +#include +#include +#include +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "adf_cfg.h" +#include "adf_cfg_strings.h" +#include "adf_cfg_common.h" +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include "adf_pf2vf_msg.h" + +#define ADF_VINTSOU_BUN BIT(0) +#define ADF_VINTSOU_PF2VF BIT(1) + +static TASKQUEUE_DEFINE_THREAD(qat_vf); + +static struct workqueue_struct *adf_vf_stop_wq; +static DEFINE_MUTEX(vf_stop_wq_lock); + +struct adf_vf_stop_data { + struct adf_accel_dev *accel_dev; + struct work_struct vf_stop_work; +}; + +static int +adf_enable_msi(struct adf_accel_dev *accel_dev) +{ + int stat; + int count = 1; + stat = pci_alloc_msi(accel_to_pci_dev(accel_dev), &count); + if (stat) { + device_printf(GET_DEV(accel_dev), + "Failed to enable MSI interrupts\n"); + return stat; + } + + return stat; +} + +static void +adf_disable_msi(struct adf_accel_dev *accel_dev) +{ + 
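/* + * Release the single MSI vector allocated by pci_alloc_msi() in + * adf_enable_msi(); pci_release_msi() frees any MSI messages + * allocated to this device. + */ +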
device_t pdev = accel_to_pci_dev(accel_dev); + pci_release_msi(pdev); +} + +static void +adf_dev_stop_async(struct work_struct *work) +{ + struct adf_vf_stop_data *stop_data = + container_of(work, struct adf_vf_stop_data, vf_stop_work); + struct adf_accel_dev *accel_dev = stop_data->accel_dev; + + adf_dev_restarting_notify(accel_dev); + adf_dev_stop(accel_dev); + adf_dev_shutdown(accel_dev); + + /* Re-enable PF2VF interrupts */ + adf_enable_pf2vf_interrupts(accel_dev); + kfree(stop_data); +} + +static void +adf_pf2vf_bh_handler(void *data, int pending) +{ + struct adf_accel_dev *accel_dev = data; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_bar *pmisc = + &GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)]; + struct resource *pmisc_bar_addr = pmisc->virt_addr; + u32 msg; + bool is_notification = false; + + /* Read the message from PF */ + msg = ADF_CSR_RD(pmisc_bar_addr, hw_data->get_pf2vf_offset(0)); + if (!(msg & ADF_PF2VF_INT)) { + device_printf(GET_DEV(accel_dev), + "Spurious PF2VF interrupt. msg %X. Ignored\n", + msg); + accel_dev->u1.vf.pfvf_counters.spurious++; + goto out; + } + accel_dev->u1.vf.pfvf_counters.rx++; + + if (!(msg & ADF_PF2VF_MSGORIGIN_SYSTEM)) { + device_printf(GET_DEV(accel_dev), + "Ignore non-system PF2VF message(0x%x)\n", + msg); + /* + * To ack, clear the VF2PFINT bit. + * Because this must be a legacy message, the far side + * must clear the in-use pattern. 
+ */ + msg &= ~ADF_PF2VF_INT; + ADF_CSR_WR(pmisc_bar_addr, hw_data->get_pf2vf_offset(0), msg); + goto out; + } + + switch ((msg & ADF_PF2VF_MSGTYPE_MASK) >> ADF_PF2VF_MSGTYPE_SHIFT) { + case ADF_PF2VF_MSGTYPE_RESTARTING: { + struct adf_vf_stop_data *stop_data; + + is_notification = true; + + device_printf(GET_DEV(accel_dev), + "Restarting msg received from PF 0x%x\n", + msg); + + clear_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status); + stop_data = kzalloc(sizeof(*stop_data), GFP_ATOMIC); + if (!stop_data) { + device_printf(GET_DEV(accel_dev), + "Couldn't schedule stop for vf_%d\n", + accel_dev->accel_id); + goto out; + } + stop_data->accel_dev = accel_dev; + INIT_WORK(&stop_data->vf_stop_work, adf_dev_stop_async); + queue_work(adf_vf_stop_wq, &stop_data->vf_stop_work); + break; + } + case ADF_PF2VF_MSGTYPE_VERSION_RESP: + device_printf(GET_DEV(accel_dev), + "Version resp received from PF 0x%x\n", + msg); + is_notification = false; + accel_dev->u1.vf.pf_version = + (msg & ADF_PF2VF_VERSION_RESP_VERS_MASK) >> + ADF_PF2VF_VERSION_RESP_VERS_SHIFT; + accel_dev->u1.vf.compatible = + (msg & ADF_PF2VF_VERSION_RESP_RESULT_MASK) >> + ADF_PF2VF_VERSION_RESP_RESULT_SHIFT; + accel_dev->u1.vf.iov_msg_completion = 1; + wakeup(&accel_dev->u1.vf.iov_msg_completion); + break; + case ADF_PF2VF_MSGTYPE_BLOCK_RESP: + is_notification = false; + accel_dev->u1.vf.pf2vf_block_byte = + (msg & ADF_PF2VF_BLOCK_RESP_DATA_MASK) >> + ADF_PF2VF_BLOCK_RESP_DATA_SHIFT; + accel_dev->u1.vf.pf2vf_block_resp_type = + (msg & ADF_PF2VF_BLOCK_RESP_TYPE_MASK) >> + ADF_PF2VF_BLOCK_RESP_TYPE_SHIFT; + accel_dev->u1.vf.iov_msg_completion = 1; + wakeup(&accel_dev->u1.vf.iov_msg_completion); + break; + case ADF_PF2VF_MSGTYPE_FATAL_ERROR: + device_printf(GET_DEV(accel_dev), + "Fatal error received from PF 0x%x\n", + msg); + is_notification = true; + if (adf_notify_fatal_error(accel_dev)) + device_printf(GET_DEV(accel_dev), + "Couldn't notify fatal error\n"); + break; + default: + device_printf(GET_DEV(accel_dev), 
+ "Unknown PF2VF message(0x%x)\n", + msg); + } + + /* To ack, clear the PF2VFINT bit */ + msg &= ~ADF_PF2VF_INT; + /* + * Clear the in-use pattern if the sender won't do it. + * Because the compatibility version must be the first message + * exchanged between the VF and PF, the pf.version must be + * set at this time. + * The in-use pattern is not cleared for notifications so that + * it can be used for collision detection. + */ + if (accel_dev->u1.vf.pf_version >= ADF_PFVF_COMPATIBILITY_FAST_ACK && + !is_notification) + msg &= ~ADF_PF2VF_IN_USE_BY_PF_MASK; + ADF_CSR_WR(pmisc_bar_addr, hw_data->get_pf2vf_offset(0), msg); + +out: + /* Re-enable PF2VF interrupts */ + adf_enable_pf2vf_interrupts(accel_dev); + return; +} + +static int +adf_setup_pf2vf_bh(struct adf_accel_dev *accel_dev) +{ + TASK_INIT(&accel_dev->u1.vf.pf2vf_bh_tasklet, + 0, + adf_pf2vf_bh_handler, + accel_dev); + mutex_init(&accel_dev->u1.vf.vf2pf_lock); + + return 0; +} + +static void +adf_cleanup_pf2vf_bh(struct adf_accel_dev *accel_dev) +{ + taskqueue_cancel(taskqueue_qat_vf, + &accel_dev->u1.vf.pf2vf_bh_tasklet, + NULL); + taskqueue_drain(taskqueue_qat_vf, &accel_dev->u1.vf.pf2vf_bh_tasklet); + mutex_destroy(&accel_dev->u1.vf.vf2pf_lock); +} + +static void +adf_isr(void *privdata) +{ + struct adf_accel_dev *accel_dev = privdata; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_bar *pmisc = + &GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)]; + struct resource *pmisc_bar_addr = pmisc->virt_addr; + u32 v_int, v_mask; + int handled = 0; + + /* Read VF INT source CSR to determine the source of VF interrupt */ + v_int = ADF_CSR_RD(pmisc_bar_addr, hw_data->get_vintsou_offset()); + v_mask = ADF_CSR_RD(pmisc_bar_addr, hw_data->get_vintmsk_offset(0)); + + /* Check for PF2VF interrupt */ + if ((v_int & ~v_mask) & ADF_VINTSOU_PF2VF) { + /* Disable PF to VF interrupt */ + adf_disable_pf2vf_interrupts(accel_dev); + + /* Schedule tasklet to handle interrupt BH */ + 
taskqueue_enqueue(taskqueue_qat_vf, + &accel_dev->u1.vf.pf2vf_bh_tasklet); + handled = 1; + } + + if ((v_int & ~v_mask) & ADF_VINTSOU_BUN) { + struct adf_etr_data *etr_data = accel_dev->transport; + struct adf_etr_bank_data *bank = &etr_data->banks[0]; + + /* Disable Flag and Coalesce Ring Interrupts */ + WRITE_CSR_INT_FLAG_AND_COL(bank->csr_addr, + bank->bank_number, + 0); + adf_response_handler((uintptr_t)&etr_data->banks[0]); + handled = 1; + } + + if (handled) + return; +} + +static int +adf_request_msi_irq(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_to_pci_dev(accel_dev); + int ret; + int rid = 1; + accel_dev->u1.vf.irq = + bus_alloc_resource_any(pdev, SYS_RES_IRQ, &rid, RF_ACTIVE); + if (accel_dev->u1.vf.irq == NULL) { + device_printf(GET_DEV(accel_dev), "failed to allocate IRQ\n"); + return ENXIO; + } + ret = bus_setup_intr(pdev, + accel_dev->u1.vf.irq, + INTR_TYPE_MISC | INTR_MPSAFE, + NULL, + adf_isr, + accel_dev, + &accel_dev->u1.vf.cookie); + if (ret) { + device_printf(GET_DEV(accel_dev), + "failed to enable irq for %s\n", + accel_dev->u1.vf.irq_name); + return ret; + } + return ret; +} + +static int +adf_setup_bh(struct adf_accel_dev *accel_dev) +{ + return 0; +} + +static void +adf_cleanup_bh(struct adf_accel_dev *accel_dev) +{ +} + +/** + * adf_vf_isr_resource_free() - Free IRQ for acceleration device + * @accel_dev: Pointer to acceleration device. + * + * Function frees interrupts for acceleration device virtual function. + */ +void +adf_vf_isr_resource_free(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_to_pci_dev(accel_dev); + bus_teardown_intr(pdev, accel_dev->u1.vf.irq, accel_dev->u1.vf.cookie); + bus_free_resource(pdev, SYS_RES_IRQ, accel_dev->u1.vf.irq); + adf_cleanup_bh(accel_dev); + adf_cleanup_pf2vf_bh(accel_dev); + adf_disable_msi(accel_dev); +} + +/** + * adf_vf_isr_resource_alloc() - Allocate IRQ for acceleration device + * @accel_dev: Pointer to acceleration device. 
+ * + * Function allocates interrupts for acceleration device virtual function. + * + * Return: 0 on success, error code otherwise. + */ +int +adf_vf_isr_resource_alloc(struct adf_accel_dev *accel_dev) +{ + if (adf_enable_msi(accel_dev)) + goto err_out; + + if (adf_setup_pf2vf_bh(accel_dev)) + goto err_out; + + if (adf_setup_bh(accel_dev)) + goto err_out; + + if (adf_request_msi_irq(accel_dev)) + goto err_out; + + return 0; +err_out: + adf_vf_isr_resource_free(accel_dev); + return EFAULT; +} + +/** + * adf_flush_vf_wq() - Flush workqueue for VF + * + * Function flushes the workqueue 'adf_vf_stop_wq' for VF. + * + * Return: void. + */ +void +adf_flush_vf_wq(void) +{ + if (adf_vf_stop_wq) + flush_workqueue(adf_vf_stop_wq); +} + +/** + * adf_init_vf_wq() - Init workqueue for VF + * + * Function initializes the workqueue 'adf_vf_stop_wq' for VF. + * + * Return: 0 on success, error code otherwise. + */ +int +adf_init_vf_wq(void) +{ + int ret = 0; + + mutex_lock(&vf_stop_wq_lock); + if (!adf_vf_stop_wq) + adf_vf_stop_wq = + alloc_workqueue("adf_vf_stop_wq", WQ_MEM_RECLAIM, 0); + + if (!adf_vf_stop_wq) + ret = ENOMEM; + + mutex_unlock(&vf_stop_wq_lock); + return ret; +} + +/** + * adf_exit_vf_wq() - Destroy workqueue for VF + * + * Function destroys the workqueue 'adf_vf_stop_wq' for VF. + * + * Return: void. 
+ */ +void +adf_exit_vf_wq(void) +{ + if (adf_vf_stop_wq) { + destroy_workqueue(adf_vf_stop_wq); + adf_vf_stop_wq = NULL; + } +} Index: sys/dev/qat/qat_common/qat_common_module.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/qat_common_module.c @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_common_drv.h" + +static int __init +qat_common_register(void) +{ + if (adf_init_aer()) + return EFAULT; + + if (adf_init_fatal_error_wq()) + return EFAULT; + + return 0; +} + +static void __exit +qat_common_unregister(void) +{ + adf_exit_vf_wq(); + adf_exit_aer(); + adf_exit_fatal_error_wq(); + adf_clean_vf_map(false); +} + +static int +qat_common_modevent(module_t mod, int type, void *data) +{ + switch (type) { + case MOD_LOAD: + return qat_common_register(); + case MOD_UNLOAD: + qat_common_unregister(); + return 0; + default: + return EOPNOTSUPP; + } +} + +static moduledata_t qat_common_mod = { "qat_common", qat_common_modevent, 0 }; + +DECLARE_MODULE(qat_common, qat_common_mod, SI_SUB_DRIVERS, SI_ORDER_FIRST); +MODULE_VERSION(qat_common, 1); +MODULE_DEPEND(qat_common, linuxkpi, 1, 1, 1); Index: sys/dev/qat/qat_common/qat_freebsd.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/qat_freebsd.c @@ -0,0 +1,135 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "icp_qat_uclo.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_init_admin.h" +#include "adf_cfg_strings.h" +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include +#include +#include +#include +#include +#include + +MALLOC_DEFINE(M_QAT, "qat", "qat"); + +struct 
bus_dma_mem_cb_data { + struct bus_dmamem *mem; + int error; +}; + +static void +bus_dma_mem_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error) +{ + struct bus_dma_mem_cb_data *d; + + d = arg; + d->error = error; + if (error) + return; + d->mem->dma_baddr = segs[0].ds_addr; +} + +int +bus_dma_mem_create(struct bus_dmamem *mem, + bus_dma_tag_t parent, + bus_size_t alignment, + bus_addr_t lowaddr, + bus_size_t len, + int flags) +{ + struct bus_dma_mem_cb_data d; + int error; + + bzero(mem, sizeof(*mem)); + error = bus_dma_tag_create(parent, + alignment, + 0, + lowaddr, + BUS_SPACE_MAXADDR, + NULL, + NULL, + len, + 1, + len, + 0, + NULL, + NULL, + &mem->dma_tag); + if (error) { + bus_dma_mem_free(mem); + return (error); + } + error = bus_dmamem_alloc(mem->dma_tag, + &mem->dma_vaddr, + flags, + &mem->dma_map); + if (error) { + bus_dma_mem_free(mem); + return (error); + } + d.mem = mem; + error = bus_dmamap_load(mem->dma_tag, + mem->dma_map, + mem->dma_vaddr, + len, + bus_dma_mem_cb, + &d, + BUS_DMA_NOWAIT); + if (error == 0) + error = d.error; + if (error) { + bus_dma_mem_free(mem); + return (error); + } + return (0); +} + +void +bus_dma_mem_free(struct bus_dmamem *mem) +{ + + if (mem->dma_baddr != 0) + bus_dmamap_unload(mem->dma_tag, mem->dma_map); + if (mem->dma_vaddr != NULL) + bus_dmamem_free(mem->dma_tag, mem->dma_vaddr, mem->dma_map); + if (mem->dma_tag != NULL) + bus_dma_tag_destroy(mem->dma_tag); + bzero(mem, sizeof(*mem)); +} + +device_t +pci_find_pf(device_t vf) +{ + return (NULL); +} + +int +pci_set_max_payload(device_t dev, int payload_size) +{ + const int packet_sizes[6] = { 128, 256, 512, 1024, 2048, 4096 }; + int cap_reg = 0, reg_value = 0, mask = 0; + + for (mask = 0; mask < 6; mask++) { + if (payload_size == packet_sizes[mask]) + break; + } + if (mask == 6) + return -1; + + if (pci_find_cap(dev, PCIY_EXPRESS, &cap_reg) != 0) + return -1; + + cap_reg += PCIER_DEVICE_CTL; /* Offset for Device Control Register. 
*/ + reg_value = pci_read_config(dev, cap_reg, 1); + reg_value = (reg_value & 0x1f) | (mask << 5); + pci_write_config(dev, cap_reg, reg_value, 1); + return 0; +} Index: sys/dev/qat/qat_common/qat_hal.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_common/qat_hal.c @@ -0,0 +1,1848 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "icp_qat_uclo.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_init_admin.h" +#include "adf_cfg_strings.h" +#include "adf_transport_access_macros.h" +#include "adf_transport_internal.h" +#include +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "icp_qat_hal.h" +#include "icp_qat_uclo.h" + +#define BAD_REGADDR 0xffff +#define MAX_RETRY_TIMES 1000000 +#define INIT_CTX_ARB_VALUE 0x0 +#define INIT_CTX_ENABLE_VALUE 0x0 +#define INIT_PC_VALUE 0x0 +#define INIT_WAKEUP_EVENTS_VALUE 0x1 +#define INIT_SIG_EVENTS_VALUE 0x1 +#define INIT_CCENABLE_VALUE 0x2000 +#define RST_CSR_QAT_LSB 20 +#define RST_CSR_AE_LSB 0 +#define MC_TIMESTAMP_ENABLE (0x1 << 7) + +#define IGNORE_W1C_MASK \ + ((~(1 << CE_BREAKPOINT_BITPOS)) & \ + (~(1 << CE_CNTL_STORE_PARITY_ERROR_BITPOS)) & \ + (~(1 << CE_REG_PAR_ERR_BITPOS))) +#define INSERT_IMMED_GPRA_CONST(inst, const_val) \ + (inst = ((inst & 0xFFFF00C03FFull) | \ + ((((const_val) << 12) & 0x0FF00000ull) | \ + (((const_val) << 10) & 0x0003FC00ull)))) +#define INSERT_IMMED_GPRB_CONST(inst, const_val) \ + (inst = ((inst & 0xFFFF00FFF00ull) | \ + ((((const_val) << 12) & 0x0FF00000ull) | \ + (((const_val) << 0) & 0x000000FFull)))) + +#define AE(handle, ae) ((handle)->hal_handle->aes[ae]) + +static const uint64_t inst_4b[] = { 0x0F0400C0000ull, 0x0F4400C0000ull, + 0x0F040000300ull, 0x0F440000300ull, + 0x0FC066C0000ull, 0x0F0000C0300ull, + 0x0F0000C0300ull, 0x0F0000C0300ull, 
+ 0x0A021000000ull }; + +static const uint64_t inst[] = { + 0x0F0000C0000ull, 0x0F000000380ull, 0x0D805000011ull, 0x0FC082C0300ull, + 0x0F0000C0300ull, 0x0F0000C0300ull, 0x0F0000C0300ull, 0x0F0000C0300ull, + 0x0A0643C0000ull, 0x0BAC0000301ull, 0x0D802000101ull, 0x0F0000C0001ull, + 0x0FC066C0001ull, 0x0F0000C0300ull, 0x0F0000C0300ull, 0x0F0000C0300ull, + 0x0F000400300ull, 0x0A0610C0000ull, 0x0BAC0000301ull, 0x0D804400101ull, + 0x0A0580C0000ull, 0x0A0581C0000ull, 0x0A0582C0000ull, 0x0A0583C0000ull, + 0x0A0584C0000ull, 0x0A0585C0000ull, 0x0A0586C0000ull, 0x0A0587C0000ull, + 0x0A0588C0000ull, 0x0A0589C0000ull, 0x0A058AC0000ull, 0x0A058BC0000ull, + 0x0A058CC0000ull, 0x0A058DC0000ull, 0x0A058EC0000ull, 0x0A058FC0000ull, + 0x0A05C0C0000ull, 0x0A05C1C0000ull, 0x0A05C2C0000ull, 0x0A05C3C0000ull, + 0x0A05C4C0000ull, 0x0A05C5C0000ull, 0x0A05C6C0000ull, 0x0A05C7C0000ull, + 0x0A05C8C0000ull, 0x0A05C9C0000ull, 0x0A05CAC0000ull, 0x0A05CBC0000ull, + 0x0A05CCC0000ull, 0x0A05CDC0000ull, 0x0A05CEC0000ull, 0x0A05CFC0000ull, + 0x0A0400C0000ull, 0x0B0400C0000ull, 0x0A0401C0000ull, 0x0B0401C0000ull, + 0x0A0402C0000ull, 0x0B0402C0000ull, 0x0A0403C0000ull, 0x0B0403C0000ull, + 0x0A0404C0000ull, 0x0B0404C0000ull, 0x0A0405C0000ull, 0x0B0405C0000ull, + 0x0A0406C0000ull, 0x0B0406C0000ull, 0x0A0407C0000ull, 0x0B0407C0000ull, + 0x0A0408C0000ull, 0x0B0408C0000ull, 0x0A0409C0000ull, 0x0B0409C0000ull, + 0x0A040AC0000ull, 0x0B040AC0000ull, 0x0A040BC0000ull, 0x0B040BC0000ull, + 0x0A040CC0000ull, 0x0B040CC0000ull, 0x0A040DC0000ull, 0x0B040DC0000ull, + 0x0A040EC0000ull, 0x0B040EC0000ull, 0x0A040FC0000ull, 0x0B040FC0000ull, + 0x0D81581C010ull, 0x0E000010000ull, 0x0E000010000ull, +}; + +void +qat_hal_set_live_ctx(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask) +{ + AE(handle, ae).live_ctx_mask = ctx_mask; +} + +#define CSR_RETRY_TIMES 500 +static int +qat_hal_rd_ae_csr(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int csr, + unsigned int 
*value) +{ + unsigned int iterations = CSR_RETRY_TIMES; + + do { + *value = GET_AE_CSR(handle, ae, csr); + if (!(GET_AE_CSR(handle, ae, LOCAL_CSR_STATUS) & LCS_STATUS)) + return 0; + } while (iterations--); + + pr_err("QAT: Read CSR timeout\n"); + return EFAULT; +} + +static int +qat_hal_wr_ae_csr(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int csr, + unsigned int value) +{ + unsigned int iterations = CSR_RETRY_TIMES; + + do { + SET_AE_CSR(handle, ae, csr, value); + if (!(GET_AE_CSR(handle, ae, LOCAL_CSR_STATUS) & LCS_STATUS)) + return 0; + } while (iterations--); + + pr_err("QAT: Write CSR Timeout\n"); + return EFAULT; +} + +static void +qat_hal_get_wakeup_event(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char ctx, + unsigned int *events) +{ + unsigned int cur_ctx; + + qat_hal_rd_ae_csr(handle, ae, CSR_CTX_POINTER, &cur_ctx); + qat_hal_wr_ae_csr(handle, ae, CSR_CTX_POINTER, ctx); + qat_hal_rd_ae_csr(handle, ae, CTX_WAKEUP_EVENTS_INDIRECT, events); + qat_hal_wr_ae_csr(handle, ae, CSR_CTX_POINTER, cur_ctx); +} + +static int +qat_hal_wait_cycles(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int cycles, + int chk_inactive) +{ + unsigned int base_cnt = 0, cur_cnt = 0; + unsigned int csr = (1 << ACS_ABO_BITPOS); + int times = MAX_RETRY_TIMES; + int elapsed_cycles = 0; + + qat_hal_rd_ae_csr(handle, ae, PROFILE_COUNT, &base_cnt); + base_cnt &= 0xffff; + while ((int)cycles > elapsed_cycles && times--) { + if (chk_inactive) + qat_hal_rd_ae_csr(handle, ae, ACTIVE_CTX_STATUS, &csr); + + qat_hal_rd_ae_csr(handle, ae, PROFILE_COUNT, &cur_cnt); + cur_cnt &= 0xffff; + elapsed_cycles = cur_cnt - base_cnt; + + if (elapsed_cycles < 0) + elapsed_cycles += 0x10000; + + /* ensure at least 8 time cycles elapsed in wait_cycles */ + if (elapsed_cycles >= 8 && !(csr & (1 << ACS_ABO_BITPOS))) + return 0; + } + if (times < 0) { + pr_err("QAT: wait_num_cycles time out\n"); + return EFAULT; + } + return 0; +} + 
+void +qat_hal_get_scs_neigh_ae(unsigned char ae, unsigned char *ae_neigh) +{ + *ae_neigh = (ae & 0x1) ? (ae - 1) : (ae + 1); +} + +#define CLR_BIT(wrd, bit) ((wrd) & ~(1 << (bit))) +#define SET_BIT(wrd, bit) ((wrd) | 1 << (bit)) + +int +qat_hal_set_ae_ctx_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char mode) +{ + unsigned int csr, new_csr; + + if (mode != 4 && mode != 8) { + pr_err("QAT: bad ctx mode=%d\n", mode); + return EINVAL; + } + + /* Sets the acceleration engine context mode to either four or eight */ + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &csr); + csr = IGNORE_W1C_MASK & csr; + new_csr = (mode == 4) ? SET_BIT(csr, CE_INUSE_CONTEXTS_BITPOS) : + CLR_BIT(csr, CE_INUSE_CONTEXTS_BITPOS); + qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, new_csr); + return 0; +} + +int +qat_hal_set_ae_nn_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char mode) +{ + unsigned int csr, new_csr; + + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &csr); + csr &= IGNORE_W1C_MASK; + + new_csr = (mode) ? SET_BIT(csr, CE_NN_MODE_BITPOS) : + CLR_BIT(csr, CE_NN_MODE_BITPOS); + + if (new_csr != csr) + qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, new_csr); + + return 0; +} + +int +qat_hal_set_ae_lm_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + enum icp_qat_uof_regtype lm_type, + unsigned char mode) +{ + unsigned int csr, new_csr; + + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &csr); + csr &= IGNORE_W1C_MASK; + switch (lm_type) { + case ICP_LMEM0: + new_csr = (mode) ? SET_BIT(csr, CE_LMADDR_0_GLOBAL_BITPOS) : + CLR_BIT(csr, CE_LMADDR_0_GLOBAL_BITPOS); + break; + case ICP_LMEM1: + new_csr = (mode) ? SET_BIT(csr, CE_LMADDR_1_GLOBAL_BITPOS) : + CLR_BIT(csr, CE_LMADDR_1_GLOBAL_BITPOS); + break; + case ICP_LMEM2: + new_csr = (mode) ? SET_BIT(csr, CE_LMADDR_2_GLOBAL_BITPOS) : + CLR_BIT(csr, CE_LMADDR_2_GLOBAL_BITPOS); + break; + case ICP_LMEM3: + new_csr = (mode) ? 
SET_BIT(csr, CE_LMADDR_3_GLOBAL_BITPOS) : + CLR_BIT(csr, CE_LMADDR_3_GLOBAL_BITPOS); + break; + default: + pr_err("QAT: lmType = 0x%x\n", lm_type); + return EINVAL; + } + + if (new_csr != csr) + qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, new_csr); + return 0; +} + +void +qat_hal_set_ae_tindex_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char mode) +{ + unsigned int csr, new_csr; + + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &csr); + csr &= IGNORE_W1C_MASK; + new_csr = (mode) ? SET_BIT(csr, CE_T_INDEX_GLOBAL_BITPOS) : + CLR_BIT(csr, CE_T_INDEX_GLOBAL_BITPOS); + if (new_csr != csr) + qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, new_csr); +} + +void +qat_hal_set_ae_scs_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char mode) +{ + unsigned int csr, new_csr; + + qat_hal_rd_ae_csr(handle, ae, AE_MISC_CONTROL, &csr); + new_csr = (mode) ? SET_BIT(csr, MMC_SHARE_CS_BITPOS) : + CLR_BIT(csr, MMC_SHARE_CS_BITPOS); + if (new_csr != csr) + qat_hal_wr_ae_csr(handle, ae, AE_MISC_CONTROL, new_csr); +} + +static unsigned short +qat_hal_get_reg_addr(unsigned int type, unsigned short reg_num) +{ + unsigned short reg_addr; + + switch (type) { + case ICP_GPA_ABS: + case ICP_GPB_ABS: + reg_addr = 0x80 | (reg_num & 0x7f); + break; + case ICP_GPA_REL: + case ICP_GPB_REL: + reg_addr = reg_num & 0x1f; + break; + case ICP_SR_RD_REL: + case ICP_SR_WR_REL: + case ICP_SR_REL: + reg_addr = 0x180 | (reg_num & 0x1f); + break; + case ICP_SR_ABS: + reg_addr = 0x140 | ((reg_num & 0x3) << 1); + break; + case ICP_DR_RD_REL: + case ICP_DR_WR_REL: + case ICP_DR_REL: + reg_addr = 0x1c0 | (reg_num & 0x1f); + break; + case ICP_DR_ABS: + reg_addr = 0x100 | ((reg_num & 0x3) << 1); + break; + case ICP_NEIGH_REL: + reg_addr = 0x280 | (reg_num & 0x1f); + break; + case ICP_LMEM0: + reg_addr = 0x200; + break; + case ICP_LMEM1: + reg_addr = 0x220; + break; + case ICP_LMEM2: + reg_addr = 0x2c0; + break; + case ICP_LMEM3: + reg_addr = 0x2e0; + break; + case 
ICP_NO_DEST: + reg_addr = 0x300 | (reg_num & 0xff); + break; + default: + reg_addr = BAD_REGADDR; + break; + } + return reg_addr; +} + +void +qat_hal_reset(struct icp_qat_fw_loader_handle *handle) +{ + unsigned int ae_reset_csr[MAX_CPP_NUM]; + unsigned int ae_reset_val[MAX_CPP_NUM]; + unsigned int valid_ae_mask, valid_slice_mask; + unsigned int cpp_num = 1; + unsigned int i; + + if (IS_QAT_GEN3(pci_get_device(GET_DEV(handle->accel_dev)))) { + ae_reset_csr[0] = ICP_RESET_CPP0; + ae_reset_csr[1] = ICP_RESET_CPP1; + if (handle->hal_handle->ae_mask > 0xffff) + ++cpp_num; + } else { + ae_reset_csr[0] = ICP_RESET; + } + + for (i = 0; i < cpp_num; i++) { + if (i == 0) { + valid_ae_mask = handle->hal_handle->ae_mask & 0xFFFF; + valid_slice_mask = + handle->hal_handle->slice_mask & 0x3F; + } else { + valid_ae_mask = + (handle->hal_handle->ae_mask >> AES_PER_CPP) & + 0xFFFF; + valid_slice_mask = + (handle->hal_handle->slice_mask >> SLICES_PER_CPP) & + 0x3F; + } + + ae_reset_val[i] = GET_GLB_CSR(handle, ae_reset_csr[i]); + ae_reset_val[i] |= valid_ae_mask << RST_CSR_AE_LSB; + ae_reset_val[i] |= valid_slice_mask << RST_CSR_QAT_LSB; + SET_GLB_CSR(handle, ae_reset_csr[i], ae_reset_val[i]); + } +} + +static void +qat_hal_wr_indr_csr(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask, + unsigned int ae_csr, + unsigned int csr_val) +{ + unsigned int ctx, cur_ctx; + + qat_hal_rd_ae_csr(handle, ae, CSR_CTX_POINTER, &cur_ctx); + + for (ctx = 0; ctx < ICP_QAT_UCLO_MAX_CTX; ctx++) { + if (!(ctx_mask & (1 << ctx))) + continue; + qat_hal_wr_ae_csr(handle, ae, CSR_CTX_POINTER, ctx); + qat_hal_wr_ae_csr(handle, ae, ae_csr, csr_val); + } + + qat_hal_wr_ae_csr(handle, ae, CSR_CTX_POINTER, cur_ctx); +} + +static void +qat_hal_rd_indr_csr(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char ctx, + unsigned int ae_csr, + unsigned int *csr_val) +{ + unsigned int cur_ctx; + + qat_hal_rd_ae_csr(handle, ae, CSR_CTX_POINTER, &cur_ctx); + 
qat_hal_wr_ae_csr(handle, ae, CSR_CTX_POINTER, ctx); + qat_hal_rd_ae_csr(handle, ae, ae_csr, csr_val); + qat_hal_wr_ae_csr(handle, ae, CSR_CTX_POINTER, cur_ctx); +} + +static void +qat_hal_put_sig_event(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask, + unsigned int events) +{ + unsigned int ctx, cur_ctx; + + qat_hal_rd_ae_csr(handle, ae, CSR_CTX_POINTER, &cur_ctx); + for (ctx = 0; ctx < ICP_QAT_UCLO_MAX_CTX; ctx++) { + if (!(ctx_mask & (1 << ctx))) + continue; + qat_hal_wr_ae_csr(handle, ae, CSR_CTX_POINTER, ctx); + qat_hal_wr_ae_csr(handle, ae, CTX_SIG_EVENTS_INDIRECT, events); + } + qat_hal_wr_ae_csr(handle, ae, CSR_CTX_POINTER, cur_ctx); +} + +static void +qat_hal_put_wakeup_event(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask, + unsigned int events) +{ + unsigned int ctx, cur_ctx; + + qat_hal_rd_ae_csr(handle, ae, CSR_CTX_POINTER, &cur_ctx); + for (ctx = 0; ctx < ICP_QAT_UCLO_MAX_CTX; ctx++) { + if (!(ctx_mask & (1 << ctx))) + continue; + qat_hal_wr_ae_csr(handle, ae, CSR_CTX_POINTER, ctx); + qat_hal_wr_ae_csr(handle, + ae, + CTX_WAKEUP_EVENTS_INDIRECT, + events); + } + qat_hal_wr_ae_csr(handle, ae, CSR_CTX_POINTER, cur_ctx); +} + +static int +qat_hal_check_ae_alive(struct icp_qat_fw_loader_handle *handle) +{ + unsigned int base_cnt, cur_cnt; + unsigned char ae; + unsigned long ae_mask = handle->hal_handle->ae_mask; + int times = MAX_RETRY_TIMES; + + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) + { + qat_hal_rd_ae_csr(handle, + ae, + PROFILE_COUNT, + (unsigned int *)&base_cnt); + base_cnt &= 0xffff; + + do { + qat_hal_rd_ae_csr(handle, + ae, + PROFILE_COUNT, + (unsigned int *)&cur_cnt); + cur_cnt &= 0xffff; + } while (times-- && (cur_cnt == base_cnt)); + + if (times < 0) { + pr_err("QAT: AE%d is inactive!!\n", ae); + return EFAULT; + } + } + + return 0; +} + +int +qat_hal_check_ae_active(struct icp_qat_fw_loader_handle *handle, + unsigned int ae) +{ + unsigned int 
enable = 0, active = 0; + + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &enable); + qat_hal_rd_ae_csr(handle, ae, ACTIVE_CTX_STATUS, &active); + if ((enable & (0xff << CE_ENABLE_BITPOS)) || + (active & (1 << ACS_ABO_BITPOS))) + return 1; + else + return 0; +} + +static void +qat_hal_reset_timestamp(struct icp_qat_fw_loader_handle *handle) +{ + unsigned int misc_ctl_csr, misc_ctl; + unsigned char ae; + unsigned long ae_mask = handle->hal_handle->ae_mask; + + misc_ctl_csr = + (IS_QAT_GEN3(pci_get_device(GET_DEV(handle->accel_dev)))) ? + MISC_CONTROL_C4XXX : + MISC_CONTROL; + /* stop the timestamp timers */ + misc_ctl = GET_GLB_CSR(handle, misc_ctl_csr); + if (misc_ctl & MC_TIMESTAMP_ENABLE) + SET_GLB_CSR(handle, + misc_ctl_csr, + misc_ctl & (~MC_TIMESTAMP_ENABLE)); + + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) + { + qat_hal_wr_ae_csr(handle, ae, TIMESTAMP_LOW, 0); + qat_hal_wr_ae_csr(handle, ae, TIMESTAMP_HIGH, 0); + } + /* start timestamp timers */ + SET_GLB_CSR(handle, misc_ctl_csr, misc_ctl | MC_TIMESTAMP_ENABLE); +} + +#define ESRAM_AUTO_TINIT BIT(2) +#define ESRAM_AUTO_TINIT_DONE BIT(3) +#define ESRAM_AUTO_INIT_USED_CYCLES (1640) +#define ESRAM_AUTO_INIT_CSR_OFFSET 0xC1C + +static int +qat_hal_init_esram(struct icp_qat_fw_loader_handle *handle) +{ + uintptr_t csr_addr = + ((uintptr_t)handle->hal_ep_csr_addr_v + ESRAM_AUTO_INIT_CSR_OFFSET); + unsigned int csr_val; + int times = 30; + + if (pci_get_device(GET_DEV(handle->accel_dev)) != + ADF_DH895XCC_PCI_DEVICE_ID) + return 0; + + csr_val = ADF_CSR_RD(handle->hal_misc_addr_v, csr_addr); + if ((csr_val & ESRAM_AUTO_TINIT) && (csr_val & ESRAM_AUTO_TINIT_DONE)) + return 0; + csr_val = ADF_CSR_RD(handle->hal_misc_addr_v, csr_addr); + csr_val |= ESRAM_AUTO_TINIT; + + ADF_CSR_WR(handle->hal_misc_addr_v, csr_addr, csr_val); + do { + qat_hal_wait_cycles(handle, 0, ESRAM_AUTO_INIT_USED_CYCLES, 0); + csr_val = ADF_CSR_RD(handle->hal_misc_addr_v, csr_addr); + + } while (!(csr_val & ESRAM_AUTO_TINIT_DONE) 
&& times--); + if (times < 0) { + pr_err("QAT: Fail to init eSram!\n"); + return EFAULT; + } + return 0; +} + +#define SHRAM_INIT_CYCLES 2060 +int +qat_hal_clr_reset(struct icp_qat_fw_loader_handle *handle) +{ + unsigned int ae_reset_csr[MAX_CPP_NUM]; + unsigned int ae_reset_val[MAX_CPP_NUM]; + unsigned int cpp_num = 1; + unsigned int valid_ae_mask, valid_slice_mask; + unsigned char ae; + unsigned int i; + unsigned int clk_csr[MAX_CPP_NUM]; + unsigned int clk_val[MAX_CPP_NUM]; + unsigned int times = 100; + unsigned long ae_mask = handle->hal_handle->ae_mask; + + if (IS_QAT_GEN3(pci_get_device(GET_DEV(handle->accel_dev)))) { + ae_reset_csr[0] = ICP_RESET_CPP0; + ae_reset_csr[1] = ICP_RESET_CPP1; + clk_csr[0] = ICP_GLOBAL_CLK_ENABLE_CPP0; + clk_csr[1] = ICP_GLOBAL_CLK_ENABLE_CPP1; + if (handle->hal_handle->ae_mask > 0xffff) + ++cpp_num; + } else { + ae_reset_csr[0] = ICP_RESET; + clk_csr[0] = ICP_GLOBAL_CLK_ENABLE; + } + + for (i = 0; i < cpp_num; i++) { + if (i == 0) { + valid_ae_mask = handle->hal_handle->ae_mask & 0xFFFF; + valid_slice_mask = + handle->hal_handle->slice_mask & 0x3F; + } else { + valid_ae_mask = + (handle->hal_handle->ae_mask >> AES_PER_CPP) & + 0xFFFF; + valid_slice_mask = + (handle->hal_handle->slice_mask >> SLICES_PER_CPP) & + 0x3F; + } + /* write to the reset csr */ + ae_reset_val[i] = GET_GLB_CSR(handle, ae_reset_csr[i]); + ae_reset_val[i] &= ~(valid_ae_mask << RST_CSR_AE_LSB); + ae_reset_val[i] &= ~(valid_slice_mask << RST_CSR_QAT_LSB); + do { + SET_GLB_CSR(handle, ae_reset_csr[i], ae_reset_val[i]); + if (!(times--)) + goto out_err; + ae_reset_val[i] = GET_GLB_CSR(handle, ae_reset_csr[i]); + } while ( + (valid_ae_mask | (valid_slice_mask << RST_CSR_QAT_LSB)) & + ae_reset_val[i]); + /* enable clock */ + clk_val[i] = GET_GLB_CSR(handle, clk_csr[i]); + clk_val[i] |= valid_ae_mask << 0; + clk_val[i] |= valid_slice_mask << 20; + SET_GLB_CSR(handle, clk_csr[i], clk_val[i]); + } + if (qat_hal_check_ae_alive(handle)) + goto out_err; + + /* Set 
undefined power-up/reset states to reasonable default values */ + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) + { + qat_hal_wr_ae_csr(handle, + ae, + CTX_ENABLES, + INIT_CTX_ENABLE_VALUE); + qat_hal_wr_indr_csr(handle, + ae, + ICP_QAT_UCLO_AE_ALL_CTX, + CTX_STS_INDIRECT, + handle->hal_handle->upc_mask & + INIT_PC_VALUE); + qat_hal_wr_ae_csr(handle, ae, CTX_ARB_CNTL, INIT_CTX_ARB_VALUE); + qat_hal_wr_ae_csr(handle, ae, CC_ENABLE, INIT_CCENABLE_VALUE); + qat_hal_put_wakeup_event(handle, + ae, + ICP_QAT_UCLO_AE_ALL_CTX, + INIT_WAKEUP_EVENTS_VALUE); + qat_hal_put_sig_event(handle, + ae, + ICP_QAT_UCLO_AE_ALL_CTX, + INIT_SIG_EVENTS_VALUE); + } + if (qat_hal_init_esram(handle)) + goto out_err; + if (qat_hal_wait_cycles(handle, 0, SHRAM_INIT_CYCLES, 0)) + goto out_err; + qat_hal_reset_timestamp(handle); + + return 0; +out_err: + pr_err("QAT: failed to get device out of reset\n"); + return EFAULT; +} + +static void +qat_hal_disable_ctx(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask) +{ + unsigned int ctx; + + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &ctx); + ctx &= IGNORE_W1C_MASK & + (~((ctx_mask & ICP_QAT_UCLO_AE_ALL_CTX) << CE_ENABLE_BITPOS)); + qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, ctx); +} + +static uint64_t +qat_hal_parity_64bit(uint64_t word) +{ + word ^= word >> 1; + word ^= word >> 2; + word ^= word >> 4; + word ^= word >> 8; + word ^= word >> 16; + word ^= word >> 32; + return word & 1; +} + +static uint64_t +qat_hal_set_uword_ecc(uint64_t uword) +{ + uint64_t bit0_mask = 0xff800007fffULL, bit1_mask = 0x1f801ff801fULL, + bit2_mask = 0xe387e0781e1ULL, bit3_mask = 0x7cb8e388e22ULL, + bit4_mask = 0xaf5b2c93244ULL, bit5_mask = 0xf56d5525488ULL, + bit6_mask = 0xdaf69a46910ULL; + + /* clear the ecc bits */ + uword &= ~(0x7fULL << 0x2C); + uword |= qat_hal_parity_64bit(bit0_mask & uword) << 0x2C; + uword |= qat_hal_parity_64bit(bit1_mask & uword) << 0x2D; + uword |= qat_hal_parity_64bit(bit2_mask & uword) 
<< 0x2E; + uword |= qat_hal_parity_64bit(bit3_mask & uword) << 0x2F; + uword |= qat_hal_parity_64bit(bit4_mask & uword) << 0x30; + uword |= qat_hal_parity_64bit(bit5_mask & uword) << 0x31; + uword |= qat_hal_parity_64bit(bit6_mask & uword) << 0x32; + return uword; +} + +void +qat_hal_wr_uwords(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int uaddr, + unsigned int words_num, + const uint64_t *uword) +{ + unsigned int ustore_addr; + unsigned int i; + + qat_hal_rd_ae_csr(handle, ae, USTORE_ADDRESS, &ustore_addr); + uaddr |= UA_ECS; + qat_hal_wr_ae_csr(handle, ae, USTORE_ADDRESS, uaddr); + for (i = 0; i < words_num; i++) { + unsigned int uwrd_lo, uwrd_hi; + uint64_t tmp; + + tmp = qat_hal_set_uword_ecc(uword[i]); + uwrd_lo = (unsigned int)(tmp & 0xffffffff); + uwrd_hi = (unsigned int)(tmp >> 0x20); + qat_hal_wr_ae_csr(handle, ae, USTORE_DATA_LOWER, uwrd_lo); + qat_hal_wr_ae_csr(handle, ae, USTORE_DATA_UPPER, uwrd_hi); + } + qat_hal_wr_ae_csr(handle, ae, USTORE_ADDRESS, ustore_addr); +} + +void +qat_hal_wr_coalesce_uwords(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int uaddr, + unsigned int words_num, + u64 *uword) +{ + u64 *even_uwords, *odd_uwords; + unsigned char neigh_ae, odd_ae, even_ae; + int i, even_cpy_cnt = 0, odd_cpy_cnt = 0; + + even_uwords = + malloc(16 * 1024 * sizeof(*uword), M_QAT, M_WAITOK | M_ZERO); + odd_uwords = + malloc(16 * 1024 * sizeof(*uword), M_QAT, M_WAITOK | M_ZERO); + qat_hal_get_scs_neigh_ae(ae, &neigh_ae); + if (ae & 1) { + odd_ae = ae; + even_ae = neigh_ae; + } else { + odd_ae = neigh_ae; + even_ae = ae; + } + for (i = 0; i < words_num; i++) { + if ((uaddr + i) & 1) + odd_uwords[odd_cpy_cnt++] = uword[i]; + else + even_uwords[even_cpy_cnt++] = uword[i]; + } + if (even_cpy_cnt) + qat_hal_wr_uwords(handle, + even_ae, + (uaddr + 1) / 2, + even_cpy_cnt, + even_uwords); + if (odd_cpy_cnt) + qat_hal_wr_uwords( + handle, odd_ae, uaddr / 2, odd_cpy_cnt, odd_uwords); + free(even_uwords, M_QAT); 
+ free(odd_uwords, M_QAT); +} + +static void +qat_hal_enable_ctx(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask) +{ + unsigned int ctx; + + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &ctx); + ctx &= IGNORE_W1C_MASK; + ctx_mask &= (ctx & CE_INUSE_CONTEXTS) ? 0x55 : 0xFF; + ctx |= (ctx_mask << CE_ENABLE_BITPOS); + qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, ctx); +} + +static void +qat_hal_clear_xfer(struct icp_qat_fw_loader_handle *handle) +{ + unsigned char ae; + unsigned short reg; + unsigned long ae_mask = handle->hal_handle->ae_mask; + + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) + { + for (reg = 0; reg < ICP_QAT_UCLO_MAX_GPR_REG; reg++) { + qat_hal_init_rd_xfer( + handle, ae, 0, ICP_SR_RD_ABS, reg, 0); + qat_hal_init_rd_xfer( + handle, ae, 0, ICP_DR_RD_ABS, reg, 0); + } + } +} + +static int +qat_hal_clear_gpr(struct icp_qat_fw_loader_handle *handle) +{ + unsigned char ae; + unsigned int ctx_mask = ICP_QAT_UCLO_AE_ALL_CTX; + int times = MAX_RETRY_TIMES; + unsigned int csr_val = 0; + unsigned int savctx = 0; + unsigned int scs_flag = 0; + unsigned long ae_mask = handle->hal_handle->ae_mask; + int ret = 0; + + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) + { + qat_hal_rd_ae_csr(handle, ae, AE_MISC_CONTROL, &csr_val); + scs_flag = csr_val & (1 << MMC_SHARE_CS_BITPOS); + csr_val &= ~(1 << MMC_SHARE_CS_BITPOS); + qat_hal_wr_ae_csr(handle, ae, AE_MISC_CONTROL, csr_val); + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &csr_val); + csr_val &= IGNORE_W1C_MASK; + csr_val |= CE_NN_MODE; + qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, csr_val); + qat_hal_wr_uwords( + handle, ae, 0, ARRAY_SIZE(inst), (const uint64_t *)inst); + qat_hal_wr_indr_csr(handle, + ae, + ctx_mask, + CTX_STS_INDIRECT, + handle->hal_handle->upc_mask & + INIT_PC_VALUE); + qat_hal_rd_ae_csr(handle, ae, ACTIVE_CTX_STATUS, &savctx); + qat_hal_wr_ae_csr(handle, ae, ACTIVE_CTX_STATUS, 0); + qat_hal_put_wakeup_event(handle, ae, ctx_mask, 
XCWE_VOLUNTARY); + qat_hal_wr_indr_csr( + handle, ae, ctx_mask, CTX_SIG_EVENTS_INDIRECT, 0); + qat_hal_wr_ae_csr(handle, ae, CTX_SIG_EVENTS_ACTIVE, 0); + qat_hal_enable_ctx(handle, ae, ctx_mask); + } + + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) + { + /* wait for AE to finish */ + do { + ret = qat_hal_wait_cycles(handle, ae, 20, 1); + } while (ret && times--); + + if (times < 0) { + pr_err("QAT: clear GPR of AE %d failed", ae); + return EINVAL; + } + qat_hal_disable_ctx(handle, ae, ctx_mask); + qat_hal_rd_ae_csr(handle, ae, AE_MISC_CONTROL, &csr_val); + if (scs_flag) + csr_val |= (1 << MMC_SHARE_CS_BITPOS); + qat_hal_wr_ae_csr(handle, ae, AE_MISC_CONTROL, csr_val); + qat_hal_wr_ae_csr(handle, + ae, + ACTIVE_CTX_STATUS, + savctx & ACS_ACNO); + qat_hal_wr_ae_csr(handle, + ae, + CTX_ENABLES, + INIT_CTX_ENABLE_VALUE); + qat_hal_wr_indr_csr(handle, + ae, + ctx_mask, + CTX_STS_INDIRECT, + handle->hal_handle->upc_mask & + INIT_PC_VALUE); + qat_hal_wr_ae_csr(handle, ae, CTX_ARB_CNTL, INIT_CTX_ARB_VALUE); + qat_hal_wr_ae_csr(handle, ae, CC_ENABLE, INIT_CCENABLE_VALUE); + qat_hal_put_wakeup_event(handle, + ae, + ctx_mask, + INIT_WAKEUP_EVENTS_VALUE); + qat_hal_put_sig_event(handle, + ae, + ctx_mask, + INIT_SIG_EVENTS_VALUE); + } + return 0; +} + +static int +qat_hal_check_imr(struct icp_qat_fw_loader_handle *handle) +{ + device_t dev = accel_to_pci_dev(handle->accel_dev); + u8 reg_val = 0; + + if (pci_get_device(GET_DEV(handle->accel_dev)) != + ADF_C3XXX_PCI_DEVICE_ID && + pci_get_device(GET_DEV(handle->accel_dev)) != + ADF_200XX_PCI_DEVICE_ID) + return 0; + + reg_val = pci_read_config(dev, 0x04, 1); + /* + * PCI command register memory bit and rambaseaddr_lo address + * are checked to confirm IMR2 is enabled in BIOS settings + */ + if ((reg_val & 0x2) && GET_FCU_CSR(handle, FCU_RAMBASE_ADDR_LO)) + return 0; + + return EINVAL; +} + +int +qat_hal_init(struct adf_accel_dev *accel_dev) +{ + unsigned char ae; + unsigned int cap_offset, ae_offset, ep_offset; 
+ unsigned int sram_offset = 0; + unsigned int max_en_ae_id = 0; + int ret = 0; + unsigned long ae_mask; + struct icp_qat_fw_loader_handle *handle; + if (!accel_dev) { + return EFAULT; + } + struct adf_accel_pci *pci_info = &accel_dev->accel_pci_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_bar *misc_bar = + &pci_info->pci_bars[hw_data->get_misc_bar_id(hw_data)]; + struct adf_bar *sram_bar; + + handle = malloc(sizeof(*handle), M_QAT, M_WAITOK | M_ZERO); + + handle->hal_misc_addr_v = misc_bar->virt_addr; + handle->accel_dev = accel_dev; + if (pci_get_device(GET_DEV(handle->accel_dev)) == + ADF_DH895XCC_PCI_DEVICE_ID || + IS_QAT_GEN3(pci_get_device(GET_DEV(handle->accel_dev)))) { + sram_bar = + &pci_info->pci_bars[hw_data->get_sram_bar_id(hw_data)]; + if (IS_QAT_GEN3(pci_get_device(GET_DEV(handle->accel_dev)))) + sram_offset = + 0x400000 + accel_dev->aram_info->mmp_region_offset; + handle->hal_sram_addr_v = sram_bar->virt_addr; + handle->hal_sram_offset = sram_offset; + handle->hal_sram_size = sram_bar->size; + } + GET_CSR_OFFSET(pci_get_device(GET_DEV(handle->accel_dev)), + cap_offset, + ae_offset, + ep_offset); + handle->hal_cap_g_ctl_csr_addr_v = cap_offset; + handle->hal_cap_ae_xfer_csr_addr_v = ae_offset; + handle->hal_ep_csr_addr_v = ep_offset; + handle->hal_cap_ae_local_csr_addr_v = + ((uintptr_t)handle->hal_cap_ae_xfer_csr_addr_v + + LOCAL_TO_XFER_REG_OFFSET); + handle->fw_auth = (pci_get_device(GET_DEV(handle->accel_dev)) == + ADF_DH895XCC_PCI_DEVICE_ID) ? 
+ false : + true; + if (handle->fw_auth && qat_hal_check_imr(handle)) { + device_printf(GET_DEV(accel_dev), "IMR2 not enabled in BIOS\n"); + ret = EINVAL; + goto out_hal_handle; + } + + handle->hal_handle = + malloc(sizeof(*handle->hal_handle), M_QAT, M_WAITOK | M_ZERO); + handle->hal_handle->revision_id = accel_dev->accel_pci_dev.revid; + handle->hal_handle->ae_mask = hw_data->ae_mask; + handle->hal_handle->slice_mask = hw_data->accel_mask; + handle->cfg_ae_mask = 0xFFFFFFFF; + /* create AE objects */ + if (IS_QAT_GEN3(pci_get_device(GET_DEV(handle->accel_dev)))) { + handle->hal_handle->upc_mask = 0xffff; + handle->hal_handle->max_ustore = 0x2000; + } else { + handle->hal_handle->upc_mask = 0x1ffff; + handle->hal_handle->max_ustore = 0x4000; + } + + ae_mask = hw_data->ae_mask; + + for_each_set_bit(ae, &ae_mask, ICP_QAT_UCLO_MAX_AE) + { + handle->hal_handle->aes[ae].free_addr = 0; + handle->hal_handle->aes[ae].free_size = + handle->hal_handle->max_ustore; + handle->hal_handle->aes[ae].ustore_size = + handle->hal_handle->max_ustore; + handle->hal_handle->aes[ae].live_ctx_mask = + ICP_QAT_UCLO_AE_ALL_CTX; + max_en_ae_id = ae; + } + handle->hal_handle->ae_max_num = max_en_ae_id + 1; + /* take all AEs out of reset */ + if (qat_hal_clr_reset(handle)) { + device_printf(GET_DEV(accel_dev), "qat_hal_clr_reset error\n"); + ret = EIO; + goto out_err; + } + qat_hal_clear_xfer(handle); + if (!handle->fw_auth) { + if (qat_hal_clear_gpr(handle)) { + ret = EIO; + goto out_err; + } + } + + /* Set SIGNATURE_ENABLE[0] to 0x1 in order to enable ALU_OUT csr */ + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) + { + unsigned int csr_val = 0; + + qat_hal_rd_ae_csr(handle, ae, SIGNATURE_ENABLE, &csr_val); + csr_val |= 0x1; + qat_hal_wr_ae_csr(handle, ae, SIGNATURE_ENABLE, csr_val); + } + accel_dev->fw_loader->fw_loader = handle; + return 0; + +out_err: + free(handle->hal_handle, M_QAT); +out_hal_handle: + free(handle, M_QAT); + return ret; +} + +void +qat_hal_deinit(struct 
icp_qat_fw_loader_handle *handle) +{ + if (!handle) + return; + free(handle->hal_handle, M_QAT); + free(handle, M_QAT); +} + +void +qat_hal_start(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask) +{ + int retry = 0; + unsigned int fcu_sts = 0; + unsigned int fcu_ctl_csr, fcu_sts_csr; + + if (handle->fw_auth) { + if (IS_QAT_GEN3(pci_get_device(GET_DEV(handle->accel_dev)))) { + fcu_ctl_csr = FCU_CONTROL_C4XXX; + fcu_sts_csr = FCU_STATUS_C4XXX; + + } else { + fcu_ctl_csr = FCU_CONTROL; + fcu_sts_csr = FCU_STATUS; + } + SET_FCU_CSR(handle, fcu_ctl_csr, FCU_CTRL_CMD_START); + do { + pause_ms("adfstop", FW_AUTH_WAIT_PERIOD); + fcu_sts = GET_FCU_CSR(handle, fcu_sts_csr); + if (((fcu_sts >> FCU_STS_DONE_POS) & 0x1)) + return; + } while (retry++ < FW_AUTH_MAX_RETRY); + pr_err("QAT: start error (AE 0x%x FCU_STS = 0x%x)\n", + ae, + fcu_sts); + } else { + qat_hal_put_wakeup_event(handle, + ae, + (~ctx_mask) & ICP_QAT_UCLO_AE_ALL_CTX, + 0x10000); + qat_hal_enable_ctx(handle, ae, ctx_mask); + } +} + +void +qat_hal_stop(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask) +{ + if (!handle->fw_auth) + qat_hal_disable_ctx(handle, ae, ctx_mask); +} + +void +qat_hal_set_pc(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask, + unsigned int upc) +{ + qat_hal_wr_indr_csr(handle, + ae, + ctx_mask, + CTX_STS_INDIRECT, + handle->hal_handle->upc_mask & upc); +} + +static void +qat_hal_get_uwords(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int uaddr, + unsigned int words_num, + uint64_t *uword) +{ + unsigned int i, uwrd_lo, uwrd_hi; + unsigned int ustore_addr, misc_control; + unsigned int scs_flag = 0; + + qat_hal_rd_ae_csr(handle, ae, AE_MISC_CONTROL, &misc_control); + scs_flag = misc_control & (0x1 << MMC_SHARE_CS_BITPOS); + /*disable scs*/ + qat_hal_wr_ae_csr(handle, + ae, + AE_MISC_CONTROL, + misc_control & 0xfffffffb); + qat_hal_rd_ae_csr(handle, ae, 
USTORE_ADDRESS, &ustore_addr); + uaddr |= UA_ECS; + for (i = 0; i < words_num; i++) { + qat_hal_wr_ae_csr(handle, ae, USTORE_ADDRESS, uaddr); + uaddr++; + qat_hal_rd_ae_csr(handle, ae, USTORE_DATA_LOWER, &uwrd_lo); + qat_hal_rd_ae_csr(handle, ae, USTORE_DATA_UPPER, &uwrd_hi); + uword[i] = uwrd_hi; + uword[i] = (uword[i] << 0x20) | uwrd_lo; + } + if (scs_flag) + misc_control |= (0x1 << MMC_SHARE_CS_BITPOS); + qat_hal_wr_ae_csr(handle, ae, AE_MISC_CONTROL, misc_control); + qat_hal_wr_ae_csr(handle, ae, USTORE_ADDRESS, ustore_addr); +} + +void +qat_hal_wr_umem(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int uaddr, + unsigned int words_num, + unsigned int *data) +{ + unsigned int i, ustore_addr; + + qat_hal_rd_ae_csr(handle, ae, USTORE_ADDRESS, &ustore_addr); + uaddr |= UA_ECS; + qat_hal_wr_ae_csr(handle, ae, USTORE_ADDRESS, uaddr); + for (i = 0; i < words_num; i++) { + unsigned int uwrd_lo, uwrd_hi, tmp; + + uwrd_lo = ((data[i] & 0xfff0000) << 4) | (0x3 << 18) | + ((data[i] & 0xff00) << 2) | (0x3 << 8) | (data[i] & 0xff); + uwrd_hi = (0xf << 4) | ((data[i] & 0xf0000000) >> 28); + uwrd_hi |= (bitcount32(data[i] & 0xffff) & 0x1) << 8; + tmp = ((data[i] >> 0x10) & 0xffff); + uwrd_hi |= (bitcount32(tmp) & 0x1) << 9; + qat_hal_wr_ae_csr(handle, ae, USTORE_DATA_LOWER, uwrd_lo); + qat_hal_wr_ae_csr(handle, ae, USTORE_DATA_UPPER, uwrd_hi); + } + qat_hal_wr_ae_csr(handle, ae, USTORE_ADDRESS, ustore_addr); +} + +#define MAX_EXEC_INST 100 +static int +qat_hal_exec_micro_inst(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char ctx, + uint64_t *micro_inst, + unsigned int inst_num, + int code_off, + unsigned int max_cycle, + unsigned int *endpc) +{ + uint64_t savuwords[MAX_EXEC_INST]; + unsigned int ind_lm_addr0, ind_lm_addr1; + unsigned int ind_lm_addr2, ind_lm_addr3; + unsigned int ind_lm_addr_byte0, ind_lm_addr_byte1; + unsigned int ind_lm_addr_byte2, ind_lm_addr_byte3; + unsigned int ind_t_index, ind_t_index_byte; + unsigned 
int ind_cnt_sig; + unsigned int ind_sig, act_sig; + unsigned int csr_val = 0, newcsr_val; + unsigned int savctx, scs_flag; + unsigned int savcc, wakeup_events, savpc; + unsigned int ctxarb_ctl, ctx_enables; + + if (inst_num > handle->hal_handle->max_ustore || !micro_inst) { + pr_err("QAT: invalid instruction num %d\n", inst_num); + return EINVAL; + } + /* save current context */ + qat_hal_rd_indr_csr(handle, ae, ctx, LM_ADDR_0_INDIRECT, &ind_lm_addr0); + qat_hal_rd_indr_csr(handle, ae, ctx, LM_ADDR_1_INDIRECT, &ind_lm_addr1); + qat_hal_rd_indr_csr( + handle, ae, ctx, INDIRECT_LM_ADDR_0_BYTE_INDEX, &ind_lm_addr_byte0); + qat_hal_rd_indr_csr( + handle, ae, ctx, INDIRECT_LM_ADDR_1_BYTE_INDEX, &ind_lm_addr_byte1); + if (IS_QAT_GEN3(pci_get_device(GET_DEV(handle->accel_dev)))) { + qat_hal_rd_indr_csr( + handle, ae, ctx, LM_ADDR_2_INDIRECT, &ind_lm_addr2); + qat_hal_rd_indr_csr( + handle, ae, ctx, LM_ADDR_3_INDIRECT, &ind_lm_addr3); + qat_hal_rd_indr_csr(handle, + ae, + ctx, + INDIRECT_LM_ADDR_2_BYTE_INDEX, + &ind_lm_addr_byte2); + qat_hal_rd_indr_csr(handle, + ae, + ctx, + INDIRECT_LM_ADDR_3_BYTE_INDEX, + &ind_lm_addr_byte3); + qat_hal_rd_indr_csr( + handle, ae, ctx, INDIRECT_T_INDEX, &ind_t_index); + qat_hal_rd_indr_csr(handle, + ae, + ctx, + INDIRECT_T_INDEX_BYTE_INDEX, + &ind_t_index_byte); + } + qat_hal_rd_ae_csr(handle, ae, AE_MISC_CONTROL, &csr_val); + scs_flag = csr_val & (1 << MMC_SHARE_CS_BITPOS); + newcsr_val = CLR_BIT(csr_val, MMC_SHARE_CS_BITPOS); + qat_hal_wr_ae_csr(handle, ae, AE_MISC_CONTROL, newcsr_val); + if (inst_num <= MAX_EXEC_INST) + qat_hal_get_uwords(handle, ae, 0, inst_num, savuwords); + qat_hal_get_wakeup_event(handle, ae, ctx, &wakeup_events); + qat_hal_rd_indr_csr(handle, ae, ctx, CTX_STS_INDIRECT, &savpc); + savpc = (savpc & handle->hal_handle->upc_mask) >> 0; + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &ctx_enables); + ctx_enables &= IGNORE_W1C_MASK; + qat_hal_rd_ae_csr(handle, ae, CC_ENABLE, &savcc); + qat_hal_rd_ae_csr(handle, ae, 
ACTIVE_CTX_STATUS, &savctx);
+	qat_hal_rd_ae_csr(handle, ae, CTX_ARB_CNTL, &ctxarb_ctl);
+	qat_hal_rd_indr_csr(
+	    handle, ae, ctx, FUTURE_COUNT_SIGNAL_INDIRECT, &ind_cnt_sig);
+	qat_hal_rd_indr_csr(handle, ae, ctx, CTX_SIG_EVENTS_INDIRECT, &ind_sig);
+	qat_hal_rd_ae_csr(handle, ae, CTX_SIG_EVENTS_ACTIVE, &act_sig);
+	/* execute microcode */
+	qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, ctx_enables);
+	qat_hal_wr_uwords(handle, ae, 0, inst_num, micro_inst);
+	qat_hal_wr_indr_csr(handle, ae, (1 << ctx), CTX_STS_INDIRECT, 0);
+	qat_hal_wr_ae_csr(handle, ae, ACTIVE_CTX_STATUS, ctx & ACS_ACNO);
+	if (code_off)
+		qat_hal_wr_ae_csr(handle, ae, CC_ENABLE, savcc & 0xffffdfff);
+	qat_hal_put_wakeup_event(handle, ae, (1 << ctx), XCWE_VOLUNTARY);
+	qat_hal_wr_indr_csr(handle, ae, (1 << ctx), CTX_SIG_EVENTS_INDIRECT, 0);
+	qat_hal_wr_ae_csr(handle, ae, CTX_SIG_EVENTS_ACTIVE, 0);
+	qat_hal_enable_ctx(handle, ae, (1 << ctx));
+	/* wait for microcode to finish */
+	if (qat_hal_wait_cycles(handle, ae, max_cycle, 1) != 0)
+		return EFAULT;
+	if (endpc) {
+		unsigned int ctx_status;
+
+		qat_hal_rd_indr_csr(
+		    handle, ae, ctx, CTX_STS_INDIRECT, &ctx_status);
+		*endpc = ctx_status & handle->hal_handle->upc_mask;
+	}
+	/* restore the saved context */
+	qat_hal_disable_ctx(handle, ae, (1 << ctx));
+	if (inst_num <= MAX_EXEC_INST)
+		qat_hal_wr_uwords(handle, ae, 0, inst_num, savuwords);
+	qat_hal_put_wakeup_event(handle, ae, (1 << ctx), wakeup_events);
+	qat_hal_wr_indr_csr(handle,
+			    ae,
+			    (1 << ctx),
+			    CTX_STS_INDIRECT,
+			    handle->hal_handle->upc_mask & savpc);
+	qat_hal_rd_ae_csr(handle, ae, AE_MISC_CONTROL, &csr_val);
+	newcsr_val = scs_flag ?
SET_BIT(csr_val, MMC_SHARE_CS_BITPOS) : + CLR_BIT(csr_val, MMC_SHARE_CS_BITPOS); + qat_hal_wr_ae_csr(handle, ae, AE_MISC_CONTROL, newcsr_val); + qat_hal_wr_ae_csr(handle, ae, CC_ENABLE, savcc); + qat_hal_wr_ae_csr(handle, ae, ACTIVE_CTX_STATUS, savctx & ACS_ACNO); + qat_hal_wr_ae_csr(handle, ae, CTX_ARB_CNTL, ctxarb_ctl); + qat_hal_wr_indr_csr( + handle, ae, (1 << ctx), LM_ADDR_0_INDIRECT, ind_lm_addr0); + qat_hal_wr_indr_csr( + handle, ae, (1 << ctx), LM_ADDR_1_INDIRECT, ind_lm_addr1); + qat_hal_wr_indr_csr(handle, + ae, + (1 << ctx), + INDIRECT_LM_ADDR_0_BYTE_INDEX, + ind_lm_addr_byte0); + qat_hal_wr_indr_csr(handle, + ae, + (1 << ctx), + INDIRECT_LM_ADDR_1_BYTE_INDEX, + ind_lm_addr_byte1); + if (IS_QAT_GEN3(pci_get_device(GET_DEV(handle->accel_dev)))) { + qat_hal_wr_indr_csr( + handle, ae, (1 << ctx), LM_ADDR_2_INDIRECT, ind_lm_addr2); + qat_hal_wr_indr_csr( + handle, ae, (1 << ctx), LM_ADDR_3_INDIRECT, ind_lm_addr3); + qat_hal_wr_indr_csr(handle, + ae, + (1 << ctx), + INDIRECT_LM_ADDR_2_BYTE_INDEX, + ind_lm_addr_byte2); + qat_hal_wr_indr_csr(handle, + ae, + (1 << ctx), + INDIRECT_LM_ADDR_3_BYTE_INDEX, + ind_lm_addr_byte3); + qat_hal_wr_indr_csr( + handle, ae, (1 << ctx), INDIRECT_T_INDEX, ind_t_index); + qat_hal_wr_indr_csr(handle, + ae, + (1 << ctx), + INDIRECT_T_INDEX_BYTE_INDEX, + ind_t_index_byte); + } + qat_hal_wr_indr_csr( + handle, ae, (1 << ctx), FUTURE_COUNT_SIGNAL_INDIRECT, ind_cnt_sig); + qat_hal_wr_indr_csr( + handle, ae, (1 << ctx), CTX_SIG_EVENTS_INDIRECT, ind_sig); + qat_hal_wr_ae_csr(handle, ae, CTX_SIG_EVENTS_ACTIVE, act_sig); + qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, ctx_enables); + + return 0; +} + +static int +qat_hal_rd_rel_reg(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char ctx, + enum icp_qat_uof_regtype reg_type, + unsigned short reg_num, + unsigned int *data) +{ + unsigned int savctx, uaddr, uwrd_lo, uwrd_hi; + unsigned int ctxarb_cntl, ustore_addr, ctx_enables; + unsigned short reg_addr; + int status = 
0; + unsigned int scs_flag = 0; + unsigned int csr_val = 0, newcsr_val = 0; + u64 insts, savuword; + + reg_addr = qat_hal_get_reg_addr(reg_type, reg_num); + if (reg_addr == BAD_REGADDR) { + pr_err("QAT: bad regaddr=0x%x\n", reg_addr); + return EINVAL; + } + switch (reg_type) { + case ICP_GPA_REL: + insts = 0xA070000000ull | (reg_addr & 0x3ff); + break; + default: + insts = (uint64_t)0xA030000000ull | ((reg_addr & 0x3ff) << 10); + break; + } + qat_hal_rd_ae_csr(handle, ae, AE_MISC_CONTROL, &csr_val); + scs_flag = csr_val & (1 << MMC_SHARE_CS_BITPOS); + newcsr_val = CLR_BIT(csr_val, MMC_SHARE_CS_BITPOS); + qat_hal_wr_ae_csr(handle, ae, AE_MISC_CONTROL, newcsr_val); + qat_hal_rd_ae_csr(handle, ae, ACTIVE_CTX_STATUS, &savctx); + qat_hal_rd_ae_csr(handle, ae, CTX_ARB_CNTL, &ctxarb_cntl); + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &ctx_enables); + ctx_enables &= IGNORE_W1C_MASK; + if (ctx != (savctx & ACS_ACNO)) + qat_hal_wr_ae_csr(handle, + ae, + ACTIVE_CTX_STATUS, + ctx & ACS_ACNO); + qat_hal_get_uwords(handle, ae, 0, 1, &savuword); + qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, ctx_enables); + qat_hal_rd_ae_csr(handle, ae, USTORE_ADDRESS, &ustore_addr); + uaddr = UA_ECS; + qat_hal_wr_ae_csr(handle, ae, USTORE_ADDRESS, uaddr); + insts = qat_hal_set_uword_ecc(insts); + uwrd_lo = (unsigned int)(insts & 0xffffffff); + uwrd_hi = (unsigned int)(insts >> 0x20); + qat_hal_wr_ae_csr(handle, ae, USTORE_DATA_LOWER, uwrd_lo); + qat_hal_wr_ae_csr(handle, ae, USTORE_DATA_UPPER, uwrd_hi); + qat_hal_wr_ae_csr(handle, ae, USTORE_ADDRESS, uaddr); + /* delay for at least 8 cycles */ + qat_hal_wait_cycles(handle, ae, 0x8, 0); + /* + * read ALU output + * the instruction should have been executed + * prior to clearing the ECS in putUwords + */ + qat_hal_rd_ae_csr(handle, ae, ALU_OUT, data); + qat_hal_wr_ae_csr(handle, ae, USTORE_ADDRESS, ustore_addr); + qat_hal_wr_uwords(handle, ae, 0, 1, &savuword); + if (ctx != (savctx & ACS_ACNO)) + qat_hal_wr_ae_csr(handle, + ae, + ACTIVE_CTX_STATUS, 
+ savctx & ACS_ACNO); + qat_hal_wr_ae_csr(handle, ae, CTX_ARB_CNTL, ctxarb_cntl); + qat_hal_rd_ae_csr(handle, ae, AE_MISC_CONTROL, &csr_val); + newcsr_val = scs_flag ? SET_BIT(csr_val, MMC_SHARE_CS_BITPOS) : + CLR_BIT(csr_val, MMC_SHARE_CS_BITPOS); + qat_hal_wr_ae_csr(handle, ae, AE_MISC_CONTROL, newcsr_val); + qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, ctx_enables); + + return status; +} + +static int +qat_hal_wr_rel_reg(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char ctx, + enum icp_qat_uof_regtype reg_type, + unsigned short reg_num, + unsigned int data) +{ + unsigned short src_hiaddr, src_lowaddr, dest_addr, data16hi, data16lo; + uint64_t insts[] = { 0x0F440000000ull, + 0x0F040000000ull, + 0x0F0000C0300ull, + 0x0E000010000ull }; + const int num_inst = ARRAY_SIZE(insts), code_off = 1; + const int imm_w1 = 0, imm_w0 = 1; + + dest_addr = qat_hal_get_reg_addr(reg_type, reg_num); + if (dest_addr == BAD_REGADDR) { + pr_err("QAT: bad destAddr=0x%x\n", dest_addr); + return EINVAL; + } + + data16lo = 0xffff & data; + data16hi = 0xffff & (data >> 0x10); + src_hiaddr = qat_hal_get_reg_addr(ICP_NO_DEST, + (unsigned short)(0xff & data16hi)); + src_lowaddr = qat_hal_get_reg_addr(ICP_NO_DEST, + (unsigned short)(0xff & data16lo)); + switch (reg_type) { + case ICP_GPA_REL: + insts[imm_w1] = insts[imm_w1] | ((data16hi >> 8) << 20) | + ((src_hiaddr & 0x3ff) << 10) | (dest_addr & 0x3ff); + insts[imm_w0] = insts[imm_w0] | ((data16lo >> 8) << 20) | + ((src_lowaddr & 0x3ff) << 10) | (dest_addr & 0x3ff); + break; + default: + insts[imm_w1] = insts[imm_w1] | ((data16hi >> 8) << 20) | + ((dest_addr & 0x3ff) << 10) | (src_hiaddr & 0x3ff); + + insts[imm_w0] = insts[imm_w0] | ((data16lo >> 8) << 20) | + ((dest_addr & 0x3ff) << 10) | (src_lowaddr & 0x3ff); + break; + } + + return qat_hal_exec_micro_inst( + handle, ae, ctx, insts, num_inst, code_off, num_inst * 0x5, NULL); +} + +int +qat_hal_get_ins_num(void) +{ + return ARRAY_SIZE(inst_4b); +} + +static int 
+qat_hal_concat_micro_code(uint64_t *micro_inst, + unsigned int inst_num, + unsigned int size, + unsigned int addr, + unsigned int *value) +{ + int i; + unsigned int cur_value; + const uint64_t *inst_arr; + unsigned int fixup_offset; + int usize = 0; + unsigned int orig_num; + unsigned int delta; + + orig_num = inst_num; + fixup_offset = inst_num; + cur_value = value[0]; + inst_arr = inst_4b; + usize = ARRAY_SIZE(inst_4b); + for (i = 0; i < usize; i++) + micro_inst[inst_num++] = inst_arr[i]; + INSERT_IMMED_GPRA_CONST(micro_inst[fixup_offset], (addr)); + fixup_offset++; + INSERT_IMMED_GPRA_CONST(micro_inst[fixup_offset], 0); + fixup_offset++; + INSERT_IMMED_GPRB_CONST(micro_inst[fixup_offset], (cur_value >> 0)); + fixup_offset++; + INSERT_IMMED_GPRB_CONST(micro_inst[fixup_offset], (cur_value >> 0x10)); + + delta = inst_num - orig_num; + + return (int)delta; +} + +static int +qat_hal_exec_micro_init_lm(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char ctx, + int *pfirst_exec, + uint64_t *micro_inst, + unsigned int inst_num) +{ + int stat = 0; + unsigned int gpra0 = 0, gpra1 = 0, gpra2 = 0; + unsigned int gprb0 = 0, gprb1 = 0; + + if (*pfirst_exec) { + qat_hal_rd_rel_reg(handle, ae, ctx, ICP_GPA_REL, 0, &gpra0); + qat_hal_rd_rel_reg(handle, ae, ctx, ICP_GPA_REL, 0x1, &gpra1); + qat_hal_rd_rel_reg(handle, ae, ctx, ICP_GPA_REL, 0x2, &gpra2); + qat_hal_rd_rel_reg(handle, ae, ctx, ICP_GPB_REL, 0, &gprb0); + qat_hal_rd_rel_reg(handle, ae, ctx, ICP_GPB_REL, 0x1, &gprb1); + *pfirst_exec = 0; + } + stat = qat_hal_exec_micro_inst( + handle, ae, ctx, micro_inst, inst_num, 1, inst_num * 0x5, NULL); + if (stat != 0) + return EFAULT; + qat_hal_wr_rel_reg(handle, ae, ctx, ICP_GPA_REL, 0, gpra0); + qat_hal_wr_rel_reg(handle, ae, ctx, ICP_GPA_REL, 0x1, gpra1); + qat_hal_wr_rel_reg(handle, ae, ctx, ICP_GPA_REL, 0x2, gpra2); + qat_hal_wr_rel_reg(handle, ae, ctx, ICP_GPB_REL, 0, gprb0); + qat_hal_wr_rel_reg(handle, ae, ctx, ICP_GPB_REL, 0x1, gprb1); + + return 
0; +} + +int +qat_hal_batch_wr_lm(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + struct icp_qat_uof_batch_init *lm_init_header) +{ + struct icp_qat_uof_batch_init *plm_init; + uint64_t *micro_inst_arry; + int micro_inst_num; + int alloc_inst_size; + int first_exec = 1; + int stat = 0; + + if (!lm_init_header) + return 0; + plm_init = lm_init_header->next; + alloc_inst_size = lm_init_header->size; + if ((unsigned int)alloc_inst_size > handle->hal_handle->max_ustore) + alloc_inst_size = handle->hal_handle->max_ustore; + micro_inst_arry = malloc(alloc_inst_size * sizeof(uint64_t), + M_QAT, + M_WAITOK | M_ZERO); + micro_inst_num = 0; + while (plm_init) { + unsigned int addr, *value, size; + + ae = plm_init->ae; + addr = plm_init->addr; + value = plm_init->value; + size = plm_init->size; + micro_inst_num += qat_hal_concat_micro_code( + micro_inst_arry, micro_inst_num, size, addr, value); + plm_init = plm_init->next; + } + /* exec micro codes */ + if (micro_inst_arry && micro_inst_num > 0) { + micro_inst_arry[micro_inst_num++] = 0x0E000010000ull; + stat = qat_hal_exec_micro_init_lm(handle, + ae, + 0, + &first_exec, + micro_inst_arry, + micro_inst_num); + } + free(micro_inst_arry, M_QAT); + return stat; +} + +static int +qat_hal_put_rel_rd_xfer(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char ctx, + enum icp_qat_uof_regtype reg_type, + unsigned short reg_num, + unsigned int val) +{ + int status = 0; + unsigned int reg_addr; + unsigned int ctx_enables; + unsigned short mask; + unsigned short dr_offset = 0x10; + + status = qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &ctx_enables); + if (CE_INUSE_CONTEXTS & ctx_enables) { + if (ctx & 0x1) { + pr_err("QAT: bad 4-ctx mode,ctx=0x%x\n", ctx); + return EINVAL; + } + mask = 0x1f; + dr_offset = 0x20; + } else { + mask = 0x0f; + } + if (reg_num & ~mask) + return EINVAL; + reg_addr = reg_num + (ctx << 0x5); + switch (reg_type) { + case ICP_SR_RD_REL: + case ICP_SR_REL: + SET_AE_XFER(handle, 
ae, reg_addr, val); + break; + case ICP_DR_RD_REL: + case ICP_DR_REL: + SET_AE_XFER(handle, ae, (reg_addr + dr_offset), val); + break; + default: + status = EINVAL; + break; + } + return status; +} + +static int +qat_hal_put_rel_wr_xfer(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char ctx, + enum icp_qat_uof_regtype reg_type, + unsigned short reg_num, + unsigned int data) +{ + unsigned int gprval, ctx_enables; + unsigned short src_hiaddr, src_lowaddr, gpr_addr, xfr_addr, data16hi, + data16low; + unsigned short reg_mask; + int status = 0; + uint64_t micro_inst[] = { 0x0F440000000ull, + 0x0F040000000ull, + 0x0A000000000ull, + 0x0F0000C0300ull, + 0x0E000010000ull }; + const int num_inst = ARRAY_SIZE(micro_inst), code_off = 1; + const unsigned short gprnum = 0, dly = num_inst * 0x5; + + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &ctx_enables); + if (CE_INUSE_CONTEXTS & ctx_enables) { + if (ctx & 0x1) { + pr_err("QAT: 4-ctx mode,ctx=0x%x\n", ctx); + return EINVAL; + } + reg_mask = (unsigned short)~0x1f; + } else { + reg_mask = (unsigned short)~0xf; + } + if (reg_num & reg_mask) + return EINVAL; + xfr_addr = qat_hal_get_reg_addr(reg_type, reg_num); + if (xfr_addr == BAD_REGADDR) { + pr_err("QAT: bad xfrAddr=0x%x\n", xfr_addr); + return EINVAL; + } + qat_hal_rd_rel_reg(handle, ae, ctx, ICP_GPB_REL, gprnum, &gprval); + gpr_addr = qat_hal_get_reg_addr(ICP_GPB_REL, gprnum); + data16low = 0xffff & data; + data16hi = 0xffff & (data >> 0x10); + src_hiaddr = qat_hal_get_reg_addr(ICP_NO_DEST, + (unsigned short)(0xff & data16hi)); + src_lowaddr = qat_hal_get_reg_addr(ICP_NO_DEST, + (unsigned short)(0xff & data16low)); + micro_inst[0] = micro_inst[0x0] | ((data16hi >> 8) << 20) | + ((gpr_addr & 0x3ff) << 10) | (src_hiaddr & 0x3ff); + micro_inst[1] = micro_inst[0x1] | ((data16low >> 8) << 20) | + ((gpr_addr & 0x3ff) << 10) | (src_lowaddr & 0x3ff); + micro_inst[0x2] = micro_inst[0x2] | ((xfr_addr & 0x3ff) << 20) | + ((gpr_addr & 0x3ff) << 10); + status = 
qat_hal_exec_micro_inst( + handle, ae, ctx, micro_inst, num_inst, code_off, dly, NULL); + qat_hal_wr_rel_reg(handle, ae, ctx, ICP_GPB_REL, gprnum, gprval); + return status; +} + +static int +qat_hal_put_rel_nn(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char ctx, + unsigned short nn, + unsigned int val) +{ + unsigned int ctx_enables; + int stat = 0; + + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &ctx_enables); + ctx_enables &= IGNORE_W1C_MASK; + qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, ctx_enables | CE_NN_MODE); + + stat = qat_hal_put_rel_wr_xfer(handle, ae, ctx, ICP_NEIGH_REL, nn, val); + qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, ctx_enables); + return stat; +} + +static int +qat_hal_convert_abs_to_rel(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned short absreg_num, + unsigned short *relreg, + unsigned char *ctx) +{ + unsigned int ctx_enables; + + qat_hal_rd_ae_csr(handle, ae, CTX_ENABLES, &ctx_enables); + if (ctx_enables & CE_INUSE_CONTEXTS) { + /* 4-ctx mode */ + *relreg = absreg_num & 0x1F; + *ctx = (absreg_num >> 0x4) & 0x6; + } else { + /* 8-ctx mode */ + *relreg = absreg_num & 0x0F; + *ctx = (absreg_num >> 0x4) & 0x7; + } + return 0; +} + +int +qat_hal_init_gpr(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned long ctx_mask, + enum icp_qat_uof_regtype reg_type, + unsigned short reg_num, + unsigned int regdata) +{ + int stat = 0; + unsigned short reg; + unsigned char ctx = 0; + enum icp_qat_uof_regtype type; + + if (reg_num >= ICP_QAT_UCLO_MAX_GPR_REG) + return EINVAL; + + do { + if (ctx_mask == 0) { + qat_hal_convert_abs_to_rel( + handle, ae, reg_num, ®, &ctx); + type = reg_type - 1; + } else { + reg = reg_num; + type = reg_type; + if (!test_bit(ctx, &ctx_mask)) + continue; + } + stat = qat_hal_wr_rel_reg(handle, ae, ctx, type, reg, regdata); + if (stat) { + pr_err("QAT: write gpr fail\n"); + return EINVAL; + } + } while (ctx_mask && (ctx++ < ICP_QAT_UCLO_MAX_CTX)); + + return 0; 
+}
+
+int
+qat_hal_init_wr_xfer(struct icp_qat_fw_loader_handle *handle,
+		     unsigned char ae,
+		     unsigned long ctx_mask,
+		     enum icp_qat_uof_regtype reg_type,
+		     unsigned short reg_num,
+		     unsigned int regdata)
+{
+	int stat = 0;
+	unsigned short reg;
+	unsigned char ctx = 0;
+	enum icp_qat_uof_regtype type;
+
+	if (reg_num >= ICP_QAT_UCLO_MAX_XFER_REG)
+		return EINVAL;
+
+	do {
+		if (ctx_mask == 0) {
+			qat_hal_convert_abs_to_rel(
+			    handle, ae, reg_num, &reg, &ctx);
+			type = reg_type - 3;
+		} else {
+			reg = reg_num;
+			type = reg_type;
+			if (!test_bit(ctx, &ctx_mask))
+				continue;
+		}
+		stat = qat_hal_put_rel_wr_xfer(
+		    handle, ae, ctx, type, reg, regdata);
+		if (stat) {
+			pr_err("QAT: write wr xfer fail\n");
+			return EINVAL;
+		}
+	} while (ctx_mask && (ctx++ < ICP_QAT_UCLO_MAX_CTX));
+
+	return 0;
+}
+
+int
+qat_hal_init_rd_xfer(struct icp_qat_fw_loader_handle *handle,
+		     unsigned char ae,
+		     unsigned long ctx_mask,
+		     enum icp_qat_uof_regtype reg_type,
+		     unsigned short reg_num,
+		     unsigned int regdata)
+{
+	int stat = 0;
+	unsigned short reg;
+	unsigned char ctx = 0;
+	enum icp_qat_uof_regtype type;
+
+	if (reg_num >= ICP_QAT_UCLO_MAX_XFER_REG)
+		return EINVAL;
+
+	do {
+		if (ctx_mask == 0) {
+			qat_hal_convert_abs_to_rel(
+			    handle, ae, reg_num, &reg, &ctx);
+			type = reg_type - 3;
+		} else {
+			reg = reg_num;
+			type = reg_type;
+			if (!test_bit(ctx, &ctx_mask))
+				continue;
+		}
+		stat = qat_hal_put_rel_rd_xfer(
+		    handle, ae, ctx, type, reg, regdata);
+		if (stat) {
+			pr_err("QAT: write rd xfer fail\n");
+			return EINVAL;
+		}
+	} while (ctx_mask && (ctx++ < ICP_QAT_UCLO_MAX_CTX));
+
+	return 0;
+}
+
+int
+qat_hal_init_nn(struct icp_qat_fw_loader_handle *handle,
+		unsigned char ae,
+		unsigned long ctx_mask,
+		unsigned short reg_num,
+		unsigned int regdata)
+{
+	int stat = 0;
+	unsigned char ctx;
+
+	if (ctx_mask == 0)
+		return EINVAL;
+
+	for_each_set_bit(ctx, &ctx_mask, ICP_QAT_UCLO_MAX_CTX)
+	{
+		stat = qat_hal_put_rel_nn(handle, ae, ctx, reg_num, regdata);
+		if (stat) {
+			pr_err("QAT: write neigh error\n");
+			return EINVAL;
+		}
+	}
+
+	return 0;
+}
Index: sys/dev/qat/qat_common/qat_uclo.c
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_common/qat_uclo.c
@@ -0,0 +1,2188 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+#include "qat_freebsd.h"
+#include "adf_cfg.h"
+#include "adf_common_drv.h"
+#include "adf_accel_devices.h"
+#include "icp_qat_uclo.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_init_admin.h"
+#include "adf_cfg_strings.h"
+#include "adf_transport_access_macros.h"
+#include "adf_transport_internal.h"
+#include
+#include
+#include
+#include "adf_accel_devices.h"
+#include "adf_common_drv.h"
+#include "icp_qat_uclo.h"
+#include "icp_qat_hal.h"
+#include "icp_qat_fw_loader_handle.h"
+
+#define UWORD_CPYBUF_SIZE 1024
+#define INVLD_UWORD 0xffffffffffull
+#define PID_MINOR_REV 0xf
+#define PID_MAJOR_REV (0xf << 4)
+#define MAX_UINT32_VAL 0xfffffffful
+
+static int
+qat_uclo_init_ae_data(struct icp_qat_uclo_objhandle *obj_handle,
+		      unsigned int ae,
+		      unsigned int image_num)
+{
+	struct icp_qat_uclo_aedata *ae_data;
+	struct icp_qat_uclo_encapme *encap_image;
+	struct icp_qat_uclo_page *page = NULL;
+	struct icp_qat_uclo_aeslice *ae_slice = NULL;
+
+	ae_data = &obj_handle->ae_data[ae];
+	encap_image = &obj_handle->ae_uimage[image_num];
+	ae_slice = &ae_data->ae_slices[ae_data->slice_num];
+	ae_slice->encap_image = encap_image;
+
+	if (encap_image->img_ptr) {
+		ae_slice->ctx_mask_assigned =
+		    encap_image->img_ptr->ctx_assigned;
+		ae_data->shareable_ustore =
+		    ICP_QAT_SHARED_USTORE_MODE(encap_image->img_ptr->ae_mode);
+		ae_data->eff_ustore_size = ae_data->shareable_ustore ?
+		    (obj_handle->ustore_phy_size << 1) :
+		    obj_handle->ustore_phy_size;
+	} else {
+		ae_slice->ctx_mask_assigned = 0;
+	}
+	ae_slice->region =
+	    malloc(sizeof(*ae_slice->region), M_QAT, M_WAITOK | M_ZERO);
+	ae_slice->page =
+	    malloc(sizeof(*ae_slice->page), M_QAT, M_WAITOK | M_ZERO);
+	page = ae_slice->page;
+	page->encap_page = encap_image->page;
+	ae_slice->page->region = ae_slice->region;
+	ae_data->slice_num++;
+	return 0;
+}
+
+static int
+qat_uclo_free_ae_data(struct icp_qat_uclo_aedata *ae_data)
+{
+	unsigned int i;
+
+	if (!ae_data) {
+		pr_err("QAT: bad argument, ae_data is NULL\n");
+		return EINVAL;
+	}
+
+	for (i = 0; i < ae_data->slice_num; i++) {
+		free(ae_data->ae_slices[i].region, M_QAT);
+		ae_data->ae_slices[i].region = NULL;
+		free(ae_data->ae_slices[i].page, M_QAT);
+		ae_data->ae_slices[i].page = NULL;
+	}
+	return 0;
+}
+
+static char *
+qat_uclo_get_string(struct icp_qat_uof_strtable *str_table,
+		    unsigned int str_offset)
+{
+	if (!str_table->table_len || str_offset > str_table->table_len)
+		return NULL;
+	return (char *)(((uintptr_t)(str_table->strings)) + str_offset);
+}
+
+static int
+qat_uclo_check_uof_format(struct icp_qat_uof_filehdr *hdr)
+{
+	int maj = hdr->maj_ver & 0xff;
+	int min = hdr->min_ver & 0xff;
+
+	if (hdr->file_id != ICP_QAT_UOF_FID) {
+		pr_err("QAT: Invalid header 0x%x\n", hdr->file_id);
+		return EINVAL;
+	}
+	if (min != ICP_QAT_UOF_MINVER || maj != ICP_QAT_UOF_MAJVER) {
+		pr_err("QAT: bad UOF version, major 0x%x, minor 0x%x\n",
+		       maj,
+		       min);
+		return EINVAL;
+	}
+	return 0;
+}
+
+static int
+qat_uclo_check_suof_format(const struct icp_qat_suof_filehdr *suof_hdr)
+{
+	int maj = suof_hdr->maj_ver & 0xff;
+	int min = suof_hdr->min_ver & 0xff;
+
+	if (suof_hdr->file_id != ICP_QAT_SUOF_FID) {
+		pr_err("QAT: invalid header 0x%x\n", suof_hdr->file_id);
+		return EINVAL;
+	}
+	if (suof_hdr->fw_type != 0) {
+		pr_err("QAT: unsupported firmware type\n");
+		return EINVAL;
+	}
+	if (suof_hdr->num_chunks <= 0x1) {
+		pr_err("QAT: SUOF chunk amount is incorrect\n");
+		return EINVAL;
+	}
+	if (maj != ICP_QAT_SUOF_MAJVER || min != ICP_QAT_SUOF_MINVER) {
+		pr_err("QAT: bad SUOF version, major 0x%x, minor 0x%x\n",
+		       maj,
+		       min);
+		return EINVAL;
+	}
+	return 0;
+}
+
+static int
+qat_uclo_wr_sram_by_words(struct icp_qat_fw_loader_handle *handle,
+			  unsigned int addr,
+			  const unsigned int *val,
+			  unsigned int num_in_bytes)
+{
+	unsigned int outval;
+	const unsigned char *ptr = (const unsigned char *)val;
+
+	if (num_in_bytes > handle->hal_sram_size) {
+		pr_err("QAT: error, mmp size overflow %d\n", num_in_bytes);
+		return EINVAL;
+	}
+	while (num_in_bytes) {
+		memcpy(&outval, ptr, 4);
+		SRAM_WRITE(handle, addr, outval);
+		num_in_bytes -= 4;
+		ptr += 4;
+		addr += 4;
+	}
+	return 0;
+}
+
+static void
+qat_uclo_wr_umem_by_words(struct icp_qat_fw_loader_handle *handle,
+			  unsigned char ae,
+			  unsigned int addr,
+			  unsigned int *val,
+			  unsigned int num_in_bytes)
+{
+	unsigned int outval;
+	unsigned char *ptr = (unsigned char *)val;
+
+	addr >>= 0x2; /* convert to uword address */
+
+	while (num_in_bytes) {
+		memcpy(&outval, ptr, 4);
+		qat_hal_wr_umem(handle, ae, addr++, 1, &outval);
+		num_in_bytes -= 4;
+		ptr += 4;
+	}
+}
+
+static void
+qat_uclo_batch_wr_umem(struct icp_qat_fw_loader_handle *handle,
+		       unsigned char ae,
+		       struct icp_qat_uof_batch_init *umem_init_header)
+{
+	struct icp_qat_uof_batch_init *umem_init;
+
+	if (!umem_init_header)
+		return;
+	umem_init = umem_init_header->next;
+	while (umem_init) {
+		unsigned int addr, *value, size;
+
+		ae = umem_init->ae;
+		addr = umem_init->addr;
+		value = umem_init->value;
+		size = umem_init->size;
+		qat_uclo_wr_umem_by_words(handle, ae, addr, value, size);
+		umem_init = umem_init->next;
+	}
+}
+
+static void
+qat_uclo_cleanup_batch_init_list(struct icp_qat_fw_loader_handle *handle,
+				 struct icp_qat_uof_batch_init **base)
+{
+	struct icp_qat_uof_batch_init *umem_init;
+
+	umem_init = *base;
+	while (umem_init) {
+		struct icp_qat_uof_batch_init *pre;
+
+		pre = umem_init;
+		umem_init = umem_init->next;
+		free(pre, M_QAT);
+	}
+	*base = NULL;
+}
+
+static int
+qat_uclo_parse_num(char *str, unsigned int *num)
+{
+	char buf[16] = { 0 };
+	unsigned long ae = 0;
+	int i;
+
+	strncpy(buf, str, 15);
+	for (i = 0; i < 16; i++) {
+		if (!isdigit(buf[i])) {
+			buf[i] = '\0';
+			break;
+		}
+	}
+	if ((compat_strtoul(buf, 10, &ae)))
+		return EFAULT;
+
+	if (ae > MAX_UINT32_VAL)
+		return EFAULT;
+
+	*num = (unsigned int)ae;
+	return 0;
+}
+
+static int
+qat_uclo_fetch_initmem_ae(struct icp_qat_fw_loader_handle *handle,
+			  struct icp_qat_uof_initmem *init_mem,
+			  unsigned int size_range,
+			  unsigned int *ae)
+{
+	struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle;
+	char *str;
+
+	if ((init_mem->addr + init_mem->num_in_bytes) > (size_range << 0x2)) {
+		pr_err("QAT: initmem is out of range");
+		return EINVAL;
+	}
+	if (init_mem->scope != ICP_QAT_UOF_LOCAL_SCOPE) {
+		pr_err("QAT: Memory scope for init_mem error\n");
+		return EINVAL;
+	}
+	str = qat_uclo_get_string(&obj_handle->str_table, init_mem->sym_name);
+	if (!str) {
+		pr_err("QAT: AE name assigned in UOF init table is NULL\n");
+		return EINVAL;
+	}
+	if (qat_uclo_parse_num(str, ae)) {
+		pr_err("QAT: Parse num for AE number failed\n");
+		return EINVAL;
+	}
+	if (*ae >= ICP_QAT_UCLO_MAX_AE) {
+		pr_err("QAT: ae %d out of range\n", *ae);
+		return EINVAL;
+	}
+	return 0;
+}
+
+static int
+qat_uclo_create_batch_init_list(struct icp_qat_fw_loader_handle *handle,
+				struct icp_qat_uof_initmem *init_mem,
+				unsigned int ae,
+				struct icp_qat_uof_batch_init **init_tab_base)
+{
+	struct icp_qat_uof_batch_init *init_header, *tail;
+	struct icp_qat_uof_batch_init *mem_init, *tail_old;
+	struct icp_qat_uof_memvar_attr *mem_val_attr;
+	unsigned int i = 0;
+
+	mem_val_attr =
+	    (struct icp_qat_uof_memvar_attr *)((uintptr_t)init_mem +
+					       sizeof(
+						   struct icp_qat_uof_initmem));
+
+	init_header = *init_tab_base;
+	if (!init_header) {
+		init_header =
+		    malloc(sizeof(*init_header), M_QAT, M_WAITOK | M_ZERO);
+		init_header->size = 1;
+		*init_tab_base = init_header;
+	}
+	tail_old = init_header;
+	while (tail_old->next)
+		tail_old = tail_old->next;
+	tail = tail_old;
+	for (i = 0; i < init_mem->val_attr_num; i++) {
+		mem_init = malloc(sizeof(*mem_init), M_QAT, M_WAITOK | M_ZERO);
+		mem_init->ae = ae;
+		mem_init->addr = init_mem->addr + mem_val_attr->offset_in_byte;
+		mem_init->value = &mem_val_attr->value;
+		mem_init->size = 4;
+		mem_init->next = NULL;
+		tail->next = mem_init;
+		tail = mem_init;
+		init_header->size += qat_hal_get_ins_num();
+		mem_val_attr++;
+	}
+	return 0;
+}
+
+static int
+qat_uclo_init_lmem_seg(struct icp_qat_fw_loader_handle *handle,
+		       struct icp_qat_uof_initmem *init_mem)
+{
+	struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle;
+	unsigned int ae;
+
+	if (qat_uclo_fetch_initmem_ae(
+		handle, init_mem, ICP_QAT_UCLO_MAX_LMEM_REG, &ae))
+		return EINVAL;
+	if (qat_uclo_create_batch_init_list(
+		handle, init_mem, ae, &obj_handle->lm_init_tab[ae]))
+		return EINVAL;
+	return 0;
+}
+
+static int
+qat_uclo_init_umem_seg(struct icp_qat_fw_loader_handle *handle,
+		       struct icp_qat_uof_initmem *init_mem)
+{
+	struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle;
+	unsigned int ae, ustore_size, uaddr, i;
+	struct icp_qat_uclo_aedata *aed;
+
+	ustore_size = obj_handle->ustore_phy_size;
+	if (qat_uclo_fetch_initmem_ae(handle, init_mem, ustore_size, &ae))
+		return EINVAL;
+	if (qat_uclo_create_batch_init_list(
+		handle, init_mem, ae, &obj_handle->umem_init_tab[ae]))
+		return EINVAL;
+	/* set the highest ustore address referenced */
+	uaddr = (init_mem->addr + init_mem->num_in_bytes) >> 0x2;
+	aed = &obj_handle->ae_data[ae];
+	for (i = 0; i < aed->slice_num; i++) {
+		if (aed->ae_slices[i].encap_image->uwords_num < uaddr)
+			aed->ae_slices[i].encap_image->uwords_num = uaddr;
+	}
+	return 0;
+}
+
+#define ICP_DH895XCC_PESRAM_BAR_SIZE 0x80000
+static int
+qat_uclo_init_ae_memory(struct icp_qat_fw_loader_handle *handle,
+			struct icp_qat_uof_initmem *init_mem)
+{
+	switch (init_mem->region) {
+	case ICP_QAT_UOF_LMEM_REGION:
+		if (qat_uclo_init_lmem_seg(handle, init_mem))
+			return EINVAL;
+		break;
+	case ICP_QAT_UOF_UMEM_REGION:
+		if (qat_uclo_init_umem_seg(handle, init_mem))
+			return EINVAL;
+		break;
+	default:
+		pr_err("QAT: initmem region error. region type=0x%x\n",
+		       init_mem->region);
+		return EINVAL;
+	}
+	return 0;
+}
+
+static int
+qat_uclo_init_ustore(struct icp_qat_fw_loader_handle *handle,
+		     struct icp_qat_uclo_encapme *image)
+{
+	unsigned int i;
+	struct icp_qat_uclo_encap_page *page;
+	struct icp_qat_uof_image *uof_image;
+	unsigned char ae = 0;
+	unsigned char neigh_ae;
+	unsigned int ustore_size;
+	unsigned int patt_pos;
+	struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle;
+	uint64_t *fill_data;
+	static unsigned int init[32] = { 0 };
+	unsigned long ae_mask = handle->hal_handle->ae_mask;
+
+	uof_image = image->img_ptr;
+	/*if shared CS mode, the ustore size should be 2*ustore_phy_size*/
+	fill_data = malloc(obj_handle->ustore_phy_size * 2 * sizeof(uint64_t),
+			   M_QAT,
+			   M_WAITOK | M_ZERO);
+	for (i = 0; i < obj_handle->ustore_phy_size * 2; i++)
+		memcpy(&fill_data[i],
+		       &uof_image->fill_pattern,
+		       sizeof(uint64_t));
+	page = image->page;
+
+	for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num)
+	{
+		unsigned long cfg_ae_mask = handle->cfg_ae_mask;
+		unsigned long ae_assigned = uof_image->ae_assigned;
+
+		if (!test_bit(ae, &cfg_ae_mask))
+			continue;
+
+		if (!test_bit(ae, &ae_assigned))
+			continue;
+
+		if (obj_handle->ae_data[ae].shareable_ustore && (ae & 1)) {
+			qat_hal_get_scs_neigh_ae(ae, &neigh_ae);
+
+			if (test_bit(neigh_ae, &ae_assigned))
+				continue;
+		}
+
+		ustore_size = obj_handle->ae_data[ae].eff_ustore_size;
+		patt_pos = page->beg_addr_p + page->micro_words_num;
+		if (obj_handle->ae_data[ae].shareable_ustore) {
+			qat_hal_get_scs_neigh_ae(ae, &neigh_ae);
+			if (init[ae] == 0 && page->beg_addr_p != 0) {
+				qat_hal_wr_coalesce_uwords(handle,
+							   (unsigned char)ae,
+							   0,
+							   page->beg_addr_p,
+							   &fill_data[0]);
+			}
+			qat_hal_wr_coalesce_uwords(
+			    handle,
+			    (unsigned char)ae,
+			    patt_pos,
+			    ustore_size - patt_pos,
+			    &fill_data[page->beg_addr_p]);
+			init[ae] = 1;
+			init[neigh_ae] = 1;
+		} else {
+			qat_hal_wr_uwords(handle,
+					  (unsigned char)ae,
+					  0,
+					  page->beg_addr_p,
+					  &fill_data[0]);
+			qat_hal_wr_uwords(handle,
+					  (unsigned char)ae,
+					  patt_pos,
+					  ustore_size - patt_pos + 1,
+					  &fill_data[page->beg_addr_p]);
+		}
+	}
+	free(fill_data, M_QAT);
+	return 0;
+}
+
+static int
+qat_uclo_init_memory(struct icp_qat_fw_loader_handle *handle)
+{
+	int i;
+	int ae = 0;
+	struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle;
+	struct icp_qat_uof_initmem *initmem = obj_handle->init_mem_tab.init_mem;
+	unsigned long ae_mask = handle->hal_handle->ae_mask;
+
+	for (i = 0; i < obj_handle->init_mem_tab.entry_num; i++) {
+		if (initmem->num_in_bytes) {
+			if (qat_uclo_init_ae_memory(handle, initmem))
+				return EINVAL;
+		}
+		initmem =
+		    (struct icp_qat_uof_initmem
+			 *)((uintptr_t)((uintptr_t)initmem +
+					sizeof(struct icp_qat_uof_initmem)) +
+			    (sizeof(struct icp_qat_uof_memvar_attr) *
+			     initmem->val_attr_num));
+	}
+
+	for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num)
+	{
+		if (qat_hal_batch_wr_lm(handle,
+					ae,
+					obj_handle->lm_init_tab[ae])) {
+			pr_err("QAT: fail to batch init lmem for AE %d\n", ae);
+			return EINVAL;
+		}
+		qat_uclo_cleanup_batch_init_list(handle,
+						 &obj_handle->lm_init_tab[ae]);
+		qat_uclo_batch_wr_umem(handle,
+				       ae,
+				       obj_handle->umem_init_tab[ae]);
+		qat_uclo_cleanup_batch_init_list(
+		    handle, &obj_handle->umem_init_tab[ae]);
+	}
+	return 0;
+}
+
+static void *
+qat_uclo_find_chunk(struct icp_qat_uof_objhdr *obj_hdr,
+		    char *chunk_id,
+		    void *cur)
+{
+	int i;
+	struct icp_qat_uof_chunkhdr *chunk_hdr =
+	    (struct icp_qat_uof_chunkhdr *)((uintptr_t)obj_hdr +
+					    sizeof(struct icp_qat_uof_objhdr));
+
+	for (i = 0; i < obj_hdr->num_chunks; i++) {
+		if ((cur < (void *)&chunk_hdr[i]) &&
+		    !strncmp(chunk_hdr[i].chunk_id,
+			     chunk_id,
+			     ICP_QAT_UOF_OBJID_LEN)) {
+			return &chunk_hdr[i];
+		}
+	}
+	return NULL;
+}
+
+static unsigned int
+qat_uclo_calc_checksum(unsigned int reg, int ch)
+{
+	int i;
+	unsigned int topbit = 1 << 0xF;
+	unsigned int inbyte = (unsigned int)((reg >> 0x18) ^ ch);
+
+	reg ^= inbyte << 0x8;
+	for (i = 0; i < 0x8; i++) {
+		if (reg & topbit)
+			reg = (reg << 1) ^ 0x1021;
+		else
+			reg <<= 1;
+	}
+	return reg & 0xFFFF;
+}
+
+static unsigned int
+qat_uclo_calc_str_checksum(const char *ptr, int num)
+{
+	unsigned int chksum = 0;
+
+	if (ptr)
+		while (num--)
+			chksum = qat_uclo_calc_checksum(chksum, *ptr++);
+	return chksum;
+}
+
+static struct icp_qat_uclo_objhdr *
+qat_uclo_map_chunk(char *buf,
+		   struct icp_qat_uof_filehdr *file_hdr,
+		   char *chunk_id)
+{
+	struct icp_qat_uof_filechunkhdr *file_chunk;
+	struct icp_qat_uclo_objhdr *obj_hdr;
+	char *chunk;
+	int i;
+
+	file_chunk = (struct icp_qat_uof_filechunkhdr
+			  *)(buf + sizeof(struct icp_qat_uof_filehdr));
+	for (i = 0; i < file_hdr->num_chunks; i++) {
+		if (!strncmp(file_chunk->chunk_id,
+			     chunk_id,
+			     ICP_QAT_UOF_OBJID_LEN)) {
+			chunk = buf + file_chunk->offset;
+			if (file_chunk->checksum !=
+			    qat_uclo_calc_str_checksum(chunk, file_chunk->size))
+				break;
+			obj_hdr =
+			    malloc(sizeof(*obj_hdr), M_QAT, M_WAITOK | M_ZERO);
+			obj_hdr->file_buff = chunk;
+			obj_hdr->checksum = file_chunk->checksum;
+			obj_hdr->size = file_chunk->size;
+			return obj_hdr;
+		}
+		file_chunk++;
+	}
+	return NULL;
+}
+
+static unsigned int
+qat_uclo_check_image_compat(struct icp_qat_uof_encap_obj *encap_uof_obj,
+			    struct icp_qat_uof_image *image)
+{
+	struct icp_qat_uof_objtable *uc_var_tab, *imp_var_tab, *imp_expr_tab;
+	struct icp_qat_uof_objtable *neigh_reg_tab;
+	struct icp_qat_uof_code_page *code_page;
+
+	code_page =
+	    (struct icp_qat_uof_code_page *)((char *)image +
+					     sizeof(struct icp_qat_uof_image));
+	uc_var_tab =
+	    (struct icp_qat_uof_objtable *)(encap_uof_obj->beg_uof +
+					    code_page->uc_var_tab_offset);
+	imp_var_tab =
+	    (struct icp_qat_uof_objtable *)(encap_uof_obj->beg_uof +
+					    code_page->imp_var_tab_offset);
+	imp_expr_tab =
+	    (struct icp_qat_uof_objtable *)(encap_uof_obj->beg_uof +
+					    code_page->imp_expr_tab_offset);
+	if (uc_var_tab->entry_num || imp_var_tab->entry_num ||
+	    imp_expr_tab->entry_num) {
+		pr_err("QAT: UOF can't contain imported variable to be parsed");
+		return EINVAL;
+	}
+	neigh_reg_tab =
+	    (struct icp_qat_uof_objtable *)(encap_uof_obj->beg_uof +
+					    code_page->neigh_reg_tab_offset);
+	if (neigh_reg_tab->entry_num) {
+		pr_err("QAT: UOF can't contain neighbor register table\n");
+		return EINVAL;
+	}
+	if (image->numpages > 1) {
+		pr_err("QAT: UOF can't contain multiple pages\n");
+		return EINVAL;
+	}
+	if (RELOADABLE_CTX_SHARED_MODE(image->ae_mode)) {
+		pr_err("QAT: UOF can't use reloadable feature\n");
+		return EFAULT;
+	}
+	return 0;
+}
+
+static void
+qat_uclo_map_image_page(struct icp_qat_uof_encap_obj *encap_uof_obj,
+			struct icp_qat_uof_image *img,
+			struct icp_qat_uclo_encap_page *page)
+{
+	struct icp_qat_uof_code_page *code_page;
+	struct icp_qat_uof_code_area *code_area;
+	struct icp_qat_uof_objtable *uword_block_tab;
+	struct icp_qat_uof_uword_block *uwblock;
+	int i;
+
+	code_page =
+	    (struct icp_qat_uof_code_page *)((char *)img +
+					     sizeof(struct icp_qat_uof_image));
+	page->def_page = code_page->def_page;
+	page->page_region = code_page->page_region;
+	page->beg_addr_v = code_page->beg_addr_v;
+	page->beg_addr_p = code_page->beg_addr_p;
+	code_area =
+	    (struct icp_qat_uof_code_area *)(encap_uof_obj->beg_uof +
+					     code_page->code_area_offset);
+	page->micro_words_num = code_area->micro_words_num;
+	uword_block_tab =
+	    (struct icp_qat_uof_objtable *)(encap_uof_obj->beg_uof +
+					    code_area->uword_block_tab);
+	page->uwblock_num = uword_block_tab->entry_num;
+	uwblock = (struct icp_qat_uof_uword_block
+		       *)((char *)uword_block_tab +
+			  sizeof(struct icp_qat_uof_objtable));
+	page->uwblock = (struct icp_qat_uclo_encap_uwblock *)uwblock;
+	for (i = 0; i < uword_block_tab->entry_num; i++)
+		page->uwblock[i].micro_words =
+		    (uintptr_t)encap_uof_obj->beg_uof + uwblock[i].uword_offset;
+}
+
+static int
+qat_uclo_map_uimage(struct icp_qat_uclo_objhandle *obj_handle,
+		    struct icp_qat_uclo_encapme *ae_uimage,
+		    int max_image)
+{
+	int i, j;
+	struct icp_qat_uof_chunkhdr *chunk_hdr = NULL;
+	struct icp_qat_uof_image *image;
+	struct icp_qat_uof_objtable *ae_regtab;
+	struct icp_qat_uof_objtable *init_reg_sym_tab;
+	struct icp_qat_uof_objtable *sbreak_tab;
+	struct icp_qat_uof_encap_obj *encap_uof_obj =
+	    &obj_handle->encap_uof_obj;
+
+	for (j = 0; j < max_image; j++) {
+		chunk_hdr = qat_uclo_find_chunk(encap_uof_obj->obj_hdr,
+						ICP_QAT_UOF_IMAG,
+						chunk_hdr);
+		if (!chunk_hdr)
+			break;
+		image = (struct icp_qat_uof_image *)(encap_uof_obj->beg_uof +
+						     chunk_hdr->offset);
+		ae_regtab =
+		    (struct icp_qat_uof_objtable *)(image->reg_tab_offset +
+						    obj_handle->obj_hdr
+							->file_buff);
+		ae_uimage[j].ae_reg_num = ae_regtab->entry_num;
+		ae_uimage[j].ae_reg =
+		    (struct icp_qat_uof_ae_reg
+			 *)(((char *)ae_regtab) +
+			    sizeof(struct icp_qat_uof_objtable));
+		init_reg_sym_tab =
+		    (struct icp_qat_uof_objtable *)(image->init_reg_sym_tab +
+						    obj_handle->obj_hdr
+							->file_buff);
+		ae_uimage[j].init_regsym_num = init_reg_sym_tab->entry_num;
+		ae_uimage[j].init_regsym =
+		    (struct icp_qat_uof_init_regsym
+			 *)(((char *)init_reg_sym_tab) +
+			    sizeof(struct icp_qat_uof_objtable));
+		sbreak_tab = (struct icp_qat_uof_objtable *)(image->sbreak_tab +
+							     obj_handle->obj_hdr
+								 ->file_buff);
+		ae_uimage[j].sbreak_num = sbreak_tab->entry_num;
+		ae_uimage[j].sbreak =
+		    (struct icp_qat_uof_sbreak
+			 *)(((char *)sbreak_tab) +
+			    sizeof(struct icp_qat_uof_objtable));
+		ae_uimage[j].img_ptr = image;
+		if (qat_uclo_check_image_compat(encap_uof_obj, image))
+			goto out_err;
+		ae_uimage[j].page =
+		    malloc(sizeof(struct icp_qat_uclo_encap_page),
+			   M_QAT,
+			   M_WAITOK | M_ZERO);
+		qat_uclo_map_image_page(encap_uof_obj,
+					image,
+					ae_uimage[j].page);
+	}
+	return j;
+out_err:
+	for (i = 0; i < j; i++)
+		free(ae_uimage[i].page, M_QAT);
+	return 0;
+}
+
+static int
+qat_uclo_map_ae(struct icp_qat_fw_loader_handle *handle, int max_ae)
+{
+	int i;
+	int ae = 0;
+	unsigned long ae_mask = handle->hal_handle->ae_mask;
+	unsigned long cfg_ae_mask = handle->cfg_ae_mask;
+	int mflag = 0;
+	struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle;
+
+	for_each_set_bit(ae, &ae_mask, max_ae)
+	{
+		if (!test_bit(ae, &cfg_ae_mask))
+			continue;
+
+		for (i = 0; i < obj_handle->uimage_num; i++) {
+			unsigned long ae_assigned =
+			    obj_handle->ae_uimage[i].img_ptr->ae_assigned;
+			if (!test_bit(ae, &ae_assigned))
+				continue;
+			mflag = 1;
+			if (qat_uclo_init_ae_data(obj_handle, ae, i))
+				return EINVAL;
+		}
+	}
+	if (!mflag) {
+		pr_err("QAT: uimage uses AE not set");
+		return EINVAL;
+	}
+	return 0;
+}
+
+static struct icp_qat_uof_strtable *
+qat_uclo_map_str_table(struct icp_qat_uclo_objhdr *obj_hdr,
+		       char *tab_name,
+		       struct icp_qat_uof_strtable *str_table)
+{
+	struct icp_qat_uof_chunkhdr *chunk_hdr;
+
+	chunk_hdr =
+	    qat_uclo_find_chunk((struct icp_qat_uof_objhdr *)obj_hdr->file_buff,
+				tab_name,
+				NULL);
+	if (chunk_hdr) {
+		int hdr_size;
+
+		memcpy(&str_table->table_len,
+		       obj_hdr->file_buff + chunk_hdr->offset,
+		       sizeof(str_table->table_len));
+		hdr_size = (char *)&str_table->strings - (char *)str_table;
+		str_table->strings = (uintptr_t)obj_hdr->file_buff +
+		    chunk_hdr->offset + hdr_size;
+		return str_table;
+	}
+	return NULL;
+}
+
+static void
+qat_uclo_map_initmem_table(struct icp_qat_uof_encap_obj *encap_uof_obj,
+			   struct icp_qat_uclo_init_mem_table *init_mem_tab)
+{
+	struct icp_qat_uof_chunkhdr *chunk_hdr;
+
+	chunk_hdr =
+	    qat_uclo_find_chunk(encap_uof_obj->obj_hdr, ICP_QAT_UOF_IMEM, NULL);
+	if (chunk_hdr) {
+		memmove(&init_mem_tab->entry_num,
+			encap_uof_obj->beg_uof + chunk_hdr->offset,
+			sizeof(unsigned int));
+		init_mem_tab->init_mem =
+		    (struct icp_qat_uof_initmem *)(encap_uof_obj->beg_uof +
+						   chunk_hdr->offset +
+						   sizeof(unsigned int));
+	}
+}
+
+static unsigned int
+qat_uclo_get_dev_type(struct icp_qat_fw_loader_handle *handle)
+{
+	switch (pci_get_device(GET_DEV(handle->accel_dev))) {
+	case ADF_DH895XCC_PCI_DEVICE_ID:
+		return ICP_QAT_AC_895XCC_DEV_TYPE;
+	case ADF_C62X_PCI_DEVICE_ID:
+		return ICP_QAT_AC_C62X_DEV_TYPE;
+	case ADF_C3XXX_PCI_DEVICE_ID:
+		return ICP_QAT_AC_C3XXX_DEV_TYPE;
+	case ADF_200XX_PCI_DEVICE_ID:
+		return ICP_QAT_AC_200XX_DEV_TYPE;
+	case ADF_C4XXX_PCI_DEVICE_ID:
+		return ICP_QAT_AC_C4XXX_DEV_TYPE;
+	default:
+		pr_err("QAT: unsupported device 0x%x\n",
+		       pci_get_device(GET_DEV(handle->accel_dev)));
+		return 0;
+	}
+}
+
+static int
+qat_uclo_check_uof_compat(struct icp_qat_uclo_objhandle *obj_handle)
+{
+	unsigned int maj_ver, prod_type = obj_handle->prod_type;
+
+	if (!(prod_type & obj_handle->encap_uof_obj.obj_hdr->ac_dev_type)) {
+		pr_err("QAT: UOF type 0x%x doesn't match with platform 0x%x\n",
+		       obj_handle->encap_uof_obj.obj_hdr->ac_dev_type,
+		       prod_type);
+		return EINVAL;
+	}
+	maj_ver = obj_handle->prod_rev & 0xff;
+	if (obj_handle->encap_uof_obj.obj_hdr->max_cpu_ver < maj_ver ||
+	    obj_handle->encap_uof_obj.obj_hdr->min_cpu_ver > maj_ver) {
+		pr_err("QAT: UOF maj_ver 0x%x out of range\n", maj_ver);
+		return EINVAL;
+	}
+	return 0;
+}
+
+static int
+qat_uclo_init_reg(struct icp_qat_fw_loader_handle *handle,
+		  unsigned char ae,
+		  unsigned char ctx_mask,
+		  enum icp_qat_uof_regtype reg_type,
+		  unsigned short reg_addr,
+		  unsigned int value)
+{
+	switch (reg_type) {
+	case ICP_GPA_ABS:
+	case ICP_GPB_ABS:
+		ctx_mask = 0;
+		return qat_hal_init_gpr(
+		    handle, ae, ctx_mask, reg_type, reg_addr, value);
+	case ICP_GPA_REL:
+	case ICP_GPB_REL:
+		return qat_hal_init_gpr(
+		    handle, ae, ctx_mask, reg_type, reg_addr, value);
+	case ICP_SR_ABS:
+	case ICP_DR_ABS:
+	case ICP_SR_RD_ABS:
+	case ICP_DR_RD_ABS:
+		ctx_mask = 0;
+		return qat_hal_init_rd_xfer(
+		    handle, ae, ctx_mask, reg_type, reg_addr, value);
+	case ICP_SR_REL:
+	case ICP_DR_REL:
+	case ICP_SR_RD_REL:
+	case ICP_DR_RD_REL:
+		return qat_hal_init_rd_xfer(
+		    handle, ae, ctx_mask, reg_type, reg_addr, value);
+	case ICP_SR_WR_ABS:
+	case ICP_DR_WR_ABS:
+		ctx_mask = 0;
+		return qat_hal_init_wr_xfer(
+		    handle, ae, ctx_mask, reg_type, reg_addr, value);
+	case ICP_SR_WR_REL:
+	case ICP_DR_WR_REL:
+		return qat_hal_init_wr_xfer(
+		    handle, ae, ctx_mask, reg_type, reg_addr, value);
+	case ICP_NEIGH_REL:
+		return qat_hal_init_nn(handle, ae, ctx_mask, reg_addr, value);
+	default:
+		pr_err("QAT: UOF uses unsupported reg type 0x%x\n", reg_type);
+		return EFAULT;
+	}
+	return 0;
+}
+
+static int
+qat_uclo_init_reg_sym(struct icp_qat_fw_loader_handle *handle,
+		      unsigned int ae,
+		      struct icp_qat_uclo_encapme *encap_ae)
+{
+	unsigned int i;
+	unsigned char ctx_mask;
+	struct icp_qat_uof_init_regsym *init_regsym;
+
+	if (ICP_QAT_CTX_MODE(encap_ae->img_ptr->ae_mode) ==
+	    ICP_QAT_UCLO_MAX_CTX)
+		ctx_mask = 0xff;
+	else
+		ctx_mask = 0x55;
+
+	for (i = 0; i < encap_ae->init_regsym_num; i++) {
+		unsigned int exp_res;
+
+		init_regsym = &encap_ae->init_regsym[i];
+		exp_res = init_regsym->value;
+		switch (init_regsym->init_type) {
+		case ICP_QAT_UOF_INIT_REG:
+			qat_uclo_init_reg(handle,
+					  ae,
+					  ctx_mask,
+					  (enum icp_qat_uof_regtype)
+					      init_regsym->reg_type,
+					  (unsigned short)init_regsym->reg_addr,
+					  exp_res);
+			break;
+		case ICP_QAT_UOF_INIT_REG_CTX:
+			/* check if ctx is appropriate for the ctxMode */
+			if (!((1 << init_regsym->ctx) & ctx_mask)) {
+				pr_err("QAT: invalid ctx num = 0x%x\n",
+				       init_regsym->ctx);
+				return EINVAL;
+			}
+			qat_uclo_init_reg(
+			    handle,
+			    ae,
+			    (unsigned char)(1 << init_regsym->ctx),
+			    (enum icp_qat_uof_regtype)init_regsym->reg_type,
+			    (unsigned short)init_regsym->reg_addr,
+			    exp_res);
+			break;
+		case ICP_QAT_UOF_INIT_EXPR:
+			pr_err("QAT: INIT_EXPR feature not supported\n");
+			return EINVAL;
+		case ICP_QAT_UOF_INIT_EXPR_ENDIAN_SWAP:
+			pr_err("QAT: INIT_EXPR_ENDIAN_SWAP not supported\n");
+			return EINVAL;
+		default:
+			break;
+		}
+	}
+	return 0;
+}
+
+static int
+qat_uclo_init_globals(struct icp_qat_fw_loader_handle *handle)
+{
+	struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle;
+	unsigned int s;
+	unsigned int ae = 0;
+	struct icp_qat_uclo_aedata *aed;
+	unsigned long ae_mask = handle->hal_handle->ae_mask;
+
+	if (obj_handle->global_inited)
+		return 0;
+	if (obj_handle->init_mem_tab.entry_num) {
+		if (qat_uclo_init_memory(handle)) {
+			pr_err("QAT: initialize memory failed\n");
+			return EINVAL;
+		}
+	}
+
+	for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num)
+	{
+		aed = &obj_handle->ae_data[ae];
+		for (s = 0; s < aed->slice_num; s++) {
+			if (!aed->ae_slices[s].encap_image)
+				continue;
+			if (qat_uclo_init_reg_sym(
+				handle, ae, aed->ae_slices[s].encap_image))
+				return EINVAL;
+		}
+	}
+	obj_handle->global_inited = 1;
+	return 0;
+}
+
+static int
+qat_hal_set_modes(struct icp_qat_fw_loader_handle *handle,
+		  struct icp_qat_uclo_objhandle *obj_handle,
+		  unsigned char ae,
+		  struct icp_qat_uof_image *uof_image)
+{
+	unsigned char nn_mode;
+	char ae_mode = 0;
+
+	ae_mode = (char)ICP_QAT_CTX_MODE(uof_image->ae_mode);
+	if (qat_hal_set_ae_ctx_mode(handle, ae, ae_mode)) {
+		pr_err("QAT: qat_hal_set_ae_ctx_mode error\n");
+		return EFAULT;
+	}
+
+	ae_mode = (char)ICP_QAT_SHARED_USTORE_MODE(uof_image->ae_mode);
+	qat_hal_set_ae_scs_mode(handle, ae, ae_mode);
+	nn_mode = ICP_QAT_NN_MODE(uof_image->ae_mode);
+
+	if (qat_hal_set_ae_nn_mode(handle, ae, nn_mode)) {
+		pr_err("QAT: qat_hal_set_ae_nn_mode error\n");
+		return EFAULT;
+	}
+	ae_mode = (char)ICP_QAT_LOC_MEM0_MODE(uof_image->ae_mode);
+	if (qat_hal_set_ae_lm_mode(handle, ae, ICP_LMEM0, ae_mode)) {
+		pr_err("QAT: qat_hal_set_ae_lm_mode LMEM0 error\n");
+		return EFAULT;
+	}
+	ae_mode = (char)ICP_QAT_LOC_MEM1_MODE(uof_image->ae_mode);
+	if (qat_hal_set_ae_lm_mode(handle, ae, ICP_LMEM1, ae_mode)) {
+		pr_err("QAT: qat_hal_set_ae_lm_mode LMEM1 error\n");
+		return EFAULT;
+	}
+	if (obj_handle->prod_type == ICP_QAT_AC_C4XXX_DEV_TYPE) {
+		ae_mode = (char)ICP_QAT_LOC_MEM2_MODE(uof_image->ae_mode);
+		if (qat_hal_set_ae_lm_mode(handle, ae, ICP_LMEM2, ae_mode)) {
+			pr_err("QAT: qat_hal_set_ae_lm_mode LMEM2 error\n");
+			return EFAULT;
+		}
+		ae_mode = (char)ICP_QAT_LOC_MEM3_MODE(uof_image->ae_mode);
+		if (qat_hal_set_ae_lm_mode(handle, ae, ICP_LMEM3, ae_mode)) {
+			pr_err("QAT: qat_hal_set_ae_lm_mode LMEM3 error\n");
+			return EFAULT;
+		}
+		ae_mode = (char)ICP_QAT_LOC_TINDEX_MODE(uof_image->ae_mode);
+		qat_hal_set_ae_tindex_mode(handle, ae, ae_mode);
+	}
+	return 0;
+}
+
+static int
+qat_uclo_set_ae_mode(struct icp_qat_fw_loader_handle *handle)
+{
+	int error;
+	unsigned char s;
+	unsigned char ae = 0;
+	struct icp_qat_uof_image *uof_image;
+	struct icp_qat_uclo_aedata *ae_data;
+	struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle;
+	unsigned long ae_mask = handle->hal_handle->ae_mask;
+
+	for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num)
+	{
+		unsigned long cfg_ae_mask = handle->cfg_ae_mask;
+
+		if (!test_bit(ae, &cfg_ae_mask))
+			continue;
+
+		ae_data = &obj_handle->ae_data[ae];
+		for (s = 0; s < min_t(unsigned int,
+				      ae_data->slice_num,
+				      ICP_QAT_UCLO_MAX_CTX);
+		     s++) {
+			if (!obj_handle->ae_data[ae].ae_slices[s].encap_image)
+				continue;
+			uof_image = ae_data->ae_slices[s].encap_image->img_ptr;
+			error = qat_hal_set_modes(handle,
+						  obj_handle,
+						  ae,
+						  uof_image);
+			if (error)
+				return error;
+		}
+	}
+	return 0;
+}
+
+static void
+qat_uclo_init_uword_num(struct icp_qat_fw_loader_handle *handle)
+{
+	struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle;
+	struct icp_qat_uclo_encapme *image;
+	int a;
+
+	for (a = 0; a < obj_handle->uimage_num; a++) {
+		image = &obj_handle->ae_uimage[a];
+		image->uwords_num =
+		    image->page->beg_addr_p + image->page->micro_words_num;
+	}
+}
+
+static int
+qat_uclo_parse_uof_obj(struct icp_qat_fw_loader_handle *handle)
+{
+	struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle;
+	unsigned int ae;
+
+	obj_handle->encap_uof_obj.beg_uof = obj_handle->obj_hdr->file_buff;
+	obj_handle->encap_uof_obj.obj_hdr =
+	    (struct icp_qat_uof_objhdr *)obj_handle->obj_hdr->file_buff;
+	obj_handle->uword_in_bytes = 6;
+	obj_handle->prod_type = qat_uclo_get_dev_type(handle);
+	obj_handle->prod_rev =
+	    PID_MAJOR_REV | (PID_MINOR_REV & handle->hal_handle->revision_id);
+	if (qat_uclo_check_uof_compat(obj_handle)) {
+		pr_err("QAT: UOF incompatible\n");
+		return EINVAL;
+	}
+	obj_handle->uword_buf = malloc(UWORD_CPYBUF_SIZE * sizeof(uint64_t),
+				       M_QAT,
+				       M_WAITOK | M_ZERO);
+	obj_handle->ustore_phy_size =
+	    (obj_handle->prod_type == ICP_QAT_AC_C4XXX_DEV_TYPE) ? 0x2000 :
+								   0x4000;
+	if (!obj_handle->obj_hdr->file_buff ||
+	    !qat_uclo_map_str_table(obj_handle->obj_hdr,
+				    ICP_QAT_UOF_STRT,
+				    &obj_handle->str_table)) {
+		pr_err("QAT: UOF doesn't have effective images\n");
+		goto out_err;
+	}
+	obj_handle->uimage_num =
+	    qat_uclo_map_uimage(obj_handle,
+				obj_handle->ae_uimage,
+				ICP_QAT_UCLO_MAX_AE * ICP_QAT_UCLO_MAX_CTX);
+	if (!obj_handle->uimage_num)
+		goto out_err;
+	if (qat_uclo_map_ae(handle, handle->hal_handle->ae_max_num)) {
+		pr_err("QAT: Bad object\n");
+		goto out_check_uof_aemask_err;
+	}
+	qat_uclo_init_uword_num(handle);
+	qat_uclo_map_initmem_table(&obj_handle->encap_uof_obj,
+				   &obj_handle->init_mem_tab);
+	if (qat_uclo_set_ae_mode(handle))
+		goto out_check_uof_aemask_err;
+	return 0;
+out_check_uof_aemask_err:
+	for (ae = 0; ae < obj_handle->uimage_num; ae++)
+		free(obj_handle->ae_uimage[ae].page, M_QAT);
+out_err:
+	free(obj_handle->uword_buf, M_QAT);
+	obj_handle->uword_buf = NULL;
+	return EFAULT;
+}
+
+static int
+qat_uclo_map_suof_file_hdr(const struct icp_qat_fw_loader_handle *handle,
+			   const struct icp_qat_suof_filehdr *suof_ptr,
+			   int suof_size)
+{
+	unsigned int check_sum = 0;
+	unsigned int min_ver_offset = 0;
+	struct icp_qat_suof_handle *suof_handle = handle->sobj_handle;
+
+	suof_handle->file_id = ICP_QAT_SUOF_FID;
+	suof_handle->suof_buf = (const char *)suof_ptr;
+	suof_handle->suof_size = suof_size;
+	min_ver_offset =
+	    suof_size - offsetof(struct icp_qat_suof_filehdr, min_ver);
+	check_sum = qat_uclo_calc_str_checksum((const char *)&suof_ptr->min_ver,
+					       min_ver_offset);
+	if (check_sum != suof_ptr->check_sum) {
+		pr_err("QAT: incorrect SUOF checksum\n");
+		return EINVAL;
+	}
+	suof_handle->check_sum = suof_ptr->check_sum;
+	suof_handle->min_ver = suof_ptr->min_ver;
+	suof_handle->maj_ver = suof_ptr->maj_ver;
+	suof_handle->fw_type = suof_ptr->fw_type;
+	return 0;
+}
+
+static void
+qat_uclo_map_simg(struct icp_qat_suof_handle *suof_handle,
+		  struct icp_qat_suof_img_hdr *suof_img_hdr,
+		  struct icp_qat_suof_chunk_hdr *suof_chunk_hdr)
+{
+	const struct icp_qat_simg_ae_mode *ae_mode;
+	struct icp_qat_suof_objhdr *suof_objhdr;
+
+	suof_img_hdr->simg_buf =
+	    (suof_handle->suof_buf + suof_chunk_hdr->offset +
+	     sizeof(*suof_objhdr));
+	suof_img_hdr->simg_len =
+	    ((struct icp_qat_suof_objhdr *)(uintptr_t)(suof_handle->suof_buf +
+						       suof_chunk_hdr->offset))
+		->img_length;
+
+	suof_img_hdr->css_header = suof_img_hdr->simg_buf;
+	suof_img_hdr->css_key =
+	    (suof_img_hdr->css_header + sizeof(struct icp_qat_css_hdr));
+	suof_img_hdr->css_signature = suof_img_hdr->css_key +
+	    ICP_QAT_CSS_FWSK_MODULUS_LEN + ICP_QAT_CSS_FWSK_EXPONENT_LEN;
+	suof_img_hdr->css_simg =
+	    suof_img_hdr->css_signature + ICP_QAT_CSS_SIGNATURE_LEN;
+
+	ae_mode = (const struct icp_qat_simg_ae_mode *)(suof_img_hdr->css_simg);
+	suof_img_hdr->ae_mask = ae_mode->ae_mask;
+	suof_img_hdr->simg_name = (unsigned long)&ae_mode->simg_name;
+	suof_img_hdr->appmeta_data = (unsigned long)&ae_mode->appmeta_data;
+	suof_img_hdr->fw_type = ae_mode->fw_type;
+}
+
+static void
+qat_uclo_map_suof_symobjs(struct icp_qat_suof_handle *suof_handle,
+			  struct icp_qat_suof_chunk_hdr *suof_chunk_hdr)
+{
+	char **sym_str = (char **)&suof_handle->sym_str;
+	unsigned int *sym_size = &suof_handle->sym_size;
+	struct icp_qat_suof_strtable *str_table_obj;
+
+	*sym_size = *(unsigned int *)(uintptr_t)(suof_chunk_hdr->offset +
+						 suof_handle->suof_buf);
+	*sym_str =
+	    (char *)(uintptr_t)(suof_handle->suof_buf + suof_chunk_hdr->offset +
+				sizeof(str_table_obj->tab_length));
+}
+
+static int
+qat_uclo_check_simg_compat(struct icp_qat_fw_loader_handle *handle,
+			   struct icp_qat_suof_img_hdr *img_hdr)
+{
+	const struct icp_qat_simg_ae_mode *img_ae_mode = NULL;
+	unsigned int prod_rev, maj_ver, prod_type;
+
+	prod_type = qat_uclo_get_dev_type(handle);
+	img_ae_mode = (const struct icp_qat_simg_ae_mode *)img_hdr->css_simg;
+	prod_rev =
+	    PID_MAJOR_REV | (PID_MINOR_REV & handle->hal_handle->revision_id);
+	if (img_ae_mode->dev_type != prod_type) {
+		pr_err("QAT: incompatible product type %x\n",
+		       img_ae_mode->dev_type);
+		return EINVAL;
+	}
+	maj_ver = prod_rev & 0xff;
+	if (maj_ver > img_ae_mode->devmax_ver ||
+	    maj_ver < img_ae_mode->devmin_ver) {
+		pr_err("QAT: incompatible device maj_ver 0x%x\n", maj_ver);
+		return EINVAL;
+	}
+	return 0;
+}
+
+static void
+qat_uclo_del_suof(struct icp_qat_fw_loader_handle *handle)
+{
+	struct icp_qat_suof_handle *sobj_handle = handle->sobj_handle;
+
+	free(sobj_handle->img_table.simg_hdr, M_QAT);
+	sobj_handle->img_table.simg_hdr = NULL;
+	free(handle->sobj_handle, M_QAT);
+	handle->sobj_handle = NULL;
+}
+
+static void
+qat_uclo_tail_img(struct icp_qat_suof_img_hdr *suof_img_hdr,
+		  unsigned int img_id,
+		  unsigned int num_simgs)
+{
+	struct icp_qat_suof_img_hdr img_header;
+
+	if ((img_id != num_simgs - 1) && img_id != ICP_QAT_UCLO_MAX_AE) {
+		memcpy(&img_header,
+		       &suof_img_hdr[num_simgs - 1],
+		       sizeof(*suof_img_hdr));
+		memcpy(&suof_img_hdr[num_simgs - 1],
+		       &suof_img_hdr[img_id],
+		       sizeof(*suof_img_hdr));
+		memcpy(&suof_img_hdr[img_id],
+		       &img_header,
+		       sizeof(*suof_img_hdr));
+	}
+}
+
+static int
+qat_uclo_map_suof(struct icp_qat_fw_loader_handle *handle,
+		  const struct icp_qat_suof_filehdr *suof_ptr,
+		  int suof_size)
+{
+	struct icp_qat_suof_handle *suof_handle = handle->sobj_handle;
+	struct icp_qat_suof_chunk_hdr *suof_chunk_hdr = NULL;
+	struct icp_qat_suof_img_hdr *suof_img_hdr = NULL;
+	int ret = 0, ae0_img = ICP_QAT_UCLO_MAX_AE;
+	unsigned int i = 0;
+	struct icp_qat_suof_img_hdr img_header;
+
+	if (!suof_ptr || suof_size == 0) {
+		pr_err("QAT: input parameter SUOF pointer/size is NULL\n");
+		return EINVAL;
+	}
+	if (qat_uclo_check_suof_format(suof_ptr))
+		return EINVAL;
+	ret = qat_uclo_map_suof_file_hdr(handle, suof_ptr, suof_size);
+	if (ret)
+		return ret;
+	suof_chunk_hdr = (struct icp_qat_suof_chunk_hdr *)((uintptr_t)suof_ptr +
+							   sizeof(*suof_ptr));
+
+	qat_uclo_map_suof_symobjs(suof_handle, suof_chunk_hdr);
+	suof_handle->img_table.num_simgs = suof_ptr->num_chunks - 1;
+
+	if (suof_handle->img_table.num_simgs != 0) {
+		suof_img_hdr = malloc(suof_handle->img_table.num_simgs *
+					  sizeof(img_header),
+				      M_QAT,
+				      M_WAITOK | M_ZERO);
+		suof_handle->img_table.simg_hdr = suof_img_hdr;
+	}
+
+	for (i = 0; i < suof_handle->img_table.num_simgs; i++) {
+		qat_uclo_map_simg(handle->sobj_handle,
+				  &suof_img_hdr[i],
+				  &suof_chunk_hdr[1 + i]);
+		ret = qat_uclo_check_simg_compat(handle, &suof_img_hdr[i]);
+		if (ret)
+			return ret;
+		suof_img_hdr[i].ae_mask &= handle->cfg_ae_mask;
+		if ((suof_img_hdr[i].ae_mask & 0x1) != 0)
+			ae0_img = i;
+	}
+	qat_uclo_tail_img(suof_img_hdr,
+			  ae0_img,
+			  suof_handle->img_table.num_simgs);
+	return 0;
+}
+
+#define ADD_ADDR(high, low) ((((uint64_t)high) << 32) + (low))
+#define BITS_IN_DWORD 32
+
+static int
+qat_uclo_auth_fw(struct icp_qat_fw_loader_handle *handle,
+		 struct icp_qat_fw_auth_desc *desc)
+{
+	unsigned int fcu_sts, mem_cfg_err, retry = 0;
+	unsigned int fcu_ctl_csr, fcu_sts_csr;
+	unsigned int fcu_dram_hi_csr, fcu_dram_lo_csr;
+	u64 bus_addr;
+
+	bus_addr = ADD_ADDR(desc->css_hdr_high, desc->css_hdr_low) -
+	    sizeof(struct icp_qat_auth_chunk);
+	if (IS_QAT_GEN3(pci_get_device(GET_DEV(handle->accel_dev)))) {
+		fcu_ctl_csr = FCU_CONTROL_C4XXX;
+		fcu_sts_csr = FCU_STATUS_C4XXX;
+		fcu_dram_hi_csr = FCU_DRAM_ADDR_HI_C4XXX;
+		fcu_dram_lo_csr = FCU_DRAM_ADDR_LO_C4XXX;
+	} else
{ + fcu_ctl_csr = FCU_CONTROL; + fcu_sts_csr = FCU_STATUS; + fcu_dram_hi_csr = FCU_DRAM_ADDR_HI; + fcu_dram_lo_csr = FCU_DRAM_ADDR_LO; + } + SET_FCU_CSR(handle, fcu_dram_hi_csr, (bus_addr >> BITS_IN_DWORD)); + SET_FCU_CSR(handle, fcu_dram_lo_csr, bus_addr); + SET_FCU_CSR(handle, fcu_ctl_csr, FCU_CTRL_CMD_AUTH); + + do { + pause_ms("adfstop", FW_AUTH_WAIT_PERIOD); + fcu_sts = GET_FCU_CSR(handle, fcu_sts_csr); + if ((fcu_sts & FCU_AUTH_STS_MASK) == FCU_STS_VERI_FAIL) + goto auth_fail; + if (((fcu_sts >> FCU_STS_AUTHFWLD_POS) & 0x1)) + if ((fcu_sts & FCU_AUTH_STS_MASK) == FCU_STS_VERI_DONE) + return 0; + } while (retry++ < FW_AUTH_MAX_RETRY); +auth_fail: + pr_err("QAT: authentication error (FCU_STATUS = 0x%x),retry = %d\n", + fcu_sts & FCU_AUTH_STS_MASK, + retry); + if (IS_QAT_GEN3(pci_get_device(GET_DEV(handle->accel_dev)))) { + mem_cfg_err = + (GET_FCU_CSR(handle, FCU_STATUS1_C4XXX) & MEM_CFG_ERR_BIT); + if (mem_cfg_err) + pr_err("QAT: MEM_CFG_ERR\n"); + } + return EINVAL; +} + +static int +qat_uclo_simg_alloc(struct icp_qat_fw_loader_handle *handle, + struct icp_firml_dram_desc *dram_desc, + unsigned int size) +{ + int ret; + + ret = bus_dma_mem_create(&dram_desc->dram_mem, + handle->accel_dev->dma_tag, + 1, + BUS_SPACE_MAXADDR, + size, + 0); + if (ret != 0) + return ret; + dram_desc->dram_base_addr_v = dram_desc->dram_mem.dma_vaddr; + dram_desc->dram_bus_addr = dram_desc->dram_mem.dma_baddr; + dram_desc->dram_size = size; + return 0; +} + +static void +qat_uclo_simg_free(struct icp_qat_fw_loader_handle *handle, + struct icp_firml_dram_desc *dram_desc) +{ + if (handle && dram_desc && dram_desc->dram_base_addr_v) + bus_dma_mem_free(&dram_desc->dram_mem); + + if (dram_desc) + explicit_bzero(dram_desc, sizeof(*dram_desc)); +} + +static int +qat_uclo_map_auth_fw(struct icp_qat_fw_loader_handle *handle, + const char *image, + unsigned int size, + struct icp_firml_dram_desc *img_desc, + struct icp_qat_fw_auth_desc **desc) +{ + const struct icp_qat_css_hdr *css_hdr = + 
(const struct icp_qat_css_hdr *)image; + struct icp_qat_fw_auth_desc *auth_desc; + struct icp_qat_auth_chunk *auth_chunk; + u64 virt_addr, bus_addr, virt_base; + unsigned int length, simg_offset = sizeof(*auth_chunk); + + if (size > (ICP_QAT_AE_IMG_OFFSET + ICP_QAT_CSS_MAX_IMAGE_LEN)) { + pr_err("QAT: error, input image size overflow %d\n", size); + return EINVAL; + } + length = (css_hdr->fw_type == CSS_AE_FIRMWARE) ? + ICP_QAT_CSS_AE_SIMG_LEN + simg_offset : + size + ICP_QAT_CSS_FWSK_PAD_LEN + simg_offset; + if (qat_uclo_simg_alloc(handle, img_desc, length)) { + pr_err("QAT: error, allocate continuous dram fail\n"); + return -ENOMEM; + } + + auth_chunk = img_desc->dram_base_addr_v; + auth_chunk->chunk_size = img_desc->dram_size; + auth_chunk->chunk_bus_addr = img_desc->dram_bus_addr; + virt_base = (uintptr_t)img_desc->dram_base_addr_v + simg_offset; + bus_addr = img_desc->dram_bus_addr + simg_offset; + auth_desc = img_desc->dram_base_addr_v; + auth_desc->css_hdr_high = (unsigned int)(bus_addr >> BITS_IN_DWORD); + auth_desc->css_hdr_low = (unsigned int)bus_addr; + virt_addr = virt_base; + + memcpy((void *)(uintptr_t)virt_addr, image, sizeof(*css_hdr)); + /* pub key */ + bus_addr = ADD_ADDR(auth_desc->css_hdr_high, auth_desc->css_hdr_low) + + sizeof(*css_hdr); + virt_addr = virt_addr + sizeof(*css_hdr); + + auth_desc->fwsk_pub_high = (unsigned int)(bus_addr >> BITS_IN_DWORD); + auth_desc->fwsk_pub_low = (unsigned int)bus_addr; + + memcpy((void *)(uintptr_t)virt_addr, + (const void *)(image + sizeof(*css_hdr)), + ICP_QAT_CSS_FWSK_MODULUS_LEN); + /* padding */ + explicit_bzero((void *)(uintptr_t)(virt_addr + + ICP_QAT_CSS_FWSK_MODULUS_LEN), + ICP_QAT_CSS_FWSK_PAD_LEN); + + /* exponent */ + memcpy((void *)(uintptr_t)(virt_addr + ICP_QAT_CSS_FWSK_MODULUS_LEN + + ICP_QAT_CSS_FWSK_PAD_LEN), + (const void *)(image + sizeof(*css_hdr) + + ICP_QAT_CSS_FWSK_MODULUS_LEN), + sizeof(unsigned int)); + + /* signature */ + bus_addr = ADD_ADDR(auth_desc->fwsk_pub_high, 
auth_desc->fwsk_pub_low) + + ICP_QAT_CSS_FWSK_PUB_LEN; + virt_addr = virt_addr + ICP_QAT_CSS_FWSK_PUB_LEN; + auth_desc->signature_high = (unsigned int)(bus_addr >> BITS_IN_DWORD); + auth_desc->signature_low = (unsigned int)bus_addr; + + memcpy((void *)(uintptr_t)virt_addr, + (const void *)(image + sizeof(*css_hdr) + + ICP_QAT_CSS_FWSK_MODULUS_LEN + + ICP_QAT_CSS_FWSK_EXPONENT_LEN), + ICP_QAT_CSS_SIGNATURE_LEN); + + bus_addr = + ADD_ADDR(auth_desc->signature_high, auth_desc->signature_low) + + ICP_QAT_CSS_SIGNATURE_LEN; + virt_addr += ICP_QAT_CSS_SIGNATURE_LEN; + + auth_desc->img_high = (unsigned int)(bus_addr >> BITS_IN_DWORD); + auth_desc->img_low = (unsigned int)bus_addr; + auth_desc->img_len = size - ICP_QAT_AE_IMG_OFFSET; + memcpy((void *)(uintptr_t)virt_addr, + (const void *)(image + ICP_QAT_AE_IMG_OFFSET), + auth_desc->img_len); + virt_addr = virt_base; + /* AE firmware */ + if (((struct icp_qat_css_hdr *)(uintptr_t)virt_addr)->fw_type == + CSS_AE_FIRMWARE) { + auth_desc->img_ae_mode_data_high = auth_desc->img_high; + auth_desc->img_ae_mode_data_low = auth_desc->img_low; + bus_addr = ADD_ADDR(auth_desc->img_ae_mode_data_high, + auth_desc->img_ae_mode_data_low) + + sizeof(struct icp_qat_simg_ae_mode); + + auth_desc->img_ae_init_data_high = + (unsigned int)(bus_addr >> BITS_IN_DWORD); + auth_desc->img_ae_init_data_low = (unsigned int)bus_addr; + bus_addr += ICP_QAT_SIMG_AE_INIT_SEQ_LEN; + auth_desc->img_ae_insts_high = + (unsigned int)(bus_addr >> BITS_IN_DWORD); + auth_desc->img_ae_insts_low = (unsigned int)bus_addr; + virt_addr += sizeof(struct icp_qat_css_hdr) + + ICP_QAT_CSS_FWSK_PUB_LEN + ICP_QAT_CSS_SIGNATURE_LEN; + auth_desc->ae_mask = + ((struct icp_qat_simg_ae_mode *)virt_addr)->ae_mask & + handle->cfg_ae_mask; + } else { + auth_desc->img_ae_insts_high = auth_desc->img_high; + auth_desc->img_ae_insts_low = auth_desc->img_low; + } + *desc = auth_desc; + return 0; +} + +static int +qat_uclo_load_fw(struct icp_qat_fw_loader_handle *handle, + struct 
icp_qat_fw_auth_desc *desc) +{ + unsigned int i = 0; + unsigned int fcu_sts; + unsigned int fcu_sts_csr, fcu_ctl_csr; + unsigned int loaded_aes = FCU_LOADED_AE_POS; + unsigned long ae_mask = handle->hal_handle->ae_mask; + + if (IS_QAT_GEN3(pci_get_device(GET_DEV(handle->accel_dev)))) { + fcu_ctl_csr = FCU_CONTROL_C4XXX; + fcu_sts_csr = FCU_STATUS_C4XXX; + + } else { + fcu_ctl_csr = FCU_CONTROL; + fcu_sts_csr = FCU_STATUS; + } + + for_each_set_bit(i, &ae_mask, handle->hal_handle->ae_max_num) + { + int retry = 0; + + if (!((desc->ae_mask >> i) & 0x1)) + continue; + if (qat_hal_check_ae_active(handle, i)) { + pr_err("QAT: AE %d is active\n", i); + return EINVAL; + } + SET_FCU_CSR(handle, + fcu_ctl_csr, + (FCU_CTRL_CMD_LOAD | (i << FCU_CTRL_AE_POS))); + + do { + pause_ms("adfstop", FW_AUTH_WAIT_PERIOD); + fcu_sts = GET_FCU_CSR(handle, fcu_sts_csr); + if ((fcu_sts & FCU_AUTH_STS_MASK) == + FCU_STS_LOAD_DONE) { + loaded_aes = IS_QAT_GEN3(pci_get_device( + GET_DEV(handle->accel_dev))) ? + GET_FCU_CSR(handle, FCU_AE_LOADED_C4XXX) : + (fcu_sts >> FCU_LOADED_AE_POS); + if (loaded_aes & (1 << i)) + break; + } + } while (retry++ < FW_AUTH_MAX_RETRY); + if (retry > FW_AUTH_MAX_RETRY) { + pr_err("QAT: firmware load failed timeout %x\n", retry); + return EINVAL; + } + } + return 0; +} + +static int +qat_uclo_map_suof_obj(struct icp_qat_fw_loader_handle *handle, + const void *addr_ptr, + int mem_size) +{ + struct icp_qat_suof_handle *suof_handle; + + suof_handle = malloc(sizeof(*suof_handle), M_QAT, M_WAITOK | M_ZERO); + handle->sobj_handle = suof_handle; + if (qat_uclo_map_suof(handle, addr_ptr, mem_size)) { + qat_uclo_del_suof(handle); + pr_err("QAT: map SUOF failed\n"); + return EINVAL; + } + return 0; +} + +int +qat_uclo_wr_mimage(struct icp_qat_fw_loader_handle *handle, + const void *addr_ptr, + int mem_size) +{ + struct icp_qat_fw_auth_desc *desc = NULL; + struct icp_firml_dram_desc img_desc; + int status = 0; + + if (handle->fw_auth) { + status = qat_uclo_map_auth_fw( + 
handle, addr_ptr, mem_size, &img_desc, &desc); + if (!status) + status = qat_uclo_auth_fw(handle, desc); + + qat_uclo_simg_free(handle, &img_desc); + } else { + if (pci_get_device(GET_DEV(handle->accel_dev)) == + ADF_C3XXX_PCI_DEVICE_ID) { + pr_err("QAT: C3XXX doesn't support unsigned MMP\n"); + return EINVAL; + } + status = qat_uclo_wr_sram_by_words(handle, + handle->hal_sram_offset, + addr_ptr, + mem_size); + } + return status; +} + +static int +qat_uclo_map_uof_obj(struct icp_qat_fw_loader_handle *handle, + const void *addr_ptr, + int mem_size) +{ + struct icp_qat_uof_filehdr *filehdr; + struct icp_qat_uclo_objhandle *objhdl; + + objhdl = malloc(sizeof(*objhdl), M_QAT, M_WAITOK | M_ZERO); + objhdl->obj_buf = malloc(mem_size, M_QAT, M_WAITOK); + bcopy(addr_ptr, objhdl->obj_buf, mem_size); + filehdr = (struct icp_qat_uof_filehdr *)objhdl->obj_buf; + if (qat_uclo_check_uof_format(filehdr)) + goto out_objhdr_err; + objhdl->obj_hdr = qat_uclo_map_chunk((char *)objhdl->obj_buf, + filehdr, + ICP_QAT_UOF_OBJS); + if (!objhdl->obj_hdr) { + pr_err("QAT: object file chunk is null\n"); + goto out_objhdr_err; + } + handle->obj_handle = objhdl; + if (qat_uclo_parse_uof_obj(handle)) + goto out_overlay_obj_err; + return 0; + +out_overlay_obj_err: + handle->obj_handle = NULL; + free(objhdl->obj_hdr, M_QAT); +out_objhdr_err: + free(objhdl->obj_buf, M_QAT); + free(objhdl, M_QAT); + return ENOMEM; +} + +static int +qat_uclo_map_mof_file_hdr(struct icp_qat_fw_loader_handle *handle, + const struct icp_qat_mof_file_hdr *mof_ptr, + u32 mof_size) +{ + unsigned int checksum = 0; + unsigned int min_ver_offset = 0; + struct icp_qat_mof_handle *mobj_handle = handle->mobj_handle; + + mobj_handle->file_id = ICP_QAT_MOF_FID; + mobj_handle->mof_buf = (const char *)mof_ptr; + mobj_handle->mof_size = mof_size; + + min_ver_offset = + mof_size - offsetof(struct icp_qat_mof_file_hdr, min_ver); + checksum = qat_uclo_calc_str_checksum((const char *)&mof_ptr->min_ver, + min_ver_offset); + if (checksum 
!= mof_ptr->checksum) { + pr_err("QAT: incorrect MOF checksum\n"); + return EINVAL; + } + mobj_handle->checksum = mof_ptr->checksum; + mobj_handle->min_ver = mof_ptr->min_ver; + mobj_handle->maj_ver = mof_ptr->maj_ver; + return 0; +} + +void +qat_uclo_del_mof(struct icp_qat_fw_loader_handle *handle) +{ + struct icp_qat_mof_handle *mobj_handle = handle->mobj_handle; + + free(mobj_handle->obj_table.obj_hdr, M_QAT); + mobj_handle->obj_table.obj_hdr = NULL; + free(handle->mobj_handle, M_QAT); + handle->mobj_handle = NULL; +} + +static int +qat_uclo_seek_obj_inside_mof(struct icp_qat_mof_handle *mobj_handle, + const char *obj_name, + const char **obj_ptr, + unsigned int *obj_size) +{ + unsigned int i; + struct icp_qat_mof_objhdr *obj_hdr = mobj_handle->obj_table.obj_hdr; + + for (i = 0; i < mobj_handle->obj_table.num_objs; i++) { + if (!strncmp(obj_hdr[i].obj_name, + obj_name, + ICP_QAT_SUOF_OBJ_NAME_LEN)) { + *obj_ptr = obj_hdr[i].obj_buf; + *obj_size = obj_hdr[i].obj_size; + break; + } + } + + if (i >= mobj_handle->obj_table.num_objs) { + pr_err("QAT: object %s is not found inside MOF\n", obj_name); + return EFAULT; + } + return 0; +} + +static int +qat_uclo_map_obj_from_mof(struct icp_qat_mof_handle *mobj_handle, + struct icp_qat_mof_objhdr *mobj_hdr, + struct icp_qat_mof_obj_chunkhdr *obj_chunkhdr) +{ + if ((strncmp((char *)obj_chunkhdr->chunk_id, + ICP_QAT_UOF_IMAG, + ICP_QAT_MOF_OBJ_CHUNKID_LEN)) == 0) { + mobj_hdr->obj_buf = + (const char *)((unsigned long)obj_chunkhdr->offset + + mobj_handle->uobjs_hdr); + } else if ((strncmp((char *)(obj_chunkhdr->chunk_id), + ICP_QAT_SUOF_IMAG, + ICP_QAT_MOF_OBJ_CHUNKID_LEN)) == 0) { + mobj_hdr->obj_buf = + (const char *)((unsigned long)obj_chunkhdr->offset + + mobj_handle->sobjs_hdr); + + } else { + pr_err("QAT: unsupported chunk id\n"); + return EINVAL; + } + mobj_hdr->obj_size = (unsigned int)obj_chunkhdr->size; + mobj_hdr->obj_name = + (char *)(obj_chunkhdr->name + mobj_handle->sym_str); + return 0; +} + +static int 
+qat_uclo_map_objs_from_mof(struct icp_qat_mof_handle *mobj_handle) +{ + struct icp_qat_mof_objhdr *mof_obj_hdr; + const struct icp_qat_mof_obj_hdr *uobj_hdr; + const struct icp_qat_mof_obj_hdr *sobj_hdr; + struct icp_qat_mof_obj_chunkhdr *uobj_chunkhdr; + struct icp_qat_mof_obj_chunkhdr *sobj_chunkhdr; + unsigned int uobj_chunk_num = 0, sobj_chunk_num = 0; + unsigned int *valid_chunks = 0; + int ret, i; + + uobj_hdr = (const struct icp_qat_mof_obj_hdr *)mobj_handle->uobjs_hdr; + sobj_hdr = (const struct icp_qat_mof_obj_hdr *)mobj_handle->sobjs_hdr; + if (uobj_hdr) + uobj_chunk_num = uobj_hdr->num_chunks; + if (sobj_hdr) + sobj_chunk_num = sobj_hdr->num_chunks; + + mof_obj_hdr = (struct icp_qat_mof_objhdr *) + malloc((uobj_chunk_num + sobj_chunk_num) * sizeof(*mof_obj_hdr), + M_QAT, + M_WAITOK | M_ZERO); + + mobj_handle->obj_table.obj_hdr = mof_obj_hdr; + valid_chunks = &mobj_handle->obj_table.num_objs; + uobj_chunkhdr = + (struct icp_qat_mof_obj_chunkhdr *)((uintptr_t)uobj_hdr + + sizeof(*uobj_hdr)); + sobj_chunkhdr = + (struct icp_qat_mof_obj_chunkhdr *)((uintptr_t)sobj_hdr + + sizeof(*sobj_hdr)); + + /* map uof objects */ + for (i = 0; i < uobj_chunk_num; i++) { + ret = qat_uclo_map_obj_from_mof(mobj_handle, + &mof_obj_hdr[*valid_chunks], + &uobj_chunkhdr[i]); + if (ret) + return ret; + (*valid_chunks)++; + } + + /* map suof objects */ + for (i = 0; i < sobj_chunk_num; i++) { + ret = qat_uclo_map_obj_from_mof(mobj_handle, + &mof_obj_hdr[*valid_chunks], + &sobj_chunkhdr[i]); + if (ret) + return ret; + (*valid_chunks)++; + } + + if ((uobj_chunk_num + sobj_chunk_num) != *valid_chunks) { + pr_err("QAT: inconsistent UOF/SUOF chunk amount\n"); + return EINVAL; + } + return 0; +} + +static void +qat_uclo_map_mof_symobjs(struct icp_qat_mof_handle *mobj_handle, + struct icp_qat_mof_chunkhdr *mof_chunkhdr) +{ + char **sym_str = (char **)&mobj_handle->sym_str; + unsigned int *sym_size = &mobj_handle->sym_size; + struct icp_qat_mof_str_table *str_table_obj; + + *sym_size = 
*(unsigned int *)(uintptr_t)(mof_chunkhdr->offset + + mobj_handle->mof_buf); + *sym_str = + (char *)(uintptr_t)(mobj_handle->mof_buf + mof_chunkhdr->offset + + sizeof(str_table_obj->tab_len)); +} + +static void +qat_uclo_map_mof_chunk(struct icp_qat_mof_handle *mobj_handle, + struct icp_qat_mof_chunkhdr *mof_chunkhdr) +{ + if (!strncmp(mof_chunkhdr->chunk_id, + ICP_QAT_MOF_SYM_OBJS, + ICP_QAT_MOF_OBJ_ID_LEN)) + qat_uclo_map_mof_symobjs(mobj_handle, mof_chunkhdr); + else if (!strncmp(mof_chunkhdr->chunk_id, + ICP_QAT_UOF_OBJS, + ICP_QAT_MOF_OBJ_ID_LEN)) + mobj_handle->uobjs_hdr = + mobj_handle->mof_buf + (unsigned long)mof_chunkhdr->offset; + else if (!strncmp(mof_chunkhdr->chunk_id, + ICP_QAT_SUOF_OBJS, + ICP_QAT_MOF_OBJ_ID_LEN)) + mobj_handle->sobjs_hdr = + mobj_handle->mof_buf + (unsigned long)mof_chunkhdr->offset; +} + +static int +qat_uclo_check_mof_format(const struct icp_qat_mof_file_hdr *mof_hdr) +{ + int maj = mof_hdr->maj_ver & 0xff; + int min = mof_hdr->min_ver & 0xff; + + if (mof_hdr->file_id != ICP_QAT_MOF_FID) { + pr_err("QAT: invalid header 0x%x\n", mof_hdr->file_id); + return EINVAL; + } + + if (mof_hdr->num_chunks <= 0x1) { + pr_err("QAT: MOF chunk amount is incorrect\n"); + return EINVAL; + } + if (maj != ICP_QAT_MOF_MAJVER || min != ICP_QAT_MOF_MINVER) { + pr_err("QAT: bad MOF version, major 0x%x, minor 0x%x\n", + maj, + min); + return EINVAL; + } + return 0; +} + +static int +qat_uclo_map_mof_obj(struct icp_qat_fw_loader_handle *handle, + const struct icp_qat_mof_file_hdr *mof_ptr, + u32 mof_size, + const char *obj_name, + const char **obj_ptr, + unsigned int *obj_size) +{ + struct icp_qat_mof_handle *mobj_handle; + struct icp_qat_mof_chunkhdr *mof_chunkhdr; + unsigned short chunks_num; + int ret; + unsigned int i; + + if (mof_ptr->file_id == ICP_QAT_UOF_FID || + mof_ptr->file_id == ICP_QAT_SUOF_FID) { + if (obj_ptr) + *obj_ptr = (const char *)mof_ptr; + if (obj_size) + *obj_size = (unsigned int)mof_size; + return 0; + } + if 
(qat_uclo_check_mof_format(mof_ptr)) + return EINVAL; + mobj_handle = malloc(sizeof(*mobj_handle), M_QAT, M_WAITOK | M_ZERO); + handle->mobj_handle = mobj_handle; + ret = qat_uclo_map_mof_file_hdr(handle, mof_ptr, mof_size); + if (ret) + return ret; + mof_chunkhdr = (struct icp_qat_mof_chunkhdr *)((uintptr_t)mof_ptr + + sizeof(*mof_ptr)); + chunks_num = mof_ptr->num_chunks; + /*Parse MOF file chunks*/ + for (i = 0; i < chunks_num; i++) + qat_uclo_map_mof_chunk(mobj_handle, &mof_chunkhdr[i]); + /*All sym_objs uobjs and sobjs should be available*/ + if (!mobj_handle->sym_str || + (!mobj_handle->uobjs_hdr && !mobj_handle->sobjs_hdr)) + return EINVAL; + ret = qat_uclo_map_objs_from_mof(mobj_handle); + if (ret) + return ret; + /*Seek specified uof object in MOF*/ + ret = qat_uclo_seek_obj_inside_mof(mobj_handle, + obj_name, + obj_ptr, + obj_size); + if (ret) + return ret; + return 0; +} + +int +qat_uclo_map_obj(struct icp_qat_fw_loader_handle *handle, + const void *addr_ptr, + u32 mem_size, + const char *obj_name) +{ + const char *obj_addr; + u32 obj_size; + int ret; + + BUILD_BUG_ON(ICP_QAT_UCLO_MAX_AE > + (sizeof(handle->hal_handle->ae_mask) * 8)); + + if (!handle || !addr_ptr || mem_size < 24) + return EINVAL; + + if (obj_name) { + ret = qat_uclo_map_mof_obj( + handle, addr_ptr, mem_size, obj_name, &obj_addr, &obj_size); + if (ret) + return ret; + } else { + obj_addr = addr_ptr; + obj_size = mem_size; + } + + return (handle->fw_auth) ? 
+ qat_uclo_map_suof_obj(handle, obj_addr, obj_size) : + qat_uclo_map_uof_obj(handle, obj_addr, obj_size); +} + +void +qat_uclo_del_obj(struct icp_qat_fw_loader_handle *handle) +{ + struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle; + unsigned int a; + unsigned long ae_mask = handle->hal_handle->ae_mask; + + if (handle->mobj_handle) + qat_uclo_del_mof(handle); + if (handle->sobj_handle) + qat_uclo_del_suof(handle); + if (!obj_handle) + return; + + free(obj_handle->uword_buf, M_QAT); + for (a = 0; a < obj_handle->uimage_num; a++) + free(obj_handle->ae_uimage[a].page, M_QAT); + + for_each_set_bit(a, &ae_mask, handle->hal_handle->ae_max_num) + { + qat_uclo_free_ae_data(&obj_handle->ae_data[a]); + } + + free(obj_handle->obj_hdr, M_QAT); + free(obj_handle->obj_buf, M_QAT); + free(obj_handle, M_QAT); + handle->obj_handle = NULL; +} + +static void +qat_uclo_fill_uwords(struct icp_qat_uclo_objhandle *obj_handle, + struct icp_qat_uclo_encap_page *encap_page, + uint64_t *uword, + unsigned int addr_p, + unsigned int raddr, + uint64_t fill) +{ + uint64_t uwrd = 0; + unsigned int i, addr; + + if (!encap_page) { + *uword = fill; + return; + } + addr = (encap_page->page_region) ? 
raddr : addr_p; + for (i = 0; i < encap_page->uwblock_num; i++) { + if (addr >= encap_page->uwblock[i].start_addr && + addr <= encap_page->uwblock[i].start_addr + + encap_page->uwblock[i].words_num - 1) { + addr -= encap_page->uwblock[i].start_addr; + addr *= obj_handle->uword_in_bytes; + memcpy(&uwrd, + (void *)(((uintptr_t)encap_page->uwblock[i] + .micro_words) + + addr), + obj_handle->uword_in_bytes); + uwrd = uwrd & 0xbffffffffffull; + } + } + *uword = uwrd; + if (*uword == INVLD_UWORD) + *uword = fill; +} + +static void +qat_uclo_wr_uimage_raw_page(struct icp_qat_fw_loader_handle *handle, + struct icp_qat_uclo_encap_page *encap_page, + unsigned int ae) +{ + unsigned int uw_physical_addr, uw_relative_addr, i, words_num, cpylen; + struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle; + uint64_t fill_pat; + + /* load the page starting at appropriate ustore address */ + /* get fill-pattern from an image -- they are all the same */ + memcpy(&fill_pat, + obj_handle->ae_uimage[0].img_ptr->fill_pattern, + sizeof(uint64_t)); + uw_physical_addr = encap_page->beg_addr_p; + uw_relative_addr = 0; + words_num = encap_page->micro_words_num; + while (words_num) { + if (words_num < UWORD_CPYBUF_SIZE) + cpylen = words_num; + else + cpylen = UWORD_CPYBUF_SIZE; + + /* load the buffer */ + for (i = 0; i < cpylen; i++) + qat_uclo_fill_uwords(obj_handle, + encap_page, + &obj_handle->uword_buf[i], + uw_physical_addr + i, + uw_relative_addr + i, + fill_pat); + + if (obj_handle->ae_data[ae].shareable_ustore) + /* copy the buffer to ustore */ + qat_hal_wr_coalesce_uwords(handle, + (unsigned char)ae, + uw_physical_addr, + cpylen, + obj_handle->uword_buf); + else + /* copy the buffer to ustore */ + qat_hal_wr_uwords(handle, + (unsigned char)ae, + uw_physical_addr, + cpylen, + obj_handle->uword_buf); + uw_physical_addr += cpylen; + uw_relative_addr += cpylen; + words_num -= cpylen; + } +} + +static void +qat_uclo_wr_uimage_page(struct icp_qat_fw_loader_handle *handle, + struct 
icp_qat_uof_image *image) +{ + struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle; + unsigned int ctx_mask, s; + struct icp_qat_uclo_page *page; + unsigned char ae = 0; + int ctx; + struct icp_qat_uclo_aedata *aed; + unsigned long ae_mask = handle->hal_handle->ae_mask; + + if (ICP_QAT_CTX_MODE(image->ae_mode) == ICP_QAT_UCLO_MAX_CTX) + ctx_mask = 0xff; + else + ctx_mask = 0x55; + /* load the default page and set assigned CTX PC + * to the entrypoint address + */ + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) + { + unsigned long cfg_ae_mask = handle->cfg_ae_mask; + unsigned long ae_assigned = image->ae_assigned; + + if (!test_bit(ae, &cfg_ae_mask)) + continue; + + if (!test_bit(ae, &ae_assigned)) + continue; + + aed = &obj_handle->ae_data[ae]; + /* find the slice to which this image is assigned */ + for (s = 0; s < aed->slice_num; s++) { + if (image->ctx_assigned & + aed->ae_slices[s].ctx_mask_assigned) + break; + } + if (s >= aed->slice_num) + continue; + page = aed->ae_slices[s].page; + if (!page->encap_page->def_page) + continue; + qat_uclo_wr_uimage_raw_page(handle, page->encap_page, ae); + + page = aed->ae_slices[s].page; + for (ctx = 0; ctx < ICP_QAT_UCLO_MAX_CTX; ctx++) + aed->ae_slices[s].cur_page[ctx] = + (ctx_mask & (1 << ctx)) ? 
page : NULL; + qat_hal_set_live_ctx(handle, + (unsigned char)ae, + image->ctx_assigned); + qat_hal_set_pc(handle, + (unsigned char)ae, + image->ctx_assigned, + image->entry_address); + } +} + +static int +qat_uclo_wr_suof_img(struct icp_qat_fw_loader_handle *handle) +{ + unsigned int i; + struct icp_qat_fw_auth_desc *desc = NULL; + struct icp_firml_dram_desc img_desc; + struct icp_qat_suof_handle *sobj_handle = handle->sobj_handle; + struct icp_qat_suof_img_hdr *simg_hdr = sobj_handle->img_table.simg_hdr; + + for (i = 0; i < sobj_handle->img_table.num_simgs; i++) { + if (qat_uclo_map_auth_fw(handle, + (const char *)simg_hdr[i].simg_buf, + (unsigned int)(simg_hdr[i].simg_len), + &img_desc, + &desc)) + goto wr_err; + if (qat_uclo_auth_fw(handle, desc)) + goto wr_err; + if (qat_uclo_load_fw(handle, desc)) + goto wr_err; + qat_uclo_simg_free(handle, &img_desc); + } + return 0; +wr_err: + qat_uclo_simg_free(handle, &img_desc); + return -EINVAL; +} + +static int +qat_uclo_wr_uof_img(struct icp_qat_fw_loader_handle *handle) +{ + struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle; + unsigned int i; + + if (qat_uclo_init_globals(handle)) + return EINVAL; + for (i = 0; i < obj_handle->uimage_num; i++) { + if (!obj_handle->ae_uimage[i].img_ptr) + return EINVAL; + if (qat_uclo_init_ustore(handle, &obj_handle->ae_uimage[i])) + return EINVAL; + qat_uclo_wr_uimage_page(handle, + obj_handle->ae_uimage[i].img_ptr); + } + return 0; +} + +int +qat_uclo_wr_all_uimage(struct icp_qat_fw_loader_handle *handle) +{ + return (handle->fw_auth) ? 
qat_uclo_wr_suof_img(handle) : + qat_uclo_wr_uof_img(handle); +} + +int +qat_uclo_set_cfg_ae_mask(struct icp_qat_fw_loader_handle *handle, + unsigned int cfg_ae_mask) +{ + if (!cfg_ae_mask) + return EINVAL; + + handle->cfg_ae_mask = cfg_ae_mask; + return 0; +} Index: sys/dev/qat/qat_d15xx.c =================================================================== --- sys/dev/qat/qat_d15xx.c +++ /dev/null @@ -1,314 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat_d15xx.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. 
- */ - -/* - * Copyright(c) 2014 Intel Corporation. - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */ - -#include <sys/cdefs.h> -__FBSDID("$FreeBSD$"); -#if 0 -__KERNEL_RCSID(0, "$NetBSD: qat_d15xx.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $"); -#endif - -#include <sys/param.h> -#include <sys/systm.h> -#include <sys/bus.h> - -#include <machine/bus.h> - -#include <dev/pci/pcireg.h> -#include <dev/pci/pcivar.h> - -#include "qatreg.h" -#include "qat_hw17reg.h" -#include "qat_d15xxreg.h" -#include "qatvar.h" -#include "qat_hw17var.h" - -static uint32_t -qat_d15xx_get_accel_mask(struct qat_softc *sc) -{ - uint32_t fusectl, strap; - - fusectl = pci_read_config(sc->sc_dev, FUSECTL_REG, 4); - strap = pci_read_config(sc->sc_dev, SOFTSTRAP_REG_D15XX, 4); - - return (((~(fusectl | strap)) >> ACCEL_REG_OFFSET_D15XX) & - ACCEL_MASK_D15XX); -} - -static uint32_t -qat_d15xx_get_ae_mask(struct qat_softc *sc) -{ - uint32_t fusectl, me_strap, me_disable, ssms_disabled; - - fusectl = pci_read_config(sc->sc_dev, FUSECTL_REG, 4); - me_strap = pci_read_config(sc->sc_dev, SOFTSTRAP_REG_D15XX, 4); - - /* If SSMs are disabled, then disable the corresponding MEs */ - ssms_disabled = (~qat_d15xx_get_accel_mask(sc)) & ACCEL_MASK_D15XX; - me_disable = 0x3; - while (ssms_disabled) { - if (ssms_disabled & 1) - me_strap |= me_disable; - ssms_disabled >>= 1; - me_disable <<= 2; - } - - return (~(fusectl | me_strap)) & AE_MASK_D15XX; -} - -static enum qat_sku -qat_d15xx_get_sku(struct qat_softc *sc) -{ - switch (sc->sc_ae_num) { - case 8: - return QAT_SKU_2; - case MAX_AE_D15XX: - return QAT_SKU_4; - } - - return QAT_SKU_UNKNOWN; -} - -static uint32_t -qat_d15xx_get_accel_cap(struct qat_softc *sc) -{ - uint32_t cap, legfuse, strap; - - legfuse = pci_read_config(sc->sc_dev, LEGFUSE_REG, 4); - strap = pci_read_config(sc->sc_dev, SOFTSTRAP_REG_D15XX, 4); - - cap = QAT_ACCEL_CAP_CRYPTO_SYMMETRIC + - QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC + - QAT_ACCEL_CAP_CIPHER + - QAT_ACCEL_CAP_AUTHENTICATION + - QAT_ACCEL_CAP_COMPRESSION + - QAT_ACCEL_CAP_ZUC + - QAT_ACCEL_CAP_SHA3; - - if (legfuse & LEGFUSE_ACCEL_MASK_CIPHER_SLICE) { - cap &= ~QAT_ACCEL_CAP_CRYPTO_SYMMETRIC; - cap &= ~QAT_ACCEL_CAP_CIPHER; - } - if (legfuse &
LEGFUSE_ACCEL_MASK_AUTH_SLICE) - cap &= ~QAT_ACCEL_CAP_AUTHENTICATION; - if (legfuse & LEGFUSE_ACCEL_MASK_PKE_SLICE) - cap &= ~QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC; - if (legfuse & LEGFUSE_ACCEL_MASK_COMPRESS_SLICE) - cap &= ~QAT_ACCEL_CAP_COMPRESSION; - if (legfuse & LEGFUSE_ACCEL_MASK_EIA3_SLICE) - cap &= ~QAT_ACCEL_CAP_ZUC; - - if ((strap | legfuse) & SOFTSTRAP_SS_POWERGATE_PKE_D15XX) - cap &= ~QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC; - if ((strap | legfuse) & SOFTSTRAP_SS_POWERGATE_CY_D15XX) - cap &= ~QAT_ACCEL_CAP_COMPRESSION; - - return cap; -} - -static const char * -qat_d15xx_get_fw_uof_name(struct qat_softc *sc) -{ - - return AE_FW_UOF_NAME_D15XX; -} - -static void -qat_d15xx_enable_intr(struct qat_softc *sc) -{ - - /* Enable bundle and misc interrupts */ - qat_misc_write_4(sc, SMIAPF0_D15XX, SMIA0_MASK_D15XX); - qat_misc_write_4(sc, SMIAPF1_D15XX, SMIA1_MASK_D15XX); -} - -/* Worker thread to service arbiter mappings */ -static uint32_t thrd_to_arb_map[] = { - 0x12222AAA, 0x11222AAA, 0x12222AAA, 0x11222AAA, 0x12222AAA, - 0x11222AAA, 0x12222AAA, 0x11222AAA, 0x12222AAA, 0x11222AAA -}; - -static void -qat_d15xx_get_arb_mapping(struct qat_softc *sc, const uint32_t **arb_map_config) -{ - int i; - - for (i = 1; i < MAX_AE_D15XX; i++) { - if ((~sc->sc_ae_mask) & (1 << i)) - thrd_to_arb_map[i] = 0; - } - *arb_map_config = thrd_to_arb_map; -} - -static void -qat_d15xx_enable_error_interrupts(struct qat_softc *sc) -{ - qat_misc_write_4(sc, ERRMSK0, ERRMSK0_CERR_D15XX); /* ME0-ME3 */ - qat_misc_write_4(sc, ERRMSK1, ERRMSK1_CERR_D15XX); /* ME4-ME7 */ - qat_misc_write_4(sc, ERRMSK4, ERRMSK4_CERR_D15XX); /* ME8-ME9 */ - qat_misc_write_4(sc, ERRMSK5, ERRMSK5_CERR_D15XX); /* SSM2-SSM4 */ - - /* Reset everything except VFtoPF1_16. */ - qat_misc_read_write_and_4(sc, ERRMSK3, VF2PF1_16_D15XX); - /* Disable Secure RAM correctable error interrupt */ - qat_misc_read_write_or_4(sc, ERRMSK3, ERRMSK3_CERR_D15XX); - - /* RI CPP bus interface error detection and reporting. 
*/ - qat_misc_write_4(sc, RICPPINTCTL_D15XX, RICPP_EN_D15XX); - - /* TI CPP bus interface error detection and reporting. */ - qat_misc_write_4(sc, TICPPINTCTL_D15XX, TICPP_EN_D15XX); - - /* Enable CFC Error interrupts and logging. */ - qat_misc_write_4(sc, CPP_CFC_ERR_CTRL_D15XX, CPP_CFC_UE_D15XX); - - /* Enable SecureRAM to fix and log Correctable errors */ - qat_misc_write_4(sc, SECRAMCERR_D15XX, SECRAM_CERR_D15XX); - - /* Enable SecureRAM Uncorrectable error interrupts and logging */ - qat_misc_write_4(sc, SECRAMUERR, SECRAM_UERR_D15XX); - - /* Enable Push/Pull Misc Uncorrectable error interrupts and logging */ - qat_misc_write_4(sc, CPPMEMTGTERR, TGT_UERR_D15XX); -} - -static void -qat_d15xx_disable_error_interrupts(struct qat_softc *sc) -{ - /* ME0-ME3 */ - qat_misc_write_4(sc, ERRMSK0, ERRMSK0_UERR_D15XX | ERRMSK0_CERR_D15XX); - /* ME4-ME7 */ - qat_misc_write_4(sc, ERRMSK1, ERRMSK1_UERR_D15XX | ERRMSK1_CERR_D15XX); - /* Secure RAM, CPP Push Pull, RI, TI, SSM0-SSM1, CFC */ - qat_misc_write_4(sc, ERRMSK3, ERRMSK3_UERR_D15XX | ERRMSK3_CERR_D15XX); - /* ME8-ME9 */ - qat_misc_write_4(sc, ERRMSK4, ERRMSK4_UERR_D15XX | ERRMSK4_CERR_D15XX); - /* SSM2-SSM4 */ - qat_misc_write_4(sc, ERRMSK5, ERRMSK5_UERR_D15XX | ERRMSK5_CERR_D15XX); -} - -static void -qat_d15xx_enable_error_correction(struct qat_softc *sc) -{ - u_int i, mask; - - /* Enable Accel Engine error detection & correction */ - for (i = 0, mask = sc->sc_ae_mask; mask; i++, mask >>= 1) { - if (!(mask & 1)) - continue; - qat_misc_read_write_or_4(sc, AE_CTX_ENABLES_D15XX(i), - ENABLE_AE_ECC_ERR_D15XX); - qat_misc_read_write_or_4(sc, AE_MISC_CONTROL_D15XX(i), - ENABLE_AE_ECC_PARITY_CORR_D15XX); - } - - /* Enable shared memory error detection & correction */ - for (i = 0, mask = sc->sc_accel_mask; mask; i++, mask >>= 1) { - if (!(mask & 1)) - continue; - - qat_misc_read_write_or_4(sc, UERRSSMSH(i), ERRSSMSH_EN_D15XX); - qat_misc_read_write_or_4(sc, CERRSSMSH(i), ERRSSMSH_EN_D15XX); - qat_misc_read_write_or_4(sc, 
PPERR(i), PPERR_EN_D15XX); - } - - qat_d15xx_enable_error_interrupts(sc); -} - -const struct qat_hw qat_hw_d15xx = { - .qhw_sram_bar_id = BAR_SRAM_ID_D15XX, - .qhw_misc_bar_id = BAR_PMISC_ID_D15XX, - .qhw_etr_bar_id = BAR_ETR_ID_D15XX, - .qhw_cap_global_offset = CAP_GLOBAL_OFFSET_D15XX, - .qhw_ae_offset = AE_OFFSET_D15XX, - .qhw_ae_local_offset = AE_LOCAL_OFFSET_D15XX, - .qhw_etr_bundle_size = ETR_BUNDLE_SIZE_D15XX, - .qhw_num_banks = ETR_MAX_BANKS_D15XX, - .qhw_num_rings_per_bank = ETR_MAX_RINGS_PER_BANK, - .qhw_num_accel = MAX_ACCEL_D15XX, - .qhw_num_engines = MAX_AE_D15XX, - .qhw_tx_rx_gap = ETR_TX_RX_GAP_D15XX, - .qhw_tx_rings_mask = ETR_TX_RINGS_MASK_D15XX, - .qhw_clock_per_sec = CLOCK_PER_SEC_D15XX, - .qhw_fw_auth = true, - .qhw_fw_req_size = FW_REQ_DEFAULT_SZ_HW17, - .qhw_fw_resp_size = FW_RESP_DEFAULT_SZ_HW17, - .qhw_ring_asym_tx = 0, - .qhw_ring_asym_rx = 8, - .qhw_ring_sym_tx = 2, - .qhw_ring_sym_rx = 10, - .qhw_mof_fwname = AE_FW_MOF_NAME_D15XX, - .qhw_mmp_fwname = AE_FW_MMP_NAME_D15XX, - .qhw_prod_type = AE_FW_PROD_TYPE_D15XX, - .qhw_get_accel_mask = qat_d15xx_get_accel_mask, - .qhw_get_ae_mask = qat_d15xx_get_ae_mask, - .qhw_get_sku = qat_d15xx_get_sku, - .qhw_get_accel_cap = qat_d15xx_get_accel_cap, - .qhw_get_fw_uof_name = qat_d15xx_get_fw_uof_name, - .qhw_enable_intr = qat_d15xx_enable_intr, - .qhw_init_admin_comms = qat_adm_mailbox_init, - .qhw_send_admin_init = qat_adm_mailbox_send_init, - .qhw_init_arb = qat_arb_init, - .qhw_get_arb_mapping = qat_d15xx_get_arb_mapping, - .qhw_enable_error_correction = qat_d15xx_enable_error_correction, - .qhw_disable_error_interrupts = qat_d15xx_disable_error_interrupts, - .qhw_set_ssm_wdtimer = qat_set_ssm_wdtimer, - .qhw_check_slice_hang = qat_check_slice_hang, - .qhw_crypto_setup_desc = qat_hw17_crypto_setup_desc, - .qhw_crypto_setup_req_params = qat_hw17_crypto_setup_req_params, - .qhw_crypto_opaque_offset = offsetof(struct fw_la_resp, opaque_data), -}; Index: sys/dev/qat/qat_d15xxreg.h 
=================================================================== --- sys/dev/qat/qat_d15xxreg.h +++ /dev/null @@ -1,201 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat_d15xxreg.h,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2014 Intel Corporation. 
- * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */ - -/* $FreeBSD$ */ - -#ifndef _DEV_PCI_QAT_D15XXREG_H_ -#define _DEV_PCI_QAT_D15XXREG_H_ - -/* Max number of accelerators and engines */ -#define MAX_ACCEL_D15XX 5 -#define MAX_AE_D15XX 10 - -/* PCIe BAR index */ -#define BAR_SRAM_ID_D15XX 0 -#define BAR_PMISC_ID_D15XX 1 -#define BAR_ETR_ID_D15XX 2 - -/* BAR PMISC sub-regions */ -#define AE_OFFSET_D15XX 0x20000 -#define AE_LOCAL_OFFSET_D15XX 0x20800 -#define CAP_GLOBAL_OFFSET_D15XX 0x30000 - -#define SOFTSTRAP_REG_D15XX 0x2EC -#define SOFTSTRAP_SS_POWERGATE_CY_D15XX __BIT(23) -#define SOFTSTRAP_SS_POWERGATE_PKE_D15XX __BIT(24) - -#define ACCEL_REG_OFFSET_D15XX 16 -#define ACCEL_MASK_D15XX 0x1F -#define AE_MASK_D15XX 0x3FF - -#define SMIAPF0_D15XX 0x3A028 -#define SMIAPF1_D15XX 0x3A030 -#define SMIA0_MASK_D15XX 0xFFFF -#define SMIA1_MASK_D15XX 0x1 - -/* Error detection and correction */ -#define AE_CTX_ENABLES_D15XX(i) ((i) * 0x1000 + 0x20818) -#define AE_MISC_CONTROL_D15XX(i) ((i) * 0x1000 + 0x20960) -#define ENABLE_AE_ECC_ERR_D15XX __BIT(28) -#define ENABLE_AE_ECC_PARITY_CORR_D15XX (__BIT(24) | __BIT(12)) -#define ERRSSMSH_EN_D15XX __BIT(3) -/* BIT(2) enables the logging of push/pull data errors. */ -#define PPERR_EN_D15XX (__BIT(2)) - -/* Mask for VF2PF interrupts */ -#define VF2PF1_16_D15XX (0xFFFF << 9) -#define ERRSOU3_VF2PF_D15XX(errsou3) (((errsou3) & 0x01FFFE00) >> 9) -#define ERRMSK3_VF2PF_D15XX(vf_mask) (((vf_mask) & 0xFFFF) << 9) - -/* Masks for correctable error interrupts. */ -#define ERRMSK0_CERR_D15XX (__BIT(24) | __BIT(16) | __BIT(8) | __BIT(0)) -#define ERRMSK1_CERR_D15XX (__BIT(24) | __BIT(16) | __BIT(8) | __BIT(0)) -#define ERRMSK3_CERR_D15XX (__BIT(7)) -#define ERRMSK4_CERR_D15XX (__BIT(8) | __BIT(0)) -#define ERRMSK5_CERR_D15XX (0) - -/* Masks for uncorrectable error interrupts. 
*/ -#define ERRMSK0_UERR_D15XX (__BIT(25) | __BIT(17) | __BIT(9) | __BIT(1)) -#define ERRMSK1_UERR_D15XX (__BIT(25) | __BIT(17) | __BIT(9) | __BIT(1)) -#define ERRMSK3_UERR_D15XX (__BIT(8) | __BIT(6) | __BIT(5) | __BIT(4) | \ - __BIT(3) | __BIT(2) | __BIT(0)) -#define ERRMSK4_UERR_D15XX (__BIT(9) | __BIT(1)) -#define ERRMSK5_UERR_D15XX (__BIT(18) | __BIT(17) | __BIT(16)) - -/* RI CPP control */ -#define RICPPINTCTL_D15XX (0x3A000 + 0x110) -/* - * BIT(2) enables error detection and reporting on the RI Parity Error. - * BIT(1) enables error detection and reporting on the RI CPP Pull interface. - * BIT(0) enables error detection and reporting on the RI CPP Push interface. - */ -#define RICPP_EN_D15XX (__BIT(2) | __BIT(1) | __BIT(0)) - -/* TI CPP control */ -#define TICPPINTCTL_D15XX (0x3A400 + 0x138) -/* - * BIT(3) enables error detection and reporting on the ETR Parity Error. - * BIT(2) enables error detection and reporting on the TI Parity Error. - * BIT(1) enables error detection and reporting on the TI CPP Pull interface. - * BIT(0) enables error detection and reporting on the TI CPP Push interface. - */ -#define TICPP_EN_D15XX \ - (__BIT(4) | __BIT(3) | __BIT(2) | __BIT(1) | __BIT(0)) - -/* CFC Uncorrectable Errors */ -#define CPP_CFC_ERR_CTRL_D15XX (0x30000 + 0xC00) -/* - * BIT(1) enables interrupt. - * BIT(0) enables detecting and logging of push/pull data errors. - */ -#define CPP_CFC_UE_D15XX (__BIT(1) | __BIT(0)) - -/* Correctable SecureRAM Error Reg */ -#define SECRAMCERR_D15XX (0x3AC00 + 0x00) -/* BIT(3) enables fixing and logging of correctable errors. */ -#define SECRAM_CERR_D15XX (__BIT(3)) - -/* Uncorrectable SecureRAM Error Reg */ -/* - * BIT(17) enables interrupt. - * BIT(3) enables detecting and logging of uncorrectable errors. - */ -#define SECRAM_UERR_D15XX (__BIT(17) | __BIT(3)) - -/* Miscellaneous Memory Target Errors Register */ -/* - * BIT(3) enables detecting and logging push/pull data errors. - * BIT(2) enables interrupt. 
- */ -#define TGT_UERR_D15XX (__BIT(3) | __BIT(2)) - - -#define SLICEPWRDOWN_D15XX(i) ((i) * 0x4000 + 0x2C) -/* Enabling PKE4-PKE0. */ -#define MMP_PWR_UP_MSK_D15XX \ - (__BIT(20) | __BIT(19) | __BIT(18) | __BIT(17) | __BIT(16)) - -/* CPM Uncorrectable Errors */ -#define INTMASKSSM_D15XX(i) ((i) * 0x4000 + 0x0) -/* Disabling interrupts for correctable errors. */ -#define INTMASKSSM_UERR_D15XX \ - (__BIT(11) | __BIT(9) | __BIT(7) | __BIT(5) | __BIT(3) | __BIT(1)) - -/* MMP */ -/* BIT(3) enables correction. */ -#define CERRSSMMMP_EN_D15XX (__BIT(3)) - -/* BIT(3) enables logging. */ -#define UERRSSMMMP_EN_D15XX (__BIT(3)) - -/* ETR */ -#define ETR_MAX_BANKS_D15XX 16 -#define ETR_TX_RX_GAP_D15XX 8 -#define ETR_TX_RINGS_MASK_D15XX 0xFF -#define ETR_BUNDLE_SIZE_D15XX 0x1000 - -/* AE firmware */ -#define AE_FW_PROD_TYPE_D15XX 0x01000000 -#define AE_FW_MOF_NAME_D15XX "qat_d15xxfw" -#define AE_FW_MMP_NAME_D15XX "qat_d15xx_mmp" -#define AE_FW_UOF_NAME_D15XX "icp_qat_ae.suof" - -/* Clock frequency */ -#define CLOCK_PER_SEC_D15XX (685 * 1000000 / 16) - -#endif Index: sys/dev/qat/qat_dh895xcc.c =================================================================== --- sys/dev/qat/qat_dh895xcc.c +++ /dev/null @@ -1,271 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause AND BSD-3-Clause */ -/* - * Copyright (c) 2020 Rubicon Communications, LLC (Netgate) - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are - * met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the distribution. 
- * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. - */ - -/* - * Copyright(c) 2014 - 2020 Intel Corporation. - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#include -__FBSDID("$FreeBSD$"); - -#include -#include - -#include - -#include -#include - -#include "qatreg.h" -#include "qatvar.h" -#include "qat_hw17reg.h" -#include "qat_hw17var.h" -#include "qat_dh895xccreg.h" - -static uint32_t -qat_dh895xcc_get_accel_mask(struct qat_softc *sc) -{ - uint32_t fusectl, strap; - - fusectl = pci_read_config(sc->sc_dev, FUSECTL_REG, 4); - strap = pci_read_config(sc->sc_dev, SOFTSTRAP_REG_DH895XCC, 4); - - return (((~(fusectl | strap)) >> ACCEL_REG_OFFSET_DH895XCC) & - ACCEL_MASK_DH895XCC); -} - -static uint32_t -qat_dh895xcc_get_ae_mask(struct qat_softc *sc) -{ - uint32_t fusectl, strap; - - fusectl = pci_read_config(sc->sc_dev, FUSECTL_REG, 4); - strap = pci_read_config(sc->sc_dev, SOFTSTRAP_REG_DH895XCC, 4); - - return (~(fusectl | strap)) & AE_MASK_DH895XCC; -} - -static enum qat_sku -qat_dh895xcc_get_sku(struct qat_softc *sc) -{ - uint32_t fusectl, sku; - - fusectl = pci_read_config(sc->sc_dev, FUSECTL_REG, 4); - sku = (fusectl & FUSECTL_SKU_MASK_DH895XCC) >> - FUSECTL_SKU_SHIFT_DH895XCC; - switch (sku) { - case FUSECTL_SKU_1_DH895XCC: - return QAT_SKU_1; - case FUSECTL_SKU_2_DH895XCC: - return QAT_SKU_2; - case FUSECTL_SKU_3_DH895XCC: - return QAT_SKU_3; - case FUSECTL_SKU_4_DH895XCC: - return QAT_SKU_4; - default: - return QAT_SKU_UNKNOWN; - } -} - -static uint32_t -qat_dh895xcc_get_accel_cap(struct qat_softc *sc) -{ - uint32_t cap, legfuse; - - legfuse = pci_read_config(sc->sc_dev, 
LEGFUSE_REG, 4); - - cap = QAT_ACCEL_CAP_CRYPTO_SYMMETRIC + - QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC + - QAT_ACCEL_CAP_CIPHER + - QAT_ACCEL_CAP_AUTHENTICATION + - QAT_ACCEL_CAP_COMPRESSION + - QAT_ACCEL_CAP_ZUC + - QAT_ACCEL_CAP_SHA3; - - if (legfuse & LEGFUSE_ACCEL_MASK_CIPHER_SLICE) { - cap &= ~QAT_ACCEL_CAP_CRYPTO_SYMMETRIC; - cap &= ~QAT_ACCEL_CAP_CIPHER; - } - if (legfuse & LEGFUSE_ACCEL_MASK_AUTH_SLICE) - cap &= ~QAT_ACCEL_CAP_AUTHENTICATION; - if (legfuse & LEGFUSE_ACCEL_MASK_PKE_SLICE) - cap &= ~QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC; - if (legfuse & LEGFUSE_ACCEL_MASK_COMPRESS_SLICE) - cap &= ~QAT_ACCEL_CAP_COMPRESSION; - if (legfuse & LEGFUSE_ACCEL_MASK_EIA3_SLICE) - cap &= ~QAT_ACCEL_CAP_ZUC; - - return cap; -} - -static const char * -qat_dh895xcc_get_fw_uof_name(struct qat_softc *sc) -{ - return AE_FW_UOF_NAME_DH895XCC; -} - -static void -qat_dh895xcc_enable_intr(struct qat_softc *sc) -{ - /* Enable bundle and misc interrupts */ - qat_misc_write_4(sc, SMIAPF0_DH895XCC, SMIA0_MASK_DH895XCC); - qat_misc_write_4(sc, SMIAPF1_DH895XCC, SMIA1_MASK_DH895XCC); -} - -/* Worker thread to service arbiter mappings based on dev SKUs */ -static uint32_t thrd_to_arb_map_sku4[] = { - 0x12222AAA, 0x11666666, 0x12222AAA, 0x11666666, - 0x12222AAA, 0x11222222, 0x12222AAA, 0x11222222, - 0x00000000, 0x00000000, 0x00000000, 0x00000000, -}; - -static uint32_t thrd_to_arb_map_sku6[] = { - 0x12222AAA, 0x11666666, 0x12222AAA, 0x11666666, - 0x12222AAA, 0x11222222, 0x12222AAA, 0x11222222, - 0x12222AAA, 0x11222222, 0x12222AAA, 0x11222222, -}; - -static void -qat_dh895xcc_get_arb_mapping(struct qat_softc *sc, - const uint32_t **arb_map_config) -{ - uint32_t *map, sku; - int i; - - sku = qat_dh895xcc_get_sku(sc); - switch (sku) { - case QAT_SKU_1: - map = thrd_to_arb_map_sku4; - break; - case QAT_SKU_2: - case QAT_SKU_4: - map = thrd_to_arb_map_sku6; - break; - default: - *arb_map_config = NULL; - return; - } - - for (i = 1; i < MAX_AE_DH895XCC; i++) { - if ((~sc->sc_ae_mask) & (1 << i)) - map[i] = 
0; - } - *arb_map_config = map; -} - -static void -qat_dh895xcc_enable_error_correction(struct qat_softc *sc) -{ - uint32_t mask; - u_int i; - - /* Enable Accel Engine error detection & correction */ - for (i = 0, mask = sc->sc_ae_mask; mask; i++, mask >>= 1) { - if (!(mask & 1)) - continue; - qat_misc_read_write_or_4(sc, AE_CTX_ENABLES_DH895XCC(i), - ENABLE_AE_ECC_ERR_DH895XCC); - qat_misc_read_write_or_4(sc, AE_MISC_CONTROL_DH895XCC(i), - ENABLE_AE_ECC_PARITY_CORR_DH895XCC); - } - - /* Enable shared memory error detection & correction */ - for (i = 0, mask = sc->sc_accel_mask; mask; i++, mask >>= 1) { - if (!(mask & 1)) - continue; - - qat_misc_read_write_or_4(sc, UERRSSMSH(i), ERRSSMSH_EN_DH895XCC); - qat_misc_read_write_or_4(sc, CERRSSMSH(i), ERRSSMSH_EN_DH895XCC); - qat_misc_read_write_or_4(sc, PPERR(i), PPERR_EN_DH895XCC); - } -} - -const struct qat_hw qat_hw_dh895xcc = { - .qhw_sram_bar_id = BAR_SRAM_ID_DH895XCC, - .qhw_misc_bar_id = BAR_PMISC_ID_DH895XCC, - .qhw_etr_bar_id = BAR_ETR_ID_DH895XCC, - .qhw_cap_global_offset = CAP_GLOBAL_OFFSET_DH895XCC, - .qhw_ae_offset = AE_OFFSET_DH895XCC, - .qhw_ae_local_offset = AE_LOCAL_OFFSET_DH895XCC, - .qhw_etr_bundle_size = ETR_BUNDLE_SIZE_DH895XCC, - .qhw_num_banks = ETR_MAX_BANKS_DH895XCC, - .qhw_num_rings_per_bank = ETR_MAX_RINGS_PER_BANK, - .qhw_num_accel = MAX_ACCEL_DH895XCC, - .qhw_num_engines = MAX_AE_DH895XCC, - .qhw_tx_rx_gap = ETR_TX_RX_GAP_DH895XCC, - .qhw_tx_rings_mask = ETR_TX_RINGS_MASK_DH895XCC, - .qhw_clock_per_sec = CLOCK_PER_SEC_DH895XCC, - .qhw_fw_auth = false, - .qhw_fw_req_size = FW_REQ_DEFAULT_SZ_HW17, - .qhw_fw_resp_size = FW_RESP_DEFAULT_SZ_HW17, - .qhw_ring_asym_tx = 0, - .qhw_ring_asym_rx = 8, - .qhw_ring_sym_tx = 2, - .qhw_ring_sym_rx = 10, - .qhw_mof_fwname = AE_FW_MOF_NAME_DH895XCC, - .qhw_mmp_fwname = AE_FW_MMP_NAME_DH895XCC, - .qhw_prod_type = AE_FW_PROD_TYPE_DH895XCC, - .qhw_get_accel_mask = qat_dh895xcc_get_accel_mask, - .qhw_get_ae_mask = qat_dh895xcc_get_ae_mask, - .qhw_get_sku = 
qat_dh895xcc_get_sku, - .qhw_get_accel_cap = qat_dh895xcc_get_accel_cap, - .qhw_get_fw_uof_name = qat_dh895xcc_get_fw_uof_name, - .qhw_enable_intr = qat_dh895xcc_enable_intr, - .qhw_init_admin_comms = qat_adm_mailbox_init, - .qhw_send_admin_init = qat_adm_mailbox_send_init, - .qhw_init_arb = qat_arb_init, - .qhw_get_arb_mapping = qat_dh895xcc_get_arb_mapping, - .qhw_enable_error_correction = qat_dh895xcc_enable_error_correction, - .qhw_check_slice_hang = qat_check_slice_hang, - .qhw_crypto_setup_desc = qat_hw17_crypto_setup_desc, - .qhw_crypto_setup_req_params = qat_hw17_crypto_setup_req_params, - .qhw_crypto_opaque_offset = offsetof(struct fw_la_resp, opaque_data), -}; Index: sys/dev/qat/qat_dh895xccreg.h =================================================================== --- sys/dev/qat/qat_dh895xccreg.h +++ /dev/null @@ -1,119 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2014-2020 Intel Corporation. - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -/* $FreeBSD$ */ - -#ifndef _DEV_PCI_QAT_DH895XCCREG_H_ -#define _DEV_PCI_QAT_DH895XCCREG_H_ - -/* Max number of accelerators and engines */ -#define MAX_ACCEL_DH895XCC 6 -#define MAX_AE_DH895XCC 12 - -/* PCIe BAR index */ -#define BAR_SRAM_ID_DH895XCC 0 -#define BAR_PMISC_ID_DH895XCC 1 -#define BAR_ETR_ID_DH895XCC 2 - -/* BAR PMISC sub-regions */ -#define AE_OFFSET_DH895XCC 0x20000 -#define AE_LOCAL_OFFSET_DH895XCC 0x20800 -#define CAP_GLOBAL_OFFSET_DH895XCC 0x30000 - -#define SOFTSTRAP_REG_DH895XCC 0x2EC - -#define FUSECTL_SKU_MASK_DH895XCC 0x300000 -#define FUSECTL_SKU_SHIFT_DH895XCC 20 -#define FUSECTL_SKU_1_DH895XCC 0 -#define FUSECTL_SKU_2_DH895XCC 1 -#define FUSECTL_SKU_3_DH895XCC 2 -#define FUSECTL_SKU_4_DH895XCC 3 - -#define ACCEL_REG_OFFSET_DH895XCC 13 -#define ACCEL_MASK_DH895XCC 0x3F -#define AE_MASK_DH895XCC 0xFFF - -#define SMIAPF0_DH895XCC 0x3A028 -#define SMIAPF1_DH895XCC 0x3A030 -#define SMIA0_MASK_DH895XCC 0xFFFFFFFF -#define SMIA1_MASK_DH895XCC 0x1 - -/* Error detection and correction */ -#define AE_CTX_ENABLES_DH895XCC(i) ((i) * 0x1000 + 0x20818) -#define AE_MISC_CONTROL_DH895XCC(i) ((i) * 0x1000 + 0x20960) -#define ENABLE_AE_ECC_ERR_DH895XCC __BIT(28) -#define ENABLE_AE_ECC_PARITY_CORR_DH895XCC (__BIT(24) | __BIT(12)) -#define ERRSSMSH_EN_DH895XCC __BIT(3) -/* BIT(2) enables the logging of push/pull data errors. 
 */
-#define PPERR_EN_DH895XCC (__BIT(2))
-
-/* ETR */
-#define ETR_MAX_BANKS_DH895XCC 32
-#define ETR_TX_RX_GAP_DH895XCC 8
-#define ETR_TX_RINGS_MASK_DH895XCC 0xFF
-#define ETR_BUNDLE_SIZE_DH895XCC 0x1000
-
-/* AE firmware */
-#define AE_FW_PROD_TYPE_DH895XCC 0x00400000
-#define AE_FW_MOF_NAME_DH895XCC "qat_dh895xccfw"
-#define AE_FW_MMP_NAME_DH895XCC "qat_895xcc_mmp"
-#define AE_FW_UOF_NAME_DH895XCC "icp_qat_ae.uof"
-
-/* Clock frequency */
-#define CLOCK_PER_SEC_DH895XCC (685 * 1000000 / 16)
-
-#endif
Index: sys/dev/qat/qat_hw/qat_200xx/adf_200xx_hw_data.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_hw/qat_200xx/adf_200xx_hw_data.h
@@ -0,0 +1,128 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+#ifndef ADF_200XX_HW_DATA_H_
+#define ADF_200XX_HW_DATA_H_
+
+/* PCIe configuration space */
+#define ADF_200XX_PMISC_BAR 0
+#define ADF_200XX_ETR_BAR 1
+#define ADF_200XX_RX_RINGS_OFFSET 8
+#define ADF_200XX_TX_RINGS_MASK 0xFF
+#define ADF_200XX_MAX_ACCELERATORS 3
+#define ADF_200XX_MAX_ACCELENGINES 6
+#define ADF_200XX_ACCELERATORS_REG_OFFSET 16
+#define ADF_200XX_ACCELERATORS_MASK 0x7
+#define ADF_200XX_ACCELENGINES_MASK 0x3F
+#define ADF_200XX_ETR_MAX_BANKS 16
+#define ADF_200XX_SMIAPF0_MASK_OFFSET (0x3A000 + 0x28)
+#define ADF_200XX_SMIAPF1_MASK_OFFSET (0x3A000 + 0x30)
+#define ADF_200XX_SMIA0_MASK 0xFFFF
+#define ADF_200XX_SMIA1_MASK 0x1
+#define ADF_200XX_SOFTSTRAP_CSR_OFFSET 0x2EC
+#define ADF_200XX_POWERGATE_PKE BIT(24)
+#define ADF_200XX_POWERGATE_CY BIT(23)
+
+#define ADF_200XX_PFIEERRUNCSTSR 0x280
+
+/* Error detection and correction */
+#define ADF_200XX_AE_CTX_ENABLES(i) ((i)*0x1000 + 0x20818)
+#define ADF_200XX_AE_MISC_CONTROL(i) ((i)*0x1000 + 0x20960)
+#define ADF_200XX_ENABLE_AE_ECC_ERR BIT(28)
+#define ADF_200XX_ENABLE_AE_ECC_PARITY_CORR (BIT(24) | BIT(12))
+#define ADF_200XX_UERRSSMSH(i) (i * 0x4000 + 0x18)
+#define ADF_200XX_CERRSSMSH(i) (i * 0x4000 + 0x10)
+#define ADF_200XX_ERRSSMSH_EN BIT(3)
+#define ADF_200XX_ERRSOU3 (0x3A000 + 0x0C)
+#define ADF_200XX_ERRSOU5 (0x3A000 + 0xD8)
+
+/* BIT(2) enables the logging of push/pull data errors. */
+#define ADF_200XX_PPERR_EN (BIT(2))
+
+/* Mask for VF2PF interrupts */
+#define ADF_200XX_VF2PF1_16 (0xFFFF << 9)
+#define ADF_200XX_ERRSOU3_VF2PF(errsou3) (((errsou3)&0x01FFFE00) >> 9)
+#define ADF_200XX_ERRMSK3_VF2PF(vf_mask) (((vf_mask)&0xFFFF) << 9)
+
+/* Masks for correctable error interrupts. */
+#define ADF_200XX_ERRMSK0_CERR (BIT(24) | BIT(16) | BIT(8) | BIT(0))
+#define ADF_200XX_ERRMSK1_CERR (BIT(8) | BIT(0))
+#define ADF_200XX_ERRMSK5_CERR (0)
+
+/* Masks for uncorrectable error interrupts. */
+#define ADF_200XX_ERRMSK0_UERR (BIT(25) | BIT(17) | BIT(9) | BIT(1))
+#define ADF_200XX_ERRMSK1_UERR (BIT(9) | BIT(1))
+#define ADF_200XX_ERRMSK3_UERR \
+	(BIT(6) | BIT(5) | BIT(4) | BIT(3) | BIT(2) | BIT(0))
+#define ADF_200XX_ERRMSK5_UERR (BIT(16))
+
+/* RI CPP control */
+#define ADF_200XX_RICPPINTCTL (0x3A000 + 0x110)
+/*
+ * BIT(2) enables error detection and reporting on the RI Parity Error.
+ * BIT(1) enables error detection and reporting on the RI CPP Pull interface.
+ * BIT(0) enables error detection and reporting on the RI CPP Push interface.
+ */
+#define ADF_200XX_RICPP_EN (BIT(2) | BIT(1) | BIT(0))
+
+/* TI CPP control */
+#define ADF_200XX_TICPPINTCTL (0x3A400 + 0x138)
+/*
+ * BIT(3) enables error detection and reporting on the ETR Parity Error.
+ * BIT(2) enables error detection and reporting on the TI Parity Error.
+ * BIT(1) enables error detection and reporting on the TI CPP Pull interface.
+ * BIT(0) enables error detection and reporting on the TI CPP Push interface.
+ */
+#define ADF_200XX_TICPP_EN (BIT(3) | BIT(2) | BIT(1) | BIT(0))
+
+/* CFC Uncorrectable Errors */
+#define ADF_200XX_CPP_CFC_ERR_CTRL (0x30000 + 0xC00)
+/*
+ * BIT(1) enables interrupt.
+ * BIT(0) enables detecting and logging of push/pull data errors.
+ */
+#define ADF_200XX_CPP_CFC_UE (BIT(1) | BIT(0))
+
+#define ADF_200XX_SLICEPWRDOWN(i) ((i)*0x4000 + 0x2C)
+/* Enabling PKE4-PKE0. */
+#define ADF_200XX_MMP_PWR_UP_MSK \
+	(BIT(20) | BIT(19) | BIT(18) | BIT(17) | BIT(16))
+
+/* CPM Uncorrectable Errors */
+#define ADF_200XX_INTMASKSSM(i) ((i)*0x4000 + 0x0)
+/* Disabling interrupts for correctable errors. */
+#define ADF_200XX_INTMASKSSM_UERR \
+	(BIT(11) | BIT(9) | BIT(7) | BIT(5) | BIT(3) | BIT(1))
+
+/* MMP */
+/* BIT(3) enables correction. */
+#define ADF_200XX_CERRSSMMMP_EN (BIT(3))
+
+/* BIT(3) enables logging. */
+#define ADF_200XX_UERRSSMMMP_EN (BIT(3))
+
+#define ADF_200XX_PF2VF_OFFSET(i) (0x3A000 + 0x280 + ((i)*0x04))
+#define ADF_200XX_VINTMSK_OFFSET(i) (0x3A000 + 0x200 + ((i)*0x04))
+
+/* Arbiter configuration */
+#define ADF_200XX_ARB_OFFSET 0x30000
+#define ADF_200XX_ARB_WRK_2_SER_MAP_OFFSET 0x180
+#define ADF_200XX_ARB_WQCFG_OFFSET 0x100
+
+/* Admin Interface Reg Offset */
+#define ADF_200XX_ADMINMSGUR_OFFSET (0x3A000 + 0x574)
+#define ADF_200XX_ADMINMSGLR_OFFSET (0x3A000 + 0x578)
+#define ADF_200XX_MAILBOX_BASE_OFFSET 0x20970
+
+/* Firmware Binary */
+#define ADF_200XX_FW "qat_200xx_fw"
+#define ADF_200XX_MMP "qat_200xx_mmp_fw"
+
+void adf_init_hw_data_200xx(struct adf_hw_device_data *hw_data);
+void adf_clean_hw_data_200xx(struct adf_hw_device_data *hw_data);
+
+#define ADF_200XX_AE_FREQ (685 * 1000000)
+#define ADF_200XX_MIN_AE_FREQ (333 * 1000000)
+#define ADF_200XX_MAX_AE_FREQ (685 * 1000000)
+
+#endif
Index: sys/dev/qat/qat_hw/qat_200xx/adf_200xx_hw_data.c
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_hw/qat_200xx/adf_200xx_hw_data.c
@@ -0,0 +1,541 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+#include
+#include
+#include
+#include
+#include
+#include "adf_200xx_hw_data.h"
+#include "icp_qat_hw.h"
+#include "adf_heartbeat.h"
+
+/* Worker thread to service arbiter mappings */
+static const u32 thrd_to_arb_map[ADF_200XX_MAX_ACCELENGINES] =
+    { 0x12222AAA, 0x11222AAA, 0x12222AAA, 0x11222AAA, 0x12222AAA, 0x11222AAA };
+
+enum { DEV_200XX_SKU_1 = 0, DEV_200XX_SKU_2 = 1, DEV_200XX_SKU_3 = 2 };
+
+static u32 thrd_to_arb_map_gen[ADF_200XX_MAX_ACCELENGINES] = { 0 };
+
+static struct adf_hw_device_class qat_200xx_class = {.name =
+							 ADF_200XX_DEVICE_NAME,
+						     .type = DEV_200XX,
+						     .instances = 0 };
+
+static u32
+get_accel_mask(struct adf_accel_dev *accel_dev)
+{
+	device_t pdev = accel_dev->accel_pci_dev.pci_dev;
+
+	u32 fuse;
+	u32 straps;
+
+	fuse = pci_read_config(pdev, ADF_DEVICE_FUSECTL_OFFSET, 4);
+	straps = pci_read_config(pdev, ADF_200XX_SOFTSTRAP_CSR_OFFSET, 4);
+
+	return (~(fuse | straps)) >> ADF_200XX_ACCELERATORS_REG_OFFSET &
+	    ADF_200XX_ACCELERATORS_MASK;
+}
+
+static u32
+get_ae_mask(struct adf_accel_dev *accel_dev)
+{
+	device_t pdev = accel_dev->accel_pci_dev.pci_dev;
+	u32 fuse;
+	u32 me_straps;
+	u32 me_disable;
+	u32 ssms_disabled;
+
+	fuse = pci_read_config(pdev, ADF_DEVICE_FUSECTL_OFFSET, 4);
+	me_straps = pci_read_config(pdev, ADF_200XX_SOFTSTRAP_CSR_OFFSET, 4);
+
+	/* If SSMs are disabled, then disable the corresponding MEs */
+	ssms_disabled =
+	    (~get_accel_mask(accel_dev)) & ADF_200XX_ACCELERATORS_MASK;
+	me_disable = 0x3;
+	while (ssms_disabled) {
+		if (ssms_disabled & 1)
+			me_straps |= me_disable;
+		ssms_disabled >>= 1;
+		me_disable <<= 2;
+	}
+
+	return (~(fuse | me_straps)) & ADF_200XX_ACCELENGINES_MASK;
+}
+
+static u32
+get_num_accels(struct adf_hw_device_data *self)
+{
+	u32 i, ctr = 0;
+
+	if (!self || !self->accel_mask)
+		return 0;
+
+	for (i = 0; i < ADF_200XX_MAX_ACCELERATORS; i++) {
+		if (self->accel_mask & (1 << i))
+			ctr++;
+	}
+	return ctr;
+}
+
+static u32
+get_num_aes(struct adf_hw_device_data *self)
+{
+	u32 i, ctr = 0;
+
+	if (!self || !self->ae_mask)
+		return 0;
+
+	for (i = 0; i < ADF_200XX_MAX_ACCELENGINES; i++) {
+		if (self->ae_mask & (1 << i))
+			ctr++;
+	}
+	return ctr;
+}
+
+static u32
+get_misc_bar_id(struct adf_hw_device_data *self)
+{
+	return ADF_200XX_PMISC_BAR;
+}
+
+static u32
+get_etr_bar_id(struct adf_hw_device_data *self)
+{
+	return ADF_200XX_ETR_BAR;
+}
+
+static u32
+get_sram_bar_id(struct adf_hw_device_data *self)
+{
+	return 0;
+}
+
+static enum dev_sku_info
+get_sku(struct adf_hw_device_data *self)
+{
+	int aes = get_num_aes(self);
+
+	if (aes == 6)
+		return DEV_SKU_4;
+
+	return DEV_SKU_UNKNOWN;
+}
+
+static void
+adf_get_arbiter_mapping(struct adf_accel_dev *accel_dev,
+			u32 const **arb_map_config)
+{
+	int i;
+	struct adf_hw_device_data *hw_device = accel_dev->hw_device;
+
+	for (i = 0; i < ADF_200XX_MAX_ACCELENGINES; i++) {
+		thrd_to_arb_map_gen[i] = 0;
+		if (hw_device->ae_mask & (1 << i))
+			thrd_to_arb_map_gen[i] = thrd_to_arb_map[i];
+	}
+	adf_cfg_gen_dispatch_arbiter(accel_dev,
+				     thrd_to_arb_map,
+				     thrd_to_arb_map_gen,
+				     ADF_200XX_MAX_ACCELENGINES);
+	*arb_map_config = thrd_to_arb_map_gen;
+}
+
+static u32
+get_pf2vf_offset(u32 i)
+{
+	return ADF_200XX_PF2VF_OFFSET(i);
+}
+
+static u32
+get_vintmsk_offset(u32 i)
+{
+	return ADF_200XX_VINTMSK_OFFSET(i);
+}
+
+static void
+get_arb_info(struct arb_info *arb_csrs_info)
+{
+	arb_csrs_info->arbiter_offset = ADF_200XX_ARB_OFFSET;
+	arb_csrs_info->wrk_thd_2_srv_arb_map =
+	    ADF_200XX_ARB_WRK_2_SER_MAP_OFFSET;
+	arb_csrs_info->wrk_cfg_offset = ADF_200XX_ARB_WQCFG_OFFSET;
+}
+
+static void
+get_admin_info(struct admin_info *admin_csrs_info)
+{
+	admin_csrs_info->mailbox_offset = ADF_200XX_MAILBOX_BASE_OFFSET;
+	admin_csrs_info->admin_msg_ur = ADF_200XX_ADMINMSGUR_OFFSET;
+	admin_csrs_info->admin_msg_lr = ADF_200XX_ADMINMSGLR_OFFSET;
+}
+
+static void
+get_errsou_offset(u32 *errsou3, u32 *errsou5)
+{
+	*errsou3 = ADF_200XX_ERRSOU3;
+	*errsou5 = ADF_200XX_ERRSOU5;
+}
+
+static u32
+get_clock_speed(struct adf_hw_device_data *self)
+{
+	/* CPP clock is half high-speed clock */
+	return self->clock_frequency / 2;
+}
+
+static void +adf_enable_error_interrupts(struct resource *csr) +{ + ADF_CSR_WR(csr, ADF_ERRMSK0, ADF_200XX_ERRMSK0_CERR); /* ME0-ME3 */ + ADF_CSR_WR(csr, ADF_ERRMSK1, ADF_200XX_ERRMSK1_CERR); /* ME4-ME5 */ + ADF_CSR_WR(csr, ADF_ERRMSK5, ADF_200XX_ERRMSK5_CERR); /* SSM2 */ + + /* Reset everything except VFtoPF1_16. */ + adf_csr_fetch_and_and(csr, ADF_ERRMSK3, ADF_200XX_VF2PF1_16); + + /* RI CPP bus interface error detection and reporting. */ + ADF_CSR_WR(csr, ADF_200XX_RICPPINTCTL, ADF_200XX_RICPP_EN); + + /* TI CPP bus interface error detection and reporting. */ + ADF_CSR_WR(csr, ADF_200XX_TICPPINTCTL, ADF_200XX_TICPP_EN); + + /* Enable CFC Error interrupts and logging. */ + ADF_CSR_WR(csr, ADF_200XX_CPP_CFC_ERR_CTRL, ADF_200XX_CPP_CFC_UE); +} + +static void +adf_disable_error_interrupts(struct adf_accel_dev *accel_dev) +{ + struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_200XX_PMISC_BAR]; + struct resource *csr = misc_bar->virt_addr; + + /* ME0-ME3 */ + ADF_CSR_WR(csr, + ADF_ERRMSK0, + ADF_200XX_ERRMSK0_UERR | ADF_200XX_ERRMSK0_CERR); + /* ME4-ME5 */ + ADF_CSR_WR(csr, + ADF_ERRMSK1, + ADF_200XX_ERRMSK1_UERR | ADF_200XX_ERRMSK1_CERR); + /* CPP Push Pull, RI, TI, SSM0-SSM1, CFC */ + ADF_CSR_WR(csr, ADF_ERRMSK3, ADF_200XX_ERRMSK3_UERR); + /* SSM2 */ + ADF_CSR_WR(csr, ADF_ERRMSK5, ADF_200XX_ERRMSK5_UERR); +} + +static int +adf_check_uncorrectable_error(struct adf_accel_dev *accel_dev) +{ + struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_200XX_PMISC_BAR]; + struct resource *csr = misc_bar->virt_addr; + + u32 errsou0 = ADF_CSR_RD(csr, ADF_ERRSOU0) & ADF_200XX_ERRMSK0_UERR; + u32 errsou1 = ADF_CSR_RD(csr, ADF_ERRSOU1) & ADF_200XX_ERRMSK1_UERR; + u32 errsou3 = ADF_CSR_RD(csr, ADF_ERRSOU3) & ADF_200XX_ERRMSK3_UERR; + u32 errsou5 = ADF_CSR_RD(csr, ADF_ERRSOU5) & ADF_200XX_ERRMSK5_UERR; + + return (errsou0 | errsou1 | errsou3 | errsou5); +} + +static void +adf_enable_mmp_error_correction(struct resource *csr, + struct adf_hw_device_data *hw_data) +{ + unsigned int 
dev, mmp; + unsigned int mask; + + /* Enable MMP Logging */ + for (dev = 0, mask = hw_data->accel_mask; mask; dev++, mask >>= 1) { + if (!(mask & 1)) + continue; + /* Set power-up */ + adf_csr_fetch_and_and(csr, + ADF_200XX_SLICEPWRDOWN(dev), + ~ADF_200XX_MMP_PWR_UP_MSK); + + if (hw_data->accel_capabilities_mask & + ADF_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC) { + for (mmp = 0; mmp < ADF_MAX_MMP; ++mmp) { + /* + * The device supports PKE, + * so enable error reporting from MMP memory + */ + adf_csr_fetch_and_or(csr, + ADF_UERRSSMMMP(dev, mmp), + ADF_200XX_UERRSSMMMP_EN); + /* + * The device supports PKE, + * so enable error correction from MMP memory + */ + adf_csr_fetch_and_or(csr, + ADF_CERRSSMMMP(dev, mmp), + ADF_200XX_CERRSSMMMP_EN); + } + } else { + for (mmp = 0; mmp < ADF_MAX_MMP; ++mmp) { + /* + * The device doesn't support PKE, + * so disable error reporting from MMP memory + */ + adf_csr_fetch_and_and(csr, + ADF_UERRSSMMMP(dev, mmp), + ~ADF_200XX_UERRSSMMMP_EN); + /* + * The device doesn't support PKE, + * so disable error correction from MMP memory + */ + adf_csr_fetch_and_and(csr, + ADF_CERRSSMMMP(dev, mmp), + ~ADF_200XX_CERRSSMMMP_EN); + } + } + + /* Restore power-down value */ + adf_csr_fetch_and_or(csr, + ADF_200XX_SLICEPWRDOWN(dev), + ADF_200XX_MMP_PWR_UP_MSK); + + /* Disabling correctable error interrupts. 
*/ + ADF_CSR_WR(csr, + ADF_200XX_INTMASKSSM(dev), + ADF_200XX_INTMASKSSM_UERR); + } +} + +static void +adf_enable_error_correction(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_200XX_PMISC_BAR]; + struct resource *csr = misc_bar->virt_addr; + unsigned int val, i; + unsigned int mask; + + /* Enable Accel Engine error detection & correction */ + mask = hw_device->ae_mask; + for (i = 0; mask; i++, mask >>= 1) { + if (!(mask & 1)) + continue; + val = ADF_CSR_RD(csr, ADF_200XX_AE_CTX_ENABLES(i)); + val |= ADF_200XX_ENABLE_AE_ECC_ERR; + ADF_CSR_WR(csr, ADF_200XX_AE_CTX_ENABLES(i), val); + val = ADF_CSR_RD(csr, ADF_200XX_AE_MISC_CONTROL(i)); + val |= ADF_200XX_ENABLE_AE_ECC_PARITY_CORR; + ADF_CSR_WR(csr, ADF_200XX_AE_MISC_CONTROL(i), val); + } + + /* Enable shared memory error detection & correction */ + mask = hw_device->accel_mask; + for (i = 0; mask; i++, mask >>= 1) { + if (!(mask & 1)) + continue; + val = ADF_CSR_RD(csr, ADF_200XX_UERRSSMSH(i)); + val |= ADF_200XX_ERRSSMSH_EN; + ADF_CSR_WR(csr, ADF_200XX_UERRSSMSH(i), val); + val = ADF_CSR_RD(csr, ADF_200XX_CERRSSMSH(i)); + val |= ADF_200XX_ERRSSMSH_EN; + ADF_CSR_WR(csr, ADF_200XX_CERRSSMSH(i), val); + val = ADF_CSR_RD(csr, ADF_PPERR(i)); + val |= ADF_200XX_PPERR_EN; + ADF_CSR_WR(csr, ADF_PPERR(i), val); + } + + adf_enable_error_interrupts(csr); + adf_enable_mmp_error_correction(csr, hw_device); +} + +static void +adf_enable_ints(struct adf_accel_dev *accel_dev) +{ + struct resource *addr; + + addr = (&GET_BARS(accel_dev)[ADF_200XX_PMISC_BAR])->virt_addr; + + /* Enable bundle and misc interrupts */ + ADF_CSR_WR(addr, ADF_200XX_SMIAPF0_MASK_OFFSET, ADF_200XX_SMIA0_MASK); + ADF_CSR_WR(addr, ADF_200XX_SMIAPF1_MASK_OFFSET, ADF_200XX_SMIA1_MASK); +} + +static u32 +get_ae_clock(struct adf_hw_device_data *self) +{ + /* + * Clock update interval is <16> ticks for 200xx. 
+ */ + return self->clock_frequency / 16; +} + +static int +get_storage_enabled(struct adf_accel_dev *accel_dev, uint32_t *storage_enabled) +{ + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + + strlcpy(key, ADF_STORAGE_FIRMWARE_ENABLED, sizeof(key)); + if (!adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) { + if (kstrtouint(val, 0, storage_enabled)) + return -EFAULT; + } + return 0; +} + +static int +measure_clock(struct adf_accel_dev *accel_dev) +{ + u32 frequency; + int ret = 0; + + ret = adf_dev_measure_clock(accel_dev, + &frequency, + ADF_200XX_MIN_AE_FREQ, + ADF_200XX_MAX_AE_FREQ); + if (ret) + return ret; + + accel_dev->hw_device->clock_frequency = frequency; + return 0; +} + +static u32 +adf_200xx_get_hw_cap(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 legfuses; + u32 capabilities; + u32 straps; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 fuses = hw_data->fuses; + + /* Read accelerator capabilities mask */ + legfuses = pci_read_config(pdev, ADF_DEVICE_LEGFUSE_OFFSET, 4); + capabilities = ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC + + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC + + ICP_ACCEL_CAPABILITIES_CIPHER + + ICP_ACCEL_CAPABILITIES_AUTHENTICATION + + ICP_ACCEL_CAPABILITIES_COMPRESSION + ICP_ACCEL_CAPABILITIES_ZUC + + ICP_ACCEL_CAPABILITIES_SHA3 + ICP_ACCEL_CAPABILITIES_HKDF + + ICP_ACCEL_CAPABILITIES_ECEDMONT + + ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN; + if (legfuses & ICP_ACCEL_MASK_CIPHER_SLICE) + capabilities &= ~(ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC | + ICP_ACCEL_CAPABILITIES_CIPHER | + ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN); + if (legfuses & ICP_ACCEL_MASK_AUTH_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_AUTHENTICATION; + if (legfuses & ICP_ACCEL_MASK_PKE_SLICE) + capabilities &= ~(ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC | + ICP_ACCEL_CAPABILITIES_ECEDMONT); + if (legfuses & ICP_ACCEL_MASK_COMPRESS_SLICE) + capabilities &= 
~ICP_ACCEL_CAPABILITIES_COMPRESSION; + if (legfuses & ICP_ACCEL_MASK_EIA3_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_ZUC; + if (legfuses & ICP_ACCEL_MASK_SHA3_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_SHA3; + + straps = pci_read_config(pdev, ADF_200XX_SOFTSTRAP_CSR_OFFSET, 4); + if ((straps | fuses) & ADF_200XX_POWERGATE_PKE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC; + if ((straps | fuses) & ADF_200XX_POWERGATE_CY) + capabilities &= ~ICP_ACCEL_CAPABILITIES_COMPRESSION; + + return capabilities; +} + +static const char * +get_obj_name(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services service) +{ + return ADF_CXXX_AE_FW_NAME_CUSTOM1; +} + +static uint32_t +get_objs_num(struct adf_accel_dev *accel_dev) +{ + return 1; +} + +static uint32_t +get_obj_cfg_ae_mask(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services services) +{ + return accel_dev->hw_device->ae_mask; +} + +void +adf_init_hw_data_200xx(struct adf_hw_device_data *hw_data) +{ + hw_data->dev_class = &qat_200xx_class; + hw_data->instance_id = qat_200xx_class.instances++; + hw_data->num_banks = ADF_200XX_ETR_MAX_BANKS; + hw_data->num_rings_per_bank = ADF_ETR_MAX_RINGS_PER_BANK; + hw_data->num_accel = ADF_200XX_MAX_ACCELERATORS; + hw_data->num_logical_accel = 1; + hw_data->num_engines = ADF_200XX_MAX_ACCELENGINES; + hw_data->tx_rx_gap = ADF_200XX_RX_RINGS_OFFSET; + hw_data->tx_rings_mask = ADF_200XX_TX_RINGS_MASK; + hw_data->alloc_irq = adf_isr_resource_alloc; + hw_data->free_irq = adf_isr_resource_free; + hw_data->enable_error_correction = adf_enable_error_correction; + hw_data->check_uncorrectable_error = adf_check_uncorrectable_error; + hw_data->print_err_registers = adf_print_err_registers; + hw_data->disable_error_interrupts = adf_disable_error_interrupts; + hw_data->get_accel_mask = get_accel_mask; + hw_data->get_ae_mask = get_ae_mask; + hw_data->get_num_accels = get_num_accels; + hw_data->get_num_aes = get_num_aes; + hw_data->get_sram_bar_id = 
get_sram_bar_id; + hw_data->get_etr_bar_id = get_etr_bar_id; + hw_data->get_misc_bar_id = get_misc_bar_id; + hw_data->get_pf2vf_offset = get_pf2vf_offset; + hw_data->get_vintmsk_offset = get_vintmsk_offset; + hw_data->get_arb_info = get_arb_info; + hw_data->get_admin_info = get_admin_info; + hw_data->get_errsou_offset = get_errsou_offset; + hw_data->get_clock_speed = get_clock_speed; + hw_data->get_sku = get_sku; + hw_data->fw_name = ADF_200XX_FW; + hw_data->fw_mmp_name = ADF_200XX_MMP; + hw_data->init_admin_comms = adf_init_admin_comms; + hw_data->exit_admin_comms = adf_exit_admin_comms; + hw_data->disable_iov = adf_disable_sriov; + hw_data->send_admin_init = adf_send_admin_init; + hw_data->init_arb = adf_init_gen2_arb; + hw_data->exit_arb = adf_exit_arb; + hw_data->get_arb_mapping = adf_get_arbiter_mapping; + hw_data->enable_ints = adf_enable_ints; + hw_data->set_ssm_wdtimer = adf_set_ssm_wdtimer; + hw_data->check_slice_hang = adf_check_slice_hang; + hw_data->enable_vf2pf_comms = adf_pf_enable_vf2pf_comms; + hw_data->disable_vf2pf_comms = adf_pf_disable_vf2pf_comms; + hw_data->restore_device = adf_dev_restore; + hw_data->reset_device = adf_reset_flr; + hw_data->min_iov_compat_ver = ADF_PFVF_COMPATIBILITY_VERSION; + hw_data->measure_clock = measure_clock; + hw_data->get_ae_clock = get_ae_clock; + hw_data->get_objs_num = get_objs_num; + hw_data->get_obj_name = get_obj_name; + hw_data->get_obj_cfg_ae_mask = get_obj_cfg_ae_mask; + hw_data->get_accel_cap = adf_200xx_get_hw_cap; + hw_data->clock_frequency = ADF_200XX_AE_FREQ; + hw_data->extended_dc_capabilities = 0; + hw_data->get_storage_enabled = get_storage_enabled; + hw_data->query_storage_cap = 1; + hw_data->get_heartbeat_status = adf_get_heartbeat_status; + hw_data->storage_enable = 0; + hw_data->get_ring_to_svc_map = adf_cfg_get_services_enabled; + hw_data->config_device = adf_config_device; + hw_data->set_asym_rings_mask = 
adf_cfg_set_asym_rings_mask; + hw_data->ring_to_svc_map = ADF_DEFAULT_RING_TO_SRV_MAP; + hw_data->pre_reset = adf_dev_pre_reset; + hw_data->post_reset = adf_dev_post_reset; +} + +void +adf_clean_hw_data_200xx(struct adf_hw_device_data *hw_data) +{ + hw_data->dev_class->instances--; +} Index: sys/dev/qat/qat_hw/qat_200xx/adf_drv.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_200xx/adf_drv.c @@ -0,0 +1,280 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "adf_200xx_hw_data.h" +#include "adf_fw_counters.h" +#include "adf_cfg_device.h" +#include +#include +#include +#include +#include +#include "adf_heartbeat_dbg.h" +#include "adf_cnvnr_freq_counters.h" + +static MALLOC_DEFINE(M_QAT_200XX, "qat_200xx", "qat_200xx"); + +#define ADF_SYSTEM_DEVICE(device_id) \ + { \ + PCI_VENDOR_ID_INTEL, device_id \ + } + +static const struct pci_device_id adf_pci_tbl[] = + { ADF_SYSTEM_DEVICE(ADF_200XX_PCI_DEVICE_ID), + { + 0, + } }; + +static int +adf_probe(device_t dev) +{ + const struct pci_device_id *id; + + for (id = adf_pci_tbl; id->vendor != 0; id++) { + if (pci_get_vendor(dev) == id->vendor && + pci_get_device(dev) == id->device) { + device_set_desc(dev, + "Intel " ADF_200XX_DEVICE_NAME + " QuickAssist"); + return BUS_PROBE_GENERIC; + } + } + return ENXIO; +} + +static void +adf_cleanup_accel(struct adf_accel_dev *accel_dev) +{ + struct adf_accel_pci *accel_pci_dev = &accel_dev->accel_pci_dev; + int i; + + if (accel_dev->dma_tag) + bus_dma_tag_destroy(accel_dev->dma_tag); + for (i = 0; i < ADF_PCI_MAX_BARS; i++) { + struct adf_bar *bar = &accel_pci_dev->pci_bars[i]; + + if (bar->virt_addr) + bus_free_resource(accel_pci_dev->pci_dev, + SYS_RES_MEMORY, + bar->virt_addr); + } + + if (accel_dev->hw_device) { + switch 
(pci_get_device(accel_pci_dev->pci_dev)) { + case ADF_200XX_PCI_DEVICE_ID: + adf_clean_hw_data_200xx(accel_dev->hw_device); + break; + default: + break; + } + free(accel_dev->hw_device, M_QAT_200XX); + accel_dev->hw_device = NULL; + } + adf_cfg_dev_remove(accel_dev); + adf_devmgr_rm_dev(accel_dev, NULL); +} + +static int +adf_attach(device_t dev) +{ + struct adf_accel_dev *accel_dev; + struct adf_accel_pci *accel_pci_dev; + struct adf_hw_device_data *hw_data; + unsigned int i = 0, bar_nr = 0, reg_val = 0; + int ret, rid; + struct adf_cfg_device *cfg_dev = NULL; + + /* Set the PCI MaxPayload size to 256 bytes. This avoids PCI + * passthrough resetting MaxPayload to 128 bytes when the + * device is reset. + */ + if (pci_get_max_payload(dev) != 256) + pci_set_max_payload(dev, 256); + + accel_dev = device_get_softc(dev); + + INIT_LIST_HEAD(&accel_dev->crypto_list); + accel_pci_dev = &accel_dev->accel_pci_dev; + accel_pci_dev->pci_dev = dev; + + if (bus_get_domain(dev, &accel_pci_dev->node) != 0) + accel_pci_dev->node = 0; + + /* XXX: Revisit if we actually need a devmgr table at all. */ + + /* Add accel device to accel table. 
+ * This should be called before adf_cleanup_accel is called + */ + if (adf_devmgr_add_dev(accel_dev, NULL)) { + device_printf(dev, "Failed to add new accelerator device.\n"); + return ENXIO; + } + + /* Allocate and configure device configuration structure */ + hw_data = malloc(sizeof(*hw_data), M_QAT_200XX, M_WAITOK | M_ZERO); + + accel_dev->hw_device = hw_data; + adf_init_hw_data_200xx(accel_dev->hw_device); + accel_pci_dev->revid = pci_get_revid(dev); + hw_data->fuses = pci_read_config(dev, ADF_DEVICE_FUSECTL_OFFSET, 4); + if (accel_pci_dev->revid == 0x00) { + device_printf(dev, "A0 stepping is not supported.\n"); + ret = ENODEV; + goto out_err; + } + + /* Get PPAERUCM values and store */ + ret = adf_aer_store_ppaerucm_reg(dev, hw_data); + if (ret) + goto out_err; + + /* Clear PFIEERRUNCSTSR register bits if they are set */ + reg_val = pci_read_config(dev, ADF_200XX_PFIEERRUNCSTSR, 4); + if (reg_val) { + device_printf( + dev, + "Clearing PFIEERRUNCSTSR, previous status : %0x\n", + reg_val); + pci_write_config(dev, ADF_200XX_PFIEERRUNCSTSR, reg_val, 4); + } + + /* Get Accelerators and Accelerators Engines masks */ + hw_data->accel_mask = hw_data->get_accel_mask(accel_dev); + hw_data->ae_mask = hw_data->get_ae_mask(accel_dev); + + accel_pci_dev->sku = hw_data->get_sku(hw_data); + /* If the device has no acceleration engines then ignore it. 
*/ + if (!hw_data->accel_mask || !hw_data->ae_mask || + (~hw_data->ae_mask & 0x01)) { + device_printf(dev, "No acceleration units found\n"); + ret = ENXIO; + goto out_err; + } + + /* Create device configuration table */ + ret = adf_cfg_dev_add(accel_dev); + if (ret) + goto out_err; + ret = adf_clock_debugfs_add(accel_dev); + if (ret) + goto out_err; + + pci_set_max_read_req(dev, 1024); + + ret = bus_dma_tag_create(bus_get_dma_tag(dev), + 1, + 0, + BUS_SPACE_MAXADDR, + BUS_SPACE_MAXADDR, + NULL, + NULL, + BUS_SPACE_MAXSIZE, + /* BUS_SPACE_UNRESTRICTED */ 1, + BUS_SPACE_MAXSIZE, + 0, + NULL, + NULL, + &accel_dev->dma_tag); + if (ret) + goto out_err; + + if (hw_data->get_accel_cap) { + hw_data->accel_capabilities_mask = + hw_data->get_accel_cap(accel_dev); + } + + /* Find and map all the device's BARS */ + for (bar_nr = 0; i < ADF_PCI_MAX_BARS && bar_nr < PCIR_MAX_BAR_0; + bar_nr++) { + struct adf_bar *bar; + + /* + * XXX: This isn't quite right as it will ignore a BAR + * that wasn't assigned a valid resource range by the + * firmware. 
+ */ + rid = PCIR_BAR(bar_nr); + if (bus_get_resource(dev, SYS_RES_MEMORY, rid, NULL, NULL) != 0) + continue; + bar = &accel_pci_dev->pci_bars[i++]; + bar->virt_addr = bus_alloc_resource_any(dev, + SYS_RES_MEMORY, + &rid, + RF_ACTIVE); + if (!bar->virt_addr) { + device_printf(dev, "Failed to map BAR %d\n", bar_nr); + ret = ENXIO; + goto out_err; + } + bar->base_addr = rman_get_start(bar->virt_addr); + bar->size = rman_get_size(bar->virt_addr); + } + pci_enable_busmaster(dev); + + if (!accel_dev->hw_device->config_device) { + ret = EFAULT; + goto out_err; + } + + ret = accel_dev->hw_device->config_device(accel_dev); + if (ret) + goto out_err; + + ret = adf_dev_init(accel_dev); + if (ret) + goto out_dev_shutdown; + + ret = adf_dev_start(accel_dev); + if (ret) + goto out_dev_stop; + + cfg_dev = accel_dev->cfg->dev; + adf_cfg_device_clear(cfg_dev, accel_dev); + free(cfg_dev, M_QAT); + accel_dev->cfg->dev = NULL; + return ret; +out_dev_stop: + adf_dev_stop(accel_dev); +out_dev_shutdown: + adf_dev_shutdown(accel_dev); +out_err: + adf_cleanup_accel(accel_dev); + return ret; +} + +static int +adf_detach(device_t dev) +{ + struct adf_accel_dev *accel_dev = device_get_softc(dev); + + if (adf_dev_stop(accel_dev)) { + device_printf(dev, "Failed to stop QAT accel dev\n"); + return EBUSY; + } + + adf_dev_shutdown(accel_dev); + + adf_cleanup_accel(accel_dev); + + return 0; +} + +static device_method_t adf_methods[] = { DEVMETHOD(device_probe, adf_probe), + DEVMETHOD(device_attach, adf_attach), + DEVMETHOD(device_detach, adf_detach), + + DEVMETHOD_END }; + +static driver_t adf_driver = { "qat", + adf_methods, + sizeof(struct adf_accel_dev) }; + +DRIVER_MODULE_ORDERED(qat_200xx, pci, adf_driver, NULL, NULL, SI_ORDER_THIRD); +MODULE_VERSION(qat_200xx, 1); +MODULE_DEPEND(qat_200xx, qat_common, 1, 1, 1); +MODULE_DEPEND(qat_200xx, qat_api, 1, 1, 1); +MODULE_DEPEND(qat_200xx, linuxkpi, 1, 1, 1); Index: sys/dev/qat/qat_hw/qat_c3xxx/adf_c3xxx_hw_data.h 
=================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c3xxx/adf_c3xxx_hw_data.h @@ -0,0 +1,127 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_C3XXX_HW_DATA_H_ +#define ADF_C3XXX_HW_DATA_H_ + +/* PCIe configuration space */ +#define ADF_C3XXX_PMISC_BAR 0 +#define ADF_C3XXX_ETR_BAR 1 +#define ADF_C3XXX_RX_RINGS_OFFSET 8 +#define ADF_C3XXX_TX_RINGS_MASK 0xFF +#define ADF_C3XXX_MAX_ACCELERATORS 3 +#define ADF_C3XXX_MAX_ACCELENGINES 6 +#define ADF_C3XXX_ACCELERATORS_REG_OFFSET 16 +#define ADF_C3XXX_ACCELERATORS_MASK 0x7 +#define ADF_C3XXX_ACCELENGINES_MASK 0x3F +#define ADF_C3XXX_ETR_MAX_BANKS 16 +#define ADF_C3XXX_SMIAPF0_MASK_OFFSET (0x3A000 + 0x28) +#define ADF_C3XXX_SMIAPF1_MASK_OFFSET (0x3A000 + 0x30) +#define ADF_C3XXX_SMIA0_MASK 0xFFFF +#define ADF_C3XXX_SMIA1_MASK 0x1 +#define ADF_C3XXX_SOFTSTRAP_CSR_OFFSET 0x2EC +#define ADF_C3XXX_POWERGATE_PKE BIT(24) +#define ADF_C3XXX_POWERGATE_CY BIT(23) + +/* Error detection and correction */ +#define ADF_C3XXX_AE_CTX_ENABLES(i) (i * 0x1000 + 0x20818) +#define ADF_C3XXX_AE_MISC_CONTROL(i) (i * 0x1000 + 0x20960) +#define ADF_C3XXX_ENABLE_AE_ECC_ERR BIT(28) +#define ADF_C3XXX_ENABLE_AE_ECC_PARITY_CORR (BIT(24) | BIT(12)) +#define ADF_C3XXX_UERRSSMSH(i) (i * 0x4000 + 0x18) +#define ADF_C3XXX_CERRSSMSH(i) (i * 0x4000 + 0x10) +#define ADF_C3XXX_ERRSSMSH_EN BIT(3) +#define ADF_C3XXX_ERRSOU3 (0x3A000 + 0x0C) +#define ADF_C3XXX_ERRSOU5 (0x3A000 + 0xD8) + +/* BIT(2) enables the logging of push/pull data errors. */ +#define ADF_C3XXX_PPERR_EN (BIT(2)) + +/* Mask for VF2PF interrupts */ +#define ADF_C3XXX_VF2PF1_16 (0xFFFF << 9) +#define ADF_C3XXX_ERRSOU3_VF2PF(errsou3) (((errsou3)&0x01FFFE00) >> 9) +#define ADF_C3XXX_ERRMSK3_VF2PF(vf_mask) (((vf_mask)&0xFFFF) << 9) + +/* Masks for correctable error interrupts. 
*/ +#define ADF_C3XXX_ERRMSK0_CERR (BIT(24) | BIT(16) | BIT(8) | BIT(0)) +#define ADF_C3XXX_ERRMSK1_CERR (BIT(8) | BIT(0)) +#define ADF_C3XXX_ERRMSK5_CERR (0) + +/* Masks for uncorrectable error interrupts. */ +#define ADF_C3XXX_ERRMSK0_UERR (BIT(25) | BIT(17) | BIT(9) | BIT(1)) +#define ADF_C3XXX_ERRMSK1_UERR (BIT(9) | BIT(1)) +#define ADF_C3XXX_ERRMSK3_UERR \ + (BIT(6) | BIT(5) | BIT(4) | BIT(3) | BIT(2) | BIT(0)) +#define ADF_C3XXX_ERRMSK5_UERR (BIT(16)) + +/* RI CPP control */ +#define ADF_C3XXX_RICPPINTCTL (0x3A000 + 0x110) +/* + * BIT(2) enables error detection and reporting on the RI Parity Error. + * BIT(1) enables error detection and reporting on the RI CPP Pull interface. + * BIT(0) enables error detection and reporting on the RI CPP Push interface. + */ +#define ADF_C3XXX_RICPP_EN (BIT(2) | BIT(1) | BIT(0)) + +/* TI CPP control */ +#define ADF_C3XXX_TICPPINTCTL (0x3A400 + 0x138) +/* + * BIT(3) enables error detection and reporting on the ETR Parity Error. + * BIT(2) enables error detection and reporting on the TI Parity Error. + * BIT(1) enables error detection and reporting on the TI CPP Pull interface. + * BIT(0) enables error detection and reporting on the TI CPP Push interface. + */ +#define ADF_C3XXX_TICPP_EN (BIT(3) | BIT(2) | BIT(1) | BIT(0)) + +/* CFC Uncorrectable Errors */ +#define ADF_C3XXX_CPP_CFC_ERR_CTRL (0x30000 + 0xC00) +/* + * BIT(1) enables interrupt. + * BIT(0) enables detecting and logging of push/pull data errors. + */ +#define ADF_C3XXX_CPP_CFC_UE (BIT(1) | BIT(0)) + +#define ADF_C3XXX_SLICEPWRDOWN(i) ((i)*0x4000 + 0x2C) +/* Enabling PKE4-PKE0. */ +#define ADF_C3XXX_MMP_PWR_UP_MSK \ + (BIT(20) | BIT(19) | BIT(18) | BIT(17) | BIT(16)) + +/* CPM Uncorrectable Errors */ +#define ADF_C3XXX_INTMASKSSM(i) ((i)*0x4000 + 0x0) +/* Disabling interrupts for correctable errors. */ +#define ADF_C3XXX_INTMASKSSM_UERR \ + (BIT(11) | BIT(9) | BIT(7) | BIT(5) | BIT(3) | BIT(1)) + +/* MMP */ +/* BIT(3) enables correction. 
*/ +#define ADF_C3XXX_CERRSSMMMP_EN (BIT(3)) + +#define ADF_C3X_CLK_PER_SEC (343 * 1000000) +/* BIT(3) enables logging. */ +#define ADF_C3XXX_UERRSSMMMP_EN (BIT(3)) + +#define ADF_C3XXX_PF2VF_OFFSET(i) (0x3A000 + 0x280 + ((i)*0x04)) +#define ADF_C3XXX_VINTMSK_OFFSET(i) (0x3A000 + 0x200 + ((i)*0x04)) + +/* Arbiter configuration */ +#define ADF_C3XXX_ARB_OFFSET 0x30000 +#define ADF_C3XXX_ARB_WRK_2_SER_MAP_OFFSET 0x180 +#define ADF_C3XXX_ARB_WQCFG_OFFSET 0x100 + +/* Admin Interface Reg Offset */ +#define ADF_C3XXX_ADMINMSGUR_OFFSET (0x3A000 + 0x574) +#define ADF_C3XXX_ADMINMSGLR_OFFSET (0x3A000 + 0x578) +#define ADF_C3XXX_MAILBOX_BASE_OFFSET 0x20970 + +/* Firmware Binary */ +#define ADF_C3XXX_FW "qat_c3xxx_fw" +#define ADF_C3XXX_MMP "qat_c3xxx_mmp_fw" + +void adf_init_hw_data_c3xxx(struct adf_hw_device_data *hw_data); +void adf_clean_hw_data_c3xxx(struct adf_hw_device_data *hw_data); + +#define ADF_C3XXX_AE_FREQ (685 * 1000000) +#define ADF_C3XXX_MIN_AE_FREQ (320 * 1000000) +#define ADF_C3XXX_MAX_AE_FREQ (685 * 1000000) + +#endif Index: sys/dev/qat/qat_hw/qat_c3xxx/adf_c3xxx_hw_data.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c3xxx/adf_c3xxx_hw_data.c @@ -0,0 +1,415 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include +#include +#include +#include +#include +#include "adf_c3xxx_hw_data.h" +#include "icp_qat_hw.h" +#include "adf_heartbeat.h" + +/* Worker thread to service arbiter mappings */ +static const u32 thrd_to_arb_map[ADF_C3XXX_MAX_ACCELENGINES] = + { 0x12222AAA, 0x11222AAA, 0x12222AAA, 0x11222AAA, 0x12222AAA, 0x11222AAA }; + +enum { DEV_C3XXX_SKU_1 = 0, DEV_C3XXX_SKU_2 = 1, DEV_C3XXX_SKU_3 = 2 }; + +static u32 thrd_to_arb_map_gen[ADF_C3XXX_MAX_ACCELENGINES] = { 0 }; + +static struct adf_hw_device_class c3xxx_class = {.name = ADF_C3XXX_DEVICE_NAME, + .type = DEV_C3XXX, + .instances = 0 }; + +static u32 +get_accel_mask(struct 
adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + + u32 fuse; + u32 straps; + + fuse = pci_read_config(pdev, ADF_DEVICE_FUSECTL_OFFSET, 4); + straps = pci_read_config(pdev, ADF_C3XXX_SOFTSTRAP_CSR_OFFSET, 4); + + return (~(fuse | straps)) >> ADF_C3XXX_ACCELERATORS_REG_OFFSET & + ADF_C3XXX_ACCELERATORS_MASK; +} + +static u32 +get_ae_mask(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 fuse; + u32 me_straps; + u32 me_disable; + u32 ssms_disabled; + + fuse = pci_read_config(pdev, ADF_DEVICE_FUSECTL_OFFSET, 4); + me_straps = pci_read_config(pdev, ADF_C3XXX_SOFTSTRAP_CSR_OFFSET, 4); + + /* If SSMs are disabled, then disable the corresponding MEs */ + ssms_disabled = + (~get_accel_mask(accel_dev)) & ADF_C3XXX_ACCELERATORS_MASK; + me_disable = 0x3; + while (ssms_disabled) { + if (ssms_disabled & 1) + me_straps |= me_disable; + ssms_disabled >>= 1; + me_disable <<= 2; + } + + return (~(fuse | me_straps)) & ADF_C3XXX_ACCELENGINES_MASK; +} + +static u32 +get_num_accels(struct adf_hw_device_data *self) +{ + u32 i, ctr = 0; + + if (!self || !self->accel_mask) + return 0; + + for (i = 0; i < ADF_C3XXX_MAX_ACCELERATORS; i++) { + if (self->accel_mask & (1 << i)) + ctr++; + } + return ctr; +} + +static u32 +get_num_aes(struct adf_hw_device_data *self) +{ + u32 i, ctr = 0; + + if (!self || !self->ae_mask) + return 0; + + for (i = 0; i < ADF_C3XXX_MAX_ACCELENGINES; i++) { + if (self->ae_mask & (1 << i)) + ctr++; + } + return ctr; +} + +static u32 +get_misc_bar_id(struct adf_hw_device_data *self) +{ + return ADF_C3XXX_PMISC_BAR; +} + +static u32 +get_etr_bar_id(struct adf_hw_device_data *self) +{ + return ADF_C3XXX_ETR_BAR; +} + +static u32 +get_sram_bar_id(struct adf_hw_device_data *self) +{ + return 0; +} + +static enum dev_sku_info +get_sku(struct adf_hw_device_data *self) +{ + int aes = get_num_aes(self); + + if (aes == 6) + return DEV_SKU_4; + + return DEV_SKU_UNKNOWN; +} + +static void 
+adf_get_arbiter_mapping(struct adf_accel_dev *accel_dev, + u32 const **arb_map_config) +{ + int i; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + + for (i = 0; i < ADF_C3XXX_MAX_ACCELENGINES; i++) { + thrd_to_arb_map_gen[i] = 0; + if (hw_device->ae_mask & (1 << i)) + thrd_to_arb_map_gen[i] = thrd_to_arb_map[i]; + } + adf_cfg_gen_dispatch_arbiter(accel_dev, + thrd_to_arb_map, + thrd_to_arb_map_gen, + ADF_C3XXX_MAX_ACCELENGINES); + *arb_map_config = thrd_to_arb_map_gen; +} + +static u32 +get_pf2vf_offset(u32 i) +{ + return ADF_C3XXX_PF2VF_OFFSET(i); +} + +static u32 +get_vintmsk_offset(u32 i) +{ + return ADF_C3XXX_VINTMSK_OFFSET(i); +} + +static void +get_arb_info(struct arb_info *arb_csrs_info) +{ + arb_csrs_info->arbiter_offset = ADF_C3XXX_ARB_OFFSET; + arb_csrs_info->wrk_thd_2_srv_arb_map = + ADF_C3XXX_ARB_WRK_2_SER_MAP_OFFSET; + arb_csrs_info->wrk_cfg_offset = ADF_C3XXX_ARB_WQCFG_OFFSET; +} + +static void +get_admin_info(struct admin_info *admin_csrs_info) +{ + admin_csrs_info->mailbox_offset = ADF_C3XXX_MAILBOX_BASE_OFFSET; + admin_csrs_info->admin_msg_ur = ADF_C3XXX_ADMINMSGUR_OFFSET; + admin_csrs_info->admin_msg_lr = ADF_C3XXX_ADMINMSGLR_OFFSET; +} + +static void +get_errsou_offset(u32 *errsou3, u32 *errsou5) +{ + *errsou3 = ADF_C3XXX_ERRSOU3; + *errsou5 = ADF_C3XXX_ERRSOU5; +} + +static u32 +get_clock_speed(struct adf_hw_device_data *self) +{ + /* CPP clock is half high-speed clock */ + return self->clock_frequency / 2; +} + +static void +adf_enable_error_correction(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_C3XXX_PMISC_BAR]; + struct resource *csr = misc_bar->virt_addr; + unsigned int val, i; + unsigned int mask; + + /* Enable Accel Engine error detection & correction */ + mask = hw_device->ae_mask; + for (i = 0; mask; i++, mask >>= 1) { + if (!(mask & 1)) + continue; + val = ADF_CSR_RD(csr, ADF_C3XXX_AE_CTX_ENABLES(i)); + val |= 
ADF_C3XXX_ENABLE_AE_ECC_ERR; + ADF_CSR_WR(csr, ADF_C3XXX_AE_CTX_ENABLES(i), val); + val = ADF_CSR_RD(csr, ADF_C3XXX_AE_MISC_CONTROL(i)); + val |= ADF_C3XXX_ENABLE_AE_ECC_PARITY_CORR; + ADF_CSR_WR(csr, ADF_C3XXX_AE_MISC_CONTROL(i), val); + } + + /* Enable shared memory error detection & correction */ + mask = hw_device->accel_mask; + for (i = 0; mask; i++, mask >>= 1) { + if (!(mask & 1)) + continue; + val = ADF_CSR_RD(csr, ADF_C3XXX_UERRSSMSH(i)); + val |= ADF_C3XXX_ERRSSMSH_EN; + ADF_CSR_WR(csr, ADF_C3XXX_UERRSSMSH(i), val); + val = ADF_CSR_RD(csr, ADF_C3XXX_CERRSSMSH(i)); + val |= ADF_C3XXX_ERRSSMSH_EN; + ADF_CSR_WR(csr, ADF_C3XXX_CERRSSMSH(i), val); + } +} + +static void +adf_enable_ints(struct adf_accel_dev *accel_dev) +{ + struct resource *addr; + + addr = (&GET_BARS(accel_dev)[ADF_C3XXX_PMISC_BAR])->virt_addr; + + /* Enable bundle and misc interrupts */ + ADF_CSR_WR(addr, ADF_C3XXX_SMIAPF0_MASK_OFFSET, ADF_C3XXX_SMIA0_MASK); + ADF_CSR_WR(addr, ADF_C3XXX_SMIAPF1_MASK_OFFSET, ADF_C3XXX_SMIA1_MASK); +} + +static u32 +get_ae_clock(struct adf_hw_device_data *self) +{ + /* + * Clock update interval is <16> ticks for c3xxx. 
+ */ + return self->clock_frequency / 16; +} + +static int +get_storage_enabled(struct adf_accel_dev *accel_dev, uint32_t *storage_enabled) +{ + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + + strlcpy(key, ADF_STORAGE_FIRMWARE_ENABLED, sizeof(key)); + if (!adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) { + if (kstrtouint(val, 0, storage_enabled)) + return -EFAULT; + } + return 0; +} + +static int +measure_clock(struct adf_accel_dev *accel_dev) +{ + u32 frequency; + int ret = 0; + + ret = adf_dev_measure_clock(accel_dev, + &frequency, + ADF_C3XXX_MIN_AE_FREQ, + ADF_C3XXX_MAX_AE_FREQ); + if (ret) + return ret; + + accel_dev->hw_device->clock_frequency = frequency; + return 0; +} + +static u32 +c3xxx_get_hw_cap(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 legfuses; + u32 capabilities; + u32 straps; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 fuses = hw_data->fuses; + + /* Read accelerator capabilities mask */ + legfuses = pci_read_config(pdev, ADF_DEVICE_LEGFUSE_OFFSET, 4); + capabilities = ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC + + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC + + ICP_ACCEL_CAPABILITIES_CIPHER + + ICP_ACCEL_CAPABILITIES_AUTHENTICATION + + ICP_ACCEL_CAPABILITIES_COMPRESSION + ICP_ACCEL_CAPABILITIES_ZUC + + ICP_ACCEL_CAPABILITIES_SHA3 + ICP_ACCEL_CAPABILITIES_HKDF + + ICP_ACCEL_CAPABILITIES_ECEDMONT + + ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN; + if (legfuses & ICP_ACCEL_MASK_CIPHER_SLICE) + capabilities &= ~(ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC | + ICP_ACCEL_CAPABILITIES_CIPHER | + ICP_ACCEL_CAPABILITIES_HKDF | + ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN); + if (legfuses & ICP_ACCEL_MASK_AUTH_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_AUTHENTICATION; + if (legfuses & ICP_ACCEL_MASK_PKE_SLICE) + capabilities &= ~(ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC | + ICP_ACCEL_CAPABILITIES_ECEDMONT); + if (legfuses & 
ICP_ACCEL_MASK_COMPRESS_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_COMPRESSION; + if (legfuses & ICP_ACCEL_MASK_EIA3_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_ZUC; + if (legfuses & ICP_ACCEL_MASK_SHA3_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_SHA3; + + straps = pci_read_config(pdev, ADF_C3XXX_SOFTSTRAP_CSR_OFFSET, 4); + if ((straps | fuses) & ADF_C3XXX_POWERGATE_PKE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC; + if ((straps | fuses) & ADF_C3XXX_POWERGATE_CY) + capabilities &= ~ICP_ACCEL_CAPABILITIES_COMPRESSION; + + return capabilities; +} + +static const char * +get_obj_name(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services service) +{ + return ADF_CXXX_AE_FW_NAME_CUSTOM1; +} + +static uint32_t +get_objs_num(struct adf_accel_dev *accel_dev) +{ + return 1; +} + +static uint32_t +get_obj_cfg_ae_mask(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services services) +{ + return accel_dev->hw_device->ae_mask; +} + +void +adf_init_hw_data_c3xxx(struct adf_hw_device_data *hw_data) +{ + hw_data->dev_class = &c3xxx_class; + hw_data->instance_id = c3xxx_class.instances++; + hw_data->num_banks = ADF_C3XXX_ETR_MAX_BANKS; + hw_data->num_rings_per_bank = ADF_ETR_MAX_RINGS_PER_BANK; + hw_data->num_accel = ADF_C3XXX_MAX_ACCELERATORS; + hw_data->num_logical_accel = 1; + hw_data->num_engines = ADF_C3XXX_MAX_ACCELENGINES; + hw_data->tx_rx_gap = ADF_C3XXX_RX_RINGS_OFFSET; + hw_data->tx_rings_mask = ADF_C3XXX_TX_RINGS_MASK; + hw_data->alloc_irq = adf_isr_resource_alloc; + hw_data->free_irq = adf_isr_resource_free; + hw_data->enable_error_correction = adf_enable_error_correction; + hw_data->print_err_registers = adf_print_err_registers; + hw_data->get_accel_mask = get_accel_mask; + hw_data->get_ae_mask = get_ae_mask; + hw_data->get_num_accels = get_num_accels; + hw_data->get_num_aes = get_num_aes; + hw_data->get_sram_bar_id = get_sram_bar_id; + hw_data->get_etr_bar_id = get_etr_bar_id; + hw_data->get_misc_bar_id = 
get_misc_bar_id; + hw_data->get_pf2vf_offset = get_pf2vf_offset; + hw_data->get_vintmsk_offset = get_vintmsk_offset; + hw_data->get_arb_info = get_arb_info; + hw_data->get_admin_info = get_admin_info; + hw_data->get_errsou_offset = get_errsou_offset; + hw_data->get_clock_speed = get_clock_speed; + hw_data->get_sku = get_sku; + hw_data->fw_name = ADF_C3XXX_FW; + hw_data->fw_mmp_name = ADF_C3XXX_MMP; + hw_data->init_admin_comms = adf_init_admin_comms; + hw_data->exit_admin_comms = adf_exit_admin_comms; + hw_data->disable_iov = adf_disable_sriov; + hw_data->send_admin_init = adf_send_admin_init; + hw_data->init_arb = adf_init_gen2_arb; + hw_data->exit_arb = adf_exit_arb; + hw_data->get_arb_mapping = adf_get_arbiter_mapping; + hw_data->enable_ints = adf_enable_ints; + hw_data->set_ssm_wdtimer = adf_set_ssm_wdtimer; + hw_data->check_slice_hang = adf_check_slice_hang; + hw_data->enable_vf2pf_comms = adf_pf_enable_vf2pf_comms; + hw_data->disable_vf2pf_comms = adf_pf_disable_vf2pf_comms; + hw_data->restore_device = adf_dev_restore; + hw_data->reset_device = adf_reset_flr; + hw_data->min_iov_compat_ver = ADF_PFVF_COMPATIBILITY_VERSION; + hw_data->measure_clock = measure_clock; + hw_data->get_ae_clock = get_ae_clock; + hw_data->get_objs_num = get_objs_num; + hw_data->get_obj_name = get_obj_name; + hw_data->get_obj_cfg_ae_mask = get_obj_cfg_ae_mask; + hw_data->get_accel_cap = c3xxx_get_hw_cap; + hw_data->clock_frequency = ADF_C3XXX_AE_FREQ; + hw_data->extended_dc_capabilities = 0; + hw_data->get_storage_enabled = get_storage_enabled; + hw_data->query_storage_cap = 1; + hw_data->get_heartbeat_status = adf_get_heartbeat_status; + hw_data->storage_enable = 0; + hw_data->get_fw_image_type = adf_cfg_get_fw_image_type; + hw_data->get_ring_to_svc_map = adf_cfg_get_services_enabled; + hw_data->config_device = adf_config_device; + hw_data->set_asym_rings_mask = adf_cfg_set_asym_rings_mask; +
hw_data->ring_to_svc_map = ADF_DEFAULT_RING_TO_SRV_MAP; + hw_data->pre_reset = adf_dev_pre_reset; + hw_data->post_reset = adf_dev_post_reset; +} + +void +adf_clean_hw_data_c3xxx(struct adf_hw_device_data *hw_data) +{ + hw_data->dev_class->instances--; +} Index: sys/dev/qat/qat_hw/qat_c3xxx/adf_drv.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c3xxx/adf_drv.c @@ -0,0 +1,269 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "adf_c3xxx_hw_data.h" +#include "adf_fw_counters.h" +#include "adf_cfg_device.h" +#include +#include +#include +#include +#include +#include "adf_heartbeat_dbg.h" +#include "adf_cnvnr_freq_counters.h" + +static MALLOC_DEFINE(M_QAT_C3XXX, "qat_c3xxx", "qat_c3xxx"); + +#define ADF_SYSTEM_DEVICE(device_id) \ + { \ + PCI_VENDOR_ID_INTEL, device_id \ + } + +static const struct pci_device_id adf_pci_tbl[] = + { ADF_SYSTEM_DEVICE(ADF_C3XXX_PCI_DEVICE_ID), + { + 0, + } }; + +static int +adf_probe(device_t dev) +{ + const struct pci_device_id *id; + + for (id = adf_pci_tbl; id->vendor != 0; id++) { + if (pci_get_vendor(dev) == id->vendor && + pci_get_device(dev) == id->device) { + device_set_desc(dev, + "Intel " ADF_C3XXX_DEVICE_NAME + " QuickAssist"); + return BUS_PROBE_GENERIC; + } + } + return ENXIO; +} + +static void +adf_cleanup_accel(struct adf_accel_dev *accel_dev) +{ + struct adf_accel_pci *accel_pci_dev = &accel_dev->accel_pci_dev; + int i; + + if (accel_dev->dma_tag) + bus_dma_tag_destroy(accel_dev->dma_tag); + for (i = 0; i < ADF_PCI_MAX_BARS; i++) { + struct adf_bar *bar = &accel_pci_dev->pci_bars[i]; + + if (bar->virt_addr) + bus_free_resource(accel_pci_dev->pci_dev, + SYS_RES_MEMORY, + bar->virt_addr); + } + + if (accel_dev->hw_device) { + switch (pci_get_device(accel_pci_dev->pci_dev)) { + case 
ADF_C3XXX_PCI_DEVICE_ID: + adf_clean_hw_data_c3xxx(accel_dev->hw_device); + break; + default: + break; + } + free(accel_dev->hw_device, M_QAT_C3XXX); + accel_dev->hw_device = NULL; + } + adf_cfg_dev_remove(accel_dev); + adf_devmgr_rm_dev(accel_dev, NULL); +} + +static int +adf_attach(device_t dev) +{ + struct adf_accel_dev *accel_dev; + struct adf_accel_pci *accel_pci_dev; + struct adf_hw_device_data *hw_data; + unsigned int i, bar_nr; + int ret, rid; + struct adf_cfg_device *cfg_dev = NULL; + + /* Set pci MaxPayLoad to 256. Implemented to avoid the issue of + * Pci-passthrough causing Maxpayload to be reset to 128 bytes + * when the device is reset. */ + if (pci_get_max_payload(dev) != 256) + pci_set_max_payload(dev, 256); + + accel_dev = device_get_softc(dev); + + INIT_LIST_HEAD(&accel_dev->crypto_list); + accel_pci_dev = &accel_dev->accel_pci_dev; + accel_pci_dev->pci_dev = dev; + + if (bus_get_domain(dev, &accel_pci_dev->node) != 0) + accel_pci_dev->node = 0; + + /* XXX: Revisit if we actually need a devmgr table at all. */ + + /* Add accel device to accel table. 
+ * This should be called before adf_cleanup_accel is called */ + if (adf_devmgr_add_dev(accel_dev, NULL)) { + device_printf(dev, "Failed to add new accelerator device.\n"); + return ENXIO; + } + + /* Allocate and configure device configuration structure */ + hw_data = malloc(sizeof(*hw_data), M_QAT_C3XXX, M_WAITOK | M_ZERO); + + accel_dev->hw_device = hw_data; + adf_init_hw_data_c3xxx(accel_dev->hw_device); + accel_pci_dev->revid = pci_get_revid(dev); + hw_data->fuses = pci_read_config(dev, ADF_DEVICE_FUSECTL_OFFSET, 4); + if (accel_pci_dev->revid == 0x00) { + device_printf(dev, "A0 stepping is not supported.\n"); + ret = ENODEV; + goto out_err; + } + + /* Get PPAERUCM values and store */ + ret = adf_aer_store_ppaerucm_reg(dev, hw_data); + if (ret) + goto out_err; + + /* Get Accelerators and Accelerators Engines masks */ + hw_data->accel_mask = hw_data->get_accel_mask(accel_dev); + hw_data->ae_mask = hw_data->get_ae_mask(accel_dev); + + accel_pci_dev->sku = hw_data->get_sku(hw_data); + /* If the device has no acceleration engines then ignore it. 
*/ + if (!hw_data->accel_mask || !hw_data->ae_mask || + ((~hw_data->ae_mask) & 0x01)) { + device_printf(dev, "No acceleration units found\n"); + ret = ENXIO; + goto out_err; + } + + /* Create device configuration table */ + ret = adf_cfg_dev_add(accel_dev); + if (ret) + goto out_err; + ret = adf_clock_debugfs_add(accel_dev); + if (ret) + goto out_err; + + pci_set_max_read_req(dev, 1024); + + ret = bus_dma_tag_create(bus_get_dma_tag(dev), + 1, + 0, + BUS_SPACE_MAXADDR, + BUS_SPACE_MAXADDR, + NULL, + NULL, + BUS_SPACE_MAXSIZE, + /* BUS_SPACE_UNRESTRICTED */ 1, + BUS_SPACE_MAXSIZE, + 0, + NULL, + NULL, + &accel_dev->dma_tag); + if (ret) + goto out_err; + + if (hw_data->get_accel_cap) { + hw_data->accel_capabilities_mask = + hw_data->get_accel_cap(accel_dev); + } + + /* Find and map all the device's BARS */ + i = 0; + for (bar_nr = 0; i < ADF_PCI_MAX_BARS && bar_nr < PCIR_MAX_BAR_0; + bar_nr++) { + struct adf_bar *bar; + + /* + * XXX: This isn't quite right as it will ignore a BAR + * that wasn't assigned a valid resource range by the + * firmware. 
+ */ + rid = PCIR_BAR(bar_nr); + if (bus_get_resource(dev, SYS_RES_MEMORY, rid, NULL, NULL) != 0) + continue; + bar = &accel_pci_dev->pci_bars[i++]; + bar->virt_addr = bus_alloc_resource_any(dev, + SYS_RES_MEMORY, + &rid, + RF_ACTIVE); + if (bar->virt_addr == NULL) { + device_printf(dev, "Failed to map BAR %d\n", bar_nr); + ret = ENXIO; + goto out_err; + } + bar->base_addr = rman_get_start(bar->virt_addr); + bar->size = rman_get_size(bar->virt_addr); + } + pci_enable_busmaster(dev); + + if (!accel_dev->hw_device->config_device) { + ret = EFAULT; + goto out_err; + } + + ret = accel_dev->hw_device->config_device(accel_dev); + if (ret) + goto out_err; + + ret = adf_dev_init(accel_dev); + if (ret) + goto out_dev_shutdown; + + ret = adf_dev_start(accel_dev); + if (ret) + goto out_dev_stop; + + cfg_dev = accel_dev->cfg->dev; + adf_cfg_device_clear(cfg_dev, accel_dev); + free(cfg_dev, M_QAT); + accel_dev->cfg->dev = NULL; + return ret; +out_dev_stop: + adf_dev_stop(accel_dev); +out_dev_shutdown: + adf_dev_shutdown(accel_dev); +out_err: + adf_cleanup_accel(accel_dev); + return ret; +} + +static int +adf_detach(device_t dev) +{ + struct adf_accel_dev *accel_dev = device_get_softc(dev); + + if (adf_dev_stop(accel_dev)) { + device_printf(dev, "Failed to stop QAT accel dev\n"); + return EBUSY; + } + + adf_dev_shutdown(accel_dev); + + adf_cleanup_accel(accel_dev); + + return 0; +} + +static device_method_t adf_methods[] = { DEVMETHOD(device_probe, adf_probe), + DEVMETHOD(device_attach, adf_attach), + DEVMETHOD(device_detach, adf_detach), + + DEVMETHOD_END }; + +static driver_t adf_driver = { "qat", + adf_methods, + sizeof(struct adf_accel_dev) }; + +DRIVER_MODULE_ORDERED(qat_c3xxx, pci, adf_driver, NULL, NULL, SI_ORDER_THIRD); +MODULE_VERSION(qat_c3xxx, 1); +MODULE_DEPEND(qat_c3xxx, qat_common, 1, 1, 1); +MODULE_DEPEND(qat_c3xxx, qat_api, 1, 1, 1); +MODULE_DEPEND(qat_c3xxx, linuxkpi, 1, 1, 1); Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_ae_config.c 
=================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_ae_config.c @@ -0,0 +1,167 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_c4xxx_hw_data.h" +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* String buffer size */ +#define AE_INFO_BUFFER_SIZE 50 + +#define AE_CONFIG_DBG_FILE "ae_config" + +static u8 +find_first_me_index(const u32 au_mask) +{ + u8 i; + u32 mask = au_mask; + + /* Retrieve the index of the first ME of an accel unit */ + for (i = 0; i < ADF_C4XXX_MAX_ACCELENGINES; i++) { + if (mask & BIT(i)) + return i; + } + + return 0; +} + +static u8 +get_au_index(u8 au_mask) +{ + u8 au_index = 0; + + while (au_mask) { + if (au_mask == BIT(0)) + return au_index; + au_index++; + au_mask = au_mask >> 1; + } + + return 0; +} + +static int adf_ae_config_show(SYSCTL_HANDLER_ARGS) +{ + struct sbuf sb; + struct adf_accel_dev *accel_dev = arg1; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_accel_unit *accel_unit = accel_dev->au_info->au; + u8 i, j; + u8 au_index; + u8 ae_index; + u8 num_aes; + int ret = 0; + u32 num_au = hw_data->get_num_accel_units(hw_data); + + sbuf_new_for_sysctl(&sb, NULL, 2048, req); + + sbuf_printf(&sb, "\n"); + for (i = 0; i < num_au; i++) { + /* Retrieve accel unit index */ + au_index = get_au_index(accel_unit[i].au_mask); + + /* Retrieve index of first ME in current accel unit */ + ae_index = find_first_me_index(accel_unit[i].ae_mask); + num_aes = accel_unit[i].num_ae; + + /* Retrieve accel unit type */ + switch (accel_unit[i].services) { + case ADF_ACCEL_CRYPTO: + sbuf_printf(&sb, + "\tAccel unit %d - CRYPTO\n", + au_index); + /* Display ME assignment for a particular accel unit */ + for (j = ae_index; j < (num_aes + ae_index); j++) + sbuf_printf(&sb, "\t\tAE[%d]: crypto\n", j); + break; + case
ADF_ACCEL_COMPRESSION: + sbuf_printf(&sb, + "\tAccel unit %d - COMPRESSION\n", + au_index); + /* Display ME assignment for a particular accel unit */ + for (j = ae_index; j < (num_aes + ae_index); j++) + sbuf_printf(&sb, + "\t\tAE[%d]: compression\n", + j); + break; + case ADF_ACCEL_SERVICE_NULL: + default: + break; + } + } + + sbuf_finish(&sb); + ret = SYSCTL_OUT(req, sbuf_data(&sb), sbuf_len(&sb)); + sbuf_delete(&sb); + + return ret; +} + +static int +c4xxx_add_debugfs_ae_config(struct adf_accel_dev *accel_dev) +{ + struct sysctl_ctx_list *qat_sysctl_ctx = NULL; + struct sysctl_oid *qat_sysctl_tree = NULL; + struct sysctl_oid *ae_conf_ctl = NULL; + + qat_sysctl_ctx = + device_get_sysctl_ctx(accel_dev->accel_pci_dev.pci_dev); + qat_sysctl_tree = + device_get_sysctl_tree(accel_dev->accel_pci_dev.pci_dev); + + ae_conf_ctl = SYSCTL_ADD_PROC(qat_sysctl_ctx, + SYSCTL_CHILDREN(qat_sysctl_tree), + OID_AUTO, + AE_CONFIG_DBG_FILE, + CTLTYPE_STRING | CTLFLAG_RD, + accel_dev, + 0, + adf_ae_config_show, + "A", + "AE config"); + accel_dev->debugfs_ae_config = ae_conf_ctl; + if (!accel_dev->debugfs_ae_config) { + device_printf(GET_DEV(accel_dev), + "Could not create debug ae config entry.\n"); + return EFAULT; + } + return 0; +} + +int +c4xxx_init_ae_config(struct adf_accel_dev *accel_dev) +{ + int ret = 0; + + /* Add a new file in debug file system with h/w version. 
*/ + ret = c4xxx_add_debugfs_ae_config(accel_dev); + if (ret) { + c4xxx_exit_ae_config(accel_dev); + device_printf(GET_DEV(accel_dev), + "Could not create debugfs ae config file\n"); + return EINVAL; + } + + return 0; +} + +void +c4xxx_exit_ae_config(struct adf_accel_dev *accel_dev) +{ + if (!accel_dev->debugfs_ae_config) + return; + + /* Delete ae configuration file */ + remove_oid(accel_dev, accel_dev->debugfs_ae_config); + + accel_dev->debugfs_ae_config = NULL; +} Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_hw_data.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_hw_data.h @@ -0,0 +1,570 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_C4XXX_HW_DATA_H_ +#define ADF_C4XXX_HW_DATA_H_ + +#include + +/* PCIe configuration space */ +#define ADF_C4XXX_SRAM_BAR 0 +#define ADF_C4XXX_PMISC_BAR 1 +#define ADF_C4XXX_ETR_BAR 2 +#define ADF_C4XXX_RX_RINGS_OFFSET 4 +#define ADF_C4XXX_TX_RINGS_MASK 0xF + +#define ADF_C4XXX_MAX_ACCELERATORS 12 +#define ADF_C4XXX_MAX_ACCELUNITS 6 +#define ADF_C4XXX_MAX_ACCELENGINES 32 +#define ADF_C4XXX_ACCELERATORS_REG_OFFSET 16 + +/* Soft straps offsets */ +#define ADF_C4XXX_SOFTSTRAPPULL0_OFFSET (0x344) +#define ADF_C4XXX_SOFTSTRAPPULL1_OFFSET (0x348) +#define ADF_C4XXX_SOFTSTRAPPULL2_OFFSET (0x34C) + +/* Physical function fuses offsets */ +#define ADF_C4XXX_FUSECTL0_OFFSET (0x350) +#define ADF_C4XXX_FUSECTL1_OFFSET (0x354) +#define ADF_C4XXX_FUSECTL2_OFFSET (0x358) + +#define ADF_C4XXX_FUSE_PKE_MASK (0xFFF000) +#define ADF_C4XXX_FUSE_COMP_MASK (0x000FFF) +#define ADF_C4XXX_FUSE_PROD_SKU_MASK BIT(31) + +#define ADF_C4XXX_LEGFUSE_BASE_SKU_MASK (BIT(2) | BIT(3)) + +#define ADF_C4XXX_FUSE_DISABLE_INLINE_INGRESS BIT(12) +#define ADF_C4XXX_FUSE_DISABLE_INLINE_EGRESS BIT(13) +#define ADF_C4XXX_FUSE_DISABLE_INLINE_MASK \ + (ADF_C4XXX_FUSE_DISABLE_INLINE_INGRESS | \ + 
ADF_C4XXX_FUSE_DISABLE_INLINE_EGRESS) + +#define ADF_C4XXX_ACCELERATORS_MASK (0xFFF) +#define ADF_C4XXX_ACCELENGINES_MASK (0xFFFFFFFF) + +#define ADF_C4XXX_ETR_MAX_BANKS 128 +#define ADF_C4XXX_SMIAPF0_MASK_OFFSET (0x60000 + 0x20) +#define ADF_C4XXX_SMIAPF1_MASK_OFFSET (0x60000 + 0x24) +#define ADF_C4XXX_SMIAPF2_MASK_OFFSET (0x60000 + 0x28) +#define ADF_C4XXX_SMIAPF3_MASK_OFFSET (0x60000 + 0x2C) +#define ADF_C4XXX_SMIAPF4_MASK_OFFSET (0x60000 + 0x30) +#define ADF_C4XXX_SMIA0_MASK 0xFFFFFFFF +#define ADF_C4XXX_SMIA1_MASK 0xFFFFFFFF +#define ADF_C4XXX_SMIA2_MASK 0xFFFFFFFF +#define ADF_C4XXX_SMIA3_MASK 0xFFFFFFFF +#define ADF_C4XXX_SMIA4_MASK 0x1 +/* Bank and ring configuration */ +#define ADF_C4XXX_NUM_RINGS_PER_BANK 8 +/* Error detection and correction */ +#define ADF_C4XXX_AE_CTX_ENABLES(i) (0x40818 + ((i)*0x1000)) +#define ADF_C4XXX_AE_MISC_CONTROL(i) (0x40960 + ((i)*0x1000)) +#define ADF_C4XXX_ENABLE_AE_ECC_ERR BIT(28) +#define ADF_C4XXX_ENABLE_AE_ECC_PARITY_CORR (BIT(24) | BIT(12)) +#define ADF_C4XXX_UERRSSMSH(i) (0x18 + ((i)*0x4000)) +#define ADF_C4XXX_UERRSSMSH_INTS_CLEAR_MASK (~BIT(0) ^ BIT(16)) +#define ADF_C4XXX_CERRSSMSH(i) (0x10 + ((i)*0x4000)) +#define ADF_C4XXX_CERRSSMSH_INTS_CLEAR_MASK (~BIT(0)) +#define ADF_C4XXX_ERRSSMSH_EN BIT(3) +#define ADF_C4XXX_PF2VF_OFFSET(i) (0x62400 + ((i)*0x04)) +#define ADF_C4XXX_VINTMSK_OFFSET(i) (0x62200 + ((i)*0x04)) + +/* Doorbell interrupt detection in ERRSOU11 */ +#define ADF_C4XXX_DOORBELL_INT_SRC BIT(10) + +/* Doorbell interrupt register definitions */ +#define ADF_C4XXX_ETH_DOORBELL_INT (0x60108) + +/* Clear <3:0> in ETH_DOORBELL_INT */ +#define ADF_C4XXX_ETH_DOORBELL_MASK 0xF + +/* Doorbell register definitions */ +#define ADF_C4XXX_NUM_ETH_DOORBELL_REGS (4) +#define ADF_C4XXX_ETH_DOORBELL(i) (0x61500 + ((i)*0x04)) + +/* Error source registers */ +#define ADF_C4XXX_ERRSOU0 (0x60000 + 0x40) +#define ADF_C4XXX_ERRSOU1 (0x60000 + 0x44) +#define ADF_C4XXX_ERRSOU2 (0x60000 + 0x48) +#define ADF_C4XXX_ERRSOU3 (0x60000 + 
0x4C) +#define ADF_C4XXX_ERRSOU4 (0x60000 + 0x50) +#define ADF_C4XXX_ERRSOU5 (0x60000 + 0x54) +#define ADF_C4XXX_ERRSOU6 (0x60000 + 0x58) +#define ADF_C4XXX_ERRSOU7 (0x60000 + 0x5C) +#define ADF_C4XXX_ERRSOU8 (0x60000 + 0x60) +#define ADF_C4XXX_ERRSOU9 (0x60000 + 0x64) +#define ADF_C4XXX_ERRSOU10 (0x60000 + 0x68) +#define ADF_C4XXX_ERRSOU11 (0x60000 + 0x6C) + +/* Error source mask registers */ +#define ADF_C4XXX_ERRMSK0 (0x60000 + 0xC0) +#define ADF_C4XXX_ERRMSK1 (0x60000 + 0xC4) +#define ADF_C4XXX_ERRMSK2 (0x60000 + 0xC8) +#define ADF_C4XXX_ERRMSK3 (0x60000 + 0xCC) +#define ADF_C4XXX_ERRMSK4 (0x60000 + 0xD0) +#define ADF_C4XXX_ERRMSK5 (0x60000 + 0xD4) +#define ADF_C4XXX_ERRMSK6 (0x60000 + 0xD8) +#define ADF_C4XXX_ERRMSK7 (0x60000 + 0xDC) +#define ADF_C4XXX_ERRMSK8 (0x60000 + 0xE0) +#define ADF_C4XXX_ERRMSK9 (0x60000 + 0xE4) +#define ADF_C4XXX_ERRMSK10 (0x60000 + 0xE8) +#define ADF_C4XXX_ERRMSK11 (0x60000 + 0xEC) + +/* Slice Hang enabling related registers */ +#define ADF_C4XXX_SHINTMASKSSM (0x1018) +#define ADF_C4XXX_SSMWDTL (0x54) +#define ADF_C4XXX_SSMWDTH (0x5C) +#define ADF_C4XXX_SSMWDTPKEL (0x58) +#define ADF_C4XXX_SSMWDTPKEH (0x60) +#define ADF_C4XXX_SLICEHANGSTATUS (0x4C) +#define ADF_C4XXX_IASLICEHANGSTATUS (0x50) + +#define ADF_C4XXX_SHINTMASKSSM_VAL (0x00) + +/* Set default value of Slice Hang watchdogs in clock cycles */ +#define ADF_C4XXX_SSM_WDT_64BIT_DEFAULT_VALUE 0x3D0900 +#define ADF_C4XXX_SSM_WDT_PKE_64BIT_DEFAULT_VALUE 0x3000000 + +/* Return interrupt accelerator source mask */ +#define ADF_C4XXX_IRQ_SRC_MASK(accel) (1 << (accel)) + +/* Return address of SHINTMASKSSM register for a given accelerator */ +#define ADF_C4XXX_SHINTMASKSSM_OFFSET(accel) \ + (ADF_C4XXX_SHINTMASKSSM + ((accel)*0x4000)) + +/* Return address of SSMWDTL register for a given accelerator */ +#define ADF_C4XXX_SSMWDTL_OFFSET(accel) (ADF_C4XXX_SSMWDTL + ((accel)*0x4000)) + +/* Return address of SSMWDTH register for a given accelerator */ +#define ADF_C4XXX_SSMWDTH_OFFSET(accel) 
(ADF_C4XXX_SSMWDTH + ((accel)*0x4000)) + +/* Return address of SSMWDTPKEL register for a given accelerator */ +#define ADF_C4XXX_SSMWDTPKEL_OFFSET(accel) \ + (ADF_C4XXX_SSMWDTPKEL + ((accel)*0x4000)) + +/* Return address of SSMWDTPKEH register for a given accelerator */ +#define ADF_C4XXX_SSMWDTPKEH_OFFSET(accel) \ + (ADF_C4XXX_SSMWDTPKEH + ((accel)*0x4000)) + +/* Return address of SLICEHANGSTATUS register for a given accelerator */ +#define ADF_C4XXX_SLICEHANGSTATUS_OFFSET(accel) \ + (ADF_C4XXX_SLICEHANGSTATUS + ((accel)*0x4000)) + +/* Return address of IASLICEHANGSTATUS register for a given accelerator */ +#define ADF_C4XXX_IASLICEHANGSTATUS_OFFSET(accel) \ + (ADF_C4XXX_IASLICEHANGSTATUS + ((accel)*0x4000)) + +/* RAS enabling related registers */ +#define ADF_C4XXX_SSMFEATREN (0x2010) +#define ADF_C4XXX_SSMSOFTERRORPARITY_MASK (0x1008) + +/* Return address of SSMFEATREN register for given accel */ +#define ADF_C4XXX_GET_SSMFEATREN_OFFSET(accel) \ + (ADF_C4XXX_SSMFEATREN + ((accel)*0x4000)) + +/* Return address of SSMSOFTERRORPARITY_MASK register for given accel */ +#define ADF_C4XXX_GET_SSMSOFTERRORPARITY_MASK_OFFSET(accel) \ + (ADF_C4XXX_SSMSOFTERRORPARITY_MASK + ((accel)*0x4000)) + +/* RAS enabling related registers values to be written */ +#define ADF_C4XXX_SSMFEATREN_VAL (0xFD) +#define ADF_C4XXX_SSMSOFTERRORPARITY_MASK_VAL (0x00) + +/* Enable VF2PF interrupt in ERRMSK4 to ERRMSK7 */ +#define ADF_C4XXX_VF2PF0_31 0x0 +#define ADF_C4XXX_VF2PF32_63 0x0 +#define ADF_C4XXX_VF2PF64_95 0x0 +#define ADF_C4XXX_VF2PF96_127 0x0 + +/* AEx Correctable Error Mask in ERRMSK8 */ +#define ADF_C4XXX_ERRMSK8_COERR 0x0 +#define ADF_C4XXX_ERRSOU8_MECORR_MASK BIT(0) +#define ADF_C4XXX_HI_ME_COR_ERRLOG (0x60104) +#define ADF_C4XXX_HI_ME_COR_ERRLOG_ENABLE (0x61600) +#define ADF_C4XXX_HI_ME_COR_ERRLOG_ENABLE_MASK (0xFFFFFFFF) +#define ADF_C4XXX_HI_ME_COR_ERRLOG_SIZE_IN_BITS (32) + +/* Group of registers related to ERRSOU9 handling + * + * AEx Uncorrectable Error Mask in ERRMSK9 + * 
CPP Command Parity Errors Mask in ERRMSK9 + * RI Memory Parity Errors Mask in ERRMSK9 + * TI Memory Parity Errors Mask in ERRMSK9 + */ +#define ADF_C4XXX_ERRMSK9_IRQ_MASK 0x0 +#define ADF_C4XXX_ME_UNCORR_ERROR BIT(0) +#define ADF_C4XXX_CPP_CMD_PAR_ERR BIT(1) +#define ADF_C4XXX_RI_MEM_PAR_ERR BIT(2) +#define ADF_C4XXX_TI_MEM_PAR_ERR BIT(3) + +#define ADF_C4XXX_ERRSOU9_ERROR_MASK \ + (ADF_C4XXX_ME_UNCORR_ERROR | ADF_C4XXX_CPP_CMD_PAR_ERR | \ + ADF_C4XXX_RI_MEM_PAR_ERR | ADF_C4XXX_TI_MEM_PAR_ERR) + +#define ADF_C4XXX_HI_ME_UNCERR_LOG (0x60100) +#define ADF_C4XXX_HI_ME_UNCERR_LOG_ENABLE (0x61608) +#define ADF_C4XXX_HI_ME_UNCERR_LOG_ENABLE_MASK (0xFFFFFFFF) +#define ADF_C4XXX_HI_ME_UNCOR_ERRLOG_BITS (32) + +/* HI CPP Agents Command parity Error Log + * CSR name: hicppagentcmdparerrlog + */ +#define ADF_C4XXX_HI_CPP_AGENT_CMD_PAR_ERR_LOG (0x6010C) +#define ADF_C4XXX_HI_CPP_AGENT_CMD_PAR_ERR_LOG_ENABLE (0x61604) +#define ADF_C4XXX_HI_CPP_AGENT_CMD_PAR_ERR_LOG_ENABLE_MASK (0xFFFFFFFF) +#define ADF_C4XXX_TI_CMD_PAR_ERR BIT(0) +#define ADF_C4XXX_RI_CMD_PAR_ERR BIT(1) +#define ADF_C4XXX_ICI_CMD_PAR_ERR BIT(2) +#define ADF_C4XXX_ICE_CMD_PAR_ERR BIT(3) +#define ADF_C4XXX_ARAM_CMD_PAR_ERR BIT(4) +#define ADF_C4XXX_CFC_CMD_PAR_ERR BIT(5) +#define ADF_C4XXX_SSM_CMD_PAR_ERR(value) (((u32)(value) >> 6) & 0xFFF) + +/* RI Memory Parity Error Status Register + * CSR name: rimem_parerr_sts + */ +#define ADF_C4XXX_RI_MEM_PAR_ERR_STS (0x61610) +#define ADF_C4XXX_RI_MEM_PAR_ERR_EN0 (0x61614) +#define ADF_C4XXX_RI_MEM_PAR_ERR_FERR (0x61618) +#define ADF_C4XXX_RI_MEM_PAR_ERR_EN0_MASK (0x7FFFFF) +#define ADF_C4XXX_RI_MEM_MSIX_TBL_INT_MASK (BIT(22)) +#define ADF_C4XXX_RI_MEM_PAR_ERR_STS_MASK \ + (ADF_C4XXX_RI_MEM_PAR_ERR_EN0_MASK ^ ADF_C4XXX_RI_MEM_MSIX_TBL_INT_MASK) + +/* TI Memory Parity Error Status Register + * CSR name: ti_mem_par_err_sts0, ti_mem_par_err_sts1 + */ +#define ADF_C4XXX_TI_MEM_PAR_ERR_STS0 (0x68604) +#define ADF_C4XXX_TI_MEM_PAR_ERR_EN0 (0x68608) +#define 
ADF_C4XXX_TI_MEM_PAR_ERR_EN0_MASK (0xFFFFFFFF) +#define ADF_C4XXX_TI_MEM_PAR_ERR_STS1 (0x68610) +#define ADF_C4XXX_TI_MEM_PAR_ERR_EN1 (0x68614) +#define ADF_C4XXX_TI_MEM_PAR_ERR_EN1_MASK (0x7FFFF) +#define ADF_C4XXX_TI_MEM_PAR_ERR_STS1_MASK (ADF_C4XXX_TI_MEM_PAR_ERR_EN1_MASK) +#define ADF_C4XXX_TI_MEM_PAR_ERR_FIRST_ERROR (0x68618) + +/* Enable SSM<11:0> in ERRMSK10 */ +#define ADF_C4XXX_ERRMSK10_SSM_ERR 0x0 +#define ADF_C4XXX_ERRSOU10_RAS_MASK 0x1FFF +#define ADF_C4XXX_ERRSOU10_PUSHPULL_MASK BIT(12) + +#define ADF_C4XXX_IASTATSSM_UERRSSMSH_MASK BIT(0) +#define ADF_C4XXX_IASTATSSM_CERRSSMSH_MASK BIT(1) +#define ADF_C4XXX_IASTATSSM_UERRSSMMMP0_MASK BIT(2) +#define ADF_C4XXX_IASTATSSM_CERRSSMMMP0_MASK BIT(3) +#define ADF_C4XXX_IASTATSSM_UERRSSMMMP1_MASK BIT(4) +#define ADF_C4XXX_IASTATSSM_CERRSSMMMP1_MASK BIT(5) +#define ADF_C4XXX_IASTATSSM_UERRSSMMMP2_MASK BIT(6) +#define ADF_C4XXX_IASTATSSM_CERRSSMMMP2_MASK BIT(7) +#define ADF_C4XXX_IASTATSSM_UERRSSMMMP3_MASK BIT(8) +#define ADF_C4XXX_IASTATSSM_CERRSSMMMP3_MASK BIT(9) +#define ADF_C4XXX_IASTATSSM_UERRSSMMMP4_MASK BIT(10) +#define ADF_C4XXX_IASTATSSM_CERRSSMMMP4_MASK BIT(11) +#define ADF_C4XXX_IASTATSSM_PPERR_MASK BIT(12) +#define ADF_C4XXX_IASTATSSM_SPPPAR_ERR_MASK BIT(14) +#define ADF_C4XXX_IASTATSSM_CPPPAR_ERR_MASK BIT(15) +#define ADF_C4XXX_IASTATSSM_RFPAR_ERR_MASK BIT(16) + +#define ADF_C4XXX_IAINTSTATSSM(i) ((i)*0x4000 + 0x206C) +#define ADF_C4XXX_IASTATSSM_MASK 0x1DFFF +#define ADF_C4XXX_IASTATSSM_CLR_MASK 0xFFFE2000 +#define ADF_C4XXX_IASTATSSM_BITS 17 +#define ADF_C4XXX_IASTATSSM_SLICE_HANG_ERR_BIT 13 +#define ADF_C4XXX_IASTATSSM_SPP_PAR_ERR_BIT 14 +#define ADF_C4XXX_IASTATSSM_CPP_PAR_ERR_BIT 15 + +/* Accelerator Interrupt Mask (SSM) + * CSR name: intmaskssm[0..11] + * Returns address of INTMASKSSM register for a given accel. + * This register is used to unmask SSM interrupts to host + * reported by ERRSOU10. 
+ */ +#define ADF_C4XXX_GET_INTMASKSSM_OFFSET(accel) ((accel)*0x4000) + +/* Base address of SPP parity error mask register + * CSR name: sppparerrmsk[0..11] + */ +#define ADF_C4XXX_SPPPARERRMSK_OFFSET (0x2028) + +/* Returns address of SPPPARERRMSK register for a given accel. + * This register is used to unmask SPP parity errors interrupts to host + * reported by ERRSOU10. + */ +#define ADF_C4XXX_GET_SPPPARERRMSK_OFFSET(accel) \ + (ADF_C4XXX_SPPPARERRMSK_OFFSET + ((accel)*0x4000)) + +#define ADF_C4XXX_EXPRPSSMCPR0(i) ((i)*0x4000 + 0x400) +#define ADF_C4XXX_EXPRPSSMXLT0(i) ((i)*0x4000 + 0x500) +#define ADF_C4XXX_EXPRPSSMCPR1(i) ((i)*0x4000 + 0x1400) +#define ADF_C4XXX_EXPRPSSMXLT1(i) ((i)*0x4000 + 0x1500) + +#define ADF_C4XXX_EXPRPSSM_FATAL_MASK BIT(2) +#define ADF_C4XXX_EXPRPSSM_SOFT_MASK BIT(3) + +#define ADF_C4XXX_PPERR_INTS_CLEAR_MASK BIT(0) + +#define ADF_C4XXX_SSMSOFTERRORPARITY(i) ((i)*0x4000 + 0x1000) +#define ADF_C4XXX_SSMCPPERR(i) ((i)*0x4000 + 0x2030) + +/* ethernet doorbell in ERRMSK11 + * timisc in ERRMSK11 + * rimisc in ERRMSK11 + * ppmiscerr in ERRMSK11 + * cerr in ERRMSK11 + * uerr in ERRMSK11 + * ici in ERRMSK11 + * ice in ERRMSK11 + */ +#define ADF_C4XXX_ERRMSK11_ERR 0x0 +/* + * BIT(7) disables ICI interrupt + * BIT(8) disables ICE interrupt + */ +#define ADF_C4XXX_ERRMSK11_ERR_DISABLE_ICI_ICE_INTR (BIT(7) | BIT(8)) + +/* RAS mask for errors reported by ERRSOU11 */ +#define ADF_C4XXX_ERRSOU11_ERROR_MASK (0x1FF) +#define ADF_C4XXX_TI_MISC BIT(0) +#define ADF_C4XXX_RI_PUSH_PULL_PAR_ERR BIT(1) +#define ADF_C4XXX_TI_PUSH_PULL_PAR_ERR BIT(2) +#define ADF_C4XXX_ARAM_CORR_ERR BIT(3) +#define ADF_C4XXX_ARAM_UNCORR_ERR BIT(4) +#define ADF_C4XXX_TI_PULL_PAR_ERR BIT(5) +#define ADF_C4XXX_RI_PUSH_PAR_ERR BIT(6) +#define ADF_C4XXX_INLINE_INGRESS_INTR BIT(7) +#define ADF_C4XXX_INLINE_EGRESS_INTR BIT(8) + +/* TI Misc error status */ +#define ADF_C4XXX_TI_MISC_STS (0x6854C) +#define ADF_C4XXX_TI_MISC_ERR_MASK (BIT(0)) +#define ADF_C4XXX_GET_TI_MISC_ERR_TYPE(status) 
((status) >> 1 & 0x3) +#define ADF_C4XXX_TI_BME_RESP_ORDER_ERR (0x1) +#define ADF_C4XXX_TI_RESP_ORDER_ERR (0x2) + +/* RI CPP interface status register */ +#define ADF_C4XXX_RI_CPP_INT_STS (0x61118) +#define ADF_C4XXX_RI_CPP_INT_STS_PUSH_ERR BIT(0) +#define ADF_C4XXX_RI_CPP_INT_STS_PULL_ERR BIT(1) +#define ADF_C4XXX_RI_CPP_INT_STS_PUSH_DATA_PAR_ERR BIT(2) +#define ADF_C4XXX_GET_CPP_BUS_FROM_STS(status) ((status) >> 31 & 0x1) + +/* RI CPP interface control register. */ +#define ADF_C4XXX_RICPPINTCTL (0x61000 + 0x004) +/* + * BIT(3) enables error parity checking on CPP. + * BIT(2) enables error detection and reporting on the RI Parity Error. + * BIT(1) enables error detection and reporting on the RI CPP Pull interface. + * BIT(0) enables error detection and reporting on the RI CPP Push interface. + */ +#define ADF_C4XXX_RICPP_EN (BIT(3) | BIT(2) | BIT(1) | BIT(0)) + +/* TI CPP interface status register */ +#define ADF_C4XXX_TI_CPP_INT_STS (0x6853C) +#define ADF_C4XXX_TI_CPP_INT_STS_PUSH_ERR BIT(0) +#define ADF_C4XXX_TI_CPP_INT_STS_PULL_ERR BIT(1) +#define ADF_C4XXX_TI_CPP_INT_STS_PUSH_DATA_PAR_ERR BIT(2) + +#define ADF_C4XXX_TICPPINTCTL (0x68000 + 0x538) +/* + * BIT(4) enables 'stop and scream' feature for TI RF. + * BIT(3) enables CPP command and pull data parity checking. + * BIT(2) enables data parity error detection and reporting on the TI CPP + * Pull interface. + * BIT(1) enables error detection and reporting on the TI CPP Pull interface. + * BIT(0) enables error detection and reporting on the TI CPP Push interface. + */ +#define ADF_C4XXX_TICPP_EN (BIT(4) | BIT(3) | BIT(2) | BIT(1) | BIT(0)) + +/* CPP error control and logging register */ +#define ADF_C4XXX_CPP_CFC_ERR_CTRL (0x70000 + 0xC00) + +/* + * BIT(1) enables generation of irqs to the PCIe endpoint + * for the errors specified in CPP_CFC_ERR_STATUS + * BIT(0) enables detecting and logging of push/pull data errors. 
+ */ +#define ADF_C4XXX_CPP_CFC_UE (BIT(1) | BIT(0)) + +/* ARAM error interrupt enable registers */ +#define ADF_C4XXX_ARAMCERR (0x101700) +#define ADF_C4XXX_ARAMUERR (0x101704) +#define ADF_C4XXX_CPPMEMTGTERR (0x101710) +#define ADF_C4XXX_ARAM_CORR_ERR_MASK (BIT(0)) +#define ADF_C4XXX_ARAM_UNCORR_ERR_MASK (BIT(0)) +#define ADF_C4XXX_CLEAR_CSR_BIT(csr, bit_num) ((csr) &= ~(BIT(bit_num))) + +/* ARAM correctable errors defined in ARAMCERR + * Bit<3> Enable fixing and logging correctable errors by hardware. + * Bit<26> Enable interrupt to host for ARAM correctable errors. + */ +#define ADF_C4XXX_ARAM_CERR (BIT(3) | BIT(26)) + +/* ARAM correctable errors defined in ARAMUERR + * Bit<3> Enable detection and logging of ARAM uncorrectable errors. + * Bit<19> Enable interrupt to host for ARAM uncorrectable errors. + */ +#define ADF_C4XXX_ARAM_UERR (BIT(3) | BIT(19)) + +/* Misc memory target error registers in CPPMEMTGTERR + * Bit<2> CPP memory push/pull error enable bit + * Bit<7> RI push/pull error enable bit + * Bit<8> ARAM pull data parity check bit + * Bit<9> RAS push error enable bit + */ +#define ADF_C4XXX_TGT_UERR (BIT(9) | BIT(8) | BIT(7) | BIT(2)) + +/* Slice power down register */ +#define ADF_C4XXX_SLICEPWRDOWN(i) (((i)*0x4000) + 0x2C) + +/* Enabling PKE0 to PKE4. */ +#define ADF_C4XXX_MMP_PWR_UP_MSK \ + (BIT(20) | BIT(19) | BIT(18) | BIT(17) | BIT(16)) + +/* Error registers for MMP0-MMP4. 
*/ +#define ADF_C4XXX_MAX_MMP (5) + +#define ADF_C4XXX_MMP_BASE(i) ((i)*0x1000 % 0x3800) +#define ADF_C4XXX_CERRSSMMMP(i, n) ((i)*0x4000 + ADF_C4XXX_MMP_BASE(n) + 0x380) +#define ADF_C4XXX_UERRSSMMMP(i, n) ((i)*0x4000 + ADF_C4XXX_MMP_BASE(n) + 0x388) +#define ADF_C4XXX_UERRSSMMMPAD(i, n) \ + ((i)*0x4000 + ADF_C4XXX_MMP_BASE(n) + 0x38C) +#define ADF_C4XXX_INTMASKSSM(i) ((i)*0x4000 + 0x0) + +#define ADF_C4XXX_UERRSSMMMP_INTS_CLEAR_MASK ((BIT(16) | BIT(0))) +#define ADF_C4XXX_CERRSSMMMP_INTS_CLEAR_MASK BIT(0) + +/* Bit<3> enables logging of MMP uncorrectable errors */ +#define ADF_C4XXX_UERRSSMMMP_EN BIT(3) + +/* Bit<3> enables logging of MMP correctable errors */ +#define ADF_C4XXX_CERRSSMMMP_EN BIT(3) + +#define ADF_C4XXX_ERRMSK_VF2PF_OFFSET(i) (ADF_C4XXX_ERRMSK4 + ((i)*0x04)) + +/* RAM base address registers */ +#define ADF_C4XXX_RAMBASEADDRHI 0x71020 + +#define ADF_C4XXX_NUM_ARAM_ENTRIES 8 + +/* ARAM region sizes in bytes */ +#define ADF_C4XXX_1MB_SIZE (1024 * 1024) +#define ADF_C4XXX_2MB_ARAM_SIZE (2 * ADF_C4XXX_1MB_SIZE) +#define ADF_C4XXX_4MB_ARAM_SIZE (4 * ADF_C4XXX_1MB_SIZE) +#define ADF_C4XXX_DEFAULT_MMP_REGION_SIZE (1024 * 256) +#define ADF_C4XXX_DEFAULT_SKM_REGION_SIZE (1024 * 256) +#define ADF_C4XXX_AU_COMPR_INTERM_SIZE (1024 * 128 * 2 * 2) +#define ADF_C4XXX_DEF_ASYM_MASK 0x1 + +/* Arbiter configuration */ +#define ADF_C4XXX_ARB_OFFSET 0x80000 +#define ADF_C4XXX_ARB_WQCFG_OFFSET 0x200 + +/* Admin Interface Reg Offset */ +#define ADF_C4XXX_ADMINMSGUR_OFFSET (0x60000 + 0x8000 + 0x400 + 0x174) +#define ADF_C4XXX_ADMINMSGLR_OFFSET (0x60000 + 0x8000 + 0x400 + 0x178) +#define ADF_C4XXX_MAILBOX_BASE_OFFSET 0x40970 + +/* AE to function mapping */ +#define ADF_C4XXX_AE2FUNC_REG_PER_AE 8 +#define ADF_C4XXX_AE2FUNC_MAP_OFFSET 0x68800 +#define ADF_C4XXX_AE2FUNC_MAP_REG_SIZE 4 +#define ADF_C4XXX_AE2FUNC_MAP_VALID BIT(8) + +/* Enable each of the units on the chip */ +#define ADF_C4XXX_GLOBAL_CLK_ENABLE_GENERIC 0x7096C +#define 
ADF_C4XXX_GLOBAL_CLK_ENABLE_GENERIC_DISABLE_ALL 0x0
+#define ADF_C4XXX_GLOBAL_CLK_ENABLE_GENERIC_ICE_ENABLE BIT(4)
+#define ADF_C4XXX_GLOBAL_CLK_ENABLE_GENERIC_ICI_ENABLE BIT(3)
+#define ADF_C4XXX_GLOBAL_CLK_ENABLE_GENERIC_ARAM BIT(2)
+
+/* Clock is fully set up after some delay */
+#define ADF_C4XXX_GLOBAL_CLK_ENABLE_GENERIC_RESTART_DELAY 10
+#define ADF_C4XXX_GLOBAL_CLK_RESTART_LOOP 10
+
+/* Reset each of the PPC units on the chip */
+#define ADF_C4XXX_IXP_RESET_GENERIC 0x70940
+#define ADF_C4XXX_IXP_RESET_GENERIC_OUT_OF_RESET_TRIGGER 0x0
+#define ADF_C4XXX_IXP_RESET_GENERIC_INLINE_INGRESS BIT(4)
+#define ADF_C4XXX_IXP_RESET_GENERIC_INLINE_EGRESS BIT(3)
+#define ADF_C4XXX_IXP_RESET_GENERIC_ARAM BIT(2)
+
+/* Default accel unit configuration */
+#define ADF_C4XXX_NUM_CY_AU \
+	{ \
+		[DEV_SKU_1] = 4, [DEV_SKU_1_CY] = 6, [DEV_SKU_2] = 3, \
+		[DEV_SKU_2_CY] = 4, [DEV_SKU_3] = 1, [DEV_SKU_3_CY] = 2, \
+		[DEV_SKU_UNKNOWN] = 0 \
+	}
+#define ADF_C4XXX_NUM_DC_AU \
+	{ \
+		[DEV_SKU_1] = 2, [DEV_SKU_1_CY] = 0, [DEV_SKU_2] = 1, \
+		[DEV_SKU_2_CY] = 0, [DEV_SKU_3] = 1, [DEV_SKU_3_CY] = 0, \
+		[DEV_SKU_UNKNOWN] = 0 \
+	}
+
+#define ADF_C4XXX_NUM_ACCEL_PER_AU 2
+#define ADF_C4XXX_NUM_INLINE_AU \
+	{ \
+		[DEV_SKU_1] = 0, [DEV_SKU_1_CY] = 0, [DEV_SKU_2] = 0, \
+		[DEV_SKU_2_CY] = 0, [DEV_SKU_3] = 0, [DEV_SKU_3_CY] = 0, \
+		[DEV_SKU_UNKNOWN] = 0 \
+	}
+#define ADF_C4XXX_6_AE 6
+#define ADF_C4XXX_4_AE 4
+#define ADF_C4XXX_100 100
+#define ADF_C4XXX_ROUND_LIMIT 5
+#define ADF_C4XXX_PERCENTAGE "%"
+
+#define ADF_C4XXX_ARB_CY 0x12222222
+#define ADF_C4XXX_ARB_DC 0x00000888
+
+/* Default accel firmware maximal object */
+#define ADF_C4XXX_MAX_OBJ 4
+
+/* Default 4 partitions for services */
+#define ADF_C4XXX_PART_ASYM 0
+#define ADF_C4XXX_PART_SYM 1
+#define ADF_C4XXX_PART_UNUSED 2
+#define ADF_C4XXX_PART_DC 3
+#define ADF_C4XXX_PARTS_PER_GRP 16
+
+#define ADF_C4XXX_PARTITION_LUT_OFFSET 0x81000
+#define ADF_C4XXX_WRKTHD2PARTMAP 0x82000
+#define ADF_C4XXX_WQM_SIZE 0x4
+
+#define
ADF_C4XXX_DEFAULT_PARTITIONS \ + (ADF_C4XXX_PART_ASYM | ADF_C4XXX_PART_SYM << 8 | \ + ADF_C4XXX_PART_UNUSED << 16 | ADF_C4XXX_PART_DC << 24) + +/* SKU configurations */ +#define ADF_C4XXX_HIGH_SKU_AES 32 +#define ADF_C4XXX_MED_SKU_AES 24 +#define ADF_C4XXX_LOW_SKU_AES 12 + +#define READ_CSR_WQM(csr_addr, csr_offset, index) \ + ADF_CSR_RD(csr_addr, (csr_offset) + ((index)*ADF_C4XXX_WQM_SIZE)) + +#define WRITE_CSR_WQM(csr_addr, csr_offset, index, value) \ + ADF_CSR_WR(csr_addr, (csr_offset) + ((index)*ADF_C4XXX_WQM_SIZE), value) + +/* Firmware Binary */ +#define ADF_C4XXX_FW "qat_c4xxx_fw" +#define ADF_C4XXX_MMP "qat_c4xxx_mmp_fw" +#define ADF_C4XXX_INLINE_OBJ "qat_c4xxx_inline.bin" +#define ADF_C4XXX_DC_OBJ "qat_c4xxx_dc.bin" +#define ADF_C4XXX_CY_OBJ "qat_c4xxx_cy.bin" +#define ADF_C4XXX_SYM_OBJ "qat_c4xxx_sym.bin" + +void adf_init_hw_data_c4xxx(struct adf_hw_device_data *hw_data); +void adf_clean_hw_data_c4xxx(struct adf_hw_device_data *hw_data); +int adf_init_arb_c4xxx(struct adf_accel_dev *accel_dev); +void adf_exit_arb_c4xxx(struct adf_accel_dev *accel_dev); + +#define ADF_C4XXX_AE_FREQ (800 * 1000000) +#define ADF_C4XXX_MIN_AE_FREQ (571 * 1000000) +#define ADF_C4XXX_MAX_AE_FREQ (800 * 1000000) + +int c4xxx_init_ae_config(struct adf_accel_dev *accel_dev); +void c4xxx_exit_ae_config(struct adf_accel_dev *accel_dev); +void remove_oid(struct adf_accel_dev *accel_dev, struct sysctl_oid *oid); +#endif Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_hw_data.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_hw_data.c @@ -0,0 +1,2302 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include "adf_c4xxx_hw_data.h" +#include "adf_c4xxx_reset.h" +#include "adf_c4xxx_inline.h" +#include "adf_c4xxx_ras.h" +#include "adf_c4xxx_misc_error_stats.h" +#include 
"adf_c4xxx_pke_replay_stats.h" +#include "adf_heartbeat.h" +#include "icp_qat_fw_init_admin.h" +#include "icp_qat_hw.h" + +/* accel unit information */ +static struct adf_accel_unit adf_c4xxx_au_32_ae[] = + { { 0x1, 0x3, 0x3F, 0x1B, 6, ADF_ACCEL_SERVICE_NULL }, + { 0x2, 0xC, 0xFC0, 0x6C0, 6, ADF_ACCEL_SERVICE_NULL }, + { 0x4, 0x30, 0xF000, 0xF000, 4, ADF_ACCEL_SERVICE_NULL }, + { 0x8, 0xC0, 0x3F0000, 0x1B0000, 6, ADF_ACCEL_SERVICE_NULL }, + { 0x10, 0x300, 0xFC00000, 0x6C00000, 6, ADF_ACCEL_SERVICE_NULL }, + { 0x20, 0xC00, 0xF0000000, 0xF0000000, 4, ADF_ACCEL_SERVICE_NULL } }; + +static struct adf_accel_unit adf_c4xxx_au_24_ae[] = { + { 0x1, 0x3, 0x3F, 0x1B, 6, ADF_ACCEL_SERVICE_NULL }, + { 0x2, 0xC, 0xFC0, 0x6C0, 6, ADF_ACCEL_SERVICE_NULL }, + { 0x8, 0xC0, 0x3F0000, 0x1B0000, 6, ADF_ACCEL_SERVICE_NULL }, + { 0x10, 0x300, 0xFC00000, 0x6C00000, 6, ADF_ACCEL_SERVICE_NULL }, +}; + +static struct adf_accel_unit adf_c4xxx_au_12_ae[] = { + { 0x1, 0x3, 0x3F, 0x1B, 6, ADF_ACCEL_SERVICE_NULL }, + { 0x8, 0xC0, 0x3F0000, 0x1B0000, 6, ADF_ACCEL_SERVICE_NULL }, +}; + +static struct adf_accel_unit adf_c4xxx_au_emulation[] = + { { 0x1, 0x3, 0x3F, 0x1B, 6, ADF_ACCEL_SERVICE_NULL }, + { 0x2, 0xC, 0xC0, 0xC0, 2, ADF_ACCEL_SERVICE_NULL } }; + +/* Accel engine threads for each of the following services + * , , , + */ + +/* Thread mapping for SKU capable of symmetric cryptography */ +static const struct adf_ae_info adf_c4xxx_32_ae_sym[] = + { { 2, 6, 3 }, { 2, 6, 3 }, { 1, 7, 0 }, { 2, 6, 3 }, { 2, 6, 3 }, + { 1, 7, 0 }, { 2, 6, 3 }, { 2, 6, 3 }, { 1, 7, 0 }, { 2, 6, 3 }, + { 2, 6, 3 }, { 1, 7, 0 }, { 2, 6, 3 }, { 2, 6, 3 }, { 2, 6, 3 }, + { 2, 6, 3 }, { 2, 6, 3 }, { 2, 6, 3 }, { 1, 7, 0 }, { 2, 6, 3 }, + { 2, 6, 3 }, { 1, 7, 0 }, { 2, 6, 3 }, { 2, 6, 3 }, { 1, 7, 0 }, + { 2, 6, 3 }, { 2, 6, 3 }, { 1, 7, 0 }, { 2, 6, 3 }, { 2, 6, 3 }, + { 2, 6, 3 }, { 2, 6, 3 } }; + +static const struct adf_ae_info adf_c4xxx_24_ae_sym[] = + { { 2, 6, 3 }, { 2, 6, 3 }, { 1, 7, 0 }, { 2, 6, 3 }, { 2, 6, 3 
}, + { 1, 7, 0 }, { 2, 6, 3 }, { 2, 6, 3 }, { 1, 7, 0 }, { 2, 6, 3 }, + { 2, 6, 3 }, { 1, 7, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, + { 0, 0, 0 }, { 2, 6, 3 }, { 2, 6, 3 }, { 1, 7, 0 }, { 2, 6, 3 }, + { 2, 6, 3 }, { 1, 7, 0 }, { 2, 6, 3 }, { 2, 6, 3 }, { 1, 7, 0 }, + { 2, 6, 3 }, { 2, 6, 3 }, { 1, 7, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, + { 0, 0, 0 }, { 0, 0, 0 } }; + +static const struct adf_ae_info adf_c4xxx_12_ae_sym[] = + { { 2, 6, 3 }, { 2, 6, 3 }, { 1, 7, 0 }, { 2, 6, 3 }, { 2, 6, 3 }, + { 1, 7, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, + { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, + { 0, 0, 0 }, { 2, 6, 3 }, { 2, 6, 3 }, { 1, 7, 0 }, { 2, 6, 3 }, + { 2, 6, 3 }, { 1, 7, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, + { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, + { 0, 0, 0 }, { 0, 0, 0 } }; + +/* Thread mapping for SKU capable of asymmetric and symmetric cryptography */ +static const struct adf_ae_info adf_c4xxx_32_ae[] = + { { 2, 5, 3 }, { 2, 5, 3 }, { 1, 6, 0 }, { 2, 5, 3 }, { 2, 5, 3 }, + { 1, 6, 0 }, { 2, 5, 3 }, { 2, 5, 3 }, { 1, 6, 0 }, { 2, 5, 3 }, + { 2, 5, 3 }, { 1, 6, 0 }, { 2, 5, 3 }, { 2, 5, 3 }, { 2, 5, 3 }, + { 2, 5, 3 }, { 2, 5, 3 }, { 2, 5, 3 }, { 1, 6, 0 }, { 2, 5, 3 }, + { 2, 5, 3 }, { 1, 6, 0 }, { 2, 5, 3 }, { 2, 5, 3 }, { 1, 6, 0 }, + { 2, 5, 3 }, { 2, 5, 3 }, { 1, 6, 0 }, { 2, 5, 3 }, { 2, 5, 3 }, + { 2, 5, 3 }, { 2, 5, 3 } }; + +static const struct adf_ae_info adf_c4xxx_24_ae[] = + { { 2, 5, 3 }, { 2, 5, 3 }, { 1, 6, 0 }, { 2, 5, 3 }, { 2, 5, 3 }, + { 1, 6, 0 }, { 2, 5, 3 }, { 2, 5, 3 }, { 1, 6, 0 }, { 2, 5, 3 }, + { 2, 5, 3 }, { 1, 6, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, + { 0, 0, 0 }, { 2, 5, 3 }, { 2, 5, 3 }, { 1, 6, 0 }, { 2, 5, 3 }, + { 2, 5, 3 }, { 1, 6, 0 }, { 2, 5, 3 }, { 2, 5, 3 }, { 1, 6, 0 }, + { 2, 5, 3 }, { 2, 5, 3 }, { 1, 6, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, + { 0, 0, 0 }, { 0, 0, 0 } }; + +static const struct adf_ae_info adf_c4xxx_12_ae[] = + { { 2, 5, 3 }, { 2, 5, 3 
}, { 1, 6, 0 }, { 2, 5, 3 }, { 2, 5, 3 }, + { 1, 6, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, + { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, + { 0, 0, 0 }, { 2, 5, 3 }, { 2, 5, 3 }, { 1, 6, 0 }, { 2, 5, 3 }, + { 2, 5, 3 }, { 1, 6, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, + { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, { 0, 0, 0 }, + { 0, 0, 0 }, { 0, 0, 0 } }; + +static struct adf_hw_device_class c4xxx_class = {.name = ADF_C4XXX_DEVICE_NAME, + .type = DEV_C4XXX, + .instances = 0 }; + +struct icp_qat_fw_init_c4xxx_admin_hb_stats { + struct icp_qat_fw_init_admin_hb_cnt stats[ADF_NUM_THREADS_PER_AE]; +}; + +struct adf_hb_count { + u16 ae_thread[ADF_NUM_THREADS_PER_AE]; +}; + +static const int sku_cy_au[] = ADF_C4XXX_NUM_CY_AU; +static const int sku_dc_au[] = ADF_C4XXX_NUM_DC_AU; +static const int sku_inline_au[] = ADF_C4XXX_NUM_INLINE_AU; + +/* + * C4xxx devices introduce new fuses and soft straps and + * are different from previous gen device implementations. + */ + +static u32 +get_accel_mask(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 fusectl0; + u32 softstrappull0; + + fusectl0 = pci_read_config(pdev, ADF_C4XXX_FUSECTL0_OFFSET, 4); + softstrappull0 = + pci_read_config(pdev, ADF_C4XXX_SOFTSTRAPPULL0_OFFSET, 4); + + return (~(fusectl0 | softstrappull0)) & ADF_C4XXX_ACCELERATORS_MASK; +} + +static u32 +get_ae_mask(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 fusectl1; + u32 softstrappull1; + + fusectl1 = pci_read_config(pdev, ADF_C4XXX_FUSECTL1_OFFSET, 4); + softstrappull1 = + pci_read_config(pdev, ADF_C4XXX_SOFTSTRAPPULL1_OFFSET, 4); + + /* Assume that AE and AU disable masks are consistent, so no + * checks against the AU mask are performed + */ + return (~(fusectl1 | softstrappull1)) & ADF_C4XXX_ACCELENGINES_MASK; +} + +static u32 +get_num_accels(struct adf_hw_device_data *self) +{ + return self ? 
hweight32(self->accel_mask) : 0; +} + +static u32 +get_num_aes(struct adf_hw_device_data *self) +{ + return self ? hweight32(self->ae_mask) : 0; +} + +static u32 +get_misc_bar_id(struct adf_hw_device_data *self) +{ + return ADF_C4XXX_PMISC_BAR; +} + +static u32 +get_etr_bar_id(struct adf_hw_device_data *self) +{ + return ADF_C4XXX_ETR_BAR; +} + +static u32 +get_sram_bar_id(struct adf_hw_device_data *self) +{ + return ADF_C4XXX_SRAM_BAR; +} + +static inline void +c4xxx_unpack_ssm_wdtimer(u64 value, u32 *upper, u32 *lower) +{ + *lower = lower_32_bits(value); + *upper = upper_32_bits(value); +} + +/** + * c4xxx_set_ssm_wdtimer() - Initialize the slice hang watchdog timer. + * + * @param accel_dev Structure holding accelerator data. + * @return 0 on success, error code otherwise. + */ +static int +c4xxx_set_ssm_wdtimer(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + struct adf_bar *misc_bar = + &GET_BARS(accel_dev)[hw_device->get_misc_bar_id(hw_device)]; + struct resource *csr = misc_bar->virt_addr; + unsigned long accel_mask = hw_device->accel_mask; + u32 accel = 0; + u64 timer_val = ADF_C4XXX_SSM_WDT_64BIT_DEFAULT_VALUE; + u64 timer_val_pke = ADF_C4XXX_SSM_WDT_PKE_64BIT_DEFAULT_VALUE; + u32 ssm_wdt_low = 0, ssm_wdt_high = 0; + u32 ssm_wdt_pke_low = 0, ssm_wdt_pke_high = 0; + + /* Convert 64bit Slice Hang watchdog value into 32bit values for + * mmio write to 32bit CSRs. 
+	 */
+	c4xxx_unpack_ssm_wdtimer(timer_val, &ssm_wdt_high, &ssm_wdt_low);
+	c4xxx_unpack_ssm_wdtimer(timer_val_pke,
+				 &ssm_wdt_pke_high,
+				 &ssm_wdt_pke_low);
+
+	/* Configures Slice Hang watchdogs */
+	for_each_set_bit(accel, &accel_mask, ADF_C4XXX_MAX_ACCELERATORS)
+	{
+		ADF_CSR_WR(csr, ADF_C4XXX_SSMWDTL_OFFSET(accel), ssm_wdt_low);
+		ADF_CSR_WR(csr, ADF_C4XXX_SSMWDTH_OFFSET(accel), ssm_wdt_high);
+		ADF_CSR_WR(csr,
+			   ADF_C4XXX_SSMWDTPKEL_OFFSET(accel),
+			   ssm_wdt_pke_low);
+		ADF_CSR_WR(csr,
+			   ADF_C4XXX_SSMWDTPKEH_OFFSET(accel),
+			   ssm_wdt_pke_high);
+	}
+
+	return 0;
+}
+
+/**
+ * c4xxx_check_slice_hang() - Check slice hang status
+ *
+ * Return: true if a slice hang interrupt is serviced.
+ */
+static bool
+c4xxx_check_slice_hang(struct adf_accel_dev *accel_dev)
+{
+	struct adf_hw_device_data *hw_device = accel_dev->hw_device;
+	struct adf_bar *misc_bar =
+	    &GET_BARS(accel_dev)[hw_device->get_misc_bar_id(hw_device)];
+	struct resource *csr = misc_bar->virt_addr;
+	u32 slice_hang_offset;
+	u32 ia_slice_hang_offset;
+	u32 fw_irq_source;
+	u32 ia_irq_source;
+	u32 accel_num = 0;
+	bool handled = false;
+	u32 errsou10 = ADF_CSR_RD(csr, ADF_C4XXX_ERRSOU10);
+	unsigned long accel_mask;
+
+	accel_mask = hw_device->accel_mask;
+
+	for_each_set_bit(accel_num, &accel_mask, ADF_C4XXX_MAX_ACCELERATORS)
+	{
+		if (!(errsou10 & ADF_C4XXX_IRQ_SRC_MASK(accel_num)))
+			continue;
+
+		fw_irq_source = ADF_CSR_RD(csr, ADF_INTSTATSSM(accel_num));
+		ia_irq_source =
+		    ADF_CSR_RD(csr, ADF_C4XXX_IAINTSTATSSM(accel_num));
+		ia_slice_hang_offset =
+		    ADF_C4XXX_IASLICEHANGSTATUS_OFFSET(accel_num);
+
+		/* FW did not clear SliceHang error, IA logs and clears
+		 * the error
+		 */
+		if ((fw_irq_source & ADF_INTSTATSSM_SHANGERR) &&
+		    (ia_irq_source & ADF_INTSTATSSM_SHANGERR)) {
+			slice_hang_offset =
+			    ADF_C4XXX_SLICEHANGSTATUS_OFFSET(accel_num);
+
+			/* Bring hung slice out of reset */
+			adf_csr_fetch_and_and(csr, slice_hang_offset, ~0);
+
+			/* Log SliceHang error and clear an interrupt */
+			handled =
+			    adf_handle_slice_hang(accel_dev,
+						  accel_num,
+						  csr,
+						  ia_slice_hang_offset);
+			atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]);
+		}
+		/* FW cleared SliceHang, IA only logs an error */
+		else if (!(fw_irq_source & ADF_INTSTATSSM_SHANGERR) &&
+			 (ia_irq_source & ADF_INTSTATSSM_SHANGERR)) {
+			/* Log SliceHang error and clear an interrupt */
+			handled = adf_handle_slice_hang(accel_dev,
+							accel_num,
+							csr,
+							ia_slice_hang_offset);
+
+			atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]);
+		}
+
+		/* Clear the associated IA interrupt */
+		adf_csr_fetch_and_and(csr,
+				      ADF_C4XXX_IAINTSTATSSM(accel_num),
+				      ~BIT(13));
+	}
+
+	return handled;
+}
+
+static bool
+get_eth_doorbell_msg(struct adf_accel_dev *accel_dev)
+{
+	struct resource *csr =
+	    (&GET_BARS(accel_dev)[ADF_C4XXX_PMISC_BAR])->virt_addr;
+	struct adf_hw_device_data *hw_device = accel_dev->hw_device;
+	u32 errsou11 = ADF_CSR_RD(csr, ADF_C4XXX_ERRSOU11);
+	u32 doorbell_int = ADF_CSR_RD(csr, ADF_C4XXX_ETH_DOORBELL_INT);
+	u32 eth_doorbell_reg[ADF_C4XXX_NUM_ETH_DOORBELL_REGS];
+	bool handled = false;
+	u32 data_reg;
+	u8 i;
+
+	/* Reset cannot be acknowledged until the reset is complete */
+	hw_device->reset_ack = false;
+
+	/* Check if doorbell interrupt occurred.
+	 */
+	if (errsou11 & ADF_C4XXX_DOORBELL_INT_SRC) {
+		/* Decode doorbell messages from ethernet device */
+		for (i = 0; i < ADF_C4XXX_NUM_ETH_DOORBELL_REGS; i++) {
+			eth_doorbell_reg[i] = 0;
+			if (doorbell_int & BIT(i)) {
+				data_reg = ADF_C4XXX_ETH_DOORBELL(i);
+				eth_doorbell_reg[i] = ADF_CSR_RD(csr, data_reg);
+				device_printf(
+				    GET_DEV(accel_dev),
+				    "Received Doorbell message(0x%08x)\n",
+				    eth_doorbell_reg[i]);
+			}
+		}
+		/* Only need to check PF0 */
+		if (eth_doorbell_reg[0] == ADF_C4XXX_IOSFSB_RESET_ACK) {
+			device_printf(GET_DEV(accel_dev),
+				      "Received pending reset ACK\n");
+			hw_device->reset_ack = true;
+		}
+		/* Clear the interrupt source */
+		ADF_CSR_WR(csr,
+			   ADF_C4XXX_ETH_DOORBELL_INT,
+			   ADF_C4XXX_ETH_DOORBELL_MASK);
+		handled = true;
+	}
+
+	return handled;
+}
+
+static enum dev_sku_info
+get_sku(struct adf_hw_device_data *self)
+{
+	int aes = get_num_aes(self);
+	u32 capabilities = self->accel_capabilities_mask;
+	bool sym_only_sku = false;
+
+	/* Check if SKU is capable only of symmetric cryptography
+	 * via device capabilities.
+ */ + if ((capabilities & ADF_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC) && + !(capabilities & ADF_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC) && + !(capabilities & ADF_ACCEL_CAPABILITIES_COMPRESSION)) + sym_only_sku = true; + + switch (aes) { + case ADF_C4XXX_HIGH_SKU_AES: + if (sym_only_sku) + return DEV_SKU_1_CY; + return DEV_SKU_1; + case ADF_C4XXX_MED_SKU_AES: + if (sym_only_sku) + return DEV_SKU_2_CY; + return DEV_SKU_2; + case ADF_C4XXX_LOW_SKU_AES: + if (sym_only_sku) + return DEV_SKU_3_CY; + return DEV_SKU_3; + }; + + return DEV_SKU_UNKNOWN; +} + +static bool +c4xxx_check_prod_sku(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 fusectl0 = 0; + + fusectl0 = pci_read_config(pdev, ADF_C4XXX_FUSECTL0_OFFSET, 4); + + if (fusectl0 & ADF_C4XXX_FUSE_PROD_SKU_MASK) + return true; + else + return false; +} + +static bool +adf_check_sym_only_sku_c4xxx(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 legfuse = 0; + + legfuse = pci_read_config(pdev, ADF_DEVICE_LEGFUSE_OFFSET, 4); + + if (legfuse & ADF_C4XXX_LEGFUSE_BASE_SKU_MASK) + return true; + else + return false; +} + +static void +adf_enable_slice_hang_detection(struct adf_accel_dev *accel_dev) +{ + struct resource *csr; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + u32 accel = 0; + unsigned long accel_mask; + + csr = (&GET_BARS(accel_dev)[ADF_C4XXX_PMISC_BAR])->virt_addr; + accel_mask = hw_device->accel_mask; + + for_each_set_bit(accel, &accel_mask, ADF_C4XXX_MAX_ACCELERATORS) + { + /* Unmasks Slice Hang interrupts so they can be seen by IA. 
*/ + ADF_CSR_WR(csr, + ADF_C4XXX_SHINTMASKSSM_OFFSET(accel), + ADF_C4XXX_SHINTMASKSSM_VAL); + } +} + +static void +adf_enable_ras(struct adf_accel_dev *accel_dev) +{ + struct resource *csr; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + u32 accel = 0; + unsigned long accel_mask; + + csr = (&GET_BARS(accel_dev)[ADF_C4XXX_PMISC_BAR])->virt_addr; + accel_mask = hw_device->accel_mask; + + for_each_set_bit(accel, &accel_mask, ADF_C4XXX_MAX_ACCELERATORS) + { + ADF_CSR_WR(csr, + ADF_C4XXX_GET_SSMFEATREN_OFFSET(accel), + ADF_C4XXX_SSMFEATREN_VAL); + } +} + +static u32 +get_clock_speed(struct adf_hw_device_data *self) +{ + /* c4xxx CPP clock is equal to high-speed clock */ + return self->clock_frequency; +} + +static void +adf_enable_error_interrupts(struct adf_accel_dev *accel_dev) +{ + struct resource *csr, *aram_csr; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + u32 accel = 0; + unsigned long accel_mask; + + csr = (&GET_BARS(accel_dev)[ADF_C4XXX_PMISC_BAR])->virt_addr; + aram_csr = (&GET_BARS(accel_dev)[ADF_C4XXX_SRAM_BAR])->virt_addr; + accel_mask = hw_device->accel_mask; + + for_each_set_bit(accel, &accel_mask, ADF_C4XXX_MAX_ACCELERATORS) + { + /* Enable shared memory, MMP, CPP, PPERR interrupts + * for a given accel + */ + ADF_CSR_WR(csr, ADF_C4XXX_GET_INTMASKSSM_OFFSET(accel), 0); + + /* Enable SPP parity error interrupts for a given accel */ + ADF_CSR_WR(csr, ADF_C4XXX_GET_SPPPARERRMSK_OFFSET(accel), 0); + + /* Enable ssm soft parity errors on given accel */ + ADF_CSR_WR(csr, + ADF_C4XXX_GET_SSMSOFTERRORPARITY_MASK_OFFSET(accel), + ADF_C4XXX_SSMSOFTERRORPARITY_MASK_VAL); + } + + /* Enable interrupts for VFtoPF0_127. 
*/ + ADF_CSR_WR(csr, ADF_C4XXX_ERRMSK4, ADF_C4XXX_VF2PF0_31); + ADF_CSR_WR(csr, ADF_C4XXX_ERRMSK5, ADF_C4XXX_VF2PF32_63); + ADF_CSR_WR(csr, ADF_C4XXX_ERRMSK6, ADF_C4XXX_VF2PF64_95); + ADF_CSR_WR(csr, ADF_C4XXX_ERRMSK7, ADF_C4XXX_VF2PF96_127); + + /* Enable interrupts signaling ECC correctable errors for all AEs */ + ADF_CSR_WR(csr, ADF_C4XXX_ERRMSK8, ADF_C4XXX_ERRMSK8_COERR); + ADF_CSR_WR(csr, + ADF_C4XXX_HI_ME_COR_ERRLOG_ENABLE, + ADF_C4XXX_HI_ME_COR_ERRLOG_ENABLE_MASK); + + /* Enable error interrupts reported by ERRSOU9 */ + ADF_CSR_WR(csr, ADF_C4XXX_ERRMSK9, ADF_C4XXX_ERRMSK9_IRQ_MASK); + + /* Enable uncorrectable errors on all the AE */ + ADF_CSR_WR(csr, + ADF_C4XXX_HI_ME_UNCERR_LOG_ENABLE, + ADF_C4XXX_HI_ME_UNCERR_LOG_ENABLE_MASK); + + /* Enable CPP Agent to report command parity errors */ + ADF_CSR_WR(csr, + ADF_C4XXX_HI_CPP_AGENT_CMD_PAR_ERR_LOG_ENABLE, + ADF_C4XXX_HI_CPP_AGENT_CMD_PAR_ERR_LOG_ENABLE_MASK); + + /* Enable reporting of RI memory parity errors */ + ADF_CSR_WR(csr, + ADF_C4XXX_RI_MEM_PAR_ERR_EN0, + ADF_C4XXX_RI_MEM_PAR_ERR_EN0_MASK); + + /* Enable reporting of TI memory parity errors */ + ADF_CSR_WR(csr, + ADF_C4XXX_TI_MEM_PAR_ERR_EN0, + ADF_C4XXX_TI_MEM_PAR_ERR_EN0_MASK); + ADF_CSR_WR(csr, + ADF_C4XXX_TI_MEM_PAR_ERR_EN1, + ADF_C4XXX_TI_MEM_PAR_ERR_EN1_MASK); + + /* Enable SSM errors */ + ADF_CSR_WR(csr, ADF_C4XXX_ERRMSK10, ADF_C4XXX_ERRMSK10_SSM_ERR); + + /* Enable miscellaneous errors (ethernet doorbell aram, ici, ice) */ + ADF_CSR_WR(csr, ADF_C4XXX_ERRMSK11, ADF_C4XXX_ERRMSK11_ERR); + + /* RI CPP bus interface error detection and reporting. */ + ADF_CSR_WR(csr, ADF_C4XXX_RICPPINTCTL, ADF_C4XXX_RICPP_EN); + + /* TI CPP bus interface error detection and reporting. */ + ADF_CSR_WR(csr, ADF_C4XXX_TICPPINTCTL, ADF_C4XXX_TICPP_EN); + + /* Enable CFC Error interrupts and logging. */ + ADF_CSR_WR(csr, ADF_C4XXX_CPP_CFC_ERR_CTRL, ADF_C4XXX_CPP_CFC_UE); + + /* Enable ARAM correctable error detection. 
*/ + ADF_CSR_WR(aram_csr, ADF_C4XXX_ARAMCERR, ADF_C4XXX_ARAM_CERR); + + /* Enable ARAM uncorrectable error detection. */ + ADF_CSR_WR(aram_csr, ADF_C4XXX_ARAMUERR, ADF_C4XXX_ARAM_UERR); + + /* Enable Push/Pull Misc Uncorrectable error interrupts and logging */ + ADF_CSR_WR(aram_csr, ADF_C4XXX_CPPMEMTGTERR, ADF_C4XXX_TGT_UERR); +} + +static void +adf_enable_mmp_error_correction(struct resource *csr, + struct adf_hw_device_data *hw_data) +{ + unsigned int accel = 0, mmp; + unsigned long uerrssmmmp_mask, cerrssmmmp_mask; + enum operation op; + unsigned long accel_mask; + + /* Prepare values and operation that will be performed on + * UERRSSMMMP and CERRSSMMMP registers on each MMP + */ + if (hw_data->accel_capabilities_mask & + ADF_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC) { + uerrssmmmp_mask = ADF_C4XXX_UERRSSMMMP_EN; + cerrssmmmp_mask = ADF_C4XXX_CERRSSMMMP_EN; + op = OR; + } else { + uerrssmmmp_mask = ~ADF_C4XXX_UERRSSMMMP_EN; + cerrssmmmp_mask = ~ADF_C4XXX_CERRSSMMMP_EN; + op = AND; + } + + accel_mask = hw_data->accel_mask; + + /* Enable MMP Logging */ + for_each_set_bit(accel, &accel_mask, ADF_C4XXX_MAX_ACCELERATORS) + { + /* Set power-up */ + adf_csr_fetch_and_and(csr, + ADF_C4XXX_SLICEPWRDOWN(accel), + ~ADF_C4XXX_MMP_PWR_UP_MSK); + + for (mmp = 0; mmp < ADF_C4XXX_MAX_MMP; ++mmp) { + adf_csr_fetch_and_update(op, + csr, + ADF_C4XXX_UERRSSMMMP(accel, + mmp), + uerrssmmmp_mask); + adf_csr_fetch_and_update(op, + csr, + ADF_C4XXX_CERRSSMMMP(accel, + mmp), + cerrssmmmp_mask); + } + + /* Restore power-down value */ + adf_csr_fetch_and_or(csr, + ADF_C4XXX_SLICEPWRDOWN(accel), + ADF_C4XXX_MMP_PWR_UP_MSK); + } +} + +static u32 +get_pf2vf_offset(u32 i) +{ + return ADF_C4XXX_PF2VF_OFFSET(i); +} + +static u32 +get_vintmsk_offset(u32 i) +{ + return ADF_C4XXX_VINTMSK_OFFSET(i); +} + +static void +get_arb_info(struct arb_info *arb_csrs_info) +{ + arb_csrs_info->arbiter_offset = ADF_C4XXX_ARB_OFFSET; + arb_csrs_info->wrk_cfg_offset = ADF_C4XXX_ARB_WQCFG_OFFSET; +} + +static void 
+get_admin_info(struct admin_info *admin_csrs_info) +{ + admin_csrs_info->mailbox_offset = ADF_C4XXX_MAILBOX_BASE_OFFSET; + admin_csrs_info->admin_msg_ur = ADF_C4XXX_ADMINMSGUR_OFFSET; + admin_csrs_info->admin_msg_lr = ADF_C4XXX_ADMINMSGLR_OFFSET; +} + +static void +get_errsou_offset(u32 *errsou3, u32 *errsou5) +{ + *errsou3 = ADF_C4XXX_ERRSOU3; + *errsou5 = ADF_C4XXX_ERRSOU5; +} + +static void +adf_enable_error_correction(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_C4XXX_PMISC_BAR]; + struct resource *csr = misc_bar->virt_addr; + unsigned int val, i = 0; + unsigned long ae_mask; + unsigned long accel_mask; + + ae_mask = hw_device->ae_mask; + + /* Enable Accel Engine error detection & correction */ + for_each_set_bit(i, &ae_mask, ADF_C4XXX_MAX_ACCELENGINES) + { + val = ADF_CSR_RD(csr, ADF_C4XXX_AE_CTX_ENABLES(i)); + val |= ADF_C4XXX_ENABLE_AE_ECC_ERR; + ADF_CSR_WR(csr, ADF_C4XXX_AE_CTX_ENABLES(i), val); + val = ADF_CSR_RD(csr, ADF_C4XXX_AE_MISC_CONTROL(i)); + val |= ADF_C4XXX_ENABLE_AE_ECC_PARITY_CORR; + ADF_CSR_WR(csr, ADF_C4XXX_AE_MISC_CONTROL(i), val); + } + + accel_mask = hw_device->accel_mask; + + /* Enable shared memory error detection & correction */ + for_each_set_bit(i, &accel_mask, ADF_C4XXX_MAX_ACCELERATORS) + { + val = ADF_CSR_RD(csr, ADF_C4XXX_UERRSSMSH(i)); + val |= ADF_C4XXX_ERRSSMSH_EN; + ADF_CSR_WR(csr, ADF_C4XXX_UERRSSMSH(i), val); + val = ADF_CSR_RD(csr, ADF_C4XXX_CERRSSMSH(i)); + val |= ADF_C4XXX_ERRSSMSH_EN; + ADF_CSR_WR(csr, ADF_C4XXX_CERRSSMSH(i), val); + } + + adf_enable_ras(accel_dev); + adf_enable_mmp_error_correction(csr, hw_device); + adf_enable_slice_hang_detection(accel_dev); + adf_enable_error_interrupts(accel_dev); +} + +static void +adf_enable_ints(struct adf_accel_dev *accel_dev) +{ + struct resource *addr; + + addr = (&GET_BARS(accel_dev)[ADF_C4XXX_PMISC_BAR])->virt_addr; + + /* Enable bundle interrupts */ + 
ADF_CSR_WR(addr, ADF_C4XXX_SMIAPF0_MASK_OFFSET, ADF_C4XXX_SMIA0_MASK); + ADF_CSR_WR(addr, ADF_C4XXX_SMIAPF1_MASK_OFFSET, ADF_C4XXX_SMIA1_MASK); + ADF_CSR_WR(addr, ADF_C4XXX_SMIAPF2_MASK_OFFSET, ADF_C4XXX_SMIA2_MASK); + ADF_CSR_WR(addr, ADF_C4XXX_SMIAPF3_MASK_OFFSET, ADF_C4XXX_SMIA3_MASK); + /*Enable misc interrupts*/ + ADF_CSR_WR(addr, ADF_C4XXX_SMIAPF4_MASK_OFFSET, ADF_C4XXX_SMIA4_MASK); +} + +static u32 +get_ae_clock(struct adf_hw_device_data *self) +{ + /* Clock update interval is <16> ticks for c4xxx. */ + return self->clock_frequency / 16; +} + +static int +measure_clock(struct adf_accel_dev *accel_dev) +{ + u32 frequency; + int ret = 0; + + ret = adf_dev_measure_clock(accel_dev, + &frequency, + ADF_C4XXX_MIN_AE_FREQ, + ADF_C4XXX_MAX_AE_FREQ); + if (ret) + return ret; + + accel_dev->hw_device->clock_frequency = frequency; + return 0; +} + +static int +get_storage_enabled(struct adf_accel_dev *accel_dev, uint32_t *storage_enabled) +{ + if (accel_dev->au_info->num_dc_au > 0) { + *storage_enabled = 1; + GET_HW_DATA(accel_dev)->extended_dc_capabilities = + ICP_ACCEL_CAPABILITIES_ADVANCED_COMPRESSION; + } + return 0; +} + +static u32 +c4xxx_get_hw_cap(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 legfuses; + u32 softstrappull0, softstrappull2; + u32 fusectl0, fusectl2; + u32 capabilities; + + /* Read accelerator capabilities mask */ + legfuses = pci_read_config(pdev, ADF_DEVICE_LEGFUSE_OFFSET, 4); + capabilities = ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC | + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC | + ICP_ACCEL_CAPABILITIES_CIPHER | + ICP_ACCEL_CAPABILITIES_AUTHENTICATION | + ICP_ACCEL_CAPABILITIES_COMPRESSION | ICP_ACCEL_CAPABILITIES_ZUC | + ICP_ACCEL_CAPABILITIES_HKDF | ICP_ACCEL_CAPABILITIES_SHA3_EXT | + ICP_ACCEL_CAPABILITIES_SM3 | ICP_ACCEL_CAPABILITIES_SM4 | + ICP_ACCEL_CAPABILITIES_CHACHA_POLY | + ICP_ACCEL_CAPABILITIES_AESGCM_SPC | + ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY | + ICP_ACCEL_CAPABILITIES_ECEDMONT; + + 
if (legfuses & ICP_ACCEL_MASK_CIPHER_SLICE) { + capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC; + capabilities &= ~ICP_ACCEL_CAPABILITIES_CIPHER; + } + if (legfuses & ICP_ACCEL_MASK_AUTH_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_AUTHENTICATION; + if (legfuses & ICP_ACCEL_MASK_PKE_SLICE) + capabilities &= ~(ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC | + ICP_ACCEL_CAPABILITIES_ECEDMONT); + if (legfuses & ICP_ACCEL_MASK_COMPRESS_SLICE) { + capabilities &= ~ICP_ACCEL_CAPABILITIES_COMPRESSION; + capabilities &= ~ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY; + } + if (legfuses & ICP_ACCEL_MASK_EIA3_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_ZUC; + if (legfuses & ICP_ACCEL_MASK_SM3_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_SM3; + if (legfuses & ICP_ACCEL_MASK_SM4_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_SM4; + + /* Read fusectl0 & softstrappull0 registers to ensure inline + * acceleration is not disabled + */ + softstrappull0 = + pci_read_config(pdev, ADF_C4XXX_SOFTSTRAPPULL0_OFFSET, 4); + fusectl0 = pci_read_config(pdev, ADF_C4XXX_FUSECTL0_OFFSET, 4); + if ((fusectl0 | softstrappull0) & ADF_C4XXX_FUSE_DISABLE_INLINE_MASK) + capabilities &= ~ICP_ACCEL_CAPABILITIES_INLINE; + + /* Read fusectl2 & softstrappull2 registers to check out if + * PKE/DC are enabled/disabled + */ + softstrappull2 = + pci_read_config(pdev, ADF_C4XXX_SOFTSTRAPPULL2_OFFSET, 4); + fusectl2 = pci_read_config(pdev, ADF_C4XXX_FUSECTL2_OFFSET, 4); + /* Disable PKE/DC cap if there are no PKE/DC-enabled AUs. 
*/ + if (!(~fusectl2 & ~softstrappull2 & ADF_C4XXX_FUSE_PKE_MASK)) + capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC; + if (!(~fusectl2 & ~softstrappull2 & ADF_C4XXX_FUSE_COMP_MASK)) + capabilities &= ~(ICP_ACCEL_CAPABILITIES_COMPRESSION | + ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY); + + return capabilities; +} + +static int +c4xxx_configure_accel_units(struct adf_accel_dev *accel_dev) +{ + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES] = { 0 }; + unsigned long val; + char val_str[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { 0 }; + int sku; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + + sku = get_sku(hw_data); + + if (adf_cfg_section_add(accel_dev, ADF_GENERAL_SEC)) + goto err; + + snprintf(key, sizeof(key), ADF_SERVICES_ENABLED); + + /* Base station SKU supports symmetric cryptography only. */ + if (adf_check_sym_only_sku_c4xxx(accel_dev)) + snprintf(val_str, sizeof(val_str), ADF_SERVICE_SYM); + else + snprintf(val_str, sizeof(val_str), ADF_SERVICE_CY); + + val = sku_dc_au[sku]; + if (val) { + strncat(val_str, + ADF_SERVICES_SEPARATOR ADF_SERVICE_DC, + ADF_CFG_MAX_VAL_LEN_IN_BYTES - + strnlen(val_str, sizeof(val_str)) - + ADF_CFG_NULL_TERM_SIZE); + } + + if (adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)val_str, ADF_STR)) + goto err; + + snprintf(key, sizeof(key), ADF_NUM_CY_ACCEL_UNITS); + val = sku_cy_au[sku]; + if (adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC)) + goto err; + + snprintf(key, sizeof(key), ADF_NUM_DC_ACCEL_UNITS); + val = sku_dc_au[sku]; + if (adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC)) + goto err; + + snprintf(key, sizeof(key), ADF_NUM_INLINE_ACCEL_UNITS); + val = sku_inline_au[sku]; + if (adf_cfg_add_key_value_param( + accel_dev, ADF_GENERAL_SEC, key, (void *)&val, ADF_DEC)) + goto err; + + return 0; +err: + device_printf(GET_DEV(accel_dev), "Failed to configure accel units\n"); + return EINVAL; +} + +static void 
+update_hw_capability(struct adf_accel_dev *accel_dev) +{ + struct adf_accel_unit_info *au_info = accel_dev->au_info; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + u32 disabled_caps = 0; + + if (!au_info->asym_ae_msk) + disabled_caps = ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC | + ICP_ACCEL_CAPABILITIES_AUTHENTICATION; + + if (!au_info->sym_ae_msk) + disabled_caps |= ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC | + ICP_ACCEL_CAPABILITIES_CIPHER | ICP_ACCEL_CAPABILITIES_ZUC | + ICP_ACCEL_CAPABILITIES_SHA3_EXT | + ICP_ACCEL_CAPABILITIES_SM3 | ICP_ACCEL_CAPABILITIES_SM4 | + ICP_ACCEL_CAPABILITIES_CHACHA_POLY | + ICP_ACCEL_CAPABILITIES_AESGCM_SPC; + + if (!au_info->dc_ae_msk) { + disabled_caps |= ICP_ACCEL_CAPABILITIES_COMPRESSION | + ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY; + hw_device->extended_dc_capabilities = 0; + } + + if (!au_info->inline_ingress_msk && !au_info->inline_egress_msk) + disabled_caps |= ICP_ACCEL_CAPABILITIES_INLINE; + + hw_device->accel_capabilities_mask = + c4xxx_get_hw_cap(accel_dev) & ~disabled_caps; +} + +static void +c4xxx_set_sadb_size(struct adf_accel_dev *accel_dev) +{ + u32 sadb_reg_value = 0; + struct resource *aram_csr_base; + + aram_csr_base = (&GET_BARS(accel_dev)[ADF_C4XXX_SRAM_BAR])->virt_addr; + if (accel_dev->au_info->num_inline_au) { + /* REG_SA_DB_CTRL register initialisation */ + sadb_reg_value = ADF_C4XXX_SADB_REG_VALUE(accel_dev); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_REG_SA_DB_CTRL, + sadb_reg_value); + } else { + /* Zero the SADB size when inline is disabled. */ + adf_csr_fetch_and_and(aram_csr_base, + ADF_C4XXX_REG_SA_DB_CTRL, + ADF_C4XXX_SADB_SIZE_BIT); + } + /* REG_SA_CTRL_LOCK register initialisation. 
We set the lock + * bit in order to prevent the REG_SA_DB_CTRL from being + * overwritten + */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_REG_SA_CTRL_LOCK, + ADF_C4XXX_DEFAULT_SA_CTRL_LOCKOUT); +} + +static void +c4xxx_init_error_notification_configuration(struct adf_accel_dev *accel_dev, + u32 offset) +{ + struct resource *aram_csr_base; + + aram_csr_base = (&GET_BARS(accel_dev)[ADF_C4XXX_SRAM_BAR])->virt_addr; + + /* configure error notification configuration registers */ + /* Set CD Parity error */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_CD_RF_PARITY_ERR_0 + offset, + ADF_C4XXX_CD_RF_PARITY_ERR_0_VAL); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_CD_RF_PARITY_ERR_1 + offset, + ADF_C4XXX_CD_RF_PARITY_ERR_1_VAL); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_CD_RF_PARITY_ERR_2 + offset, + ADF_C4XXX_CD_RF_PARITY_ERR_2_VAL); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_CD_RF_PARITY_ERR_3 + offset, + ADF_C4XXX_CD_RF_PARITY_ERR_3_VAL); + /* Set CD RAM ECC Correctable Error */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_CD_CERR + offset, + ADF_C4XXX_CD_CERR_VAL); + /* Set CD RAM ECC UnCorrectable Error */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_CD_UERR + offset, + ADF_C4XXX_CD_UERR_VAL); + /* Set Inline (excl cmd_dis) Parity Error */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_INLN_RF_PARITY_ERR_0 + offset, + ADF_C4XXX_INLN_RF_PARITY_ERR_0_VAL); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_INLN_RF_PARITY_ERR_1 + offset, + ADF_C4XXX_INLN_RF_PARITY_ERR_1_VAL); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_INLN_RF_PARITY_ERR_2 + offset, + ADF_C4XXX_INLN_RF_PARITY_ERR_2_VAL); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_INLN_RF_PARITY_ERR_3 + offset, + ADF_C4XXX_INLN_RF_PARITY_ERR_3_VAL); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_INLN_RF_PARITY_ERR_4 + offset, + ADF_C4XXX_INLN_RF_PARITY_ERR_4_VAL); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_INLN_RF_PARITY_ERR_5 + offset, + ADF_C4XXX_INLN_RF_PARITY_ERR_5_VAL); + /* Set Parser RAM ECC Correctable Error */ + ADF_CSR_WR(aram_csr_base, +
ADF_C4XXX_IC_PARSER_CERR + offset, + ADF_C4XXX_PARSER_CERR_VAL); + /* Set Parser RAM ECC UnCorrectable Error */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSER_UERR + offset, + ADF_C4XXX_PARSER_UERR_VAL); + /* Set CTPB RAM ECC Correctable Error */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_CTPB_CERR + offset, + ADF_C4XXX_CTPB_CERR_VAL); + /* Set CTPB RAM ECC UnCorrectable Error */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_CTPB_UERR + offset, + ADF_C4XXX_CTPB_UERR_VAL); + /* Set CPP Interface Status */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_CPPM_ERR_STAT + offset, + ADF_C4XXX_CPPM_ERR_STAT_VAL); + /* Set CGST_MGMT_INT */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_CONGESTION_MGMT_INT + offset, + ADF_C4XXX_CONGESTION_MGMT_INI_VAL); + /* CPP Interface Status */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_CPPT_ERR_STAT + offset, + ADF_C4XXX_CPPT_ERR_STAT_VAL); + /* MAC Interrupt Mask */ + ADF_CSR_WR64(aram_csr_base, + ADF_C4XXX_IC_MAC_IM + offset, + ADF_C4XXX_MAC_IM_VAL); +} + +static void +c4xxx_enable_parse_extraction(struct adf_accel_dev *accel_dev) +{ + struct resource *aram_csr_base; + + aram_csr_base = (&GET_BARS(accel_dev)[ADF_C4XXX_SRAM_BAR])->virt_addr; + + /* Enable Inline Parse Extraction CRSs */ + + /* Set IC_PARSE_CTRL register */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_CTRL_OFFSET, + ADF_C4XXX_IC_PARSE_CTRL_OFFSET_DEFAULT_VALUE); + + /* Set IC_PARSE_FIXED_DATA(0) */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_FIXED_DATA(0), + ADF_C4XXX_DEFAULT_IC_PARSE_FIXED_DATA_0); + + /* Set IC_PARSE_FIXED_LENGTH */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_FIXED_LENGTH, + ADF_C4XXX_DEFAULT_IC_PARSE_FIXED_LEN); + + /* Configure ESP protocol from an IPv4 header */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV4_OFFSET_0, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_OFFS_0_VALUE); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV4_LENGTH_0, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_LEN_0_VALUE); + /* Configure protocol extraction field from an IPv4 header 
*/ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV4_OFFSET_1, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_OFFS_1_VALUE); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV4_LENGTH_1, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_LEN_1_VALUE); + /* Configure SPI extraction field from an IPv4 header */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV4_OFFSET_2, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_OFFS_2_VALUE); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV4_LENGTH_2, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_LEN_2_VALUE); + /* Configure destination field IP address from an IPv4 header */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV4_OFFSET_3, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_OFFS_3_VALUE); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV4_LENGTH_3, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_LEN_3_VALUE); + + /* Configure function number extraction field from an IPv6 header */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV6_OFFSET_0, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_OFFS_0_VALUE); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV6_LENGTH_0, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_LEN_0_VALUE); + /* Configure protocol extraction field from an IPv6 header */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV6_OFFSET_1, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_OFFS_1_VALUE); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV6_LENGTH_1, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_LEN_1_VALUE); + /* Configure SPI extraction field from an IPv6 header */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV6_OFFSET_2, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_OFFS_2_VALUE); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV6_LENGTH_2, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_LEN_2_VALUE); + /* Configure destination field IP address from an IPv6 header */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV6_OFFSET_3, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_OFFS_3_VALUE); + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_IC_PARSE_IPV6_LENGTH_3, + ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_LEN_3_VALUE); +} + +static int 
+adf_get_inline_ipsec_algo_group(struct adf_accel_dev *accel_dev, + unsigned long *ipsec_algo_group) +{ + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + + if (adf_cfg_get_param_value( + accel_dev, ADF_INLINE_SEC, ADF_INLINE_IPSEC_ALGO_GROUP, val)) + return EFAULT; + if (kstrtoul(val, 0, ipsec_algo_group)) + return EFAULT; + + /* Verify the ipsec_algo_group */ + if (*ipsec_algo_group >= IPSEC_ALGO_GROUP_DELIMITER) { + device_printf( + GET_DEV(accel_dev), + "Unsupported IPSEC algo group %lu in config file!\n", + *ipsec_algo_group); + return EFAULT; + } + + return 0; +} + +static int +c4xxx_init_inline_hw(struct adf_accel_dev *accel_dev) +{ + u32 sa_entry_reg_value = 0; + u32 sa_fn_lim = 0; + u32 supported_algo = 0; + struct resource *aram_csr_base; + u32 offset; + unsigned long ipsec_algo_group = IPSEC_DEFAUL_ALGO_GROUP; + + aram_csr_base = (&GET_BARS(accel_dev)[ADF_C4XXX_SRAM_BAR])->virt_addr; + + if (adf_get_inline_ipsec_algo_group(accel_dev, &ipsec_algo_group)) + return EFAULT; + + sa_entry_reg_value |= + (ADF_C4XXX_DEFAULT_LU_KEY_LEN << ADF_C4XXX_LU_KEY_LEN_BIT_OFFSET); + if (ipsec_algo_group == IPSEC_DEFAUL_ALGO_GROUP) { + sa_entry_reg_value |= ADF_C4XXX_DEFAULT_SA_SIZE; + sa_fn_lim = + ADF_C4XXX_FUNC_LIMIT(accel_dev, ADF_C4XXX_DEFAULT_SA_SIZE); + supported_algo = ADF_C4XXX_DEFAULT_SUPPORTED_ALGORITHMS; + } else if (ipsec_algo_group == IPSEC_ALGO_GROUP1) { + sa_entry_reg_value |= ADF_C4XXX_ALGO_GROUP1_SA_SIZE; + sa_fn_lim = ADF_C4XXX_FUNC_LIMIT(accel_dev, + ADF_C4XXX_ALGO_GROUP1_SA_SIZE); + supported_algo = ADF_C4XXX_SUPPORTED_ALGORITHMS_GROUP1; + } else { + return EFAULT; + } + + /* REG_SA_ENTRY_CTRL register initialisation */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_REG_SA_ENTRY_CTRL, + sa_entry_reg_value); + + /* REG_SAL_FUNC_LIMITS register initialisation. Only the first register + * needs to be initialised to enable as it is assigned to a physical + * function. Other registers will be initialised by the LAN PF driver. 
+ * The function limit is initialised to its maximal value. + */ + ADF_CSR_WR(aram_csr_base, ADF_C4XXX_REG_SA_FUNC_LIMITS, sa_fn_lim); + + /* Initialize REG_SA_SCRATCH[0] register to + * advertise supported crypto algorithms + */ + ADF_CSR_WR(aram_csr_base, ADF_C4XXX_REG_SA_SCRATCH_0, supported_algo); + + /* REG_SA_SCRATCH[2] register initialisation + * to advertise supported crypto offload features. + */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_REG_SA_SCRATCH_2, + ADF_C4XXX_DEFAULT_CY_OFFLOAD_FEATURES); + + /* Overwrite default MAC_CFG register in ingress offset */ + ADF_CSR_WR64(aram_csr_base, + ADF_C4XXX_MAC_CFG + ADF_C4XXX_INLINE_INGRESS_OFFSET, + ADF_C4XXX_MAC_CFG_VALUE); + + /* Overwrite default MAC_CFG register in egress offset */ + ADF_CSR_WR64(aram_csr_base, + ADF_C4XXX_MAC_CFG + ADF_C4XXX_INLINE_EGRESS_OFFSET, + ADF_C4XXX_MAC_CFG_VALUE); + + /* Overwrite default MAC_PIA_CFG + * (Packet Interface Adapter Configuration) registers + * in ingress offset + */ + ADF_CSR_WR64(aram_csr_base, + ADF_C4XXX_MAC_PIA_CFG + ADF_C4XXX_INLINE_INGRESS_OFFSET, + ADF_C4XXX_MAC_PIA_CFG_VALUE); + + /* Overwrite default MAC_PIA_CFG in egress offset */ + ADF_CSR_WR64(aram_csr_base, + ADF_C4XXX_MAC_PIA_CFG + ADF_C4XXX_INLINE_EGRESS_OFFSET, + ADF_C4XXX_MAC_PIA_CFG_VALUE); + + c4xxx_enable_parse_extraction(accel_dev); + + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_INGRESS_CMD_DIS_MISC, + ADF_C4XXX_REG_CMD_DIS_MISC_DEFAULT_VALUE); + + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_EGRESS_CMD_DIS_MISC, + ADF_C4XXX_REG_CMD_DIS_MISC_DEFAULT_VALUE); + + /* Set bits<1:0> in ADF_C4XXX_INLINE_CAPABILITY register to + * advertise that both ingress and egress directions are available + */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_REG_SA_INLINE_CAPABILITY, + ADF_C4XXX_INLINE_CAPABILITIES); + + /* Set error notification configuration of ingress */ + offset = ADF_C4XXX_INLINE_INGRESS_OFFSET; + c4xxx_init_error_notification_configuration(accel_dev, offset); + /* Set error notification configuration of egress */
+ offset = ADF_C4XXX_INLINE_EGRESS_OFFSET; + c4xxx_init_error_notification_configuration(accel_dev, offset); + + return 0; +} + +static void +adf_enable_inline_notification(struct adf_accel_dev *accel_dev) +{ + struct resource *aram_csr_base; + + aram_csr_base = (&GET_BARS(accel_dev)[ADF_C4XXX_SRAM_BAR])->virt_addr; + + /* Set bit<0> in ADF_C4XXX_REG_SA_INLINE_ENABLE to advertise + * that inline is enabled. + */ + ADF_CSR_WR(aram_csr_base, + ADF_C4XXX_REG_SA_INLINE_ENABLE, + ADF_C4XXX_INLINE_ENABLED); +} + +static int +c4xxx_init_aram_config(struct adf_accel_dev *accel_dev) +{ + u32 aram_size = ADF_C4XXX_2MB_ARAM_SIZE; + u32 ibuff_mem_needed = 0; + u32 usable_aram_size = 0; + struct adf_hw_aram_info *aram_info; + u32 sa_db_ctl_value; + struct resource *aram_csr_base; + u8 profile = 0; + u32 sadb_size = 0; + u32 sa_size = 0; + unsigned long ipsec_algo_group = IPSEC_DEFAUL_ALGO_GROUP; + u32 i; + + if (accel_dev->au_info->num_inline_au > 0) + if (adf_get_inline_ipsec_algo_group(accel_dev, + &ipsec_algo_group)) + return EFAULT; + + /* Allocate memory for adf_hw_aram_info */ + aram_info = kzalloc(sizeof(*accel_dev->aram_info), GFP_KERNEL); + if (!aram_info) + return ENOMEM; + + /* Initialise Inline direction */ + aram_info->inline_direction_egress_mask = 0; + if (accel_dev->au_info->num_inline_au) { + /* Set inline direction bitmap in the ARAM to + * inform firmware which ME is egress + */ + aram_info->inline_direction_egress_mask = + accel_dev->au_info->inline_egress_msk; + + /* User profile is valid, we can now add it + * in the ARAM partition table + */ + aram_info->inline_congest_mngt_profile = profile; + } + /* Initialise DC ME mask, "1" = ME is used for DC operations */ + aram_info->dc_ae_mask = accel_dev->au_info->dc_ae_msk; + + /* Initialise CY ME mask, "1" = ME is used for CY operations + * Since asym service can also be enabled on inline AEs, here + * we use the sym ae mask for configuring the cy_ae_msk + */ + aram_info->cy_ae_mask = 
accel_dev->au_info->sym_ae_msk; + + /* Configure number of long words in the ARAM */ + aram_info->num_aram_lw_entries = ADF_C4XXX_NUM_ARAM_ENTRIES; + + /* Reset region offset values to 0xffffffff */ + aram_info->mmp_region_offset = ~aram_info->mmp_region_offset; + aram_info->skm_region_offset = ~aram_info->skm_region_offset; + aram_info->inter_buff_aram_region_offset = + ~aram_info->inter_buff_aram_region_offset; + + /* Determine ARAM size */ + aram_csr_base = (&GET_BARS(accel_dev)[ADF_C4XXX_SRAM_BAR])->virt_addr; + sa_db_ctl_value = ADF_CSR_RD(aram_csr_base, ADF_C4XXX_REG_SA_DB_CTRL); + + aram_size = (sa_db_ctl_value & ADF_C4XXX_SADB_SIZE_BIT) ? + ADF_C4XXX_2MB_ARAM_SIZE : + ADF_C4XXX_4MB_ARAM_SIZE; + device_printf(GET_DEV(accel_dev), + "Total available accelerator memory: %uMB\n", + aram_size / ADF_C4XXX_1MB_SIZE); + + /* Compute MMP region offset */ + aram_info->mmp_region_size = ADF_C4XXX_DEFAULT_MMP_REGION_SIZE; + aram_info->mmp_region_offset = aram_size - aram_info->mmp_region_size; + + if (accel_dev->au_info->num_cy_au || + accel_dev->au_info->num_inline_au) { + /* Crypto is available therefore we must + * include space in the ARAM for SKM. + */ + aram_info->skm_region_size = ADF_C4XXX_DEFAULT_SKM_REGION_SIZE; + /* Compute SKM region offset */ + aram_info->skm_region_offset = aram_size - + (aram_info->mmp_region_size + aram_info->skm_region_size); + } + + /* SADB always starts at offset 0. */ + if (accel_dev->au_info->num_inline_au) { + /* Inline is available therefore we must + * use remaining ARAM for the SADB. + */ + sadb_size = aram_size - + (aram_info->mmp_region_size + aram_info->skm_region_size); + + /* + * When the inline service is enabled, the policy is that + * compression gives up its space in ARAM to allow for a + * larger SADB. Compression must use DRAM instead of ARAM.
+ */ + aram_info->inter_buff_aram_region_size = 0; + + /* the SADB size must be an integral multiple of the SA size */ + if (ipsec_algo_group == IPSEC_DEFAUL_ALGO_GROUP) { + sa_size = ADF_C4XXX_DEFAULT_SA_SIZE; + } else { + /* IPSEC_ALGO_GROUP1 + * Total 2 algo groups. + */ + sa_size = ADF_C4XXX_ALGO_GROUP1_SA_SIZE; + } + + sadb_size = sadb_size - + (sadb_size % ADF_C4XXX_SA_SIZE_IN_BYTES(sa_size)); + aram_info->sadb_region_size = sadb_size; + } + + if (accel_dev->au_info->num_dc_au && + !accel_dev->au_info->num_inline_au) { + /* Compression is available therefore we must see if there is + * space in the ARAM for intermediate buffers. + */ + aram_info->inter_buff_aram_region_size = 0; + usable_aram_size = aram_size - + (aram_info->mmp_region_size + aram_info->skm_region_size); + + for (i = 1; i <= accel_dev->au_info->num_dc_au; i++) { + if ((i * ADF_C4XXX_AU_COMPR_INTERM_SIZE) > + usable_aram_size) + break; + + ibuff_mem_needed = i * ADF_C4XXX_AU_COMPR_INTERM_SIZE; + } + + /* Set remaining ARAM to intermediate buffers. Firmware handles + * fallback to DRAM for cases where the number of AUs assigned + * to compression exceeds available ARAM memory. + */ + aram_info->inter_buff_aram_region_size = ibuff_mem_needed; + + /* If ARAM is used for compression set its initial offset.
*/ + if (aram_info->inter_buff_aram_region_size) + aram_info->inter_buff_aram_region_offset = 0; + } + + accel_dev->aram_info = aram_info; + + return 0; +} + +static void +c4xxx_exit_aram_config(struct adf_accel_dev *accel_dev) +{ + kfree(accel_dev->aram_info); + accel_dev->aram_info = NULL; +} + +static u32 +get_num_accel_units(struct adf_hw_device_data *self) +{ + u32 i = 0, num_accel = 0; + unsigned long accel_mask = 0; + + if (!self || !self->accel_mask) + return 0; + + accel_mask = self->accel_mask; + + for_each_set_bit(i, &accel_mask, ADF_C4XXX_MAX_ACCELERATORS) + { + num_accel++; + } + + return num_accel / ADF_C4XXX_NUM_ACCEL_PER_AU; +} + +static int +get_accel_unit(struct adf_hw_device_data *self, + struct adf_accel_unit **accel_unit) +{ + enum dev_sku_info sku; + + sku = get_sku(self); + + switch (sku) { + case DEV_SKU_1: + case DEV_SKU_1_CY: + *accel_unit = adf_c4xxx_au_32_ae; + break; + case DEV_SKU_2: + case DEV_SKU_2_CY: + *accel_unit = adf_c4xxx_au_24_ae; + break; + case DEV_SKU_3: + case DEV_SKU_3_CY: + *accel_unit = adf_c4xxx_au_12_ae; + break; + default: + *accel_unit = adf_c4xxx_au_emulation; + break; + } + return 0; +} + +static int +get_ae_info(struct adf_hw_device_data *self, const struct adf_ae_info **ae_info) +{ + enum dev_sku_info sku; + + sku = get_sku(self); + + switch (sku) { + case DEV_SKU_1: + *ae_info = adf_c4xxx_32_ae; + break; + case DEV_SKU_1_CY: + *ae_info = adf_c4xxx_32_ae_sym; + break; + case DEV_SKU_2: + *ae_info = adf_c4xxx_24_ae; + break; + case DEV_SKU_2_CY: + *ae_info = adf_c4xxx_24_ae_sym; + break; + case DEV_SKU_3: + *ae_info = adf_c4xxx_12_ae; + break; + case DEV_SKU_3_CY: + *ae_info = adf_c4xxx_12_ae_sym; + break; + default: + *ae_info = adf_c4xxx_12_ae; + break; + } + return 0; +} + +static int +adf_add_debugfs_info(struct adf_accel_dev *accel_dev) +{ + /* Add Accel Unit configuration table to debug FS interface */ + if (c4xxx_init_ae_config(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to create entry for 
AE configuration\n"); + return EFAULT; + } + + return 0; +} + +static void +adf_remove_debugfs_info(struct adf_accel_dev *accel_dev) +{ + /* Remove Accel Unit configuration table from debug FS interface */ + c4xxx_exit_ae_config(accel_dev); +} + +static int +check_svc_to_hw_capabilities(struct adf_accel_dev *accel_dev, + const char *svc_name, + enum icp_qat_capabilities_mask cap) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 hw_cap = hw_data->accel_capabilities_mask; + + hw_cap &= cap; + if (hw_cap != cap) { + device_printf(GET_DEV(accel_dev), + "Service not supported by accelerator: %s\n", + svc_name); + return EPERM; + } + + return 0; +} + +static int +check_accel_unit_config(struct adf_accel_dev *accel_dev, + u8 num_cy_au, + u8 num_dc_au, + u8 num_inline_au) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + u32 num_au = hw_data->get_num_accel_units(hw_data); + u32 service_mask = ADF_ACCEL_SERVICE_NULL; + char *token, *cur_str; + int ret = 0; + + /* Get the services enabled by user */ + snprintf(key, sizeof(key), ADF_SERVICES_ENABLED); + if (adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) + return EFAULT; + cur_str = val; + token = strsep(&cur_str, ADF_SERVICES_SEPARATOR); + while (token) { + if (!strncmp(token, ADF_SERVICE_CY, strlen(ADF_SERVICE_CY))) { + service_mask |= ADF_ACCEL_CRYPTO; + ret |= check_svc_to_hw_capabilities( + accel_dev, + token, + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC | + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC); + } + + if (!strncmp(token, ADF_CFG_SYM, strlen(ADF_CFG_SYM))) { + service_mask |= ADF_ACCEL_CRYPTO; + ret |= check_svc_to_hw_capabilities( + accel_dev, + token, + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC); + } + + if (!strncmp(token, ADF_CFG_ASYM, strlen(ADF_CFG_ASYM))) { + /* Handle a special case of services 'asym;inline' + * enabled where ASYM is handled by Inline firmware + * at AE level. 
This configuration allows enabling the + * ASYM service without accel units assigned to + * CRYPTO service, e.g. + * num_inline_au = 6 + * num_cy_au = 0 + */ + if (num_inline_au < num_au) + service_mask |= ADF_ACCEL_CRYPTO; + + ret |= check_svc_to_hw_capabilities( + accel_dev, + token, + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC); + } + + if (!strncmp(token, ADF_SERVICE_DC, strlen(ADF_SERVICE_DC))) { + service_mask |= ADF_ACCEL_COMPRESSION; + ret |= check_svc_to_hw_capabilities( + accel_dev, + token, + ICP_ACCEL_CAPABILITIES_COMPRESSION); + } + + if (!strncmp(token, + ADF_SERVICE_INLINE, + strlen(ADF_SERVICE_INLINE))) { + service_mask |= ADF_ACCEL_INLINE_CRYPTO; + ret |= check_svc_to_hw_capabilities( + accel_dev, token, ICP_ACCEL_CAPABILITIES_INLINE); + } + + token = strsep(&cur_str, ADF_SERVICES_SEPARATOR); + } + + /* Ensure the user doesn't enable services that are not supported by + * the accelerator. + */ + if (ret) { + device_printf(GET_DEV(accel_dev), + "Invalid accelerator configuration.\n"); + return EFAULT; + } + + if (!(service_mask & ADF_ACCEL_COMPRESSION) && num_dc_au > 0) { + device_printf(GET_DEV(accel_dev), + "Invalid accel unit config.\n"); + device_printf( + GET_DEV(accel_dev), + "DC accel units set when dc service not enabled\n"); + return EFAULT; + } + + if (!(service_mask & ADF_ACCEL_CRYPTO) && num_cy_au > 0) { + device_printf(GET_DEV(accel_dev), + "Invalid accel unit config.\n"); + device_printf( + GET_DEV(accel_dev), + "CY accel units set when cy service not enabled\n"); + return EFAULT; + } + + if (!(service_mask & ADF_ACCEL_INLINE_CRYPTO) && num_inline_au > 0) { + device_printf(GET_DEV(accel_dev), + "Invalid accel unit config.\n" + "Inline feature not supported.\n"); + return EFAULT; + } + + hw_data->service_mask = service_mask; + /* Ensure the user doesn't allocate more than max accel units */ + if (num_au != (num_cy_au + num_dc_au + num_inline_au)) { + device_printf(GET_DEV(accel_dev), + "Invalid accel unit config.\n"); +
device_printf(GET_DEV(accel_dev), + "Max accel units is %d\n", + num_au); + return EFAULT; + } + + /* Ensure user allocates hardware resources for enabled services */ + if (!num_cy_au && (service_mask & ADF_ACCEL_CRYPTO)) { + device_printf(GET_DEV(accel_dev), + "Failed to enable cy service!\n"); + device_printf(GET_DEV(accel_dev), + "%s should not be 0", + ADF_NUM_CY_ACCEL_UNITS); + return EFAULT; + } + if (!num_dc_au && (service_mask & ADF_ACCEL_COMPRESSION)) { + device_printf(GET_DEV(accel_dev), + "Failed to enable dc service!\n"); + device_printf(GET_DEV(accel_dev), + "%s should not be 0", + ADF_NUM_DC_ACCEL_UNITS); + return EFAULT; + } + if (!num_inline_au && (service_mask & ADF_ACCEL_INLINE_CRYPTO)) { + device_printf(GET_DEV(accel_dev), "Failed to enable"); + device_printf(GET_DEV(accel_dev), " inline service!"); + device_printf(GET_DEV(accel_dev), + " %s should not be 0\n", + ADF_NUM_INLINE_ACCEL_UNITS); + return EFAULT; + } + + return 0; +} + +static int +get_accel_unit_config(struct adf_accel_dev *accel_dev, + u8 *num_cy_au, + u8 *num_dc_au, + u8 *num_inline_au) +{ + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + + /* Get the number of accel units allocated for each service */ + snprintf(key, sizeof(key), ADF_NUM_CY_ACCEL_UNITS); + if (adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) + return EFAULT; + if (compat_strtou8(val, 10, num_cy_au)) + return EFAULT; + snprintf(key, sizeof(key), ADF_NUM_DC_ACCEL_UNITS); + if (adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) + return EFAULT; + if (compat_strtou8(val, 10, num_dc_au)) + return EFAULT; + + snprintf(key, sizeof(key), ADF_NUM_INLINE_ACCEL_UNITS); + if (adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) + return EFAULT; + if (compat_strtou8(val, 10, num_inline_au)) + return EFAULT; + + return 0; +} + +/* Function reads the inline ingress/egress configuration + * and returns the number of AEs reserved for ingress + * and egress for 
accel units which are allocated for + * inline service + */ +static int +adf_get_inline_config(struct adf_accel_dev *accel_dev, u32 *num_ingress_aes) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + char *value; + u32 num_au = hw_data->get_num_accel_units(hw_data); + unsigned long ingress, egress = 0; + struct adf_accel_unit *accel_unit = accel_dev->au_info->au; + u32 num_inline_aes = 0, num_ingress_ae = 0; + u32 i = 0; + + snprintf(key, sizeof(key), ADF_INLINE_INGRESS); + if (adf_cfg_get_param_value(accel_dev, ADF_INLINE_SEC, key, val)) { + device_printf(GET_DEV(accel_dev), "Failed to find ingress\n"); + return EFAULT; + } + value = val; + value = strsep(&value, ADF_C4XXX_PERCENTAGE); + if (compat_strtoul(value, 10, &ingress)) + return EFAULT; + + snprintf(key, sizeof(key), ADF_INLINE_EGRESS); + if (adf_cfg_get_param_value(accel_dev, ADF_INLINE_SEC, key, val)) { + device_printf(GET_DEV(accel_dev), "Failed to find egress\n"); + return EFAULT; + } + value = val; + value = strsep(&value, ADF_C4XXX_PERCENTAGE); + if (compat_strtoul(value, 10, &egress)) + return EFAULT; + + if (ingress + egress != ADF_C4XXX_100) { + device_printf(GET_DEV(accel_dev), + "The sum of ingress and egress should be 100\n"); + return EFAULT; + } + + for (i = 0; i < num_au; i++) { + if (accel_unit[i].services == ADF_ACCEL_INLINE_CRYPTO) + num_inline_aes += accel_unit[i].num_ae; + } + + num_ingress_ae = num_inline_aes * ingress / ADF_C4XXX_100; + if (((num_inline_aes * ingress) % ADF_C4XXX_100) > + ADF_C4XXX_ROUND_LIMIT) + num_ingress_ae++; + + *num_ingress_aes = num_ingress_ae; + return 0; +} + +static int +adf_set_inline_ae_mask(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 num_au = hw_data->get_num_accel_units(hw_data); + struct adf_accel_unit_info *au_info = accel_dev->au_info; + struct adf_accel_unit *accel_unit = 
accel_dev->au_info->au; + u32 num_ingress_ae = 0; + u32 ingress_msk = 0; + u32 i, j, ae_mask; + + if (adf_get_inline_config(accel_dev, &num_ingress_ae)) + return EFAULT; + + for (i = 0; i < num_au; i++) { + j = 0; + if (accel_unit[i].services == ADF_ACCEL_INLINE_CRYPTO) { + /* AEs with inline service enabled are also used + * for asymmetric crypto + */ + au_info->asym_ae_msk |= accel_unit[i].ae_mask; + ae_mask = accel_unit[i].ae_mask; + while (num_ingress_ae && ae_mask) { + if (ae_mask & 1) { + ingress_msk |= BIT(j); + num_ingress_ae--; + } + ae_mask = ae_mask >> 1; + j++; + } + au_info->inline_ingress_msk |= ingress_msk; + + au_info->inline_egress_msk |= + ~(au_info->inline_ingress_msk) & + accel_unit[i].ae_mask; + } + } + + return 0; +} + +static int +adf_set_ae_mask(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 num_au = hw_data->get_num_accel_units(hw_data); + struct adf_accel_unit_info *au_info = accel_dev->au_info; + struct adf_accel_unit *accel_unit = accel_dev->au_info->au; + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + char *token, *cur_str; + bool asym_en = false, sym_en = false; + u32 i; + + /* Get the services enabled by user */ + snprintf(key, sizeof(key), ADF_SERVICES_ENABLED); + if (adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) + return EFAULT; + cur_str = val; + token = strsep(&cur_str, ADF_SERVICES_SEPARATOR); + while (token) { + if (!strncmp(token, ADF_CFG_ASYM, strlen(ADF_CFG_ASYM))) + asym_en = true; + if (!strncmp(token, ADF_CFG_SYM, strlen(ADF_CFG_SYM))) + sym_en = true; + if (!strncmp(token, ADF_CFG_CY, strlen(ADF_CFG_CY))) { + sym_en = true; + asym_en = true; + } + token = strsep(&cur_str, ADF_SERVICES_SEPARATOR); + } + + for (i = 0; i < num_au; i++) { + if (accel_unit[i].services == ADF_ACCEL_CRYPTO) { + /* AEs that support crypto can perform both + * symmetric and asymmetric crypto, however + * we only enable the threads if the 
relevant + * service is also enabled + */ + if (asym_en) + au_info->asym_ae_msk |= accel_unit[i].ae_mask; + if (sym_en) + au_info->sym_ae_msk |= accel_unit[i].ae_mask; + } else if (accel_unit[i].services == ADF_ACCEL_COMPRESSION) { + au_info->dc_ae_msk |= accel_unit[i].comp_ae_mask; + } + } + return 0; +} + +static int +adf_init_accel_unit_services(struct adf_accel_dev *accel_dev) +{ + u8 num_cy_au, num_dc_au, num_inline_au; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 num_au = hw_data->get_num_accel_units(hw_data); + struct adf_accel_unit *accel_unit; + const struct adf_ae_info *ae_info; + int i; + + if (get_accel_unit_config( + accel_dev, &num_cy_au, &num_dc_au, &num_inline_au)) { + device_printf(GET_DEV(accel_dev), "Invalid accel unit cfg\n"); + return EFAULT; + } + + if (check_accel_unit_config( + accel_dev, num_cy_au, num_dc_au, num_inline_au)) + return EFAULT; + + accel_dev->au_info = kzalloc(sizeof(*accel_dev->au_info), GFP_KERNEL); + if (!accel_dev->au_info) + return ENOMEM; + + accel_dev->au_info->num_cy_au = num_cy_au; + accel_dev->au_info->num_dc_au = num_dc_au; + accel_dev->au_info->num_inline_au = num_inline_au; + + if (get_ae_info(hw_data, &ae_info)) { + device_printf(GET_DEV(accel_dev), "Failed to get ae info\n"); + goto err_au_info; + } + accel_dev->au_info->ae_info = ae_info; + + if (get_accel_unit(hw_data, &accel_unit)) { + device_printf(GET_DEV(accel_dev), "Failed to get accel unit\n"); + goto err_ae_info; + } + + /* Enable compression accel units */ + /* Accel units with 4AEs are reserved for compression first */ + for (i = num_au - 1; i >= 0 && num_dc_au > 0; i--) { + if (accel_unit[i].num_ae == ADF_C4XXX_4_AE) { + accel_unit[i].services = ADF_ACCEL_COMPRESSION; + num_dc_au--; + } + } + for (i = num_au - 1; i >= 0 && num_dc_au > 0; i--) { + if (accel_unit[i].services == ADF_ACCEL_SERVICE_NULL) { + accel_unit[i].services = ADF_ACCEL_COMPRESSION; + num_dc_au--; + } + } + + /* Enable inline accel units */ + for (i = 0; i < 
num_au && num_inline_au > 0; i++) { + if (accel_unit[i].services == ADF_ACCEL_SERVICE_NULL) { + accel_unit[i].services = ADF_ACCEL_INLINE_CRYPTO; + num_inline_au--; + } + } + + /* Enable crypto accel units */ + for (i = 0; i < num_au && num_cy_au > 0; i++) { + if (accel_unit[i].services == ADF_ACCEL_SERVICE_NULL) { + accel_unit[i].services = ADF_ACCEL_CRYPTO; + num_cy_au--; + } + } + accel_dev->au_info->au = accel_unit; + return 0; + +err_ae_info: + accel_dev->au_info->ae_info = NULL; +err_au_info: + kfree(accel_dev->au_info); + accel_dev->au_info = NULL; + return EFAULT; +} + +static void +adf_exit_accel_unit_services(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 num_au = hw_data->get_num_accel_units(hw_data); + int i; + + if (accel_dev->au_info) { + if (accel_dev->au_info->au) { + for (i = 0; i < num_au; i++) { + accel_dev->au_info->au[i].services = + ADF_ACCEL_SERVICE_NULL; + } + } + accel_dev->au_info->au = NULL; + accel_dev->au_info->ae_info = NULL; + kfree(accel_dev->au_info); + accel_dev->au_info = NULL; + } +} + +static inline void +adf_c4xxx_reset_hw_units(struct adf_accel_dev *accel_dev) +{ + struct resource *pmisc = + (&GET_BARS(accel_dev)[ADF_C4XXX_PMISC_BAR])->virt_addr; + + u32 global_clk_enable = ADF_C4XXX_GLOBAL_CLK_ENABLE_GENERIC_ARAM | + ADF_C4XXX_GLOBAL_CLK_ENABLE_GENERIC_ICI_ENABLE | + ADF_C4XXX_GLOBAL_CLK_ENABLE_GENERIC_ICE_ENABLE; + + u32 ixp_reset_generic = ADF_C4XXX_IXP_RESET_GENERIC_ARAM | + ADF_C4XXX_IXP_RESET_GENERIC_INLINE_EGRESS | + ADF_C4XXX_IXP_RESET_GENERIC_INLINE_INGRESS; + + /* To properly reset each of the units driver must: + * 1)Call out resetactive state using ixp reset generic + * register; + * 2)Disable generic clock; + * 3)Take device out of reset by clearing ixp reset + * generic register; + * 4)Re-enable generic clock; + */ + ADF_CSR_WR(pmisc, ADF_C4XXX_IXP_RESET_GENERIC, ixp_reset_generic); + ADF_CSR_WR(pmisc, + ADF_C4XXX_GLOBAL_CLK_ENABLE_GENERIC, + 
ADF_C4XXX_GLOBAL_CLK_ENABLE_GENERIC_DISABLE_ALL); + ADF_CSR_WR(pmisc, + ADF_C4XXX_IXP_RESET_GENERIC, + ADF_C4XXX_IXP_RESET_GENERIC_OUT_OF_RESET_TRIGGER); + ADF_CSR_WR(pmisc, + ADF_C4XXX_GLOBAL_CLK_ENABLE_GENERIC, + global_clk_enable); +} + +static int +adf_init_accel_units(struct adf_accel_dev *accel_dev) +{ + struct resource *csr = + (&GET_BARS(accel_dev)[ADF_C4XXX_PMISC_BAR])->virt_addr; + + if (adf_init_accel_unit_services(accel_dev)) + return EFAULT; + + /* Set cy and dc enabled AE masks */ + if (accel_dev->au_info->num_cy_au || accel_dev->au_info->num_dc_au) { + if (adf_set_ae_mask(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to set ae masks\n"); + goto err_au; + } + } + /* Set ingress/egress ae mask if inline is enabled */ + if (accel_dev->au_info->num_inline_au) { + if (adf_set_inline_ae_mask(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to set inline ae masks\n"); + goto err_au; + } + } + /* Define ARAM regions */ + if (c4xxx_init_aram_config(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to init aram config\n"); + goto err_au; + } + /* Configure h/w registers for inline operations */ + if (accel_dev->au_info->num_inline_au > 0) + /* Initialise configuration parsing registers */ + if (c4xxx_init_inline_hw(accel_dev)) + goto err_au; + + c4xxx_set_sadb_size(accel_dev); + + if (accel_dev->au_info->num_inline_au > 0) { + /* ici/ice interrupt shall be enabled after msi-x enabled */ + ADF_CSR_WR(csr, + ADF_C4XXX_ERRMSK11, + ADF_C4XXX_ERRMSK11_ERR_DISABLE_ICI_ICE_INTR); + adf_enable_inline_notification(accel_dev); + } + + update_hw_capability(accel_dev); + if (adf_add_debugfs_info(accel_dev)) { + device_printf(GET_DEV(accel_dev), + "Failed to add debug FS information\n"); + goto err_au; + } + return 0; + +err_au: + /* Free and clear accel unit data structures */ + adf_exit_accel_unit_services(accel_dev); + return EFAULT; +} + +static void +adf_exit_accel_units(struct adf_accel_dev *accel_dev) +{ + 
adf_exit_accel_unit_services(accel_dev); + /* Free aram mapping structure */ + c4xxx_exit_aram_config(accel_dev); + /* Remove entries in debug FS */ + adf_remove_debugfs_info(accel_dev); +} + +static const char * +get_obj_name(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services service) +{ + u32 capabilities = GET_HW_DATA(accel_dev)->accel_capabilities_mask; + bool sym_only_sku = false; + + /* Check if SKU is capable only of symmetric cryptography + * via device capabilities. + */ + if ((capabilities & ADF_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC) && + !(capabilities & ADF_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC) && + !(capabilities & ADF_ACCEL_CAPABILITIES_COMPRESSION)) + sym_only_sku = true; + + switch (service) { + case ADF_ACCEL_INLINE_CRYPTO: + return ADF_C4XXX_INLINE_OBJ; + case ADF_ACCEL_CRYPTO: + if (sym_only_sku) + return ADF_C4XXX_SYM_OBJ; + else + return ADF_C4XXX_CY_OBJ; + break; + case ADF_ACCEL_COMPRESSION: + return ADF_C4XXX_DC_OBJ; + default: + return NULL; + } +} + +static uint32_t +get_objs_num(struct adf_accel_dev *accel_dev) +{ + u32 srv = 0; + u32 max_srv_id = 0; + unsigned long service_mask = accel_dev->hw_device->service_mask; + + /* The objects number corresponds to the number of services */ + for_each_set_bit(srv, &service_mask, ADF_C4XXX_MAX_OBJ) + { + max_srv_id = srv; + } + + return (max_srv_id + 1); +} + +static uint32_t +get_obj_cfg_ae_mask(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services service) +{ + u32 ae_mask = 0; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 num_au = hw_data->get_num_accel_units(hw_data); + struct adf_accel_unit *accel_unit = accel_dev->au_info->au; + u32 i = 0; + + if (service == ADF_ACCEL_SERVICE_NULL) + return 0; + + for (i = 0; i < num_au; i++) { + if (accel_unit[i].services == service) + ae_mask |= accel_unit[i].ae_mask; + } + return ae_mask; +} + +static void +configure_iov_threads(struct adf_accel_dev *accel_dev, bool enable) +{ + struct resource *addr; + struct 
adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 num_aes = hw_data->get_num_aes(hw_data); + u32 reg = 0x0; + u32 i; + + addr = (&GET_BARS(accel_dev)[ADF_C4XXX_PMISC_BAR])->virt_addr; + + /* Set/Unset Valid bits in AE Thread to PCIe Function Mapping */ + for (i = 0; i < ADF_C4XXX_AE2FUNC_REG_PER_AE * num_aes; i++) { + reg = ADF_CSR_RD(addr + ADF_C4XXX_AE2FUNC_MAP_OFFSET, + i * ADF_C4XXX_AE2FUNC_MAP_REG_SIZE); + if (enable) + reg |= ADF_C4XXX_AE2FUNC_MAP_VALID; + else + reg &= ~ADF_C4XXX_AE2FUNC_MAP_VALID; + ADF_CSR_WR(addr + ADF_C4XXX_AE2FUNC_MAP_OFFSET, + i * ADF_C4XXX_AE2FUNC_MAP_REG_SIZE, + reg); + } +} + +static int +adf_get_heartbeat_status_c4xxx(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + struct icp_qat_fw_init_c4xxx_admin_hb_stats *live_s = + (struct icp_qat_fw_init_c4xxx_admin_hb_stats *) + accel_dev->admin->virt_hb_addr; + const size_t max_aes = hw_device->get_num_aes(hw_device); + const size_t stats_size = + max_aes * sizeof(struct icp_qat_fw_init_c4xxx_admin_hb_stats); + int ret = 0; + size_t ae = 0, thr; + unsigned long ae_mask = 0; + int num_threads_per_ae = ADF_NUM_THREADS_PER_AE; + + /* + * Memory layout of Heartbeat + * + * +----------------+----------------+---------+ + * | Live value | Last value | Count | + * +----------------+----------------+---------+ + * \_______________/\_______________/\________/ + * ^ ^ ^ + * | | | + * | | max_aes * sizeof(adf_hb_count) + * | max_aes * + * sizeof(icp_qat_fw_init_c4xxx_admin_hb_stats) + * max_aes * sizeof(icp_qat_fw_init_c4xxx_admin_hb_stats) + */ + struct icp_qat_fw_init_c4xxx_admin_hb_stats *curr_s; + struct icp_qat_fw_init_c4xxx_admin_hb_stats *last_s = live_s + max_aes; + struct adf_hb_count *count = (struct adf_hb_count *)(last_s + max_aes); + + curr_s = malloc(stats_size, M_QAT, M_WAITOK | M_ZERO); + + memcpy(curr_s, live_s, stats_size); + ae_mask = hw_device->ae_mask; + + for_each_set_bit(ae, &ae_mask, max_aes) + { + for (thr = 0; thr < 
num_threads_per_ae; ++thr) { + struct icp_qat_fw_init_admin_hb_cnt *curr = + &curr_s[ae].stats[thr]; + struct icp_qat_fw_init_admin_hb_cnt *prev = + &last_s[ae].stats[thr]; + u16 req = curr->req_heartbeat_cnt; + u16 resp = curr->resp_heartbeat_cnt; + u16 last = prev->resp_heartbeat_cnt; + + if ((thr == ADF_AE_ADMIN_THREAD || req != resp) && + resp == last) { + u16 retry = ++count[ae].ae_thread[thr]; + + if (retry >= ADF_CFG_HB_COUNT_THRESHOLD) + ret = EIO; + } else { + count[ae].ae_thread[thr] = 0; + } + } + } + + /* Copy current stats for the next iteration */ + memcpy(last_s, curr_s, stats_size); + free(curr_s, M_QAT); + + return ret; +} + +void +adf_init_hw_data_c4xxx(struct adf_hw_device_data *hw_data) +{ + hw_data->dev_class = &c4xxx_class; + hw_data->instance_id = c4xxx_class.instances++; + hw_data->num_banks = ADF_C4XXX_ETR_MAX_BANKS; + hw_data->num_rings_per_bank = ADF_C4XXX_NUM_RINGS_PER_BANK; + hw_data->num_accel = ADF_C4XXX_MAX_ACCELERATORS; + hw_data->num_engines = ADF_C4XXX_MAX_ACCELENGINES; + hw_data->num_logical_accel = 1; + hw_data->tx_rx_gap = ADF_C4XXX_RX_RINGS_OFFSET; + hw_data->tx_rings_mask = ADF_C4XXX_TX_RINGS_MASK; + hw_data->alloc_irq = adf_isr_resource_alloc; + hw_data->free_irq = adf_isr_resource_free; + hw_data->enable_error_correction = adf_enable_error_correction; + hw_data->init_ras = adf_init_ras; + hw_data->exit_ras = adf_exit_ras; + hw_data->ras_interrupts = adf_ras_interrupts; + hw_data->get_accel_mask = get_accel_mask; + hw_data->get_ae_mask = get_ae_mask; + hw_data->get_num_accels = get_num_accels; + hw_data->get_num_aes = get_num_aes; + hw_data->get_num_accel_units = get_num_accel_units; + hw_data->get_sram_bar_id = get_sram_bar_id; + hw_data->get_etr_bar_id = get_etr_bar_id; + hw_data->get_misc_bar_id = get_misc_bar_id; + hw_data->get_pf2vf_offset = get_pf2vf_offset; + hw_data->get_vintmsk_offset = get_vintmsk_offset; + hw_data->get_arb_info = get_arb_info; + hw_data->get_admin_info = get_admin_info; + 
hw_data->get_errsou_offset = get_errsou_offset; + hw_data->get_clock_speed = get_clock_speed; + hw_data->get_eth_doorbell_msg = get_eth_doorbell_msg; + hw_data->get_sku = get_sku; + hw_data->check_prod_sku = c4xxx_check_prod_sku; + hw_data->fw_name = ADF_C4XXX_FW; + hw_data->fw_mmp_name = ADF_C4XXX_MMP; + hw_data->get_obj_name = get_obj_name; + hw_data->get_objs_num = get_objs_num; + hw_data->get_obj_cfg_ae_mask = get_obj_cfg_ae_mask; + hw_data->init_admin_comms = adf_init_admin_comms; + hw_data->exit_admin_comms = adf_exit_admin_comms; + hw_data->configure_iov_threads = configure_iov_threads; + hw_data->disable_iov = adf_disable_sriov; + hw_data->send_admin_init = adf_send_admin_init; + hw_data->init_arb = adf_init_arb_c4xxx; + hw_data->exit_arb = adf_exit_arb_c4xxx; + hw_data->disable_arb = adf_disable_arb; + hw_data->enable_ints = adf_enable_ints; + hw_data->set_ssm_wdtimer = c4xxx_set_ssm_wdtimer; + hw_data->check_slice_hang = c4xxx_check_slice_hang; + hw_data->enable_vf2pf_comms = adf_pf_enable_vf2pf_comms; + hw_data->disable_vf2pf_comms = adf_pf_disable_vf2pf_comms; + hw_data->reset_device = adf_reset_flr; + hw_data->restore_device = adf_c4xxx_dev_restore; + hw_data->min_iov_compat_ver = ADF_PFVF_COMPATIBILITY_VERSION; + hw_data->init_accel_units = adf_init_accel_units; + hw_data->reset_hw_units = adf_c4xxx_reset_hw_units; + hw_data->exit_accel_units = adf_exit_accel_units; + hw_data->ring_to_svc_map = ADF_DEFAULT_RING_TO_SRV_MAP; + hw_data->get_heartbeat_status = adf_get_heartbeat_status_c4xxx; + hw_data->get_ae_clock = get_ae_clock; + hw_data->clock_frequency = ADF_C4XXX_AE_FREQ; + hw_data->measure_clock = measure_clock; + hw_data->add_pke_stats = adf_pke_replay_counters_add_c4xxx; + hw_data->remove_pke_stats = adf_pke_replay_counters_remove_c4xxx; + hw_data->add_misc_error = adf_misc_error_add_c4xxx; + hw_data->remove_misc_error = adf_misc_error_remove_c4xxx; + hw_data->extended_dc_capabilities = 0; + hw_data->get_storage_enabled = get_storage_enabled; + 
hw_data->query_storage_cap = 0; + hw_data->get_accel_cap = c4xxx_get_hw_cap; + hw_data->configure_accel_units = c4xxx_configure_accel_units; + hw_data->pre_reset = adf_dev_pre_reset; + hw_data->post_reset = adf_dev_post_reset; + hw_data->get_ring_to_svc_map = adf_cfg_get_services_enabled; + hw_data->count_ras_event = adf_fw_count_ras_event; + hw_data->config_device = adf_config_device; + hw_data->set_asym_rings_mask = adf_cfg_set_asym_rings_mask; +} + +void +adf_clean_hw_data_c4xxx(struct adf_hw_device_data *hw_data) +{ + hw_data->dev_class->instances--; +} + +void +remove_oid(struct adf_accel_dev *accel_dev, struct sysctl_oid *oid) +{ + struct sysctl_ctx_list *qat_sysctl_ctx; + int ret; + + qat_sysctl_ctx = + device_get_sysctl_ctx(accel_dev->accel_pci_dev.pci_dev); + + ret = sysctl_ctx_entry_del(qat_sysctl_ctx, oid); + if (ret) + device_printf(GET_DEV(accel_dev), "Failed to delete entry\n"); + + ret = sysctl_remove_oid(oid, 1, 1); + if (ret) + device_printf(GET_DEV(accel_dev), "Failed to delete oid\n"); +} Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_inline.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_inline.h @@ -0,0 +1,599 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_C4XXX_INLINE_H_ +#define ADF_C4XXX_INLINE_H_ + +/* Inline register addresses in SRAM BAR */ +#define ARAM_CSR_BAR_OFFSET 0x100000 +#define ADF_C4XXX_REG_SA_CTRL_LOCK (ARAM_CSR_BAR_OFFSET + 0x00) +#define ADF_C4XXX_REG_SA_SCRATCH_0 (ARAM_CSR_BAR_OFFSET + 0x04) +#define ADF_C4XXX_REG_SA_SCRATCH_2 (ARAM_CSR_BAR_OFFSET + 0x0C) +#define ADF_C4XXX_REG_SA_ENTRY_CTRL (ARAM_CSR_BAR_OFFSET + 0x18) +#define ADF_C4XXX_REG_SA_DB_CTRL (ARAM_CSR_BAR_OFFSET + 0x1C) +#define ADF_C4XXX_REG_SA_REMAP (ARAM_CSR_BAR_OFFSET + 0x20) +#define ADF_C4XXX_REG_SA_INLINE_CAPABILITY (ARAM_CSR_BAR_OFFSET + 0x24) +#define ADF_C4XXX_REG_SA_INLINE_ENABLE 
(ARAM_CSR_BAR_OFFSET + 0x28) +#define ADF_C4XXX_REG_SA_LINK_UP (ARAM_CSR_BAR_OFFSET + 0x2C) +#define ADF_C4XXX_REG_SA_FUNC_LIMITS (ARAM_CSR_BAR_OFFSET + 0x38) + +#define ADF_C4XXX_SADB_SIZE_BIT BIT(24) +#define ADF_C4XXX_SADB_SIZE_IN_WORDS(accel_dev) \ + ((accel_dev)->aram_info->sadb_region_size / 32) +#define ADF_C4XXX_DEFAULT_MAX_CHAIN_LEN 0 +#define ADF_C4XXX_DEFAULT_LIMIT_CHAIN_LEN 0 +/* SADB CTRL register bit offsets */ +#define ADF_C4XXX_SADB_BIT_OFFSET 6 +#define ADF_C4XXX_MAX_CHAIN_LEN_BIT_OFFS 1 + +#define ADF_C4XXX_SADB_REG_VALUE(accel_dev) \ + ((ADF_C4XXX_SADB_SIZE_IN_WORDS(accel_dev) \ + << ADF_C4XXX_SADB_BIT_OFFSET) | \ + (ADF_C4XXX_DEFAULT_MAX_CHAIN_LEN \ + << ADF_C4XXX_MAX_CHAIN_LEN_BIT_OFFS) | \ + (ADF_C4XXX_DEFAULT_LIMIT_CHAIN_LEN)) + +#define ADF_C4XXX_INLINE_INGRESS_OFFSET 0x0 +#define ADF_C4XXX_INLINE_EGRESS_OFFSET 0x1000 + +/* MAC_CFG register access related definitions */ +#define ADF_C4XXX_STATS_REQUEST_ENABLED BIT(16) +#define ADF_C4XXX_STATS_REQUEST_DISABLED ~BIT(16) +#define ADF_C4XXX_UNLOCK true +#define ADF_C4XXX_LOCK false + +/* MAC IP register access related definitions */ +#define ADF_C4XXX_MAC_STATS_READY BIT(0) +#define ADF_C4XXX_MAX_NUM_STAT_READY_READS 10 +#define ADF_C4XXX_MAC_STATS_POLLING_INTERVAL 100 +#define ADF_C4XXX_MAC_ERROR_TX_UNDERRUN BIT(6) +#define ADF_C4XXX_MAC_ERROR_TX_FCS BIT(7) +#define ADF_C4XXX_MAC_ERROR_TX_DATA_CORRUPT BIT(8) +#define ADF_C4XXX_MAC_ERROR_RX_OVERRUN BIT(9) +#define ADF_C4XXX_MAC_ERROR_RX_RUNT BIT(10) +#define ADF_C4XXX_MAC_ERROR_RX_UNDERSIZE BIT(11) +#define ADF_C4XXX_MAC_ERROR_RX_JABBER BIT(12) +#define ADF_C4XXX_MAC_ERROR_RX_OVERSIZE BIT(13) +#define ADF_C4XXX_MAC_ERROR_RX_FCS BIT(14) +#define ADF_C4XXX_MAC_ERROR_RX_FRAME BIT(15) +#define ADF_C4XXX_MAC_ERROR_RX_CODE BIT(16) +#define ADF_C4XXX_MAC_ERROR_RX_PREAMBLE BIT(17) +#define ADF_C4XXX_MAC_RX_LINK_UP BIT(21) +#define ADF_C4XXX_MAC_INVALID_SPEED BIT(31) +#define ADF_C4XXX_MAC_PIA_RX_FIFO_OVERRUN (1ULL << 32) +#define 
ADF_C4XXX_MAC_PIA_TX_FIFO_OVERRUN (1ULL << 33) +#define ADF_C4XXX_MAC_PIA_TX_FIFO_UNDERRUN (1ULL << 34) + +/* 64-bit inline control registers. It will require + * adding ADF_C4XXX_INLINE_INGRESS_OFFSET to the address for ingress + * direction or ADF_C4XXX_INLINE_EGRESS_OFFSET to the address for + * egress direction + */ +#define ADF_C4XXX_MAC_IP 0x8 +#define ADF_C4XXX_MAC_CFG 0x18 +#define ADF_C4XXX_MAC_PIA_CFG 0xA0 + +/* Default MAC_CFG value + * - MAC_LINKUP_ENABLE = 1 + * - MAX_FRAME_LENGTH = 0x2600 + */ +#define ADF_C4XXX_MAC_CFG_VALUE 0x00000000FA0C2600 + +/* Bit definitions for MAC_PIA_CFG register */ +#define ADF_C4XXX_ONPI_ENABLE BIT(0) +#define ADF_C4XXX_XOFF_ENABLE BIT(10) + +/* New default value for MAC_PIA_CFG register */ +#define ADF_C4XXX_MAC_PIA_CFG_VALUE \ + (ADF_C4XXX_XOFF_ENABLE | ADF_C4XXX_ONPI_ENABLE) + +/* 64-bit Inline statistics registers. It will require + * adding ADF_C4XXX_INLINE_INGRESS_OFFSET to the address for ingress + * direction or ADF_C4XXX_INLINE_EGRESS_OFFSET to the address for + * egress direction + */ +#define ADF_C4XXX_MAC_STAT_TX_OCTET 0x100 +#define ADF_C4XXX_MAC_STAT_TX_FRAME 0x110 +#define ADF_C4XXX_MAC_STAT_TX_BAD_FRAME 0x118 +#define ADF_C4XXX_MAC_STAT_TX_FCS_ERROR 0x120 +#define ADF_C4XXX_MAC_STAT_TX_64 0x130 +#define ADF_C4XXX_MAC_STAT_TX_65 0x138 +#define ADF_C4XXX_MAC_STAT_TX_128 0x140 +#define ADF_C4XXX_MAC_STAT_TX_256 0x148 +#define ADF_C4XXX_MAC_STAT_TX_512 0x150 +#define ADF_C4XXX_MAC_STAT_TX_1024 0x158 +#define ADF_C4XXX_MAC_STAT_TX_1519 0x160 +#define ADF_C4XXX_MAC_STAT_TX_JABBER 0x168 +#define ADF_C4XXX_MAC_STAT_RX_OCTET 0x200 +#define ADF_C4XXX_MAC_STAT_RX_FRAME 0x210 +#define ADF_C4XXX_MAC_STAT_RX_BAD_FRAME 0x218 +#define ADF_C4XXX_MAC_STAT_RX_FCS_ERROR 0x220 +#define ADF_C4XXX_MAC_STAT_RX_64 0x250 +#define ADF_C4XXX_MAC_STAT_RX_65 0x258 +#define ADF_C4XXX_MAC_STAT_RX_128 0x260 +#define ADF_C4XXX_MAC_STAT_RX_256 0x268 +#define ADF_C4XXX_MAC_STAT_RX_512 0x270 +#define ADF_C4XXX_MAC_STAT_RX_1024 0x278 +#define 
ADF_C4XXX_MAC_STAT_RX_1519 0x280 +#define ADF_C4XXX_MAC_STAT_RX_OVERSIZE 0x288 +#define ADF_C4XXX_MAC_STAT_RX_JABBER 0x290 + +/* 32-bit Inline statistics registers. It will require + * adding ADF_C4XXX_INLINE_INGRESS_OFFSET to the address for ingress + * direction or ADF_C4XXX_INLINE_EGRESS_OFFSET to the address for + * egress direction + */ +#define ADF_C4XXX_IC_PAR_IPSEC_DESC_COUNT 0xBC0 +#define ADF_C4XXX_IC_PAR_MIXED_DESC_COUNT 0xBC4 +#define ADF_C4XXX_IC_PAR_FULLY_CLEAR_DESC_COUNT 0xBC8 +#define ADF_C4XXX_IC_PAR_CLR_COUNT 0xBCC +#define ADF_C4XXX_IC_CTPB_PKT_COUNT 0xDF4 +#define ADF_C4XXX_RB_DATA_COUNT 0xDF8 +#define ADF_C4XXX_IC_CLEAR_DESC_COUNT 0xDFC +#define ADF_C4XXX_IC_IPSEC_DESC_COUNT 0xE00 + +/* REG_CMD_DIS_MISC bit definitions */ +#define ADF_C4XXX_BYTE_SWAP_ENABLE BIT(0) +#define ADF_C4XXX_REG_CMD_DIS_MISC_DEFAULT_VALUE (ADF_C4XXX_BYTE_SWAP_ENABLE) + +/* Command Dispatch Misc Register */ +#define ADF_C4XXX_INGRESS_CMD_DIS_MISC (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0x8A8) + +#define ADF_C4XXX_EGRESS_CMD_DIS_MISC (ADF_C4XXX_INLINE_EGRESS_OFFSET + 0x8A8) + +/* Congestion management threshold registers */ +#define ADF_C4XXX_NEXT_FCTHRESH_OFFSET 4 + +/* Number of congestion management domains */ +#define ADF_C4XXX_NUM_CONGEST_DOMAINS 8 + +#define ADF_C4XXX_BB_FCHTHRESH_OFFSET 0xB78 + +/* IC_BB_FCHTHRESH registers */ +#define ADF_C4XXX_ICI_BB_FCHTHRESH_OFFSET \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + ADF_C4XXX_BB_FCHTHRESH_OFFSET) + +#define ADF_C4XXX_ICE_BB_FCHTHRESH_OFFSET \ + (ADF_C4XXX_INLINE_EGRESS_OFFSET + ADF_C4XXX_BB_FCHTHRESH_OFFSET) + +#define ADF_C4XXX_WR_ICI_BB_FCHTHRESH(csr_base_addr, index, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_C4XXX_ICI_BB_FCHTHRESH_OFFSET + \ + (index)*ADF_C4XXX_NEXT_FCTHRESH_OFFSET), \ + value) + +#define ADF_C4XXX_WR_ICE_BB_FCHTHRESH(csr_base_addr, index, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_C4XXX_ICE_BB_FCHTHRESH_OFFSET + \ + (index)*ADF_C4XXX_NEXT_FCTHRESH_OFFSET), \ + value) + +#define 
ADF_C4XXX_BB_FCLTHRESH_OFFSET 0xB98 + +/* IC_BB_FCLTHRESH registers */ +#define ADF_C4XXX_ICI_BB_FCLTHRESH_OFFSET \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + ADF_C4XXX_BB_FCLTHRESH_OFFSET) + +#define ADF_C4XXX_ICE_BB_FCLTHRESH_OFFSET \ + (ADF_C4XXX_INLINE_EGRESS_OFFSET + ADF_C4XXX_BB_FCLTHRESH_OFFSET) + +#define ADF_C4XXX_WR_ICI_BB_FCLTHRESH(csr_base_addr, index, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_C4XXX_ICI_BB_FCLTHRESH_OFFSET + \ + (index)*ADF_C4XXX_NEXT_FCTHRESH_OFFSET), \ + value) + +#define ADF_C4XXX_WR_ICE_BB_FCLTHRESH(csr_base_addr, index, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_C4XXX_ICE_BB_FCLTHRESH_OFFSET + \ + (index)*ADF_C4XXX_NEXT_FCTHRESH_OFFSET), \ + value) + +#define ADF_C4XXX_BB_BEHTHRESH_OFFSET 0xBB8 +#define ADF_C4XXX_BB_BELTHRESH_OFFSET 0xBBC +#define ADF_C4XXX_BEWIP_THRESH_OFFSET 0xDEC +#define ADF_C4XXX_CTPB_THRESH_OFFSET 0xDE8 +#define ADF_C4XXX_CIRQ_OFFSET 0xDE4 +#define ADF_C4XXX_Q2MEMAP_OFFSET 0xC04 + +/* IC_BB_BEHTHRESH register */ +#define ADF_C4XXX_ICI_BB_BEHTHRESH_OFFSET \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + ADF_C4XXX_BB_BEHTHRESH_OFFSET) + +#define ADF_C4XXX_ICE_BB_BEHTHRESH_OFFSET \ + (ADF_C4XXX_INLINE_EGRESS_OFFSET + ADF_C4XXX_BB_BEHTHRESH_OFFSET) + +/* IC_BB_BELTHRESH register */ +#define ADF_C4XXX_ICI_BB_BELTHRESH_OFFSET \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + ADF_C4XXX_BB_BELTHRESH_OFFSET) + +#define ADF_C4XXX_ICE_BB_BELTHRESH_OFFSET \ + (ADF_C4XXX_INLINE_EGRESS_OFFSET + ADF_C4XXX_BB_BELTHRESH_OFFSET) + +/* IC_BEWIP_THRESH register */ +#define ADF_C4XXX_ICI_BEWIP_THRESH_OFFSET \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + ADF_C4XXX_BEWIP_THRESH_OFFSET) + +#define ADF_C4XXX_ICE_BEWIP_THRESH_OFFSET \ + (ADF_C4XXX_INLINE_EGRESS_OFFSET + ADF_C4XXX_BEWIP_THRESH_OFFSET) + +/* IC_CTPB_THRESH register */ +#define ADF_C4XXX_ICI_CTPB_THRESH_OFFSET \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + ADF_C4XXX_CTPB_THRESH_OFFSET) + +#define ADF_C4XXX_ICE_CTPB_THRESH_OFFSET \ + (ADF_C4XXX_INLINE_EGRESS_OFFSET + ADF_C4XXX_CTPB_THRESH_OFFSET) + +/* 
ADF_C4XXX_ICI_CIRQ_OFFSET */ +#define ADF_C4XXX_ICI_CIRQ_OFFSET \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + ADF_C4XXX_CIRQ_OFFSET) + +#define ADF_C4XXX_ICE_CIRQ_OFFSET \ + (ADF_C4XXX_INLINE_EGRESS_OFFSET + ADF_C4XXX_CIRQ_OFFSET) + +/* IC_Q2MEMAP register */ +#define ADF_C4XXX_ICI_Q2MEMAP_OFFSET \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + ADF_C4XXX_Q2MEMAP_OFFSET) + +#define ADF_C4XXX_ICE_Q2MEMAP_OFFSET \ + (ADF_C4XXX_INLINE_EGRESS_OFFSET + ADF_C4XXX_Q2MEMAP_OFFSET) + +#define ADF_C4XXX_NEXT_Q2MEMAP_OFFSET 4 +#define ADF_C4XXX_NUM_Q2MEMAP_REGISTERS 8 + +#define ADF_C4XXX_WR_CSR_ICI_Q2MEMAP(csr_base_addr, index, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_C4XXX_ICI_Q2MEMAP_OFFSET + \ + (index)*ADF_C4XXX_NEXT_Q2MEMAP_OFFSET), \ + value) + +#define ADF_C4XXX_WR_CSR_ICE_Q2MEMAP(csr_base_addr, index, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_C4XXX_ICE_Q2MEMAP_OFFSET + \ + (index)*ADF_C4XXX_NEXT_Q2MEMAP_OFFSET), \ + value) + +/* IC_PARSE_CTRL register */ +#define ADF_C4XXX_DEFAULT_KEY_LENGTH 21 +#define ADF_C4XXX_DEFAULT_REL_ABS_OFFSET 1 +#define ADF_C4XXX_DEFAULT_NUM_TUPLES 4 +#define ADF_C4XXX_IC_PARSE_CTRL_OFFSET_DEFAULT_VALUE \ + ((ADF_C4XXX_DEFAULT_KEY_LENGTH << 4) | \ + (ADF_C4XXX_DEFAULT_REL_ABS_OFFSET << 3) | \ + (ADF_C4XXX_DEFAULT_NUM_TUPLES)) + +/* Configuration parsing register definitions */ +#define ADF_C4XXX_IC_PARSE_CTRL_OFFSET (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB00) + +/* Fixed data parsing register */ +#define ADF_C4XXX_IC_PARSE_FIXED_DATA(i) \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB04 + ((i)*4)) +#define ADF_C4XXX_DEFAULT_IC_PARSE_FIXED_DATA_0 0x32 + +/* Fixed length parsing register */ +#define ADF_C4XXX_IC_PARSE_FIXED_LENGTH \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB14) +#define ADF_C4XXX_DEFAULT_IC_PARSE_FIXED_LEN 0x0 + +/* IC_PARSE_IPV4 offset and length registers */ +#define ADF_C4XXX_IC_PARSE_IPV4_OFFSET_0 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB18) +#define ADF_C4XXX_IC_PARSE_IPV4_OFFSET_1 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB1C) +#define 
ADF_C4XXX_IC_PARSE_IPV4_OFFSET_2 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB20) +#define ADF_C4XXX_IC_PARSE_IPV4_OFFSET_3 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB24) +#define ADF_C4XXX_IC_PARSE_IPV4_OFFSET_4 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB28) +#define ADF_C4XXX_IC_PARSE_IPV4_OFFSET_5 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB2C) + +#define ADF_C4XXX_IC_PARSE_IPV4_LENGTH_0 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB30) +#define ADF_C4XXX_IC_PARSE_IPV4_LENGTH_1 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB34) +#define ADF_C4XXX_IC_PARSE_IPV4_LENGTH_2 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB38) +#define ADF_C4XXX_IC_PARSE_IPV4_LENGTH_3 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB3C) +#define ADF_C4XXX_IC_PARSE_IPV4_LENGTH_4 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB40) +#define ADF_C4XXX_IC_PARSE_IPV4_LENGTH_5 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB44) + +#define ADF_C4XXX_IPV4_OFFSET_0_PARSER_BASE 0x1 +#define ADF_C4XXX_IPV4_OFFSET_0_OFFSET 0x0 +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_OFFS_0_VALUE \ + ((ADF_C4XXX_IPV4_OFFSET_0_PARSER_BASE << 29) | \ + ADF_C4XXX_IPV4_OFFSET_0_OFFSET) +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_LEN_0_VALUE 0 + +#define ADF_C4XXX_IPV4_OFFSET_1_PARSER_BASE 0x2 +#define ADF_C4XXX_IPV4_OFFSET_1_OFFSET 0x0 +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_OFFS_1_VALUE \ + ((ADF_C4XXX_IPV4_OFFSET_1_PARSER_BASE << 29) | \ + ADF_C4XXX_IPV4_OFFSET_1_OFFSET) +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_LEN_1_VALUE 3 + +#define ADF_C4XXX_IPV4_OFFSET_2_PARSER_BASE 0x4 +#define ADF_C4XXX_IPV4_OFFSET_2_OFFSET 0x10 +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_OFFS_2_VALUE \ + ((ADF_C4XXX_IPV4_OFFSET_2_PARSER_BASE << 29) | \ + ADF_C4XXX_IPV4_OFFSET_2_OFFSET) +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_LEN_2_VALUE 3 + +#define ADF_C4XXX_IPV4_OFFSET_3_PARSER_BASE 0x0 +#define ADF_C4XXX_IPV4_OFFSET_3_OFFSET 0x0 +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_OFFS_3_VALUE \ + ((ADF_C4XXX_IPV4_OFFSET_3_PARSER_BASE << 29) | \ + ADF_C4XXX_IPV4_OFFSET_3_OFFSET) +#define 
ADF_C4XXX_DEFAULT_IC_PARSE_IPV4_LEN_3_VALUE 0 + +/* IC_PARSE_IPV6 offset and length registers */ +#define ADF_C4XXX_IC_PARSE_IPV6_OFFSET_0 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB48) +#define ADF_C4XXX_IC_PARSE_IPV6_OFFSET_1 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB4C) +#define ADF_C4XXX_IC_PARSE_IPV6_OFFSET_2 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB50) +#define ADF_C4XXX_IC_PARSE_IPV6_OFFSET_3 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB54) +#define ADF_C4XXX_IC_PARSE_IPV6_OFFSET_4 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB58) +#define ADF_C4XXX_IC_PARSE_IPV6_OFFSET_5 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB5C) + +#define ADF_C4XXX_IC_PARSE_IPV6_LENGTH_0 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB60) +#define ADF_C4XXX_IC_PARSE_IPV6_LENGTH_1 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB64) +#define ADF_C4XXX_IC_PARSE_IPV6_LENGTH_2 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB68) +#define ADF_C4XXX_IC_PARSE_IPV6_LENGTH_3 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB6C) +#define ADF_C4XXX_IC_PARSE_IPV6_LENGTH_4 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB70) +#define ADF_C4XXX_IC_PARSE_IPV6_LENGTH_5 \ + (ADF_C4XXX_INLINE_INGRESS_OFFSET + 0xB74) + +#define ADF_C4XXX_IPV6_OFFSET_0_PARSER_BASE 0x1 +#define ADF_C4XXX_IPV6_OFFSET_0_OFFSET 0x0 +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_OFFS_0_VALUE \ + ((ADF_C4XXX_IPV6_OFFSET_0_PARSER_BASE << 29) | \ + (ADF_C4XXX_IPV6_OFFSET_0_OFFSET)) +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_LEN_0_VALUE 0 + +#define ADF_C4XXX_IPV6_OFFSET_1_PARSER_BASE 0x2 +#define ADF_C4XXX_IPV6_OFFSET_1_OFFSET 0x0 +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_OFFS_1_VALUE \ + ((ADF_C4XXX_IPV6_OFFSET_1_PARSER_BASE << 29) | \ + (ADF_C4XXX_IPV6_OFFSET_1_OFFSET)) +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_LEN_1_VALUE 3 + +#define ADF_C4XXX_IPV6_OFFSET_2_PARSER_BASE 0x4 +#define ADF_C4XXX_IPV6_OFFSET_2_OFFSET 0x18 +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_OFFS_2_VALUE \ + ((ADF_C4XXX_IPV6_OFFSET_2_PARSER_BASE << 29) | \ + (ADF_C4XXX_IPV6_OFFSET_2_OFFSET)) +#define 
ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_LEN_2_VALUE 0xF + +#define ADF_C4XXX_IPV6_OFFSET_3_PARSER_BASE 0x0 +#define ADF_C4XXX_IPV6_OFFSET_3_OFFSET 0x0 +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_OFFS_3_VALUE \ + ((ADF_C4XXX_IPV6_OFFSET_3_PARSER_BASE << 29) | \ + (ADF_C4XXX_IPV6_OFFSET_3_OFFSET)) +#define ADF_C4XXX_DEFAULT_IC_PARSE_IPV6_LEN_3_VALUE 0x0 + +/* error notification configuration registers */ + +#define ADF_C4XXX_IC_CD_RF_PARITY_ERR_0 0xA00 +#define ADF_C4XXX_IC_CD_RF_PARITY_ERR_1 0xA04 +#define ADF_C4XXX_IC_CD_RF_PARITY_ERR_2 0xA08 +#define ADF_C4XXX_IC_CD_RF_PARITY_ERR_3 0xA0C +#define ADF_C4XXX_IC_CD_CERR 0xA10 +#define ADF_C4XXX_IC_CD_UERR 0xA14 + +#define ADF_C4XXX_IC_INLN_RF_PARITY_ERR_0 0xF00 +#define ADF_C4XXX_IC_INLN_RF_PARITY_ERR_1 0xF04 +#define ADF_C4XXX_IC_INLN_RF_PARITY_ERR_2 0xF08 +#define ADF_C4XXX_IC_INLN_RF_PARITY_ERR_3 0xF0C +#define ADF_C4XXX_IC_INLN_RF_PARITY_ERR_4 0xF10 +#define ADF_C4XXX_IC_INLN_RF_PARITY_ERR_5 0xF14 +#define ADF_C4XXX_IC_PARSER_CERR 0xF18 +#define ADF_C4XXX_IC_PARSER_UERR 0xF1C +#define ADF_C4XXX_IC_CTPB_CERR 0xF28 +#define ADF_C4XXX_IC_CTPB_UERR 0xF2C +#define ADF_C4XXX_IC_CPPM_ERR_STAT 0xF3C +#define ADF_C4XXX_IC_CONGESTION_MGMT_INT 0xF58 + +#define ADF_C4XXX_IC_CPPT_ERR_STAT 0x704 +#define ADF_C4XXX_IC_MAC_IM 0x10 + +#define ADF_C4XXX_CD_RF_PARITY_ERR_0_VAL 0x22222222 +#define ADF_C4XXX_CD_RF_PARITY_ERR_1_VAL 0x22222323 +#define ADF_C4XXX_CD_RF_PARITY_ERR_2_VAL 0x00022222 +#define ADF_C4XXX_CD_RF_PARITY_ERR_3_VAL 0x00000000 +#define ADF_C4XXX_CD_UERR_VAL 0x00000008 +#define ADF_C4XXX_CD_CERR_VAL 0x00000008 +#define ADF_C4XXX_PARSER_UERR_VAL 0x00100008 +#define ADF_C4XXX_PARSER_CERR_VAL 0x00000008 +#define ADF_C4XXX_INLN_RF_PARITY_ERR_0_VAL 0x33333333 +#define ADF_C4XXX_INLN_RF_PARITY_ERR_1_VAL 0x33333333 +#define ADF_C4XXX_INLN_RF_PARITY_ERR_2_VAL 0x33333333 +#define ADF_C4XXX_INLN_RF_PARITY_ERR_3_VAL 0x22222222 +#define ADF_C4XXX_INLN_RF_PARITY_ERR_4_VAL 0x22222222 +#define ADF_C4XXX_INLN_RF_PARITY_ERR_5_VAL 0x00333232 
+#define ADF_C4XXX_CTPB_UERR_VAL 0x00000008 +#define ADF_C4XXX_CTPB_CERR_VAL 0x00000008 +#define ADF_C4XXX_CPPM_ERR_STAT_VAL 0x00007000 +#define ADF_C4XXX_CPPT_ERR_STAT_VAL 0x000001C0 +#define ADF_C4XXX_CONGESTION_MGMT_INI_VAL 0x00000001 +#define ADF_C4XXX_MAC_IM_VAL 0x000000087FDC003E + +/* parser ram ecc uerr */ +#define ADF_C4XXX_PARSER_UERR_INTR BIT(0) +/* multiple err */ +#define ADF_C4XXX_PARSER_MUL_UERR_INTR BIT(18) +#define ADF_C4XXX_PARSER_DESC_UERR_INTR_ENA BIT(20) + +#define ADF_C4XXX_RF_PAR_ERR_BITS 32 +#define ADF_C4XXX_MAX_STR_LEN 64 +#define RF_PAR_MUL_MAP(bit_num) (((bit_num)-2) / 4) +#define RF_PAR_MAP(bit_num) (((bit_num)-3) / 4) + +/* cd rf parity error + * BIT(2) rf parity mul 0 + * BIT(3) rf parity 0 + * BIT(10) rf parity mul 2 + * BIT(11) rf parity 2 + */ +#define ADF_C4XXX_CD_RF_PAR_ERR_1_INTR (BIT(2) | BIT(3) | BIT(10) | BIT(11)) + +/* inln rf parity error + * BIT(2) rf parity mul 0 + * BIT(3) rf parity 0 + * BIT(6) rf parity mul 1 + * BIT(7) rf parity 1 + * BIT(10) rf parity mul 2 + * BIT(11) rf parity 2 + * BIT(14) rf parity mul 3 + * BIT(15) rf parity 3 + * BIT(18) rf parity mul 4 + * BIT(19) rf parity 4 + * BIT(22) rf parity mul 5 + * BIT(23) rf parity 5 + * BIT(26) rf parity mul 6 + * BIT(27) rf parity 6 + * BIT(30) rf parity mul 7 + * BIT(31) rf parity 7 + */ +#define ADF_C4XXX_INLN_RF_PAR_ERR_0_INTR \ + (BIT(2) | BIT(3) | BIT(6) | BIT(7) | BIT(10) | BIT(11) | BIT(14) | \ + BIT(15) | BIT(18) | BIT(19) | BIT(22) | BIT(23) | BIT(26) | BIT(27) | \ + BIT(30) | BIT(31)) +#define ADF_C4XXX_INLN_RF_PAR_ERR_1_INTR ADF_C4XXX_INLN_RF_PAR_ERR_0_INTR +#define ADF_C4XXX_INLN_RF_PAR_ERR_2_INTR ADF_C4XXX_INLN_RF_PAR_ERR_0_INTR +#define ADF_C4XXX_INLN_RF_PAR_ERR_5_INTR \ + (BIT(6) | BIT(7) | BIT(14) | BIT(15) | BIT(18) | BIT(19) | BIT(22) | \ + BIT(23)) + +/* Congestion mgmt events */ +#define ADF_C4XXX_CONGESTION_MGMT_CTPB_GLOBAL_CROSSED BIT(1) +#define ADF_C4XXX_CONGESTION_MGMT_XOFF_CIRQ_OUT BIT(2) +#define ADF_C4XXX_CONGESTION_MGMT_XOFF_CIRQ_IN 
BIT(3) + +/* AEAD algorithm definitions in REG_SA_SCRATCH[0] register. + * Bits<6:5> are reserved for expansion. + */ +#define AES128_GCM BIT(0) +#define AES192_GCM BIT(1) +#define AES256_GCM BIT(2) +#define AES128_CCM BIT(3) +#define CHACHA20_POLY1305 BIT(4) +/* Cipher algorithm definitions in REG_SA_SCRATCH[0] register + * Bit<15> is reserved for expansion. + */ +#define CIPHER_NULL BIT(7) +#define AES128_CBC BIT(8) +#define AES192_CBC BIT(9) +#define AES256_CBC BIT(10) +#define AES128_CTR BIT(11) +#define AES192_CTR BIT(12) +#define AES256_CTR BIT(13) +#define _3DES_CBC BIT(14) +/* Authentication algorithm definitions in REG_SA_SCRATCH[0] register + * Bits<25:30> are reserved for expansion. + */ +#define HMAC_MD5_96 BIT(16) +#define HMAC_SHA1_96 BIT(17) +#define HMAC_SHA256_128 BIT(18) +#define HMAC_SHA384_192 BIT(19) +#define HMAC_SHA512_256 BIT(20) +#define AES_GMAC_AES_128 BIT(21) +#define AES_XCBC_MAC_96 BIT(22) +#define AES_CMAC_96 BIT(23) +#define AUTH_NULL BIT(24) + +/* Algo group0:DEFAULT */ +#define ADF_C4XXX_DEFAULT_SUPPORTED_ALGORITHMS \ + (AES128_GCM | \ + (AES192_GCM | AES256_GCM | AES128_CCM | CHACHA20_POLY1305) | \ + (CIPHER_NULL | AES128_CBC | AES192_CBC | AES256_CBC) | \ + (AES128_CTR | AES192_CTR | AES256_CTR | _3DES_CBC) | \ + (HMAC_MD5_96 | HMAC_SHA1_96 | HMAC_SHA256_128) | \ + (HMAC_SHA384_192 | HMAC_SHA512_256 | AES_GMAC_AES_128) | \ + (AES_XCBC_MAC_96 | AES_CMAC_96 | AUTH_NULL)) + +/* Algo group1 */ +#define ADF_C4XXX_SUPPORTED_ALGORITHMS_GROUP1 \ + (AES128_GCM | (AES256_GCM | CHACHA20_POLY1305)) + +/* Supported crypto offload features in REG_SA_SCRATCH[2] register */ +#define ADF_C4XXX_IPSEC_ESP BIT(0) +#define ADF_C4XXX_IPSEC_AH BIT(1) +#define ADF_C4XXX_UDP_ENCAPSULATION BIT(2) +#define ADF_C4XXX_IPSEC_TUNNEL_MODE BIT(3) +#define ADF_C4XXX_IPSEC_TRANSPORT_MODE BIT(4) +#define ADF_C4XXX_IPSEC_EXT_SEQ_NUM BIT(5) + +#define ADF_C4XXX_DEFAULT_CY_OFFLOAD_FEATURES \ + (ADF_C4XXX_IPSEC_ESP | \ + (ADF_C4XXX_UDP_ENCAPSULATION | 
ADF_C4XXX_IPSEC_TUNNEL_MODE) | \ + (ADF_C4XXX_IPSEC_TRANSPORT_MODE | ADF_C4XXX_IPSEC_EXT_SEQ_NUM)) + +/* REG_SA_CTRL_LOCK default value */ +#define ADF_C4XXX_DEFAULT_SA_CTRL_LOCKOUT BIT(0) + +/* SA ENTRY CTRL default values */ +#define ADF_C4XXX_DEFAULT_LU_KEY_LEN 21 + +/* Sa size for algo group0 */ +#define ADF_C4XXX_DEFAULT_SA_SIZE 6 + +/* Sa size for algo group1 */ +#define ADF_C4XXX_ALGO_GROUP1_SA_SIZE 2 + +/* SA size is based on 32byte granularity + * A value of zero indicates an SA size of 32 bytes + */ +#define ADF_C4XXX_SA_SIZE_IN_BYTES(sa_size) (((sa_size) + 1) * 32) + +/* SA ENTRY CTRL register bit offsets */ +#define ADF_C4XXX_LU_KEY_LEN_BIT_OFFSET 5 + +/* REG_SA_FUNC_LIMITS default value */ +#define ADF_C4XXX_FUNC_LIMIT(accel_dev, sa_size) \ + (ADF_C4XXX_SADB_SIZE_IN_WORDS(accel_dev) / ((sa_size) + 1)) + +/* REG_SA_INLINE_ENABLE bit definition */ +#define ADF_C4XXX_INLINE_ENABLED BIT(0) + +/* REG_SA_INLINE_CAPABILITY bit definitions */ +#define ADF_C4XXX_INLINE_INGRESS_ENABLE BIT(0) +#define ADF_C4XXX_INLINE_EGRESS_ENABLE BIT(1) +#define ADF_C4XXX_INLINE_CAPABILITIES \ + (ADF_C4XXX_INLINE_INGRESS_ENABLE | ADF_C4XXX_INLINE_EGRESS_ENABLE) + +/* Congestion management profile information */ +enum congest_mngt_profile_info { + CIRQ_CFG_1 = 0, + CIRQ_CFG_2, + CIRQ_CFG_3, + BEST_EFFORT_SINGLE_QUEUE, + BEST_EFFORT_8_QUEUES, +}; + +/* IPsec Algo Group */ +enum ipsec_algo_group_info { + IPSEC_DEFAUL_ALGO_GROUP = 0, + IPSEC_ALGO_GROUP1, + IPSEC_ALGO_GROUP_DELIMITER +}; + +int get_congestion_management_profile(struct adf_accel_dev *accel_dev, + u8 *profile); +int c4xxx_init_congestion_management(struct adf_accel_dev *accel_dev); +int c4xxx_init_debugfs_inline_dir(struct adf_accel_dev *accel_dev); +void c4xxx_exit_debugfs_inline_dir(struct adf_accel_dev *accel_dev); +#endif /* ADF_C4XXX_INLINE_H_ */ Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_misc_error_stats.h =================================================================== --- /dev/null +++ 
sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_misc_error_stats.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_C4XXX_MISC_ERROR_STATS_H_ +#define ADF_C4XXX_MISC_ERROR_STATS_H_ + +#include "adf_accel_devices.h" + +int adf_misc_error_add_c4xxx(struct adf_accel_dev *accel_dev); +void adf_misc_error_remove_c4xxx(struct adf_accel_dev *accel_dev); + +#endif Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_misc_error_stats.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_misc_error_stats.c @@ -0,0 +1,106 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_c4xxx_hw_data.h" +#include "adf_c4xxx_misc_error_stats.h" +#include "adf_common_drv.h" +#include "adf_cfg_common.h" +#include +#include + +#define MISC_ERROR_DBG_FILE "misc_error_stats" +#define LINE \ + "+-----------------------------------------------------------------+\n" +#define BANNER \ + "| Miscellaneous Error Statistics for Qat Device |\n" + +static void *misc_counter; + +struct adf_dev_miscellaneous_stats { + u64 misc_counter; +}; + +static int qat_misc_error_show(SYSCTL_HANDLER_ARGS) +{ + struct sbuf sb; + + sbuf_new_for_sysctl(&sb, NULL, 256, req); + sbuf_printf(&sb, "\n"); + sbuf_printf(&sb, LINE); + sbuf_printf(&sb, + "| Miscellaneous Error: %40llu |\n", + (unsigned long long)((struct adf_dev_miscellaneous_stats *) + misc_counter) + ->misc_counter); + + sbuf_finish(&sb); + SYSCTL_OUT(req, sbuf_data(&sb), sbuf_len(&sb)); + sbuf_delete(&sb); + + return 0; +} + +/** + * adf_misc_error_add_c4xxx() - Create debugfs entry for + * acceleration device misc error counter. + * @accel_dev: Pointer to acceleration device. + * + * Return: 0 on success, error code otherwise. 
+ */ +int +adf_misc_error_add_c4xxx(struct adf_accel_dev *accel_dev) +{ + struct sysctl_ctx_list *qat_sysctl_ctx = NULL; + struct sysctl_oid *qat_sysctl_tree = NULL; + struct sysctl_oid *misc_er_file = NULL; + + qat_sysctl_ctx = + device_get_sysctl_ctx(accel_dev->accel_pci_dev.pci_dev); + qat_sysctl_tree = + device_get_sysctl_tree(accel_dev->accel_pci_dev.pci_dev); + + misc_er_file = SYSCTL_ADD_PROC(qat_sysctl_ctx, + SYSCTL_CHILDREN(qat_sysctl_tree), + OID_AUTO, + MISC_ERROR_DBG_FILE, + CTLTYPE_STRING | CTLFLAG_RD, + accel_dev, + 0, + qat_misc_error_show, + "A", + "QAT Miscellaneous Error Statistics"); + accel_dev->misc_error_dbgfile = misc_er_file; + if (!accel_dev->misc_error_dbgfile) { + device_printf( + GET_DEV(accel_dev), + "Failed to create qat miscellaneous error debugfs entry.\n"); + return ENOENT; + } + + misc_counter = kmalloc(PAGE_SIZE, GFP_KERNEL); + if (!misc_counter) + return ENOMEM; + + memset(misc_counter, 0, PAGE_SIZE); + + return 0; +} + +/** + * adf_misc_error_remove_c4xxx() - Remove debugfs entry for + * acceleration device misc error counter. + * @accel_dev: Pointer to acceleration device. 
+ *
+ * Return: void
+ */
+void
+adf_misc_error_remove_c4xxx(struct adf_accel_dev *accel_dev)
+{
+	if (accel_dev->misc_error_dbgfile) {
+		remove_oid(accel_dev, accel_dev->misc_error_dbgfile);
+		accel_dev->misc_error_dbgfile = NULL;
+	}
+
+	kfree(misc_counter);
+	misc_counter = NULL;
+}
Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_pke_replay_stats.h
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_pke_replay_stats.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+#ifndef ADF_C4XXX_PKE_REPLAY_STATS_H_
+#define ADF_C4XXX_PKE_REPLAY_STATS_H_
+
+#include "adf_accel_devices.h"
+
+int adf_pke_replay_counters_add_c4xxx(struct adf_accel_dev *accel_dev);
+void adf_pke_replay_counters_remove_c4xxx(struct adf_accel_dev *accel_dev);
+
+#endif
Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_pke_replay_stats.c
===================================================================
--- /dev/null
+++ sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_pke_replay_stats.c
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+#include "adf_c4xxx_hw_data.h"
+#include "adf_c4xxx_pke_replay_stats.h"
+#include "adf_common_drv.h"
+#include "icp_qat_fw_init_admin.h"
+#include
+#include
+
+#define PKE_REPLAY_DBG_FILE "pke_replay_stats"
+#define LINE \
+	"+-----------------------------------------------------------------+\n"
+#define BANNER \
+	"| PKE Replay Statistics for Qat Device |\n"
+
+static int qat_pke_replay_counters_show(SYSCTL_HANDLER_ARGS)
+{
+	struct sbuf sb;
+	struct adf_accel_dev *accel_dev = arg1;
+	int ret = 0;
+	u64 suc_counter = 0;
+	u64 unsuc_counter = 0;
+
+	sbuf_new_for_sysctl(&sb, NULL, 256, req);
+
+	sbuf_printf(&sb, "\n");
+	sbuf_printf(&sb, LINE);
+
+	ret = adf_get_fw_pke_stats(accel_dev, &suc_counter, &unsuc_counter);
+	if (ret) {
+		/* Release the sbuf before bailing out on a firmware error. */
+		sbuf_delete(&sb);
+		return ret;
+	}
+
+	sbuf_printf(
+	    &sb,
+	    "| Successful Replays: %40llu |\n| Unsuccessful Replays: %40llu |\n",
+	    (unsigned long long)suc_counter,
+	    (unsigned long long)unsuc_counter);
+
+	sbuf_finish(&sb);
+	SYSCTL_OUT(req, sbuf_data(&sb), sbuf_len(&sb));
+	sbuf_delete(&sb);
+
+	return 0;
+}
+
+/**
+ * adf_pke_replay_counters_add_c4xxx() - Create debugfs entry for
+ * acceleration device PKE replay counters.
+ * @accel_dev: Pointer to acceleration device.
+ *
+ * Return: 0 on success, error code otherwise.
+ */
+int
+adf_pke_replay_counters_add_c4xxx(struct adf_accel_dev *accel_dev)
+{
+	struct sysctl_ctx_list *qat_sysctl_ctx = NULL;
+	struct sysctl_oid *qat_sysctl_tree = NULL;
+	struct sysctl_oid *pke_rep_file = NULL;
+
+	qat_sysctl_ctx =
+	    device_get_sysctl_ctx(accel_dev->accel_pci_dev.pci_dev);
+	qat_sysctl_tree =
+	    device_get_sysctl_tree(accel_dev->accel_pci_dev.pci_dev);
+
+	pke_rep_file = SYSCTL_ADD_PROC(qat_sysctl_ctx,
+				       SYSCTL_CHILDREN(qat_sysctl_tree),
+				       OID_AUTO,
+				       PKE_REPLAY_DBG_FILE,
+				       CTLTYPE_STRING | CTLFLAG_RD,
+				       accel_dev,
+				       0,
+				       qat_pke_replay_counters_show,
+				       "A",
+				       "QAT PKE Replay Statistics");
+	accel_dev->pke_replay_dbgfile = pke_rep_file;
+	if (!accel_dev->pke_replay_dbgfile) {
+		device_printf(
+		    GET_DEV(accel_dev),
+		    "Failed to create qat pke replay debugfs entry.\n");
+		return ENOENT;
+	}
+	return 0;
+}
+
+/**
+ * adf_pke_replay_counters_remove_c4xxx() - Remove debugfs entry for
+ * acceleration device PKE replay counters.
+ * @accel_dev: Pointer to acceleration device.
+ * + * Return: void + */ +void +adf_pke_replay_counters_remove_c4xxx(struct adf_accel_dev *accel_dev) +{ + if (accel_dev->pke_replay_dbgfile) { + remove_oid(accel_dev, accel_dev->pke_replay_dbgfile); + accel_dev->pke_replay_dbgfile = NULL; + } +} Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_ras.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_ras.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_RAS_H +#define ADF_RAS_H + +#include + +#define ADF_RAS_CORR 0 +#define ADF_RAS_UNCORR 1 +#define ADF_RAS_FATAL 2 +#define ADF_RAS_ERRORS 3 + +struct adf_accel_dev; + +int adf_init_ras(struct adf_accel_dev *accel_dev); +void adf_exit_ras(struct adf_accel_dev *accel_dev); +bool adf_ras_interrupts(struct adf_accel_dev *accel_dev, bool *reset_required); + +#endif /* ADF_RAS_H */ Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_ras.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_ras.c @@ -0,0 +1,1345 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_c4xxx_ras.h" +#include "adf_accel_devices.h" +#include "adf_c4xxx_hw_data.h" +#include +#include "adf_c4xxx_inline.h" + +#define ADF_RAS_STR_LEN 64 + +static int adf_sysctl_read_ras_correctable(SYSCTL_HANDLER_ARGS) +{ + struct adf_accel_dev *accel_dev = arg1; + unsigned long counter = 0; + + if (accel_dev->ras_counters) + counter = atomic_read(&accel_dev->ras_counters[ADF_RAS_CORR]); + + return SYSCTL_OUT(req, &counter, sizeof(counter)); +} + +static int adf_sysctl_read_ras_uncorrectable(SYSCTL_HANDLER_ARGS) +{ + struct adf_accel_dev *accel_dev = arg1; + unsigned long counter = 0; + + if (accel_dev->ras_counters) + counter = atomic_read(&accel_dev->ras_counters[ADF_RAS_UNCORR]); + + return SYSCTL_OUT(req, 
&counter, sizeof(counter));
+}
+
+static int adf_sysctl_read_ras_fatal(SYSCTL_HANDLER_ARGS)
+{
+	struct adf_accel_dev *accel_dev = arg1;
+	unsigned long counter = 0;
+
+	if (accel_dev->ras_counters)
+		counter = atomic_read(&accel_dev->ras_counters[ADF_RAS_FATAL]);
+
+	return SYSCTL_OUT(req, &counter, sizeof(counter));
+}
+
+static int adf_sysctl_write_ras_reset(SYSCTL_HANDLER_ARGS)
+{
+	struct adf_accel_dev *accel_dev = arg1;
+	int value = 0;
+	int i;
+	int ret = SYSCTL_IN(req, &value, sizeof(value));
+
+	if (!ret && value != 0 && accel_dev->ras_counters) {
+		/* A non-zero write resets all RAS error counters. */
+		for (i = 0; i < ADF_RAS_ERRORS; ++i)
+			atomic_set(&accel_dev->ras_counters[i], 0);
+	}
+
+	return SYSCTL_OUT(req, &value, sizeof(value));
+}
+
+int
+adf_init_ras(struct adf_accel_dev *accel_dev)
+{
+	struct sysctl_ctx_list *qat_sysctl_ctx;
+	struct sysctl_oid *qat_sysctl_tree;
+	struct sysctl_oid *ras_corr;
+	struct sysctl_oid *ras_uncor;
+	struct sysctl_oid *ras_fat;
+	struct sysctl_oid *ras_res;
+	int i;
+
+	accel_dev->ras_counters = kcalloc(ADF_RAS_ERRORS,
+					  sizeof(*accel_dev->ras_counters),
+					  GFP_KERNEL);
+	if (!accel_dev->ras_counters)
+		return -ENOMEM;
+
+	for (i = 0; i < ADF_RAS_ERRORS; ++i)
+		atomic_set(&accel_dev->ras_counters[i], 0);
+
+	qat_sysctl_ctx =
+	    device_get_sysctl_ctx(accel_dev->accel_pci_dev.pci_dev);
+	qat_sysctl_tree =
+	    device_get_sysctl_tree(accel_dev->accel_pci_dev.pci_dev);
+	ras_corr = SYSCTL_ADD_OID(qat_sysctl_ctx,
+				  SYSCTL_CHILDREN(qat_sysctl_tree),
+				  OID_AUTO,
+				  "ras_correctable",
+				  CTLTYPE_ULONG | CTLFLAG_RD | CTLFLAG_DYN,
+				  accel_dev,
+				  0,
+				  adf_sysctl_read_ras_correctable,
+				  "LU",
+				  "QAT RAS correctable");
+	accel_dev->ras_correctable = ras_corr;
+	if (!accel_dev->ras_correctable) {
+		device_printf(GET_DEV(accel_dev),
+			      "Failed to register ras_correctable sysctl\n");
+		return -EINVAL;
+	}
+	ras_uncor = SYSCTL_ADD_OID(qat_sysctl_ctx,
+				   SYSCTL_CHILDREN(qat_sysctl_tree),
+				   OID_AUTO,
+				   "ras_uncorrectable",
+				   CTLTYPE_ULONG | CTLFLAG_RD | CTLFLAG_DYN,
+				   accel_dev,
+				   0,
+				   adf_sysctl_read_ras_uncorrectable,
+				   "LU",
+				   "QAT RAS uncorrectable");
+	accel_dev->ras_uncorrectable = ras_uncor;
+	if (!accel_dev->ras_uncorrectable)
{ + device_printf(GET_DEV(accel_dev), + "Failed to register ras_uncorrectable sysctl\n"); + return -EINVAL; + } + + ras_fat = SYSCTL_ADD_OID(qat_sysctl_ctx, + SYSCTL_CHILDREN(qat_sysctl_tree), + OID_AUTO, + "ras_fatal", + CTLTYPE_ULONG | CTLFLAG_RD | CTLFLAG_DYN, + accel_dev, + 0, + adf_sysctl_read_ras_fatal, + "LU", + "QAT RAS fatal"); + accel_dev->ras_fatal = ras_fat; + if (!accel_dev->ras_fatal) { + device_printf(GET_DEV(accel_dev), + "Failed to register ras_fatal sysctl\n"); + return -EINVAL; + } + + ras_res = SYSCTL_ADD_OID(qat_sysctl_ctx, + SYSCTL_CHILDREN(qat_sysctl_tree), + OID_AUTO, + "ras_reset", + CTLTYPE_INT | CTLFLAG_RW | CTLFLAG_DYN, + accel_dev, + 0, + adf_sysctl_write_ras_reset, + "I", + "QAT RAS reset"); + accel_dev->ras_reset = ras_res; + if (!accel_dev->ras_reset) { + device_printf(GET_DEV(accel_dev), + "Failed to register ras_reset sysctl\n"); + return -EINVAL; + } + + return 0; +} + +void +adf_exit_ras(struct adf_accel_dev *accel_dev) +{ + if (accel_dev->ras_counters) { + remove_oid(accel_dev, accel_dev->ras_correctable); + remove_oid(accel_dev, accel_dev->ras_uncorrectable); + remove_oid(accel_dev, accel_dev->ras_fatal); + remove_oid(accel_dev, accel_dev->ras_reset); + + accel_dev->ras_correctable = NULL; + accel_dev->ras_uncorrectable = NULL; + accel_dev->ras_fatal = NULL; + accel_dev->ras_reset = NULL; + + kfree(accel_dev->ras_counters); + accel_dev->ras_counters = NULL; + } +} + +static inline void +adf_log_source_iastatssm(struct adf_accel_dev *accel_dev, + struct resource *pmisc, + u32 iastatssm, + u32 accel_num) +{ + if (iastatssm & ADF_C4XXX_IASTATSSM_UERRSSMSH_MASK) + device_printf( + GET_DEV(accel_dev), + "Uncorrectable error shared memory detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_CERRSSMSH_MASK) + device_printf( + GET_DEV(accel_dev), + "Correctable error shared memory detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_UERRSSMMMP0_MASK) + device_printf( + 
GET_DEV(accel_dev), + "Uncorrectable error MMP0 detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_CERRSSMMMP0_MASK) + device_printf(GET_DEV(accel_dev), + "Correctable error MMP0 detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_UERRSSMMMP1_MASK) + device_printf( + GET_DEV(accel_dev), + "Uncorrectable error MMP1 detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_CERRSSMMMP1_MASK) + device_printf(GET_DEV(accel_dev), + "Correctable error MMP1 detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_UERRSSMMMP2_MASK) + device_printf( + GET_DEV(accel_dev), + "Uncorrectable error MMP2 detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_CERRSSMMMP2_MASK) + device_printf(GET_DEV(accel_dev), + "Correctable error MMP2 detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_UERRSSMMMP3_MASK) + device_printf( + GET_DEV(accel_dev), + "Uncorrectable error MMP3 detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_CERRSSMMMP3_MASK) + device_printf(GET_DEV(accel_dev), + "Correctable error MMP3 detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_UERRSSMMMP4_MASK) + device_printf( + GET_DEV(accel_dev), + "Uncorrectable error MMP4 detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_CERRSSMMMP4_MASK) + device_printf(GET_DEV(accel_dev), + "Correctable error MMP4 detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_PPERR_MASK) + device_printf( + GET_DEV(accel_dev), + "Uncorrectable error Push or Pull detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_CPPPAR_ERR_MASK) + device_printf( + GET_DEV(accel_dev), + "Uncorrectable CPP parity error detected in accel: %u\n", + accel_num); + + if (iastatssm & ADF_C4XXX_IASTATSSM_RFPAR_ERR_MASK) + device_printf( + GET_DEV(accel_dev), + "Uncorrectable 
SSM RF parity error detected in accel: %u\n", + accel_num); +} + +static inline void +adf_clear_source_statssm(struct adf_accel_dev *accel_dev, + struct resource *pmisc, + u32 statssm, + u32 accel_num) +{ + if (statssm & ADF_C4XXX_IASTATSSM_UERRSSMSH_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_UERRSSMSH(accel_num), + ADF_C4XXX_UERRSSMSH_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_CERRSSMSH_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_CERRSSMSH(accel_num), + ADF_C4XXX_CERRSSMSH_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_UERRSSMMMP0_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_UERRSSMMMP(accel_num, 0), + ~ADF_C4XXX_UERRSSMMMP_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_CERRSSMMMP0_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_CERRSSMMMP(accel_num, 0), + ~ADF_C4XXX_CERRSSMMMP_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_UERRSSMMMP1_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_UERRSSMMMP(accel_num, 1), + ~ADF_C4XXX_UERRSSMMMP_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_CERRSSMMMP1_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_CERRSSMMMP(accel_num, 1), + ~ADF_C4XXX_CERRSSMMMP_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_UERRSSMMMP2_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_UERRSSMMMP(accel_num, 2), + ~ADF_C4XXX_UERRSSMMMP_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_CERRSSMMMP2_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_CERRSSMMMP(accel_num, 2), + ~ADF_C4XXX_CERRSSMMMP_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_UERRSSMMMP3_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_UERRSSMMMP(accel_num, 3), + ~ADF_C4XXX_UERRSSMMMP_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_CERRSSMMMP3_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_CERRSSMMMP(accel_num, 3), + ~ADF_C4XXX_CERRSSMMMP_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_UERRSSMMMP4_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_UERRSSMMMP(accel_num, 4), + 
~ADF_C4XXX_UERRSSMMMP_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_CERRSSMMMP4_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_CERRSSMMMP(accel_num, 4), + ~ADF_C4XXX_CERRSSMMMP_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_PPERR_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_PPERR(accel_num), + ~ADF_C4XXX_PPERR_INTS_CLEAR_MASK); + + if (statssm & ADF_C4XXX_IASTATSSM_RFPAR_ERR_MASK) + adf_csr_fetch_and_or(pmisc, + ADF_C4XXX_SSMSOFTERRORPARITY(accel_num), + 0UL); + + if (statssm & ADF_C4XXX_IASTATSSM_CPPPAR_ERR_MASK) + adf_csr_fetch_and_or(pmisc, + ADF_C4XXX_SSMCPPERR(accel_num), + 0UL); +} + +static inline void +adf_process_errsou8(struct adf_accel_dev *accel_dev, struct resource *pmisc) +{ + int i; + u32 mecorrerr = ADF_CSR_RD(pmisc, ADF_C4XXX_HI_ME_COR_ERRLOG); + const unsigned long tmp_mecorrerr = mecorrerr; + + /* For each correctable error in ME increment RAS counter */ + for_each_set_bit(i, + &tmp_mecorrerr, + ADF_C4XXX_HI_ME_COR_ERRLOG_SIZE_IN_BITS) + { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_CORR]); + device_printf(GET_DEV(accel_dev), + "Correctable error detected in AE%d\n", + i); + } + + /* Clear interrupt from errsou8 (RW1C) */ + ADF_CSR_WR(pmisc, ADF_C4XXX_HI_ME_COR_ERRLOG, mecorrerr); +} + +static inline void +adf_handle_ae_uncorr_err(struct adf_accel_dev *accel_dev, + struct resource *pmisc) +{ + int i; + u32 me_uncorr_err = ADF_CSR_RD(pmisc, ADF_C4XXX_HI_ME_UNCERR_LOG); + const unsigned long tmp_me_uncorr_err = me_uncorr_err; + + /* For each uncorrectable fatal error in AE increment RAS error + * counter. 
+ */ + for_each_set_bit(i, + &tmp_me_uncorr_err, + ADF_C4XXX_HI_ME_UNCOR_ERRLOG_BITS) + { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_FATAL]); + device_printf(GET_DEV(accel_dev), + "Uncorrectable error detected in AE%d\n", + i); + } + + /* Clear interrupt from me_uncorr_err (RW1C) */ + ADF_CSR_WR(pmisc, ADF_C4XXX_HI_ME_UNCERR_LOG, me_uncorr_err); +} + +static inline void +adf_handle_ri_mem_par_err(struct adf_accel_dev *accel_dev, + struct resource *pmisc, + bool *reset_required) +{ + u32 ri_mem_par_err_sts = 0; + u32 ri_mem_par_err_ferr = 0; + + ri_mem_par_err_sts = ADF_CSR_RD(pmisc, ADF_C4XXX_RI_MEM_PAR_ERR_STS); + + ri_mem_par_err_ferr = ADF_CSR_RD(pmisc, ADF_C4XXX_RI_MEM_PAR_ERR_FERR); + + if (ri_mem_par_err_sts & ADF_C4XXX_RI_MEM_PAR_ERR_STS_MASK) { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]); + device_printf( + GET_DEV(accel_dev), + "Uncorrectable RI memory parity error detected.\n"); + } + + if (ri_mem_par_err_sts & ADF_C4XXX_RI_MEM_MSIX_TBL_INT_MASK) { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_FATAL]); + device_printf( + GET_DEV(accel_dev), + "Uncorrectable fatal MSIX table parity error detected.\n"); + *reset_required = true; + } + + device_printf(GET_DEV(accel_dev), + "ri_mem_par_err_sts=0x%X\tri_mem_par_err_ferr=%u\n", + ri_mem_par_err_sts, + ri_mem_par_err_ferr); + + ADF_CSR_WR(pmisc, ADF_C4XXX_RI_MEM_PAR_ERR_STS, ri_mem_par_err_sts); +} + +static inline void +adf_handle_ti_mem_par_err(struct adf_accel_dev *accel_dev, + struct resource *pmisc) +{ + u32 ti_mem_par_err_sts0 = 0; + u32 ti_mem_par_err_sts1 = 0; + u32 ti_mem_par_err_ferr = 0; + + ti_mem_par_err_sts0 = ADF_CSR_RD(pmisc, ADF_C4XXX_TI_MEM_PAR_ERR_STS0); + ti_mem_par_err_sts1 = ADF_CSR_RD(pmisc, ADF_C4XXX_TI_MEM_PAR_ERR_STS1); + ti_mem_par_err_ferr = + ADF_CSR_RD(pmisc, ADF_C4XXX_TI_MEM_PAR_ERR_FIRST_ERROR); + + atomic_inc(&accel_dev->ras_counters[ADF_RAS_FATAL]); + ti_mem_par_err_sts1 &= ADF_C4XXX_TI_MEM_PAR_ERR_STS1_MASK; + + device_printf(GET_DEV(accel_dev), + "Uncorrectable 
TI memory parity error detected.\n"); + device_printf(GET_DEV(accel_dev), + "ti_mem_par_err_sts0=0x%X\tti_mem_par_err_sts1=0x%X\t" + "ti_mem_par_err_ferr=0x%X\n", + ti_mem_par_err_sts0, + ti_mem_par_err_sts1, + ti_mem_par_err_ferr); + + ADF_CSR_WR(pmisc, ADF_C4XXX_TI_MEM_PAR_ERR_STS0, ti_mem_par_err_sts0); + ADF_CSR_WR(pmisc, ADF_C4XXX_TI_MEM_PAR_ERR_STS1, ti_mem_par_err_sts1); +} + +static inline void +adf_log_fatal_cmd_par_err(struct adf_accel_dev *accel_dev, char *err_type) +{ + atomic_inc(&accel_dev->ras_counters[ADF_RAS_FATAL]); + device_printf(GET_DEV(accel_dev), + "Fatal error detected: %s command parity\n", + err_type); +} + +static inline void +adf_handle_host_cpp_par_err(struct adf_accel_dev *accel_dev, + struct resource *pmisc) +{ + u32 host_cpp_par_err = 0; + + host_cpp_par_err = + ADF_CSR_RD(pmisc, ADF_C4XXX_HI_CPP_AGENT_CMD_PAR_ERR_LOG); + + if (host_cpp_par_err & ADF_C4XXX_TI_CMD_PAR_ERR) + adf_log_fatal_cmd_par_err(accel_dev, "TI"); + + if (host_cpp_par_err & ADF_C4XXX_RI_CMD_PAR_ERR) + adf_log_fatal_cmd_par_err(accel_dev, "RI"); + + if (host_cpp_par_err & ADF_C4XXX_ICI_CMD_PAR_ERR) + adf_log_fatal_cmd_par_err(accel_dev, "ICI"); + + if (host_cpp_par_err & ADF_C4XXX_ICE_CMD_PAR_ERR) + adf_log_fatal_cmd_par_err(accel_dev, "ICE"); + + if (host_cpp_par_err & ADF_C4XXX_ARAM_CMD_PAR_ERR) + adf_log_fatal_cmd_par_err(accel_dev, "ARAM"); + + if (host_cpp_par_err & ADF_C4XXX_CFC_CMD_PAR_ERR) + adf_log_fatal_cmd_par_err(accel_dev, "CFC"); + + if (ADF_C4XXX_SSM_CMD_PAR_ERR(host_cpp_par_err)) + adf_log_fatal_cmd_par_err(accel_dev, "SSM"); + + /* Clear interrupt from host_cpp_par_err (RW1C) */ + ADF_CSR_WR(pmisc, + ADF_C4XXX_HI_CPP_AGENT_CMD_PAR_ERR_LOG, + host_cpp_par_err); +} + +static inline void +adf_process_errsou9(struct adf_accel_dev *accel_dev, + struct resource *pmisc, + u32 errsou, + bool *reset_required) +{ + if (errsou & ADF_C4XXX_ME_UNCORR_ERROR) { + adf_handle_ae_uncorr_err(accel_dev, pmisc); + + /* Notify caller that function level reset is 
required. */ + *reset_required = true; + } + + if (errsou & ADF_C4XXX_CPP_CMD_PAR_ERR) { + adf_handle_host_cpp_par_err(accel_dev, pmisc); + *reset_required = true; + } + + /* RI memory parity errors are uncorrectable non-fatal errors + * with exception of bit 22 MSIX table parity error, which should + * be treated as fatal error, followed by device restart. + */ + if (errsou & ADF_C4XXX_RI_MEM_PAR_ERR) + adf_handle_ri_mem_par_err(accel_dev, pmisc, reset_required); + + if (errsou & ADF_C4XXX_TI_MEM_PAR_ERR) { + adf_handle_ti_mem_par_err(accel_dev, pmisc); + *reset_required = true; + } +} + +static inline void +adf_process_exprpssmcpr(struct adf_accel_dev *accel_dev, + struct resource *pmisc, + u32 accel) +{ + u32 exprpssmcpr; + + /* CPR0 */ + exprpssmcpr = ADF_CSR_RD(pmisc, ADF_C4XXX_EXPRPSSMCPR0(accel)); + if (exprpssmcpr & ADF_C4XXX_EXPRPSSM_FATAL_MASK) { + device_printf(GET_DEV(accel_dev), + "Uncorrectable error CPR0 detected in accel %u\n", + accel); + atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]); + } + if (exprpssmcpr & ADF_C4XXX_EXPRPSSM_SOFT_MASK) { + device_printf(GET_DEV(accel_dev), + "Correctable error CPR0 detected in accel %u\n", + accel); + atomic_inc(&accel_dev->ras_counters[ADF_RAS_CORR]); + } + ADF_CSR_WR(pmisc, ADF_C4XXX_EXPRPSSMCPR0(accel), 0); + + /* CPR1 */ + exprpssmcpr = ADF_CSR_RD(pmisc, ADF_C4XXX_EXPRPSSMCPR1(accel)); + if (exprpssmcpr & ADF_C4XXX_EXPRPSSM_FATAL_MASK) { + device_printf(GET_DEV(accel_dev), + "Uncorrectable error CPR1 detected in accel %u\n", + accel); + atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]); + } + if (exprpssmcpr & ADF_C4XXX_EXPRPSSM_SOFT_MASK) { + device_printf(GET_DEV(accel_dev), + "Correctable error CPR1 detected in accel %u\n", + accel); + atomic_inc(&accel_dev->ras_counters[ADF_RAS_CORR]); + } + ADF_CSR_WR(pmisc, ADF_C4XXX_EXPRPSSMCPR1(accel), 0); +} + +static inline void +adf_process_exprpssmxlt(struct adf_accel_dev *accel_dev, + struct resource *pmisc, + u32 accel) +{ + u32 exprpssmxlt; + + /* 
XLT0 */
+	exprpssmxlt = ADF_CSR_RD(pmisc, ADF_C4XXX_EXPRPSSMXLT0(accel));
+	if (exprpssmxlt & ADF_C4XXX_EXPRPSSM_FATAL_MASK) {
+		device_printf(GET_DEV(accel_dev),
+			      "Uncorrectable error XLT0 detected in accel %u\n",
+			      accel);
+		atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]);
+	}
+	if (exprpssmxlt & ADF_C4XXX_EXPRPSSM_SOFT_MASK) {
+		device_printf(GET_DEV(accel_dev),
+			      "Correctable error XLT0 detected in accel %u\n",
+			      accel);
+		atomic_inc(&accel_dev->ras_counters[ADF_RAS_CORR]);
+	}
+	ADF_CSR_WR(pmisc, ADF_C4XXX_EXPRPSSMXLT0(accel), 0);
+
+	/* XLT1 */
+	exprpssmxlt = ADF_CSR_RD(pmisc, ADF_C4XXX_EXPRPSSMXLT1(accel));
+	if (exprpssmxlt & ADF_C4XXX_EXPRPSSM_FATAL_MASK) {
+		device_printf(GET_DEV(accel_dev),
+			      "Uncorrectable error XLT1 detected in accel %u\n",
+			      accel);
+		atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]);
+	}
+	if (exprpssmxlt & ADF_C4XXX_EXPRPSSM_SOFT_MASK) {
+		device_printf(GET_DEV(accel_dev),
+			      "Correctable error XLT1 detected in accel %u\n",
+			      accel);
+		atomic_inc(&accel_dev->ras_counters[ADF_RAS_CORR]);
+	}
+	/* Clear XLT1 status here, not XLT0 again. */
+	ADF_CSR_WR(pmisc, ADF_C4XXX_EXPRPSSMXLT1(accel), 0);
+}
+
+static inline void
+adf_process_spp_par_err(struct adf_accel_dev *accel_dev,
+			struct resource *pmisc,
+			u32 accel,
+			bool *reset_required)
+{
+	/* All SPP parity errors are treated as uncorrectable fatal errors */
+	atomic_inc(&accel_dev->ras_counters[ADF_RAS_FATAL]);
+	*reset_required = true;
+	device_printf(GET_DEV(accel_dev),
+		      "Uncorrectable fatal SPP parity error detected\n");
+}
+
+static inline void
+adf_process_statssm(struct adf_accel_dev *accel_dev,
+		    struct resource *pmisc,
+		    u32 accel,
+		    bool *reset_required)
+{
+	u32 i;
+	u32 statssm = ADF_CSR_RD(pmisc, ADF_INTSTATSSM(accel));
+	u32 iastatssm = ADF_CSR_RD(pmisc, ADF_C4XXX_IAINTSTATSSM(accel));
+	bool type;
+	const unsigned long tmp_iastatssm = iastatssm;
+
+	/* First collect all errors */
+	for_each_set_bit(i, &tmp_iastatssm, ADF_C4XXX_IASTATSSM_BITS)
+	{
+		if (i == ADF_C4XXX_IASTATSSM_SLICE_HANG_ERR_BIT) {
+			/*
Slice Hang error is being handled in + * separate function adf_check_slice_hang_c4xxx(), + * which also increments RAS counters for + * SliceHang error. + */ + continue; + } + if (i == ADF_C4XXX_IASTATSSM_SPP_PAR_ERR_BIT) { + adf_process_spp_par_err(accel_dev, + pmisc, + accel, + reset_required); + continue; + } + + type = (i % 2) ? ADF_RAS_CORR : ADF_RAS_UNCORR; + if (i == ADF_C4XXX_IASTATSSM_CPP_PAR_ERR_BIT) + type = ADF_RAS_UNCORR; + + atomic_inc(&accel_dev->ras_counters[type]); + } + + /* If iastatssm is set, we need to log the error */ + if (iastatssm & ADF_C4XXX_IASTATSSM_MASK) + adf_log_source_iastatssm(accel_dev, pmisc, iastatssm, accel); + /* If statssm is set, we need to clear the error sources */ + if (statssm & ADF_C4XXX_IASTATSSM_MASK) + adf_clear_source_statssm(accel_dev, pmisc, statssm, accel); + /* Clear the iastatssm after clearing error sources */ + if (iastatssm & ADF_C4XXX_IASTATSSM_MASK) + adf_csr_fetch_and_and(pmisc, + ADF_C4XXX_IAINTSTATSSM(accel), + ADF_C4XXX_IASTATSSM_CLR_MASK); +} + +static inline void +adf_process_errsou10(struct adf_accel_dev *accel_dev, + struct resource *pmisc, + u32 errsou, + u32 num_accels, + bool *reset_required) +{ + int accel; + const unsigned long tmp_errsou = errsou; + + for_each_set_bit(accel, &tmp_errsou, num_accels) + { + adf_process_statssm(accel_dev, pmisc, accel, reset_required); + adf_process_exprpssmcpr(accel_dev, pmisc, accel); + adf_process_exprpssmxlt(accel_dev, pmisc, accel); + } +} + +/* ERRSOU 11 */ +static inline void +adf_handle_ti_misc_err(struct adf_accel_dev *accel_dev, struct resource *pmisc) +{ + u32 ti_misc_sts = 0; + u32 err_type = 0; + + ti_misc_sts = ADF_CSR_RD(pmisc, ADF_C4XXX_TI_MISC_STS); + dev_dbg(GET_DEV(accel_dev), "ti_misc_sts = 0x%X\n", ti_misc_sts); + + if (ti_misc_sts & ADF_C4XXX_TI_MISC_ERR_MASK) { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]); + + /* If TI misc error occurred then check its type */ + err_type = ADF_C4XXX_GET_TI_MISC_ERR_TYPE(ti_misc_sts); + if 
(err_type == ADF_C4XXX_TI_BME_RESP_ORDER_ERR) { + device_printf( + GET_DEV(accel_dev), + "Uncorrectable non-fatal BME response order error.\n"); + + } else if (err_type == ADF_C4XXX_TI_RESP_ORDER_ERR) { + device_printf( + GET_DEV(accel_dev), + "Uncorrectable non-fatal response order error.\n"); + } + + /* Clear the interrupt and allow the next error to be + * logged. + */ + ADF_CSR_WR(pmisc, ADF_C4XXX_TI_MISC_STS, BIT(0)); + } +} + +static inline void +adf_handle_ri_push_pull_par_err(struct adf_accel_dev *accel_dev, + struct resource *pmisc) +{ + u32 ri_cpp_int_sts = 0; + u32 err_clear_mask = 0; + + ri_cpp_int_sts = ADF_CSR_RD(pmisc, ADF_C4XXX_RI_CPP_INT_STS); + dev_dbg(GET_DEV(accel_dev), "ri_cpp_int_sts = 0x%X\n", ri_cpp_int_sts); + + if (ri_cpp_int_sts & ADF_C4XXX_RI_CPP_INT_STS_PUSH_ERR) { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]); + device_printf( + GET_DEV(accel_dev), + "CPP%d: Uncorrectable non-fatal RI push error detected.\n", + ADF_C4XXX_GET_CPP_BUS_FROM_STS(ri_cpp_int_sts)); + + err_clear_mask |= ADF_C4XXX_RI_CPP_INT_STS_PUSH_ERR; + } + + if (ri_cpp_int_sts & ADF_C4XXX_RI_CPP_INT_STS_PULL_ERR) { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]); + device_printf( + GET_DEV(accel_dev), + "CPP%d: Uncorrectable non-fatal RI pull error detected.\n", + ADF_C4XXX_GET_CPP_BUS_FROM_STS(ri_cpp_int_sts)); + + err_clear_mask |= ADF_C4XXX_RI_CPP_INT_STS_PULL_ERR; + } + + /* Clear the interrupt for handled errors and allow the next error + * to be logged. 
+ */ + ADF_CSR_WR(pmisc, ADF_C4XXX_RI_CPP_INT_STS, err_clear_mask); +} + +static inline void +adf_handle_ti_push_pull_par_err(struct adf_accel_dev *accel_dev, + struct resource *pmisc) +{ + u32 ti_cpp_int_sts = 0; + u32 err_clear_mask = 0; + + ti_cpp_int_sts = ADF_CSR_RD(pmisc, ADF_C4XXX_TI_CPP_INT_STS); + dev_dbg(GET_DEV(accel_dev), "ti_cpp_int_sts = 0x%X\n", ti_cpp_int_sts); + + if (ti_cpp_int_sts & ADF_C4XXX_TI_CPP_INT_STS_PUSH_ERR) { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]); + device_printf( + GET_DEV(accel_dev), + "CPP%d: Uncorrectable non-fatal TI push error detected.\n", + ADF_C4XXX_GET_CPP_BUS_FROM_STS(ti_cpp_int_sts)); + + err_clear_mask |= ADF_C4XXX_TI_CPP_INT_STS_PUSH_ERR; + } + + if (ti_cpp_int_sts & ADF_C4XXX_TI_CPP_INT_STS_PULL_ERR) { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]); + device_printf( + GET_DEV(accel_dev), + "CPP%d: Uncorrectable non-fatal TI pull error detected.\n", + ADF_C4XXX_GET_CPP_BUS_FROM_STS(ti_cpp_int_sts)); + + err_clear_mask |= ADF_C4XXX_TI_CPP_INT_STS_PULL_ERR; + } + + /* Clear the interrupt for handled errors and allow the next error + * to be logged. + */ + ADF_CSR_WR(pmisc, ADF_C4XXX_TI_CPP_INT_STS, err_clear_mask); +} + +static inline void +adf_handle_aram_corr_err(struct adf_accel_dev *accel_dev, + struct resource *aram_base_addr) +{ + u32 aram_cerr = 0; + + aram_cerr = ADF_CSR_RD(aram_base_addr, ADF_C4XXX_ARAMCERR); + dev_dbg(GET_DEV(accel_dev), "aram_cerr = 0x%X\n", aram_cerr); + + if (aram_cerr & ADF_C4XXX_ARAM_CORR_ERR_MASK) { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_CORR]); + device_printf(GET_DEV(accel_dev), + "Correctable ARAM error detected.\n"); + } + + /* Clear correctable ARAM error interrupt. 
*/ + ADF_C4XXX_CLEAR_CSR_BIT(aram_cerr, 0); + ADF_CSR_WR(aram_base_addr, ADF_C4XXX_ARAMCERR, aram_cerr); +} + +static inline void +adf_handle_aram_uncorr_err(struct adf_accel_dev *accel_dev, + struct resource *aram_base_addr) +{ + u32 aram_uerr = 0; + + aram_uerr = ADF_CSR_RD(aram_base_addr, ADF_C4XXX_ARAMUERR); + dev_dbg(GET_DEV(accel_dev), "aram_uerr = 0x%X\n", aram_uerr); + + if (aram_uerr & ADF_C4XXX_ARAM_UNCORR_ERR_MASK) { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]); + device_printf(GET_DEV(accel_dev), + "Uncorrectable non-fatal ARAM error detected.\n"); + } + + /* Clear uncorrectable ARAM error interrupt. */ + ADF_C4XXX_CLEAR_CSR_BIT(aram_uerr, 0); + ADF_CSR_WR(aram_base_addr, ADF_C4XXX_ARAMUERR, aram_uerr); +} + +static inline void +adf_handle_ti_pull_par_err(struct adf_accel_dev *accel_dev, + struct resource *pmisc) +{ + u32 ti_cpp_int_sts = 0; + + ti_cpp_int_sts = ADF_CSR_RD(pmisc, ADF_C4XXX_TI_CPP_INT_STS); + dev_dbg(GET_DEV(accel_dev), "ti_cpp_int_sts = 0x%X\n", ti_cpp_int_sts); + + if (ti_cpp_int_sts & ADF_C4XXX_TI_CPP_INT_STS_PUSH_DATA_PAR_ERR) { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]); + device_printf( + GET_DEV(accel_dev), + "CPP%d: Uncorrectable non-fatal TI pull data parity error detected.\n", + ADF_C4XXX_GET_CPP_BUS_FROM_STS(ti_cpp_int_sts)); + } + + /* Clear the interrupt and allow the next error to be logged. 
*/ + ADF_CSR_WR(pmisc, + ADF_C4XXX_TI_CPP_INT_STS, + ADF_C4XXX_TI_CPP_INT_STS_PUSH_DATA_PAR_ERR); +} + +static inline void +adf_handle_ri_push_par_err(struct adf_accel_dev *accel_dev, + struct resource *pmisc) +{ + u32 ri_cpp_int_sts = 0; + + ri_cpp_int_sts = ADF_CSR_RD(pmisc, ADF_C4XXX_RI_CPP_INT_STS); + dev_dbg(GET_DEV(accel_dev), "ri_cpp_int_sts = 0x%X\n", ri_cpp_int_sts); + + if (ri_cpp_int_sts & ADF_C4XXX_RI_CPP_INT_STS_PUSH_DATA_PAR_ERR) { + atomic_inc(&accel_dev->ras_counters[ADF_RAS_UNCORR]); + device_printf( + GET_DEV(accel_dev), + "CPP%d: Uncorrectable non-fatal RI push data parity error detected.\n", + ADF_C4XXX_GET_CPP_BUS_FROM_STS(ri_cpp_int_sts)); + } + + /* Clear the interrupt and allow the next error to be logged. */ + ADF_CSR_WR(pmisc, + ADF_C4XXX_RI_CPP_INT_STS, + ADF_C4XXX_RI_CPP_INT_STS_PUSH_DATA_PAR_ERR); +} + +static inline void +adf_log_inln_err(struct adf_accel_dev *accel_dev, + u32 offset, + u8 ras_type, + char *msg) +{ + if (ras_type >= ADF_RAS_ERRORS) { + device_printf(GET_DEV(accel_dev), + "Invalid ras type %u\n", + ras_type); + return; + } + + if (offset == ADF_C4XXX_INLINE_INGRESS_OFFSET) { + if (ras_type == ADF_RAS_CORR) + dev_dbg(GET_DEV(accel_dev), "Detect ici %s\n", msg); + else + device_printf(GET_DEV(accel_dev), + "Detect ici %s\n", + msg); + } else { + if (ras_type == ADF_RAS_CORR) + dev_dbg(GET_DEV(accel_dev), "Detect ice %s\n", msg); + else + device_printf(GET_DEV(accel_dev), + "Detect ice %s\n", + msg); + } + atomic_inc(&accel_dev->ras_counters[ras_type]); +} + +static inline void +adf_handle_parser_uerr(struct adf_accel_dev *accel_dev, + struct resource *aram_base_addr, + u32 offset, + bool *reset_required) +{ + u32 reg_val = 0; + + reg_val = ADF_CSR_RD(aram_base_addr, ADF_C4XXX_IC_PARSER_UERR + offset); + if (reg_val & ADF_C4XXX_PARSER_UERR_INTR) { + /* Mask inten */ + reg_val &= ~ADF_C4XXX_PARSER_DESC_UERR_INTR_ENA; + ADF_CSR_WR(aram_base_addr, + ADF_C4XXX_IC_PARSER_UERR + offset, + reg_val); + + /* Fatal error then 
increase RAS error counter + * and reset CPM + */ + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_FATAL, + "parser uncorr fatal err"); + *reset_required = true; + } +} + +static inline void +adf_handle_mac_intr(struct adf_accel_dev *accel_dev, + struct resource *aram_base_addr, + u32 offset, + bool *reset_required) +{ + u64 reg_val; + + reg_val = ADF_CSR_RD64(aram_base_addr, ADF_C4XXX_MAC_IP + offset); + + /* Handle the MAC interrupts masked out in MAC_IM */ + if (reg_val & ADF_C4XXX_MAC_ERROR_TX_UNDERRUN) + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_CORR, + "err tx underrun"); + + if (reg_val & ADF_C4XXX_MAC_ERROR_TX_FCS) + adf_log_inln_err(accel_dev, offset, ADF_RAS_CORR, "err tx fcs"); + + if (reg_val & ADF_C4XXX_MAC_ERROR_TX_DATA_CORRUPT) + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_CORR, + "err tx data corrupt"); + + if (reg_val & ADF_C4XXX_MAC_ERROR_RX_OVERRUN) { + *reset_required = true; + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_FATAL, + "err rx overrun fatal err"); + } + + if (reg_val & ADF_C4XXX_MAC_ERROR_RX_RUNT) { + *reset_required = true; + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_FATAL, + "err rx runt fatal err"); + } + + if (reg_val & ADF_C4XXX_MAC_ERROR_RX_UNDERSIZE) { + *reset_required = true; + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_FATAL, + "err rx undersize fatal err"); + } + + if (reg_val & ADF_C4XXX_MAC_ERROR_RX_JABBER) { + *reset_required = true; + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_FATAL, + "err rx jabber fatal err"); + } + + if (reg_val & ADF_C4XXX_MAC_ERROR_RX_OVERSIZE) { + *reset_required = true; + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_FATAL, + "err rx oversize fatal err"); + } + + if (reg_val & ADF_C4XXX_MAC_ERROR_RX_FCS) + adf_log_inln_err(accel_dev, offset, ADF_RAS_CORR, "err rx fcs"); + + if (reg_val & ADF_C4XXX_MAC_ERROR_RX_FRAME) + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_CORR, + "err rx frame"); + + if (reg_val & ADF_C4XXX_MAC_ERROR_RX_CODE) + 
adf_log_inln_err(accel_dev, + offset, + ADF_RAS_CORR, + "err rx code"); + + if (reg_val & ADF_C4XXX_MAC_ERROR_RX_PREAMBLE) + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_CORR, + "err rx preamble"); + + if (reg_val & ADF_C4XXX_MAC_RX_LINK_UP) + adf_log_inln_err(accel_dev, offset, ADF_RAS_CORR, "rx link up"); + + if (reg_val & ADF_C4XXX_MAC_INVALID_SPEED) + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_CORR, + "invalid speed"); + + if (reg_val & ADF_C4XXX_MAC_PIA_RX_FIFO_OVERRUN) { + *reset_required = true; + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_FATAL, + "pia rx fifo overrun fatal err"); + } + + if (reg_val & ADF_C4XXX_MAC_PIA_TX_FIFO_OVERRUN) { + *reset_required = true; + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_FATAL, + "pia tx fifo overrun fatal err"); + } + + if (reg_val & ADF_C4XXX_MAC_PIA_TX_FIFO_UNDERRUN) { + *reset_required = true; + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_FATAL, + "pia tx fifo underrun fatal err"); + } + + /* Clear the interrupt and allow the next error to be logged. */ + ADF_CSR_WR64(aram_base_addr, ADF_C4XXX_MAC_IP + offset, reg_val); +} + +static inline bool +adf_handle_rf_par_err(struct adf_accel_dev *accel_dev, + struct resource *aram_base_addr, + u32 rf_par_addr, + u32 rf_par_msk, + u32 offset, + char *msg) +{ + u32 reg_val; + unsigned long intr_status; + int i; + char strbuf[ADF_C4XXX_MAX_STR_LEN]; + + /* Handle rf parity error */ + reg_val = ADF_CSR_RD(aram_base_addr, rf_par_addr + offset); + intr_status = reg_val & rf_par_msk; + if (intr_status) { + for_each_set_bit(i, &intr_status, ADF_C4XXX_RF_PAR_ERR_BITS) + { + if (i % 2 == 0) + snprintf(strbuf, + sizeof(strbuf), + "%s mul par %u uncorr fatal err", + msg, + RF_PAR_MUL_MAP(i)); + + else + snprintf(strbuf, + sizeof(strbuf), + "%s par %u uncorr fatal err", + msg, + RF_PAR_MAP(i)); + + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_FATAL, + strbuf); + } + + /* Clear the interrupt and allow the next error to be logged. 
*/ + ADF_CSR_WR(aram_base_addr, rf_par_addr + offset, reg_val); + return true; + } + return false; +} + +static inline void +adf_handle_cd_rf_par_err(struct adf_accel_dev *accel_dev, + struct resource *aram_base_addr, + u32 offset, + bool *reset_required) +{ + /* Handle reg_cd_rf_parity_err[1] */ + *reset_required |= + adf_handle_rf_par_err(accel_dev, + aram_base_addr, + ADF_C4XXX_IC_CD_RF_PARITY_ERR_1, + ADF_C4XXX_CD_RF_PAR_ERR_1_INTR, + offset, + "cd rf par[1]:") ? + true : + false; +} + +static inline void +adf_handle_inln_rf_par_err(struct adf_accel_dev *accel_dev, + struct resource *aram_base_addr, + u32 offset, + bool *reset_required) +{ + /* Handle reg_inln_rf_parity_err[0] */ + *reset_required |= + adf_handle_rf_par_err(accel_dev, + aram_base_addr, + ADF_C4XXX_IC_INLN_RF_PARITY_ERR_0, + ADF_C4XXX_INLN_RF_PAR_ERR_0_INTR, + offset, + "inln rf par[0]:") ? + true : + false; + + /* Handle reg_inln_rf_parity_err[1] */ + *reset_required |= + adf_handle_rf_par_err(accel_dev, + aram_base_addr, + ADF_C4XXX_IC_INLN_RF_PARITY_ERR_1, + ADF_C4XXX_INLN_RF_PAR_ERR_1_INTR, + offset, + "inln rf par[1]:") ? + true : + false; + + /* Handle reg_inln_rf_parity_err[2] */ + *reset_required |= + adf_handle_rf_par_err(accel_dev, + aram_base_addr, + ADF_C4XXX_IC_INLN_RF_PARITY_ERR_2, + ADF_C4XXX_INLN_RF_PAR_ERR_2_INTR, + offset, + "inln rf par[2]:") ? + true : + false; + + /* Handle reg_inln_rf_parity_err[5] */ + *reset_required |= + adf_handle_rf_par_err(accel_dev, + aram_base_addr, + ADF_C4XXX_IC_INLN_RF_PARITY_ERR_5, + ADF_C4XXX_INLN_RF_PAR_ERR_5_INTR, + offset, + "inln rf par[5]:") ? 
+ true : + false; +} + +static inline void +adf_handle_congest_mngt_intr(struct adf_accel_dev *accel_dev, + struct resource *aram_base_addr, + u32 offset, + bool *reset_required) +{ + u32 reg_val; + + reg_val = ADF_CSR_RD(aram_base_addr, + ADF_C4XXX_IC_CONGESTION_MGMT_INT + offset); + + /* Indicates a mis-configuration of the CPM, a mis-configuration of the + * Ethernet Complex, or that the traffic profile has deviated from that + * for which the resources were configured. + */ + if (reg_val & ADF_C4XXX_CONGESTION_MGMT_CTPB_GLOBAL_CROSSED) { + adf_log_inln_err( + accel_dev, + offset, + ADF_RAS_FATAL, + "congestion mgmt ctpb global crossed fatal err"); + *reset_required = true; + } + + if (reg_val & ADF_C4XXX_CONGESTION_MGMT_XOFF_CIRQ_OUT) { + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_CORR, + "congestion mgmt XOFF cirq out err"); + } + + if (reg_val & ADF_C4XXX_CONGESTION_MGMT_XOFF_CIRQ_IN) { + adf_log_inln_err(accel_dev, + offset, + ADF_RAS_CORR, + "congestion mgmt XOFF cirq in err"); + } + + /* Clear the interrupt and allow the next error to be logged */ + ADF_CSR_WR(aram_base_addr, + ADF_C4XXX_IC_CONGESTION_MGMT_INT + offset, + reg_val); +} + +static inline void +adf_handle_inline_intr(struct adf_accel_dev *accel_dev, + struct resource *aram_base_addr, + u32 csr_offset, + bool *reset_required) +{ + adf_handle_cd_rf_par_err(accel_dev, + aram_base_addr, + csr_offset, + reset_required); + + adf_handle_parser_uerr(accel_dev, + aram_base_addr, + csr_offset, + reset_required); + + adf_handle_inln_rf_par_err(accel_dev, + aram_base_addr, + csr_offset, + reset_required); + + adf_handle_congest_mngt_intr(accel_dev, + aram_base_addr, + csr_offset, + reset_required); + + adf_handle_mac_intr(accel_dev, + aram_base_addr, + csr_offset, + reset_required); +} + +static inline void +adf_process_errsou11(struct adf_accel_dev *accel_dev, + struct resource *pmisc, + u32 errsou, + bool *reset_required) +{ + struct resource *aram_base_addr = + 
(&GET_BARS(accel_dev)[ADF_C4XXX_SRAM_BAR])->virt_addr; + + if (errsou & ADF_C4XXX_TI_MISC) + adf_handle_ti_misc_err(accel_dev, pmisc); + + if (errsou & ADF_C4XXX_RI_PUSH_PULL_PAR_ERR) + adf_handle_ri_push_pull_par_err(accel_dev, pmisc); + + if (errsou & ADF_C4XXX_TI_PUSH_PULL_PAR_ERR) + adf_handle_ti_push_pull_par_err(accel_dev, pmisc); + + if (errsou & ADF_C4XXX_ARAM_CORR_ERR) + adf_handle_aram_corr_err(accel_dev, aram_base_addr); + + if (errsou & ADF_C4XXX_ARAM_UNCORR_ERR) + adf_handle_aram_uncorr_err(accel_dev, aram_base_addr); + + if (errsou & ADF_C4XXX_TI_PULL_PAR_ERR) + adf_handle_ti_pull_par_err(accel_dev, pmisc); + + if (errsou & ADF_C4XXX_RI_PUSH_PAR_ERR) + adf_handle_ri_push_par_err(accel_dev, pmisc); + + if (errsou & ADF_C4XXX_INLINE_INGRESS_INTR) + adf_handle_inline_intr(accel_dev, + aram_base_addr, + ADF_C4XXX_INLINE_INGRESS_OFFSET, + reset_required); + + if (errsou & ADF_C4XXX_INLINE_EGRESS_INTR) + adf_handle_inline_intr(accel_dev, + aram_base_addr, + ADF_C4XXX_INLINE_EGRESS_OFFSET, + reset_required); +} + +bool +adf_ras_interrupts(struct adf_accel_dev *accel_dev, bool *reset_required) +{ + u32 errsou = 0; + bool handled = false; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 num_accels = hw_data->get_num_accels(hw_data); + struct resource *pmisc = + (&GET_BARS(accel_dev)[ADF_C4XXX_PMISC_BAR])->virt_addr; + + if (unlikely(!reset_required)) { + device_printf(GET_DEV(accel_dev), + "Invalid pointer reset_required\n"); + return false; + } + + /* errsou8 */ + errsou = ADF_CSR_RD(pmisc, ADF_C4XXX_ERRSOU8); + if (errsou & ADF_C4XXX_ERRSOU8_MECORR_MASK) { + adf_process_errsou8(accel_dev, pmisc); + handled = true; + } + + /* errsou9 */ + errsou = ADF_CSR_RD(pmisc, ADF_C4XXX_ERRSOU9); + if (errsou & ADF_C4XXX_ERRSOU9_ERROR_MASK) { + adf_process_errsou9(accel_dev, pmisc, errsou, reset_required); + handled = true; + } + + /* errsou10 */ + errsou = ADF_CSR_RD(pmisc, ADF_C4XXX_ERRSOU10); + if (errsou & ADF_C4XXX_ERRSOU10_RAS_MASK) { + 
adf_process_errsou10( + accel_dev, pmisc, errsou, num_accels, reset_required); + handled = true; + } + + /* errsou11 */ + errsou = ADF_CSR_RD(pmisc, ADF_C4XXX_ERRSOU11); + if (errsou & ADF_C4XXX_ERRSOU11_ERROR_MASK) { + adf_process_errsou11(accel_dev, pmisc, errsou, reset_required); + handled = true; + } + + return handled; +} Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_res_part.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_res_part.c @@ -0,0 +1,195 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "adf_cfg_common.h" +#include "adf_transport_internal.h" +#include "icp_qat_hw.h" +#include "adf_c4xxx_hw_data.h" + +#define ADF_C4XXX_PARTTITION_SHIFT 8 +#define ADF_C4XXX_PARTITION(svc, ring) \ + ((svc) << ((ring)*ADF_C4XXX_PARTTITION_SHIFT)) + +static void +adf_get_partitions_mask(struct adf_accel_dev *accel_dev, u32 *partitions_mask) +{ + device_t dev = accel_to_pci_dev(accel_dev); + u32 enabled_partitions_msk = 0; + u8 ring_pair = 0; + enum adf_cfg_service_type serv_type = 0; + u16 ring_to_svc_map = accel_dev->hw_device->ring_to_svc_map; + + for (ring_pair = 0; ring_pair < ADF_CFG_NUM_SERVICES; ring_pair++) { + serv_type = GET_SRV_TYPE(ring_to_svc_map, ring_pair); + switch (serv_type) { + case CRYPTO: { + enabled_partitions_msk |= + ADF_C4XXX_PARTITION(ADF_C4XXX_PART_ASYM, + ring_pair++); + if (ring_pair < ADF_CFG_NUM_SERVICES) + enabled_partitions_msk |= + ADF_C4XXX_PARTITION(ADF_C4XXX_PART_SYM, + ring_pair); + else + device_printf( + dev, "Failed to enable SYM partition.\n"); + break; + } + case COMP: + enabled_partitions_msk |= + ADF_C4XXX_PARTITION(ADF_C4XXX_PART_DC, ring_pair); + break; + case SYM: + enabled_partitions_msk |= + ADF_C4XXX_PARTITION(ADF_C4XXX_PART_SYM, ring_pair); + break; + case ASYM: + enabled_partitions_msk |= + 
ADF_C4XXX_PARTITION(ADF_C4XXX_PART_ASYM, ring_pair); + break; + default: + enabled_partitions_msk |= + ADF_C4XXX_PARTITION(ADF_C4XXX_PART_UNUSED, + ring_pair); + break; + } + } + *partitions_mask = enabled_partitions_msk; +} + +static void +adf_enable_sym_threads(struct adf_accel_dev *accel_dev, u32 ae, u32 partition) +{ + struct resource *csr = accel_dev->transport->banks[0].csr_addr; + const struct adf_ae_info *ae_info = accel_dev->au_info->ae_info; + u32 num_sym_thds = ae_info[ae].num_sym_thd; + u32 i; + u32 part_group = partition / ADF_C4XXX_PARTS_PER_GRP; + u32 wkrthd2_partmap = part_group << ADF_C4XXX_PARTS_PER_GRP | + (BIT(partition % ADF_C4XXX_PARTS_PER_GRP)); + + for (i = 0; i < num_sym_thds; i++) + WRITE_CSR_WQM(csr, + ADF_C4XXX_WRKTHD2PARTMAP, + (ae * ADF_NUM_THREADS_PER_AE + i), + wkrthd2_partmap); +} + +static void +adf_enable_asym_threads(struct adf_accel_dev *accel_dev, u32 ae, u32 partition) +{ + struct resource *csr = accel_dev->transport->banks[0].csr_addr; + const struct adf_ae_info *ae_info = accel_dev->au_info->ae_info; + u32 num_asym_thds = ae_info[ae].num_asym_thd; + u32 i; + u32 part_group = partition / ADF_C4XXX_PARTS_PER_GRP; + u32 wkrthd2_partmap = part_group << ADF_C4XXX_PARTS_PER_GRP | + (BIT(partition % ADF_C4XXX_PARTS_PER_GRP)); + /* For asymmetric cryptography SKU we have one thread less */ + u32 num_all_thds = ADF_NUM_THREADS_PER_AE - 2; + + for (i = num_all_thds; i > (num_all_thds - num_asym_thds); i--) + WRITE_CSR_WQM(csr, + ADF_C4XXX_WRKTHD2PARTMAP, + (ae * ADF_NUM_THREADS_PER_AE + i), + wkrthd2_partmap); +} + +static void +adf_enable_dc_threads(struct adf_accel_dev *accel_dev, u32 ae, u32 partition) +{ + struct resource *csr = accel_dev->transport->banks[0].csr_addr; + const struct adf_ae_info *ae_info = accel_dev->au_info->ae_info; + u32 num_dc_thds = ae_info[ae].num_dc_thd; + u32 i; + u32 part_group = partition / ADF_C4XXX_PARTS_PER_GRP; + u32 wkrthd2_partmap = part_group << ADF_C4XXX_PARTS_PER_GRP | + (BIT(partition % 
ADF_C4XXX_PARTS_PER_GRP)); + + for (i = 0; i < num_dc_thds; i++) + WRITE_CSR_WQM(csr, + ADF_C4XXX_WRKTHD2PARTMAP, + (ae * ADF_NUM_THREADS_PER_AE + i), + wkrthd2_partmap); +} + +/* Initialise Resource partitioning. + * Initialise a default set of 4 partitions to arbitrate + * request rings per bundle. + */ +int +adf_init_arb_c4xxx(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct resource *csr = accel_dev->transport->banks[0].csr_addr; + struct adf_accel_unit_info *au_info = accel_dev->au_info; + u32 i; + unsigned long ae_mask; + u32 partitions_mask = 0; + + /* invoke common adf_init_arb */ + adf_init_arb(accel_dev); + + adf_get_partitions_mask(accel_dev, &partitions_mask); + for (i = 0; i < hw_data->num_banks; i++) + WRITE_CSR_WQM(csr, + ADF_C4XXX_PARTITION_LUT_OFFSET, + i, + partitions_mask); + + ae_mask = hw_data->ae_mask; + + /* Assigning default partitions to accel engine + * worker threads + */ + for_each_set_bit(i, &ae_mask, ADF_C4XXX_MAX_ACCELENGINES) + { + if (BIT(i) & au_info->sym_ae_msk) + adf_enable_sym_threads(accel_dev, + i, + ADF_C4XXX_PART_SYM); + if (BIT(i) & au_info->asym_ae_msk) + adf_enable_asym_threads(accel_dev, + i, + ADF_C4XXX_PART_ASYM); + if (BIT(i) & au_info->dc_ae_msk) + adf_enable_dc_threads(accel_dev, i, ADF_C4XXX_PART_DC); + } + + return 0; +} + +/* Disable the resource partitioning feature + * and restore the default partitioning scheme + */ +void +adf_exit_arb_c4xxx(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct resource *csr; + u32 i; + unsigned long ae_mask; + + if (!accel_dev->transport) + return; + csr = accel_dev->transport->banks[0].csr_addr; + + /* Restore the default partitionLUT registers */ + for (i = 0; i < hw_data->num_banks; i++) + WRITE_CSR_WQM(csr, + ADF_C4XXX_PARTITION_LUT_OFFSET, + i, + ADF_C4XXX_DEFAULT_PARTITIONS); + + ae_mask = hw_data->ae_mask; + + /* Reset worker thread to partition mapping */ + for 
(i = 0; i < hw_data->num_engines * ADF_NUM_THREADS_PER_AE; i++) { + if (!test_bit((u32)(i / ADF_NUM_THREADS_PER_AE), &ae_mask)) + continue; + + WRITE_CSR_WQM(csr, ADF_C4XXX_WRKTHD2PARTMAP, i, 0); + } +} Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_reset.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_reset.h @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_C4XXX_RESET_H_ +#define ADF_C4XXX_RESET_H_ + +#include +#include +#include "adf_c4xxx_hw_data.h" + +/* IA2IOSFSB register definitions */ +#define ADF_C4XXX_IA2IOSFSB_PORTCMD (0x60000 + 0x1320) +#define ADF_C4XXX_IA2IOSFSB_LOADD (0x60000 + 0x1324) +#define ADF_C4XXX_IA2IOSFSB_HIADD (0x60000 + 0x1328) +#define ADF_C4XXX_IA2IOSFSB_DATA(index) ((index)*0x4 + 0x60000 + 0x132C) +#define ADF_C4XXX_IA2IOSFSB_KHOLE (0x60000 + 0x136C) +#define ADF_C4XXX_IA2IOSFSB_STATUS (0x60000 + 0x1370) + +/* IOSF-SB Port command definitions */ +/* Ethernet controller Port ID */ +#define ADF_C4XXX_ETH_PORT_ID 0x61 +/* Byte enable */ +#define ADF_C4XXX_PORTD_CMD_BE 0xFF +/* Non posted; Only non-posted commands are used */ +#define ADF_C4XXX_PORTD_CMD_NP 0x1 +/* Number of DWORDs to transfer */ +#define ADF_C4XXX_PORTD_CMD_LENDW 0x2 +/* Extended header always used */ +#define ADF_C4XXX_PORTD_CMD_EH 0x1 +/* Address length */ +#define ADF_C4XXX_PORTD_CMD_ALEN 0x0 +/* Message opcode: Private Register Write Non-Posted or Posted Message*/ +#define ADF_C4XXX_MOPCODE 0x07 + +/* Compute port command based on port ID */ +#define ADF_C4XXX_GET_PORT_CMD(port_id) \ + ((((port_id)&0xFF) << 24) | (ADF_C4XXX_PORTD_CMD_BE << 16) | \ + (ADF_C4XXX_PORTD_CMD_NP << 15) | (ADF_C4XXX_PORTD_CMD_LENDW << 10) | \ + (ADF_C4XXX_PORTD_CMD_EH << 9) | (ADF_C4XXX_PORTD_CMD_ALEN << 8) | \ + (ADF_C4XXX_MOPCODE)) + +/* Pending reset event/ack message over IOSF-SB */ +#define ADF_C4XXX_IOSFSB_RESET_EVENT 
BIT(0) +#define ADF_C4XXX_IOSFSB_RESET_ACK BIT(7) + +/* Upon an FLR, the PCI_EXP_AERUCS register must be read and we must make sure + * that no other bit is set except: + * - UR (Unsupported request) bit<20> + * - IEUNC (Uncorrectable Internal Error) bit<22> + */ +#define PCIE_C4XXX_VALID_ERR_MASK (~BIT(20) ^ BIT(22)) + +/* Trigger: trigger an IOSF SB message */ +#define ADF_C4XXX_IOSFSB_TRIGGER BIT(0) + +/* IOSF-SB status definitions */ +/* Response status bits<1:0> definitions + * 00 = Successful + * 01 = Unsuccessful + * 10 = Powered down + * 11 = Multicast + */ +#define ADF_C4XXX_IA2IOSFSB_STATUS_RTS (BIT(0) | BIT(1)) +#define ADF_C4XXX_IA2IOSFSB_STATUS_PEND BIT(6) +/* Allow 100ms polling interval */ +#define ADF_C4XXX_IA2IOSFSB_POLLING_INTERVAL 100 +/* Allow a maximum of 500ms before timing out */ +#define ADF_C4XXX_IA2IOSFSB_POLLING_COUNT 5 + +/* Ethernet notification polling interval */ +#define ADF_C4XXX_MAX_ETH_ACK_ATTEMPT 100 +#define ADF_C4XXX_ETH_ACK_POLLING_INTERVAL 10 + +void adf_c4xxx_dev_restore(struct adf_accel_dev *accel_dev); +void notify_and_wait_ethernet(struct adf_accel_dev *accel_dev); +#endif Index: sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_reset.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c4xxx/adf_c4xxx_reset.c @@ -0,0 +1,94 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include +#include "adf_c4xxx_reset.h" + +static void +adf_check_uncorr_status(struct adf_accel_dev *accel_dev) +{ + u32 uncorr_err; + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + + uncorr_err = pci_read_config(pdev, PCI_EXP_AERUCS, 4); + if (uncorr_err & PCIE_C4XXX_VALID_ERR_MASK) { + device_printf(GET_DEV(accel_dev), + "Uncorrectable error occurred during reset\n"); + device_printf(GET_DEV(accel_dev), + "Error code value: 0x%04x\n", + uncorr_err); + } +} + +static void +adf_c4xxx_dev_reset(struct adf_accel_dev *accel_dev) +{ + 
device_t pdev = accel_dev->accel_pci_dev.pci_dev; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + u8 count = 0; + uintptr_t device_id1; + uintptr_t device_id2; + + /* Read device ID before triggering reset */ + device_id1 = pci_read_config(pdev, PCIR_DEVICE, 2); + hw_device->reset_device(accel_dev); + + /* Wait for reset to complete */ + do { + /* Ensure we have the configuration space restored */ + device_id2 = pci_read_config(pdev, PCIR_DEVICE, 2); + if (device_id1 == device_id2) { + /* Check if a PCIe uncorrectable error occurred + * during the reset + */ + adf_check_uncorr_status(accel_dev); + return; + } + count++; + pause_ms("adfstop", 100); + } while (count < ADF_PCIE_FLR_ATTEMPT); + device_printf(GET_DEV(accel_dev), + "Too many attempts to read back config space.\n"); +} + +void +adf_c4xxx_dev_restore(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 pmisclbar1; + u32 pmisclbar2; + u32 pmiscubar1; + u32 pmiscubar2; + + if (hw_device->reset_device) { + device_printf(GET_DEV(accel_dev), + "Resetting device qat_dev%d\n", + accel_dev->accel_id); + + /* Read pmiscubar and pmisclbar */ + pmisclbar1 = pci_read_config(pdev, ADF_PMISC_L_OFFSET, 4); + pmiscubar1 = pci_read_config(pdev, ADF_PMISC_U_OFFSET, 4); + + adf_c4xxx_dev_reset(accel_dev); + pci_restore_state(pdev); + + /* Read pmiscubar and pmisclbar */ + pmisclbar2 = pci_read_config(pdev, ADF_PMISC_L_OFFSET, 4); + pmiscubar2 = pci_read_config(pdev, ADF_PMISC_U_OFFSET, 4); + + /* Check if restore operation has completed successfully */ + if (pmisclbar1 != pmisclbar2 || pmiscubar1 != pmiscubar2) { + device_printf( + GET_DEV(accel_dev), + "Failed to restore device configuration\n"); + return; + } + pci_save_state(pdev); + } + + if (hw_device->post_reset) { + dev_dbg(GET_DEV(accel_dev), "Performing post reset restore\n"); + hw_device->post_reset(accel_dev); + } +} Index: 
sys/dev/qat/qat_hw/qat_c4xxx/adf_drv.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c4xxx/adf_drv.c @@ -0,0 +1,268 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "adf_c4xxx_hw_data.h" +#include "adf_fw_counters.h" +#include "adf_cfg_device.h" +#include +#include +#include +#include +#include +#include "adf_heartbeat_dbg.h" +#include "adf_cnvnr_freq_counters.h" + +static MALLOC_DEFINE(M_QAT_C4XXX, "qat_c4xx", "qat_c4xx"); + +#define ADF_SYSTEM_DEVICE(device_id) \ + { \ + PCI_VENDOR_ID_INTEL, device_id \ + } + +static const struct pci_device_id adf_pci_tbl[] = + { ADF_SYSTEM_DEVICE(ADF_C4XXX_PCI_DEVICE_ID), + { + 0, + } }; + +static int +adf_probe(device_t dev) +{ + const struct pci_device_id *id; + + for (id = adf_pci_tbl; id->vendor != 0; id++) { + if (pci_get_vendor(dev) == id->vendor && + pci_get_device(dev) == id->device) { + device_set_desc(dev, + "Intel " ADF_C4XXX_DEVICE_NAME + " QuickAssist"); + return BUS_PROBE_GENERIC; + } + } + return ENXIO; +} + +static void +adf_cleanup_accel(struct adf_accel_dev *accel_dev) +{ + struct adf_accel_pci *accel_pci_dev = &accel_dev->accel_pci_dev; + int i; + + if (accel_dev->dma_tag) + bus_dma_tag_destroy(accel_dev->dma_tag); + for (i = 0; i < ADF_PCI_MAX_BARS; i++) { + struct adf_bar *bar = &accel_pci_dev->pci_bars[i]; + + if (bar->virt_addr) + bus_free_resource(accel_pci_dev->pci_dev, + SYS_RES_MEMORY, + bar->virt_addr); + } + + if (accel_dev->hw_device) { + switch (pci_get_device(accel_pci_dev->pci_dev)) { + case ADF_C4XXX_PCI_DEVICE_ID: + adf_clean_hw_data_c4xxx(accel_dev->hw_device); + break; + default: + break; + } + free(accel_dev->hw_device, M_QAT_C4XXX); + accel_dev->hw_device = NULL; + } + adf_cfg_dev_remove(accel_dev); + adf_devmgr_rm_dev(accel_dev, NULL); +} + +static 
int +adf_attach(device_t dev) +{ + struct adf_accel_dev *accel_dev; + struct adf_accel_pci *accel_pci_dev; + struct adf_hw_device_data *hw_data; + unsigned int i, bar_nr; + int ret, rid; + struct adf_cfg_device *cfg_dev = NULL; + + /* Set pci MaxPayLoad to 256. Implemented to avoid the issue of + * Pci-passthrough causing Maxpayload to be reset to 128 bytes + * when the device is reset. + */ + if (pci_get_max_payload(dev) != 256) + pci_set_max_payload(dev, 256); + + accel_dev = device_get_softc(dev); + + INIT_LIST_HEAD(&accel_dev->crypto_list); + accel_pci_dev = &accel_dev->accel_pci_dev; + accel_pci_dev->pci_dev = dev; + + if (bus_get_domain(dev, &accel_pci_dev->node) != 0) + accel_pci_dev->node = 0; + + /* XXX: Revisit if we actually need a devmgr table at all. */ + + /* Add accel device to accel table. + * This should be called before adf_cleanup_accel is called + */ + if (adf_devmgr_add_dev(accel_dev, NULL)) { + device_printf(dev, "Failed to add new accelerator device.\n"); + return ENXIO; + } + + /* Allocate and configure device configuration structure */ + hw_data = malloc(sizeof(*hw_data), M_QAT_C4XXX, M_WAITOK | M_ZERO); + + accel_dev->hw_device = hw_data; + adf_init_hw_data_c4xxx(accel_dev->hw_device); + accel_pci_dev->revid = pci_get_revid(dev); + hw_data->fuses = pci_read_config(dev, ADF_DEVICE_FUSECTL_OFFSET, 4); + + /* Get PPAERUCM values and store */ + ret = adf_aer_store_ppaerucm_reg(dev, hw_data); + if (ret) + goto out_err; + + /* Get Accelerators and Accelerators Engines masks */ + hw_data->accel_mask = hw_data->get_accel_mask(accel_dev); + hw_data->ae_mask = hw_data->get_ae_mask(accel_dev); + + /* If the device has no acceleration engines then ignore it. 
*/ + if (!hw_data->accel_mask || !hw_data->ae_mask || + (~hw_data->ae_mask & 0x01)) { + device_printf(dev, "No acceleration units found\n"); + ret = ENXIO; + goto out_err; + } + + /* Create device configuration table */ + ret = adf_cfg_dev_add(accel_dev); + if (ret) + goto out_err; + + ret = adf_clock_debugfs_add(accel_dev); + if (ret) + goto out_err; + + pci_set_max_read_req(dev, 1024); + + ret = bus_dma_tag_create(bus_get_dma_tag(dev), + 1, + 0, + BUS_SPACE_MAXADDR, + BUS_SPACE_MAXADDR, + NULL, + NULL, + BUS_SPACE_MAXSIZE, + /*BUS_SPACE_UNRESTRICTED*/ 1, + BUS_SPACE_MAXSIZE, + 0, + NULL, + NULL, + &accel_dev->dma_tag); + if (ret) + goto out_err; + + if (hw_data->get_accel_cap) { + hw_data->accel_capabilities_mask = + hw_data->get_accel_cap(accel_dev); + } + + accel_pci_dev->sku = hw_data->get_sku(hw_data); + + /* Find and map all the device's BARS */ + i = 0; + for (bar_nr = 0; i < ADF_PCI_MAX_BARS && bar_nr < PCIR_MAX_BAR_0; + bar_nr++) { + struct adf_bar *bar; + + /* + * XXX: This isn't quite right as it will ignore a BAR + * that wasn't assigned a valid resource range by the + * firmware. 
+ */ + rid = PCIR_BAR(bar_nr); + if (bus_get_resource(dev, SYS_RES_MEMORY, rid, NULL, NULL) != 0) + continue; + bar = &accel_pci_dev->pci_bars[i++]; + bar->virt_addr = bus_alloc_resource_any(dev, + SYS_RES_MEMORY, + &rid, + RF_ACTIVE); + if (!bar->virt_addr) { + device_printf(dev, "Failed to map BAR %d\n", bar_nr); + ret = ENXIO; + goto out_err; + } + bar->base_addr = rman_get_start(bar->virt_addr); + bar->size = rman_get_size(bar->virt_addr); + } + pci_enable_busmaster(dev); + + if (!accel_dev->hw_device->config_device) { + ret = EFAULT; + goto out_err; + } + + ret = accel_dev->hw_device->config_device(accel_dev); + if (ret) + goto out_err; + + ret = adf_dev_init(accel_dev); + if (ret) + goto out_dev_shutdown; + + ret = adf_dev_start(accel_dev); + if (ret) + goto out_dev_stop; + + cfg_dev = accel_dev->cfg->dev; + adf_cfg_device_clear(cfg_dev, accel_dev); + free(cfg_dev, M_QAT); + accel_dev->cfg->dev = NULL; + return ret; +out_dev_stop: + adf_dev_stop(accel_dev); +out_dev_shutdown: + adf_dev_shutdown(accel_dev); +out_err: + adf_cleanup_accel(accel_dev); + return ret; +} + +static int +adf_detach(device_t dev) +{ + struct adf_accel_dev *accel_dev = device_get_softc(dev); + + if (adf_dev_stop(accel_dev)) { + device_printf(dev, "Failed to stop QAT accel dev\n"); + return EBUSY; + } + + adf_dev_shutdown(accel_dev); + + adf_cleanup_accel(accel_dev); + + return 0; +} + +static device_method_t adf_methods[] = { DEVMETHOD(device_probe, adf_probe), + DEVMETHOD(device_attach, adf_attach), + DEVMETHOD(device_detach, adf_detach), + + DEVMETHOD_END }; + +static driver_t adf_driver = { "qat", + adf_methods, + sizeof(struct adf_accel_dev) }; + +DRIVER_MODULE_ORDERED(qat_c4xxx, pci, adf_driver, NULL, NULL, SI_ORDER_THIRD); +MODULE_VERSION(qat_c4xxx, 1); +MODULE_DEPEND(qat_c4xxx, qat_common, 1, 1, 1); +MODULE_DEPEND(qat_c4xxx, qat_api, 1, 1, 1); +MODULE_DEPEND(qat_c4xxx, linuxkpi, 1, 1, 1); Index: sys/dev/qat/qat_hw/qat_c62x/adf_c62x_hw_data.h 
=================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c62x/adf_c62x_hw_data.h @@ -0,0 +1,117 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_C62X_HW_DATA_H_ +#define ADF_C62X_HW_DATA_H_ + +/* PCIe configuration space */ +#define ADF_C62X_SRAM_BAR 0 +#define ADF_C62X_PMISC_BAR 1 +#define ADF_C62X_ETR_BAR 2 +#define ADF_C62X_RX_RINGS_OFFSET 8 +#define ADF_C62X_TX_RINGS_MASK 0xFF +#define ADF_C62X_MAX_ACCELERATORS 5 +#define ADF_C62X_MAX_ACCELENGINES 10 +#define ADF_C62X_ACCELERATORS_REG_OFFSET 16 +#define ADF_C62X_ACCELERATORS_MASK 0x1F +#define ADF_C62X_ACCELENGINES_MASK 0x3FF +#define ADF_C62X_ETR_MAX_BANKS 16 +#define ADF_C62X_SMIAPF0_MASK_OFFSET (0x3A000 + 0x28) +#define ADF_C62X_SMIAPF1_MASK_OFFSET (0x3A000 + 0x30) +#define ADF_C62X_SMIA0_MASK 0xFFFF +#define ADF_C62X_SMIA1_MASK 0x1 +#define ADF_C62X_SOFTSTRAP_CSR_OFFSET 0x2EC +#define ADF_C62X_POWERGATE_PKE BIT(24) +#define ADF_C62X_POWERGATE_DC BIT(23) + +/* Error detection and correction */ +#define ADF_C62X_AE_CTX_ENABLES(i) (i * 0x1000 + 0x20818) +#define ADF_C62X_AE_MISC_CONTROL(i) (i * 0x1000 + 0x20960) +#define ADF_C62X_ENABLE_AE_ECC_ERR BIT(28) +#define ADF_C62X_ENABLE_AE_ECC_PARITY_CORR (BIT(24) | BIT(12)) +#define ADF_C62X_UERRSSMSH(i) (i * 0x4000 + 0x18) +#define ADF_C62X_CERRSSMSH(i) (i * 0x4000 + 0x10) +#define ADF_C62X_ERRSOU3 (0x3A000 + 0x0C) +#define ADF_C62X_ERRSOU5 (0x3A000 + 0xD8) +#define ADF_C62X_ERRSSMSH_EN (BIT(3)) +/* BIT(2) enables the logging of push/pull data errors. */ +#define ADF_C62X_PPERR_EN (BIT(2)) + +/* Mask for VF2PF interrupts */ +#define ADF_C62X_VF2PF1_16 (0xFFFF << 9) +#define ADF_C62X_ERRSOU3_VF2PF(errsou3) (((errsou3)&0x01FFFE00) >> 9) +#define ADF_C62X_ERRMSK3_VF2PF(vf_mask) (((vf_mask)&0xFFFF) << 9) + +/* Masks for correctable error interrupts. 
*/ +#define ADF_C62X_ERRMSK0_CERR (BIT(24) | BIT(16) | BIT(8) | BIT(0)) +#define ADF_C62X_ERRMSK1_CERR (BIT(24) | BIT(16) | BIT(8) | BIT(0)) +#define ADF_C62X_ERRMSK3_CERR (BIT(7)) +#define ADF_C62X_ERRMSK4_CERR (BIT(8) | BIT(0)) +#define ADF_C62X_ERRMSK5_CERR (0) + +/* Masks for uncorrectable error interrupts. */ +#define ADF_C62X_ERRMSK0_UERR (BIT(25) | BIT(17) | BIT(9) | BIT(1)) +#define ADF_C62X_ERRMSK1_UERR (BIT(25) | BIT(17) | BIT(9) | BIT(1)) +#define ADF_C62X_ERRMSK3_UERR \ + (BIT(8) | BIT(6) | BIT(5) | BIT(4) | BIT(3) | BIT(2) | BIT(0)) +#define ADF_C62X_ERRMSK4_UERR (BIT(9) | BIT(1)) +#define ADF_C62X_ERRMSK5_UERR (BIT(18) | BIT(17) | BIT(16)) + +/* RI CPP control */ +#define ADF_C62X_RICPPINTCTL (0x3A000 + 0x110) +/* + * BIT(2) enables error detection and reporting on the RI Parity Error. + * BIT(1) enables error detection and reporting on the RI CPP Pull interface. + * BIT(0) enables error detection and reporting on the RI CPP Push interface. + */ +#define ADF_C62X_RICPP_EN (BIT(2) | BIT(1) | BIT(0)) + +/* TI CPP control */ +#define ADF_C62X_TICPPINTCTL (0x3A400 + 0x138) +/* + * BIT(4) enables parity error detection and reporting on the Secure RAM. + * BIT(3) enables error detection and reporting on the ETR Parity Error. + * BIT(2) enables error detection and reporting on the TI Parity Error. + * BIT(1) enables error detection and reporting on the TI CPP Pull interface. + * BIT(0) enables error detection and reporting on the TI CPP Push interface. + */ +#define ADF_C62X_TICPP_EN (BIT(4) | BIT(3) | BIT(2) | BIT(1) | BIT(0)) + +/* CFC Uncorrectable Errors */ +#define ADF_C62X_CPP_CFC_ERR_CTRL (0x30000 + 0xC00) +/* + * BIT(1) enables interrupt. + * BIT(0) enables detecting and logging of push/pull data errors. 
+ */ +#define ADF_C62X_CPP_CFC_UE (BIT(1) | BIT(0)) + +#define ADF_C62X_PF2VF_OFFSET(i) (0x3A000 + 0x280 + ((i)*0x04)) +#define ADF_C62X_VINTMSK_OFFSET(i) (0x3A000 + 0x200 + ((i)*0x04)) + +/* Arbiter configuration */ +#define ADF_C62X_ARB_OFFSET 0x30000 +#define ADF_C62X_ARB_WRK_2_SER_MAP_OFFSET 0x180 +#define ADF_C62X_ARB_WQCFG_OFFSET 0x100 + +/* Admin Interface Reg Offset */ +#define ADF_C62X_ADMINMSGUR_OFFSET (0x3A000 + 0x574) +#define ADF_C62X_ADMINMSGLR_OFFSET (0x3A000 + 0x578) +#define ADF_C62X_MAILBOX_BASE_OFFSET 0x20970 + +/* Firmware Binary */ +#define ADF_C62X_FW "qat_c62x_fw" +#define ADF_C62X_MMP "qat_c62x_mmp_fw" + +void adf_init_hw_data_c62x(struct adf_hw_device_data *hw_data); +void adf_clean_hw_data_c62x(struct adf_hw_device_data *hw_data); + +#define ADF_C62X_AE_FREQ (685 * 1000000) + +#define ADF_C62X_MIN_AE_FREQ (533 * 1000000) +#define ADF_C62X_MAX_AE_FREQ (800 * 1000000) + +#define ADF_C62X_THREADS_ON_ENGINE 8 +#define ADF_C62X_MAX_SERVICES 4 +#define ADF_C62X_DEF_ASYM_MASK 0x03 + +#endif Index: sys/dev/qat/qat_hw/qat_c62x/adf_c62x_hw_data.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c62x/adf_c62x_hw_data.c @@ -0,0 +1,420 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include +#include +#include +#include +#include +#include "adf_c62x_hw_data.h" +#include "icp_qat_hw.h" +#include "adf_cfg.h" +#include "adf_heartbeat.h" + +/* Worker thread to service arbiter mappings */ +static const u32 thrd_to_arb_map[ADF_C62X_MAX_ACCELENGINES] = + { 0x12222AAA, 0x11222AAA, 0x12222AAA, 0x11222AAA, 0x12222AAA, + 0x11222AAA, 0x12222AAA, 0x11222AAA, 0x12222AAA, 0x11222AAA }; + +enum { DEV_C62X_SKU_1 = 0, DEV_C62X_SKU_2 = 1 }; + +static u32 thrd_to_arb_map_gen[ADF_C62X_MAX_ACCELENGINES] = { 0 }; + +static struct adf_hw_device_class c62x_class = {.name = ADF_C62X_DEVICE_NAME, + .type = DEV_C62X, + .instances = 0 }; + +static u32 
+get_accel_mask(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + + u32 fuse; + u32 straps; + + fuse = pci_read_config(pdev, ADF_DEVICE_FUSECTL_OFFSET, 4); + straps = pci_read_config(pdev, ADF_C62X_SOFTSTRAP_CSR_OFFSET, 4); + + return (~(fuse | straps)) >> ADF_C62X_ACCELERATORS_REG_OFFSET & + ADF_C62X_ACCELERATORS_MASK; +} + +static u32 +get_ae_mask(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 fuse; + u32 me_straps; + u32 me_disable; + u32 ssms_disabled; + + fuse = pci_read_config(pdev, ADF_DEVICE_FUSECTL_OFFSET, 4); + me_straps = pci_read_config(pdev, ADF_C62X_SOFTSTRAP_CSR_OFFSET, 4); + + /* If SSMs are disabled, then disable the corresponding MEs */ + ssms_disabled = + (~get_accel_mask(accel_dev)) & ADF_C62X_ACCELERATORS_MASK; + me_disable = 0x3; + while (ssms_disabled) { + if (ssms_disabled & 1) + me_straps |= me_disable; + ssms_disabled >>= 1; + me_disable <<= 2; + } + + return (~(fuse | me_straps)) & ADF_C62X_ACCELENGINES_MASK; +} + +static u32 +get_num_accels(struct adf_hw_device_data *self) +{ + u32 i, ctr = 0; + + if (!self || !self->accel_mask) + return 0; + + for (i = 0; i < ADF_C62X_MAX_ACCELERATORS; i++) { + if (self->accel_mask & (1 << i)) + ctr++; + } + return ctr; +} + +static u32 +get_num_aes(struct adf_hw_device_data *self) +{ + u32 i, ctr = 0; + + if (!self || !self->ae_mask) + return 0; + + for (i = 0; i < ADF_C62X_MAX_ACCELENGINES; i++) { + if (self->ae_mask & (1 << i)) + ctr++; + } + return ctr; +} + +static u32 +get_misc_bar_id(struct adf_hw_device_data *self) +{ + return ADF_C62X_PMISC_BAR; +} + +static u32 +get_etr_bar_id(struct adf_hw_device_data *self) +{ + return ADF_C62X_ETR_BAR; +} + +static u32 +get_sram_bar_id(struct adf_hw_device_data *self) +{ + return ADF_C62X_SRAM_BAR; +} + +static enum dev_sku_info +get_sku(struct adf_hw_device_data *self) +{ + int aes = get_num_aes(self); + + if (aes == 8) + return DEV_SKU_2; + else if (aes == 10) + return 
DEV_SKU_4; + + return DEV_SKU_UNKNOWN; +} + +static void +adf_get_arbiter_mapping(struct adf_accel_dev *accel_dev, + u32 const **arb_map_config) +{ + int i; + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + + for (i = 0; i < ADF_C62X_MAX_ACCELENGINES; i++) { + thrd_to_arb_map_gen[i] = 0; + if (hw_device->ae_mask & (1 << i)) + thrd_to_arb_map_gen[i] = thrd_to_arb_map[i]; + } + adf_cfg_gen_dispatch_arbiter(accel_dev, + thrd_to_arb_map, + thrd_to_arb_map_gen, + ADF_C62X_MAX_ACCELENGINES); + *arb_map_config = thrd_to_arb_map_gen; +} + +static u32 +get_pf2vf_offset(u32 i) +{ + return ADF_C62X_PF2VF_OFFSET(i); +} + +static u32 +get_vintmsk_offset(u32 i) +{ + return ADF_C62X_VINTMSK_OFFSET(i); +} + +static void +get_arb_info(struct arb_info *arb_csrs_info) +{ + arb_csrs_info->arbiter_offset = ADF_C62X_ARB_OFFSET; + arb_csrs_info->wrk_thd_2_srv_arb_map = + ADF_C62X_ARB_WRK_2_SER_MAP_OFFSET; + arb_csrs_info->wrk_cfg_offset = ADF_C62X_ARB_WQCFG_OFFSET; +} + +static void +get_admin_info(struct admin_info *admin_csrs_info) +{ + admin_csrs_info->mailbox_offset = ADF_C62X_MAILBOX_BASE_OFFSET; + admin_csrs_info->admin_msg_ur = ADF_C62X_ADMINMSGUR_OFFSET; + admin_csrs_info->admin_msg_lr = ADF_C62X_ADMINMSGLR_OFFSET; +} + +static void +get_errsou_offset(u32 *errsou3, u32 *errsou5) +{ + *errsou3 = ADF_C62X_ERRSOU3; + *errsou5 = ADF_C62X_ERRSOU5; +} + +static u32 +get_clock_speed(struct adf_hw_device_data *self) +{ + /* CPP clock is half high-speed clock */ + return self->clock_frequency / 2; +} + +static void +adf_enable_error_correction(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_C62X_PMISC_BAR]; + struct resource *csr = misc_bar->virt_addr; + unsigned int val, i; + unsigned int mask; + + /* Enable Accel Engine error detection & correction */ + mask = hw_device->ae_mask; + for (i = 0; mask; i++, mask >>= 1) { + if (!(mask & 1)) + continue; + val = 
ADF_CSR_RD(csr, ADF_C62X_AE_CTX_ENABLES(i)); + val |= ADF_C62X_ENABLE_AE_ECC_ERR; + ADF_CSR_WR(csr, ADF_C62X_AE_CTX_ENABLES(i), val); + val = ADF_CSR_RD(csr, ADF_C62X_AE_MISC_CONTROL(i)); + val |= ADF_C62X_ENABLE_AE_ECC_PARITY_CORR; + ADF_CSR_WR(csr, ADF_C62X_AE_MISC_CONTROL(i), val); + } + + /* Enable shared memory error detection & correction */ + mask = hw_device->accel_mask; + for (i = 0; mask; i++, mask >>= 1) { + if (!(mask & 1)) + continue; + val = ADF_CSR_RD(csr, ADF_C62X_UERRSSMSH(i)); + val |= ADF_C62X_ERRSSMSH_EN; + ADF_CSR_WR(csr, ADF_C62X_UERRSSMSH(i), val); + val = ADF_CSR_RD(csr, ADF_C62X_CERRSSMSH(i)); + val |= ADF_C62X_ERRSSMSH_EN; + ADF_CSR_WR(csr, ADF_C62X_CERRSSMSH(i), val); + } +} + +static void +adf_enable_ints(struct adf_accel_dev *accel_dev) +{ + struct resource *addr; + + addr = (&GET_BARS(accel_dev)[ADF_C62X_PMISC_BAR])->virt_addr; + + /* Enable bundle and misc interrupts */ + ADF_CSR_WR(addr, ADF_C62X_SMIAPF0_MASK_OFFSET, ADF_C62X_SMIA0_MASK); + ADF_CSR_WR(addr, ADF_C62X_SMIAPF1_MASK_OFFSET, ADF_C62X_SMIA1_MASK); +} + +static u32 +get_ae_clock(struct adf_hw_device_data *self) +{ + /* + * Clock update interval is <16> ticks for c62x. 
+ */ + return self->clock_frequency / 16; +} + +static int +get_storage_enabled(struct adf_accel_dev *accel_dev, uint32_t *storage_enabled) +{ + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + + strlcpy(key, ADF_STORAGE_FIRMWARE_ENABLED, sizeof(key)); + if (!adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) { + if (kstrtouint(val, 0, storage_enabled)) + return -EFAULT; + } + return 0; +} + +static int +measure_clock(struct adf_accel_dev *accel_dev) +{ + u32 frequency; + int ret = 0; + + ret = adf_dev_measure_clock(accel_dev, + &frequency, + ADF_C62X_MIN_AE_FREQ, + ADF_C62X_MAX_AE_FREQ); + if (ret) + return ret; + + accel_dev->hw_device->clock_frequency = frequency; + return 0; +} + +static u32 +c62x_get_hw_cap(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 legfuses; + u32 capabilities; + u32 straps; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 fuses = hw_data->fuses; + + /* Read accelerator capabilities mask */ + legfuses = pci_read_config(pdev, ADF_DEVICE_LEGFUSE_OFFSET, 4); + + capabilities = ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC + + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC + + ICP_ACCEL_CAPABILITIES_CIPHER + + ICP_ACCEL_CAPABILITIES_AUTHENTICATION + + ICP_ACCEL_CAPABILITIES_COMPRESSION + ICP_ACCEL_CAPABILITIES_ZUC + + ICP_ACCEL_CAPABILITIES_SHA3 + ICP_ACCEL_CAPABILITIES_HKDF + + ICP_ACCEL_CAPABILITIES_ECEDMONT + + ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN; + if (legfuses & ICP_ACCEL_MASK_CIPHER_SLICE) + capabilities &= ~(ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC | + ICP_ACCEL_CAPABILITIES_CIPHER | + ICP_ACCEL_CAPABILITIES_HKDF | + ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN); + if (legfuses & ICP_ACCEL_MASK_AUTH_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_AUTHENTICATION; + if (legfuses & ICP_ACCEL_MASK_PKE_SLICE) + capabilities &= ~(ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC | + ICP_ACCEL_CAPABILITIES_ECEDMONT); + if (legfuses & 
ICP_ACCEL_MASK_COMPRESS_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_COMPRESSION; + if (legfuses & ICP_ACCEL_MASK_EIA3_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_ZUC; + if (legfuses & ICP_ACCEL_MASK_SHA3_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_SHA3; + + straps = pci_read_config(pdev, ADF_C62X_SOFTSTRAP_CSR_OFFSET, 4); + if ((straps | fuses) & ADF_C62X_POWERGATE_PKE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC; + if ((straps | fuses) & ADF_C62X_POWERGATE_DC) + capabilities &= ~ICP_ACCEL_CAPABILITIES_COMPRESSION; + + return capabilities; +} + +static const char * +get_obj_name(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services service) +{ + return ADF_CXXX_AE_FW_NAME_CUSTOM1; +} + +static uint32_t +get_objs_num(struct adf_accel_dev *accel_dev) +{ + return 1; +} + +static uint32_t +get_obj_cfg_ae_mask(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services services) +{ + return accel_dev->hw_device->ae_mask; +} + +void +adf_init_hw_data_c62x(struct adf_hw_device_data *hw_data) +{ + hw_data->dev_class = &c62x_class; + hw_data->instance_id = c62x_class.instances++; + hw_data->num_banks = ADF_C62X_ETR_MAX_BANKS; + hw_data->num_rings_per_bank = ADF_ETR_MAX_RINGS_PER_BANK; + hw_data->num_accel = ADF_C62X_MAX_ACCELERATORS; + hw_data->num_logical_accel = 1; + hw_data->num_engines = ADF_C62X_MAX_ACCELENGINES; + hw_data->tx_rx_gap = ADF_C62X_RX_RINGS_OFFSET; + hw_data->tx_rings_mask = ADF_C62X_TX_RINGS_MASK; + hw_data->alloc_irq = adf_isr_resource_alloc; + hw_data->free_irq = adf_isr_resource_free; + hw_data->enable_error_correction = adf_enable_error_correction; + hw_data->print_err_registers = adf_print_err_registers; + hw_data->get_accel_mask = get_accel_mask; + hw_data->get_ae_mask = get_ae_mask; + hw_data->get_num_accels = get_num_accels; + hw_data->get_num_aes = get_num_aes; + hw_data->get_sram_bar_id = get_sram_bar_id; + hw_data->get_etr_bar_id = get_etr_bar_id; + hw_data->get_misc_bar_id = get_misc_bar_id; + 
hw_data->get_pf2vf_offset = get_pf2vf_offset; + hw_data->get_vintmsk_offset = get_vintmsk_offset; + hw_data->get_arb_info = get_arb_info; + hw_data->get_admin_info = get_admin_info; + hw_data->get_errsou_offset = get_errsou_offset; + hw_data->get_clock_speed = get_clock_speed; + hw_data->get_sku = get_sku; + hw_data->fw_name = ADF_C62X_FW; + hw_data->fw_mmp_name = ADF_C62X_MMP; + hw_data->init_admin_comms = adf_init_admin_comms; + hw_data->exit_admin_comms = adf_exit_admin_comms; + hw_data->disable_iov = adf_disable_sriov; + hw_data->send_admin_init = adf_send_admin_init; + hw_data->init_arb = adf_init_gen2_arb; + hw_data->exit_arb = adf_exit_arb; + hw_data->get_arb_mapping = adf_get_arbiter_mapping; + hw_data->enable_ints = adf_enable_ints; + hw_data->set_ssm_wdtimer = adf_set_ssm_wdtimer; + hw_data->check_slice_hang = adf_check_slice_hang; + hw_data->enable_vf2pf_comms = adf_pf_enable_vf2pf_comms; + hw_data->disable_vf2pf_comms = adf_pf_disable_vf2pf_comms; + hw_data->restore_device = adf_dev_restore; + hw_data->reset_device = adf_reset_flr; + hw_data->min_iov_compat_ver = ADF_PFVF_COMPATIBILITY_VERSION; + hw_data->get_objs_num = get_objs_num; + hw_data->get_obj_name = get_obj_name; + hw_data->get_obj_cfg_ae_mask = get_obj_cfg_ae_mask; + hw_data->clock_frequency = ADF_C62X_AE_FREQ; + hw_data->measure_clock = measure_clock; + hw_data->get_ae_clock = get_ae_clock; + hw_data->get_accel_cap = c62x_get_hw_cap; + hw_data->reset_device = adf_reset_flr; + hw_data->extended_dc_capabilities = 0; + hw_data->get_storage_enabled = get_storage_enabled; + hw_data->query_storage_cap = 1; + hw_data->get_heartbeat_status = adf_get_heartbeat_status; + hw_data->get_ae_clock = get_ae_clock; + hw_data->storage_enable = 0; + hw_data->get_fw_image_type = adf_cfg_get_fw_image_type; + hw_data->config_device = adf_config_device; + hw_data->get_ring_to_svc_map = adf_cfg_get_services_enabled; + hw_data->set_asym_rings_mask = adf_cfg_set_asym_rings_mask; + hw_data->ring_to_svc_map = 
ADF_DEFAULT_RING_TO_SRV_MAP; + hw_data->pre_reset = adf_dev_pre_reset; + hw_data->post_reset = adf_dev_post_reset; +} + +void +adf_clean_hw_data_c62x(struct adf_hw_device_data *hw_data) +{ + hw_data->dev_class->instances--; +} Index: sys/dev/qat/qat_hw/qat_c62x/adf_drv.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_c62x/adf_drv.c @@ -0,0 +1,270 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "adf_c62x_hw_data.h" +#include "adf_fw_counters.h" +#include "adf_cfg_device.h" +#include +#include +#include +#include +#include +#include "adf_heartbeat_dbg.h" +#include "adf_cnvnr_freq_counters.h" + +static MALLOC_DEFINE(M_QAT_C62X, "qat_c62x", "qat_c62x"); + +#define ADF_SYSTEM_DEVICE(device_id) \ + { \ + PCI_VENDOR_ID_INTEL, device_id \ + } + +static const struct pci_device_id adf_pci_tbl[] = { ADF_SYSTEM_DEVICE( + ADF_C62X_PCI_DEVICE_ID), + { + 0, + } }; + +static int +adf_probe(device_t dev) +{ + const struct pci_device_id *id; + + for (id = adf_pci_tbl; id->vendor != 0; id++) { + if (pci_get_vendor(dev) == id->vendor && + pci_get_device(dev) == id->device) { + device_set_desc(dev, + "Intel " ADF_C62X_DEVICE_NAME + " QuickAssist"); + return BUS_PROBE_GENERIC; + } + } + return ENXIO; +} + +static void +adf_cleanup_accel(struct adf_accel_dev *accel_dev) +{ + struct adf_accel_pci *accel_pci_dev = &accel_dev->accel_pci_dev; + int i; + + if (accel_dev->dma_tag) + bus_dma_tag_destroy(accel_dev->dma_tag); + for (i = 0; i < ADF_PCI_MAX_BARS; i++) { + struct adf_bar *bar = &accel_pci_dev->pci_bars[i]; + + if (bar->virt_addr) + bus_free_resource(accel_pci_dev->pci_dev, + SYS_RES_MEMORY, + bar->virt_addr); + } + + if (accel_dev->hw_device) { + switch (pci_get_device(accel_pci_dev->pci_dev)) { + case ADF_C62X_PCI_DEVICE_ID: + 
adf_clean_hw_data_c62x(accel_dev->hw_device); + break; + default: + break; + } + free(accel_dev->hw_device, M_QAT_C62X); + accel_dev->hw_device = NULL; + } + adf_cfg_dev_remove(accel_dev); + adf_devmgr_rm_dev(accel_dev, NULL); +} + +static int +adf_attach(device_t dev) +{ + struct adf_accel_dev *accel_dev; + struct adf_accel_pci *accel_pci_dev; + struct adf_hw_device_data *hw_data; + unsigned int i, bar_nr; + int ret, rid; + struct adf_cfg_device *cfg_dev = NULL; + + /* Set pci MaxPayLoad to 256. Implemented to avoid the issue of + * Pci-passthrough causing Maxpayload to be reset to 128 bytes + * when the device is reset. */ + if (pci_get_max_payload(dev) != 256) + pci_set_max_payload(dev, 256); + + accel_dev = device_get_softc(dev); + + INIT_LIST_HEAD(&accel_dev->crypto_list); + accel_pci_dev = &accel_dev->accel_pci_dev; + accel_pci_dev->pci_dev = dev; + + if (bus_get_domain(dev, &accel_pci_dev->node) != 0) + accel_pci_dev->node = 0; + + /* XXX: Revisit if we actually need a devmgr table at all. */ + + /* Add accel device to accel table. 
+ * This should be called before adf_cleanup_accel is called */ + if (adf_devmgr_add_dev(accel_dev, NULL)) { + device_printf(dev, "Failed to add new accelerator device.\n"); + return ENXIO; + } + + /* Allocate and configure device configuration structure */ + hw_data = malloc(sizeof(*hw_data), M_QAT_C62X, M_WAITOK | M_ZERO); + + accel_dev->hw_device = hw_data; + adf_init_hw_data_c62x(accel_dev->hw_device); + accel_pci_dev->revid = pci_get_revid(dev); + hw_data->fuses = pci_read_config(dev, ADF_DEVICE_FUSECTL_OFFSET, 4); + if (accel_pci_dev->revid == 0x00) { + device_printf(dev, "A0 stepping is not supported.\n"); + ret = ENODEV; + goto out_err; + } + + /* Get PPAERUCM values and store */ + ret = adf_aer_store_ppaerucm_reg(dev, hw_data); + if (ret) + goto out_err; + + /* Get Accelerators and Accelerators Engines masks */ + hw_data->accel_mask = hw_data->get_accel_mask(accel_dev); + hw_data->ae_mask = hw_data->get_ae_mask(accel_dev); + accel_pci_dev->sku = hw_data->get_sku(hw_data); + /* If the device has no acceleration engines then ignore it. */ + if (!hw_data->accel_mask || !hw_data->ae_mask || + ((~hw_data->ae_mask) & 0x01)) { + device_printf(dev, "No acceleration units found\n"); + ret = ENXIO; + goto out_err; + } + + /* Create device configuration table */ + ret = adf_cfg_dev_add(accel_dev); + if (ret) + goto out_err; + + ret = adf_clock_debugfs_add(accel_dev); + if (ret) + goto out_err; + + pci_set_max_read_req(dev, 1024); + + ret = bus_dma_tag_create(bus_get_dma_tag(dev), + 1, + 0, + BUS_SPACE_MAXADDR, + BUS_SPACE_MAXADDR, + NULL, + NULL, + BUS_SPACE_MAXSIZE, + /* BUS_SPACE_UNRESTRICTED */ 1, + BUS_SPACE_MAXSIZE, + 0, + NULL, + NULL, + &accel_dev->dma_tag); + if (ret) + goto out_err; + + if (hw_data->get_accel_cap) { + hw_data->accel_capabilities_mask = + hw_data->get_accel_cap(accel_dev); + } + + /* Find and map all the device's BARS */ + i = (hw_data->fuses & ADF_DEVICE_FUSECTL_MASK) ? 
1 : 0; + for (bar_nr = 0; i < ADF_PCI_MAX_BARS && bar_nr < PCIR_MAX_BAR_0; + bar_nr++) { + struct adf_bar *bar; + + /* + * XXX: This isn't quite right as it will ignore a BAR + * that wasn't assigned a valid resource range by the + * firmware. + */ + rid = PCIR_BAR(bar_nr); + if (bus_get_resource(dev, SYS_RES_MEMORY, rid, NULL, NULL) != 0) + continue; + bar = &accel_pci_dev->pci_bars[i++]; + bar->virt_addr = bus_alloc_resource_any(dev, + SYS_RES_MEMORY, + &rid, + RF_ACTIVE); + + if (bar->virt_addr == NULL) { + device_printf(dev, "Failed to map BAR %d\n", bar_nr); + ret = ENXIO; + goto out_err; + } + bar->base_addr = rman_get_start(bar->virt_addr); + bar->size = rman_get_size(bar->virt_addr); + } + pci_enable_busmaster(dev); + + if (!accel_dev->hw_device->config_device) { + ret = EFAULT; + goto out_err; + } + + ret = accel_dev->hw_device->config_device(accel_dev); + if (ret) + goto out_err; + + ret = adf_dev_init(accel_dev); + if (ret) + goto out_dev_shutdown; + + ret = adf_dev_start(accel_dev); + if (ret) + goto out_dev_stop; + + cfg_dev = accel_dev->cfg->dev; + adf_cfg_device_clear(cfg_dev, accel_dev); + free(cfg_dev, M_QAT); + accel_dev->cfg->dev = NULL; + return ret; +out_dev_stop: + adf_dev_stop(accel_dev); +out_dev_shutdown: + adf_dev_shutdown(accel_dev); +out_err: + adf_cleanup_accel(accel_dev); + return ret; +} + +static int +adf_detach(device_t dev) +{ + struct adf_accel_dev *accel_dev = device_get_softc(dev); + + if (adf_dev_stop(accel_dev)) { + device_printf(dev, "Failed to stop QAT accel dev\n"); + return EBUSY; + } + + adf_dev_shutdown(accel_dev); + + adf_cleanup_accel(accel_dev); + + return 0; +} + +static device_method_t adf_methods[] = { DEVMETHOD(device_probe, adf_probe), + DEVMETHOD(device_attach, adf_attach), + DEVMETHOD(device_detach, adf_detach), + + DEVMETHOD_END }; + +static driver_t adf_driver = { "qat", + adf_methods, + sizeof(struct adf_accel_dev) }; + +DRIVER_MODULE_ORDERED(qat_c62x, pci, adf_driver, NULL, NULL, SI_ORDER_THIRD); 
+MODULE_VERSION(qat_c62x, 1); +MODULE_DEPEND(qat_c62x, qat_common, 1, 1, 1); +MODULE_DEPEND(qat_c62x, qat_api, 1, 1, 1); +MODULE_DEPEND(qat_c62x, linuxkpi, 1, 1, 1); Index: sys/dev/qat/qat_hw/qat_dh895xcc/adf_dh895xcc_hw_data.h =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_dh895xcc/adf_dh895xcc_hw_data.h @@ -0,0 +1,146 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_DH895x_HW_DATA_H_ +#define ADF_DH895x_HW_DATA_H_ + +/* PCIe configuration space */ +#define ADF_DH895XCC_SRAM_BAR 0 +#define ADF_DH895XCC_PMISC_BAR 1 +#define ADF_DH895XCC_ETR_BAR 2 +#define ADF_DH895XCC_RX_RINGS_OFFSET 8 +#define ADF_DH895XCC_TX_RINGS_MASK 0xFF +#define ADF_DH895XCC_FUSECTL_SKU_MASK 0x300000 +#define ADF_DH895XCC_FUSECTL_SKU_SHIFT 20 +#define ADF_DH895XCC_FUSECTL_SKU_1 0x0 +#define ADF_DH895XCC_FUSECTL_SKU_2 0x1 +#define ADF_DH895XCC_FUSECTL_SKU_3 0x2 +#define ADF_DH895XCC_FUSECTL_SKU_4 0x3 +#define ADF_DH895XCC_MAX_ACCELERATORS 6 +#define ADF_DH895XCC_MAX_ACCELENGINES 12 +#define ADF_DH895XCC_ACCELERATORS_REG_OFFSET 13 +#define ADF_DH895XCC_ACCELERATORS_MASK 0x3F +#define ADF_DH895XCC_ACCELENGINES_MASK 0xFFF +#define ADF_DH895XCC_ETR_MAX_BANKS 32 +#define ADF_DH895XCC_SMIAPF0_MASK_OFFSET (0x3A000 + 0x28) +#define ADF_DH895XCC_SMIAPF1_MASK_OFFSET (0x3A000 + 0x30) +#define ADF_DH895XCC_SMIA0_MASK 0xFFFFFFFF +#define ADF_DH895XCC_SMIA1_MASK 0x1 +/* Error detection and correction */ +#define ADF_DH895XCC_AE_CTX_ENABLES(i) (i * 0x1000 + 0x20818) +#define ADF_DH895XCC_AE_MISC_CONTROL(i) (i * 0x1000 + 0x20960) +#define ADF_DH895XCC_ENABLE_AE_ECC_ERR BIT(28) +#define ADF_DH895XCC_ENABLE_AE_ECC_PARITY_CORR (BIT(24) | BIT(12)) +#define ADF_DH895XCC_UERRSSMSH(i) (i * 0x4000 + 0x18) +#define ADF_DH895XCC_CERRSSMSH(i) (i * 0x4000 + 0x10) +#define ADF_DH895XCC_ERRSSMSH_EN BIT(3) +#define ADF_DH895XCC_ERRSOU3 (0x3A000 + 0x0C) +#define ADF_DH895XCC_ERRSOU5 
(0x3A000 + 0xD8) +/* BIT(2) enables the logging of push/pull data errors. */ +#define ADF_DH895XCC_PPERR_EN (BIT(2)) + +/* Masks for VF2PF interrupts */ +#define ADF_DH895XCC_VF2PF1_16 (0xFFFF << 9) +#define ADF_DH895XCC_VF2PF17_32 (0xFFFF) +#define ADF_DH895XCC_ERRSOU3_VF2PF_L(errsou3) (((errsou3)&0x01FFFE00) >> 9) +#define ADF_DH895XCC_ERRSOU5_VF2PF_U(errsou5) (((errsou5)&0x0000FFFF) << 16) +#define ADF_DH895XCC_ERRMSK3_VF2PF_L(vf_mask) (((vf_mask)&0xFFFF) << 9) +#define ADF_DH895XCC_ERRMSK5_VF2PF_U(vf_mask) ((vf_mask) >> 16) + +/* Masks for correctable error interrupts. */ +#define ADF_DH895XCC_ERRMSK0_CERR (BIT(24) | BIT(16) | BIT(8) | BIT(0)) +#define ADF_DH895XCC_ERRMSK1_CERR (BIT(24) | BIT(16) | BIT(8) | BIT(0)) +#define ADF_DH895XCC_ERRMSK3_CERR (BIT(7)) +#define ADF_DH895XCC_ERRMSK4_CERR (BIT(24) | BIT(16) | BIT(8) | BIT(0)) +#define ADF_DH895XCC_ERRMSK5_CERR (0) + +/* Masks for uncorrectable error interrupts. */ +#define ADF_DH895XCC_ERRMSK0_UERR (BIT(25) | BIT(17) | BIT(9) | BIT(1)) +#define ADF_DH895XCC_ERRMSK1_UERR (BIT(25) | BIT(17) | BIT(9) | BIT(1)) +#define ADF_DH895XCC_ERRMSK3_UERR \ + (BIT(8) | BIT(6) | BIT(5) | BIT(4) | BIT(3) | BIT(2) | BIT(0)) +#define ADF_DH895XCC_ERRMSK4_UERR (BIT(25) | BIT(17) | BIT(9) | BIT(1)) +#define ADF_DH895XCC_ERRMSK5_UERR (BIT(19) | BIT(18) | BIT(17) | BIT(16)) + +/* RI CPP control */ +#define ADF_DH895XCC_RICPPINTCTL (0x3A000 + 0x110) +/* + * BIT(1) enables error detection and reporting on the RI CPP Pull interface. + * BIT(0) enables error detection and reporting on the RI CPP Push interface. + */ +#define ADF_DH895XCC_RICPP_EN (BIT(1) | BIT(0)) + +/* TI CPP control */ +#define ADF_DH895XCC_TICPPINTCTL (0x3A400 + 0x138) +/* + * BIT(1) enables error detection and reporting on the TI CPP Pull interface. + * BIT(0) enables error detection and reporting on the TI CPP Push interface. 
+ */ +#define ADF_DH895XCC_TICPP_EN (BIT(1) | BIT(0)) + +/* CFC Uncorrectable Errors */ +#define ADF_DH895XCC_CPP_SHAC_ERR_CTRL (0x30000 + 0xC00) +/* + * BIT(1) enables interrupt. + * BIT(0) enables detecting and logging of push/pull data errors. + */ +#define ADF_DH895XCC_CPP_SHAC_UE (BIT(1) | BIT(0)) + +/* Correctable SecureRAM Error Reg */ +#define ADF_DH895XCC_ESRAMCERR (0x3AC00 + 0x00) +/* BIT(3) enables fixing and logging of correctable errors. */ +#define ADF_DH895XCC_ESRAM_CERR (BIT(3)) + +/* Uncorrectable SecureRAM Error Reg */ +#define ADF_DH895XCC_ESRAMUERR (ADF_SECRAMUERR) +/* + * BIT(17) enables interrupt. + * BIT(3) enables detecting and logging of uncorrectable errors. + */ +#define ADF_DH895XCC_ESRAM_UERR (BIT(17) | BIT(3)) + +/* Miscellaneous Memory Target Errors Register */ +/* + * BIT(3) enables detecting and logging push/pull data errors. + * BIT(2) enables interrupt. + */ +#define ADF_DH895XCC_TGT_UERR (BIT(3) | BIT(2)) + +#define ADF_DH895XCC_SLICEPWRDOWN(i) ((i)*0x4000 + 0x2C) +/* Enabling PKE4-PKE0. */ +#define ADF_DH895XCC_MMP_PWR_UP_MSK (BIT(7) | BIT(6) | BIT(5) | BIT(4) | BIT(3)) + +/* CPM Uncorrectable Errors */ +#define ADF_DH895XCC_INTMASKSSM(i) ((i)*0x4000 + 0x0) +/* Disabling interrupts for correctable errors. */ +#define ADF_DH895XCC_INTMASKSSM_UERR \ + (BIT(11) | BIT(9) | BIT(7) | BIT(5) | BIT(3) | BIT(1)) + +/* MMP */ +/* BIT(3) enables correction. */ +#define ADF_DH895XCC_CERRSSMMMP_EN (BIT(3)) + +/* BIT(3) enables logging. 
*/ +#define ADF_DH895XCC_UERRSSMMMP_EN (BIT(3)) + +#define ADF_DH895XCC_PF2VF_OFFSET(i) (0x3A000 + 0x280 + ((i)*0x04)) +#define ADF_DH895XCC_VINTMSK_OFFSET(i) (0x3A000 + 0x200 + ((i)*0x04)) + +/* Arbiter configuration */ +#define ADF_DH895XCC_ARB_OFFSET 0x30000 +#define ADF_DH895XCC_ARB_WRK_2_SER_MAP_OFFSET 0x180 +#define ADF_DH895XCC_ARB_WQCFG_OFFSET 0x100 + +/* Admin Interface Reg Offset */ +#define ADF_DH895XCC_ADMINMSGUR_OFFSET (0x3A000 + 0x574) +#define ADF_DH895XCC_ADMINMSGLR_OFFSET (0x3A000 + 0x578) +#define ADF_DH895XCC_MAILBOX_BASE_OFFSET 0x20970 + +/* FW names */ +#define ADF_DH895XCC_FW "qat_dh895xcc_fw" +#define ADF_DH895XCC_MMP "qat_dh895xcc_mmp_fw" + +void adf_init_hw_data_dh895xcc(struct adf_hw_device_data *hw_data); +void adf_clean_hw_data_dh895xcc(struct adf_hw_device_data *hw_data); +#define ADF_DH895XCC_AE_FREQ (933 * 1000000) +#endif Index: sys/dev/qat/qat_hw/qat_dh895xcc/adf_dh895xcc_hw_data.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_dh895xcc/adf_dh895xcc_hw_data.c @@ -0,0 +1,405 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include +#include +#include +#include +#include "adf_dh895xcc_hw_data.h" +#include "icp_qat_hw.h" +#include "adf_heartbeat.h" + +/* Worker thread to service arbiter mappings based on dev SKUs */ +static const u32 thrd_to_arb_map_sku4[] = + { 0x12222AAA, 0x11666666, 0x12222AAA, 0x11666666, 0x12222AAA, 0x11222222, + 0x12222AAA, 0x11222222, 0x00000000, 0x00000000, 0x00000000, 0x00000000 }; + +static const u32 thrd_to_arb_map_sku6[] = + { 0x12222AAA, 0x11666666, 0x12222AAA, 0x11666666, 0x12222AAA, 0x11222222, + 0x12222AAA, 0x11222222, 0x12222AAA, 0x11222222, 0x12222AAA, 0x11222222 }; + +static const u32 thrd_to_arb_map_sku3[] = + { 0x00000888, 0x00000000, 0x00000888, 0x00000000, 0x00000888, 0x00000000, + 0x00000888, 0x00000000, 0x00000888, 
0x00000000, 0x00000888, 0x00000000 }; + +static u32 thrd_to_arb_map_gen[ADF_DH895XCC_MAX_ACCELENGINES] = { 0 }; + +static struct adf_hw_device_class dh895xcc_class = + {.name = ADF_DH895XCC_DEVICE_NAME, .type = DEV_DH895XCC, .instances = 0 }; + +static u32 +get_accel_mask(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 fuse; + + fuse = pci_read_config(pdev, ADF_DEVICE_FUSECTL_OFFSET, 4); + + return (~fuse) >> ADF_DH895XCC_ACCELERATORS_REG_OFFSET & + ADF_DH895XCC_ACCELERATORS_MASK; +} + +static u32 +get_ae_mask(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 fuse; + + fuse = pci_read_config(pdev, ADF_DEVICE_FUSECTL_OFFSET, 4); + + return (~fuse) & ADF_DH895XCC_ACCELENGINES_MASK; +} + +static uint32_t +get_num_accels(struct adf_hw_device_data *self) +{ + uint32_t i, ctr = 0; + + if (!self || !self->accel_mask) + return 0; + + for (i = 0; i < ADF_DH895XCC_MAX_ACCELERATORS; i++) { + if (self->accel_mask & (1 << i)) + ctr++; + } + return ctr; +} + +static uint32_t +get_num_aes(struct adf_hw_device_data *self) +{ + uint32_t i, ctr = 0; + + if (!self || !self->ae_mask) + return 0; + + for (i = 0; i < ADF_DH895XCC_MAX_ACCELENGINES; i++) { + if (self->ae_mask & (1 << i)) + ctr++; + } + return ctr; +} + +static uint32_t +get_misc_bar_id(struct adf_hw_device_data *self) +{ + return ADF_DH895XCC_PMISC_BAR; +} + +static uint32_t +get_etr_bar_id(struct adf_hw_device_data *self) +{ + return ADF_DH895XCC_ETR_BAR; +} + +static uint32_t +get_sram_bar_id(struct adf_hw_device_data *self) +{ + return ADF_DH895XCC_SRAM_BAR; +} + +static enum dev_sku_info +get_sku(struct adf_hw_device_data *self) +{ + int sku = (self->fuses & ADF_DH895XCC_FUSECTL_SKU_MASK) >> + ADF_DH895XCC_FUSECTL_SKU_SHIFT; + + switch (sku) { + case ADF_DH895XCC_FUSECTL_SKU_1: + return DEV_SKU_1; + case ADF_DH895XCC_FUSECTL_SKU_2: + return DEV_SKU_2; + case ADF_DH895XCC_FUSECTL_SKU_3: + return DEV_SKU_3; + case 
ADF_DH895XCC_FUSECTL_SKU_4: + return DEV_SKU_4; + default: + return DEV_SKU_UNKNOWN; + } + return DEV_SKU_UNKNOWN; +} + +static void +adf_get_arbiter_mapping(struct adf_accel_dev *accel_dev, + u32 const **arb_map_config) +{ + switch (accel_dev->accel_pci_dev.sku) { + case DEV_SKU_1: + adf_cfg_gen_dispatch_arbiter(accel_dev, + thrd_to_arb_map_sku4, + thrd_to_arb_map_gen, + ADF_DH895XCC_MAX_ACCELENGINES); + *arb_map_config = thrd_to_arb_map_gen; + break; + + case DEV_SKU_2: + case DEV_SKU_4: + adf_cfg_gen_dispatch_arbiter(accel_dev, + thrd_to_arb_map_sku6, + thrd_to_arb_map_gen, + ADF_DH895XCC_MAX_ACCELENGINES); + *arb_map_config = thrd_to_arb_map_gen; + break; + + case DEV_SKU_3: + adf_cfg_gen_dispatch_arbiter(accel_dev, + thrd_to_arb_map_sku3, + thrd_to_arb_map_gen, + ADF_DH895XCC_MAX_ACCELENGINES); + *arb_map_config = thrd_to_arb_map_gen; + break; + + default: + device_printf(GET_DEV(accel_dev), + "The configuration doesn't match any SKU"); + *arb_map_config = NULL; + } +} + +static uint32_t +get_pf2vf_offset(uint32_t i) +{ + return ADF_DH895XCC_PF2VF_OFFSET(i); +} + +static uint32_t +get_vintmsk_offset(uint32_t i) +{ + return ADF_DH895XCC_VINTMSK_OFFSET(i); +} + +static void +get_arb_info(struct arb_info *arb_csrs_info) +{ + arb_csrs_info->arbiter_offset = ADF_DH895XCC_ARB_OFFSET; + arb_csrs_info->wrk_thd_2_srv_arb_map = + ADF_DH895XCC_ARB_WRK_2_SER_MAP_OFFSET; + arb_csrs_info->wrk_cfg_offset = ADF_DH895XCC_ARB_WQCFG_OFFSET; +} + +static void +get_admin_info(struct admin_info *admin_csrs_info) +{ + admin_csrs_info->mailbox_offset = ADF_DH895XCC_MAILBOX_BASE_OFFSET; + admin_csrs_info->admin_msg_ur = ADF_DH895XCC_ADMINMSGUR_OFFSET; + admin_csrs_info->admin_msg_lr = ADF_DH895XCC_ADMINMSGLR_OFFSET; +} + +static void +get_errsou_offset(u32 *errsou3, u32 *errsou5) +{ + *errsou3 = ADF_DH895XCC_ERRSOU3; + *errsou5 = ADF_DH895XCC_ERRSOU5; +} + +static u32 +get_clock_speed(struct adf_hw_device_data *self) +{ + /* CPP clock is half high-speed clock */ + return 
self->clock_frequency / 2; +} + +static void +adf_enable_error_correction(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_DH895XCC_PMISC_BAR]; + struct resource *csr = misc_bar->virt_addr; + unsigned int val, i; + unsigned int mask; + + /* Enable Accel Engine error detection & correction */ + mask = hw_device->ae_mask; + for (i = 0; mask; i++, mask >>= 1) { + if (!(mask & 1)) + continue; + val = ADF_CSR_RD(csr, ADF_DH895XCC_AE_CTX_ENABLES(i)); + val |= ADF_DH895XCC_ENABLE_AE_ECC_ERR; + ADF_CSR_WR(csr, ADF_DH895XCC_AE_CTX_ENABLES(i), val); + val = ADF_CSR_RD(csr, ADF_DH895XCC_AE_MISC_CONTROL(i)); + val |= ADF_DH895XCC_ENABLE_AE_ECC_PARITY_CORR; + ADF_CSR_WR(csr, ADF_DH895XCC_AE_MISC_CONTROL(i), val); + } + + /* Enable shared memory error detection & correction */ + mask = hw_device->accel_mask; + for (i = 0; mask; i++, mask >>= 1) { + if (!(mask & 1)) + continue; + val = ADF_CSR_RD(csr, ADF_DH895XCC_UERRSSMSH(i)); + val |= ADF_DH895XCC_ERRSSMSH_EN; + ADF_CSR_WR(csr, ADF_DH895XCC_UERRSSMSH(i), val); + val = ADF_CSR_RD(csr, ADF_DH895XCC_CERRSSMSH(i)); + val |= ADF_DH895XCC_ERRSSMSH_EN; + ADF_CSR_WR(csr, ADF_DH895XCC_CERRSSMSH(i), val); + } +} + +static void +adf_enable_ints(struct adf_accel_dev *accel_dev) +{ + struct resource *addr; + + addr = (&GET_BARS(accel_dev)[ADF_DH895XCC_PMISC_BAR])->virt_addr; + + /* Enable bundle and misc interrupts */ + ADF_CSR_WR(addr, + ADF_DH895XCC_SMIAPF0_MASK_OFFSET, + accel_dev->u1.pf.vf_info ? + 0 : + (1ULL << GET_MAX_BANKS(accel_dev)) - 1); + ADF_CSR_WR(addr, + ADF_DH895XCC_SMIAPF1_MASK_OFFSET, + ADF_DH895XCC_SMIA1_MASK); +} + +static u32 +get_ae_clock(struct adf_hw_device_data *self) +{ + /* + * Clock update interval is <16> ticks for dh895xcc. 
+ */ + return self->clock_frequency / 16; +} + +static int +get_storage_enabled(struct adf_accel_dev *accel_dev, u32 *storage_enabled) +{ + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + + strlcpy(key, ADF_STORAGE_FIRMWARE_ENABLED, sizeof(key)); + if (!adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, key, val)) { + if (kstrtouint(val, 0, storage_enabled)) + return -EFAULT; + } + return 0; +} + +static u32 +dh895xcc_get_hw_cap(struct adf_accel_dev *accel_dev) +{ + device_t pdev = accel_dev->accel_pci_dev.pci_dev; + u32 legfuses; + u32 capabilities; + + /* Read accelerator capabilities mask */ + legfuses = pci_read_config(pdev, ADF_DEVICE_LEGFUSE_OFFSET, 4); + capabilities = ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC + + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC + + ICP_ACCEL_CAPABILITIES_CIPHER + + ICP_ACCEL_CAPABILITIES_AUTHENTICATION + + ICP_ACCEL_CAPABILITIES_COMPRESSION + ICP_ACCEL_CAPABILITIES_RAND + + ICP_ACCEL_CAPABILITIES_HKDF + ICP_ACCEL_CAPABILITIES_ECEDMONT + + ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN; + + if (legfuses & ICP_ACCEL_MASK_CIPHER_SLICE) + capabilities &= ~(ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC | + ICP_ACCEL_CAPABILITIES_CIPHER | + ICP_ACCEL_CAPABILITIES_HKDF | + ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN); + if (legfuses & ICP_ACCEL_MASK_AUTH_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_AUTHENTICATION; + if (legfuses & ICP_ACCEL_MASK_PKE_SLICE) + capabilities &= ~(ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC | + ICP_ACCEL_CAPABILITIES_ECEDMONT); + if (legfuses & ICP_ACCEL_MASK_COMPRESS_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_COMPRESSION; + + return capabilities; +} + +static const char * +get_obj_name(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services service) +{ + return ADF_DH895XCC_AE_FW_NAME_CUSTOM1; +} + +static u32 +get_objs_num(struct adf_accel_dev *accel_dev) +{ + return 1; +} + +static u32 +get_obj_cfg_ae_mask(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services services) +{ 
+ return accel_dev->hw_device->ae_mask; +} + +void +adf_init_hw_data_dh895xcc(struct adf_hw_device_data *hw_data) +{ + hw_data->dev_class = &dh895xcc_class; + hw_data->instance_id = dh895xcc_class.instances++; + hw_data->num_banks = ADF_DH895XCC_ETR_MAX_BANKS; + hw_data->num_rings_per_bank = ADF_ETR_MAX_RINGS_PER_BANK; + hw_data->num_accel = ADF_DH895XCC_MAX_ACCELERATORS; + hw_data->num_logical_accel = 1; + hw_data->num_engines = ADF_DH895XCC_MAX_ACCELENGINES; + hw_data->tx_rx_gap = ADF_DH895XCC_RX_RINGS_OFFSET; + hw_data->tx_rings_mask = ADF_DH895XCC_TX_RINGS_MASK; + hw_data->alloc_irq = adf_isr_resource_alloc; + hw_data->free_irq = adf_isr_resource_free; + hw_data->enable_error_correction = adf_enable_error_correction; + hw_data->print_err_registers = adf_print_err_registers; + hw_data->get_accel_mask = get_accel_mask; + hw_data->get_ae_mask = get_ae_mask; + hw_data->get_num_accels = get_num_accels; + hw_data->get_num_aes = get_num_aes; + hw_data->get_etr_bar_id = get_etr_bar_id; + hw_data->get_misc_bar_id = get_misc_bar_id; + hw_data->get_pf2vf_offset = get_pf2vf_offset; + hw_data->get_vintmsk_offset = get_vintmsk_offset; + hw_data->get_arb_info = get_arb_info; + hw_data->get_admin_info = get_admin_info; + hw_data->get_errsou_offset = get_errsou_offset; + hw_data->get_clock_speed = get_clock_speed; + hw_data->get_sram_bar_id = get_sram_bar_id; + hw_data->get_sku = get_sku; + hw_data->fw_name = ADF_DH895XCC_FW; + hw_data->fw_mmp_name = ADF_DH895XCC_MMP; + hw_data->init_admin_comms = adf_init_admin_comms; + hw_data->exit_admin_comms = adf_exit_admin_comms; + hw_data->disable_iov = adf_disable_sriov; + hw_data->send_admin_init = adf_send_admin_init; + hw_data->init_arb = adf_init_gen2_arb; + hw_data->exit_arb = adf_exit_arb; + hw_data->get_arb_mapping = adf_get_arbiter_mapping; + hw_data->enable_ints = adf_enable_ints; + hw_data->enable_vf2pf_comms = adf_pf_enable_vf2pf_comms; + hw_data->disable_vf2pf_comms = adf_pf_disable_vf2pf_comms; + hw_data->reset_device = 
adf_reset_sbr; + hw_data->restore_device = adf_dev_restore; + hw_data->min_iov_compat_ver = ADF_PFVF_COMPATIBILITY_VERSION; + hw_data->get_accel_cap = dh895xcc_get_hw_cap; + hw_data->get_heartbeat_status = adf_get_heartbeat_status; + hw_data->get_ae_clock = get_ae_clock; + hw_data->get_objs_num = get_objs_num; + hw_data->get_obj_name = get_obj_name; + hw_data->get_obj_cfg_ae_mask = get_obj_cfg_ae_mask; + hw_data->clock_frequency = ADF_DH895XCC_AE_FREQ; + hw_data->extended_dc_capabilities = 0; + hw_data->get_storage_enabled = get_storage_enabled; + hw_data->query_storage_cap = 1; + hw_data->storage_enable = 0; + hw_data->get_fw_image_type = adf_cfg_get_fw_image_type; + hw_data->config_device = adf_config_device; + hw_data->get_ring_to_svc_map = adf_cfg_get_services_enabled; + hw_data->set_asym_rings_mask = adf_cfg_set_asym_rings_mask; + hw_data->ring_to_svc_map = ADF_DEFAULT_RING_TO_SRV_MAP; + hw_data->pre_reset = adf_dev_pre_reset; + hw_data->post_reset = adf_dev_post_reset; +} + +void +adf_clean_hw_data_dh895xcc(struct adf_hw_device_data *hw_data) +{ + hw_data->dev_class->instances--; +} Index: sys/dev/qat/qat_hw/qat_dh895xcc/adf_drv.c =================================================================== --- /dev/null +++ sys/dev/qat/qat_hw/qat_dh895xcc/adf_drv.c @@ -0,0 +1,262 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#include "qat_freebsd.h" +#include "adf_cfg.h" +#include "adf_common_drv.h" +#include "adf_accel_devices.h" +#include "adf_dh895xcc_hw_data.h" +#include "adf_fw_counters.h" +#include "adf_cfg_device.h" +#include +#include +#include +#include +#include +#include "adf_heartbeat_dbg.h" +#include "adf_cnvnr_freq_counters.h" + +static MALLOC_DEFINE(M_QAT_DH895XCC, "qat_dh895xcc", "qat_dh895xcc"); + +#define ADF_SYSTEM_DEVICE(device_id) \ + { \ + PCI_VENDOR_ID_INTEL, device_id \ + } + 
+static const struct pci_device_id adf_pci_tbl[] = + { ADF_SYSTEM_DEVICE(ADF_DH895XCC_PCI_DEVICE_ID), + { + 0, + } }; + +static int +adf_probe(device_t dev) +{ + const struct pci_device_id *id; + + for (id = adf_pci_tbl; id->vendor != 0; id++) { + if (pci_get_vendor(dev) == id->vendor && + pci_get_device(dev) == id->device) { + device_set_desc(dev, + "Intel " ADF_DH895XCC_DEVICE_NAME + " QuickAssist"); + return BUS_PROBE_DEFAULT; + } + } + return ENXIO; +} + +static void +adf_cleanup_accel(struct adf_accel_dev *accel_dev) +{ + struct adf_accel_pci *accel_pci_dev = &accel_dev->accel_pci_dev; + int i; + + if (accel_dev->dma_tag) + bus_dma_tag_destroy(accel_dev->dma_tag); + for (i = 0; i < ADF_PCI_MAX_BARS; i++) { + struct adf_bar *bar = &accel_pci_dev->pci_bars[i]; + + if (bar->virt_addr) + bus_free_resource(accel_pci_dev->pci_dev, + SYS_RES_MEMORY, + bar->virt_addr); + } + + if (accel_dev->hw_device) { + switch (pci_get_device(accel_pci_dev->pci_dev)) { + case ADF_DH895XCC_PCI_DEVICE_ID: + adf_clean_hw_data_dh895xcc(accel_dev->hw_device); + break; + default: + break; + } + free(accel_dev->hw_device, M_QAT_DH895XCC); + accel_dev->hw_device = NULL; + } + adf_cfg_dev_remove(accel_dev); + adf_devmgr_rm_dev(accel_dev, NULL); +} + +static int +adf_attach(device_t dev) +{ + struct adf_accel_dev *accel_dev; + struct adf_accel_pci *accel_pci_dev; + struct adf_hw_device_data *hw_data; + unsigned int i, bar_nr; + int ret, rid; + struct adf_cfg_device *cfg_dev = NULL; + + /* Set PCI MaxPayload to 256 bytes. Done to avoid PCI + * passthrough resetting MaxPayload to 128 bytes + * when the device is reset. 
*/ + if (pci_get_max_payload(dev) != 256) + pci_set_max_payload(dev, 256); + + accel_dev = device_get_softc(dev); + + INIT_LIST_HEAD(&accel_dev->crypto_list); + accel_pci_dev = &accel_dev->accel_pci_dev; + accel_pci_dev->pci_dev = dev; + + if (bus_get_domain(dev, &accel_pci_dev->node) != 0) + accel_pci_dev->node = 0; + + /* Add accel device to accel table. + * This should be called before adf_cleanup_accel is called */ + if (adf_devmgr_add_dev(accel_dev, NULL)) { + device_printf(dev, "Failed to add new accelerator device.\n"); + return ENXIO; + } + + /* Allocate and configure device configuration structure */ + hw_data = malloc(sizeof(*hw_data), M_QAT_DH895XCC, M_WAITOK | M_ZERO); + + accel_dev->hw_device = hw_data; + adf_init_hw_data_dh895xcc(accel_dev->hw_device); + accel_pci_dev->revid = pci_get_revid(dev); + hw_data->fuses = pci_read_config(dev, ADF_DEVICE_FUSECTL_OFFSET, 4); + + /* Get PPAERUCM values and store */ + ret = adf_aer_store_ppaerucm_reg(dev, hw_data); + if (ret) + goto out_err; + + /* Get Accelerators and Accelerators Engines masks */ + hw_data->accel_mask = hw_data->get_accel_mask(accel_dev); + hw_data->ae_mask = hw_data->get_ae_mask(accel_dev); + accel_pci_dev->sku = hw_data->get_sku(hw_data); + /* If the device has no acceleration engines then ignore it. 
*/ + if (!hw_data->accel_mask || !hw_data->ae_mask || + ((~hw_data->ae_mask) & 0x01)) { + device_printf(dev, "No acceleration units found\n"); + ret = ENXIO; + goto out_err; + } + + /* Create device configuration table */ + ret = adf_cfg_dev_add(accel_dev); + if (ret) + goto out_err; + + pci_set_max_read_req(dev, 1024); + + ret = bus_dma_tag_create(bus_get_dma_tag(dev), + 1, + 0, + BUS_SPACE_MAXADDR, + BUS_SPACE_MAXADDR, + NULL, + NULL, + BUS_SPACE_MAXSIZE, + /* BUS_SPACE_UNRESTRICTED */ 1, + BUS_SPACE_MAXSIZE, + 0, + NULL, + NULL, + &accel_dev->dma_tag); + if (ret) + goto out_err; + + if (hw_data->get_accel_cap) { + hw_data->accel_capabilities_mask = + hw_data->get_accel_cap(accel_dev); + } + + /* Find and map all the device's BARS */ + i = 0; + for (bar_nr = 0; i < ADF_PCI_MAX_BARS && bar_nr < PCIR_MAX_BAR_0; + bar_nr++) { + struct adf_bar *bar; + + /* + * This will ignore a BAR + * that wasn't assigned a valid resource range by the + * firmware. + */ + rid = PCIR_BAR(bar_nr); + if (bus_get_resource(dev, SYS_RES_MEMORY, rid, NULL, NULL) != 0) + continue; + bar = &accel_pci_dev->pci_bars[i++]; + bar->virt_addr = bus_alloc_resource_any(dev, + SYS_RES_MEMORY, + &rid, + RF_ACTIVE); + if (bar->virt_addr == NULL) { + device_printf(dev, "Failed to map BAR %d\n", bar_nr); + ret = ENXIO; + goto out_err; + } + bar->base_addr = rman_get_start(bar->virt_addr); + bar->size = rman_get_size(bar->virt_addr); + } + pci_enable_busmaster(dev); + + if (!accel_dev->hw_device->config_device) { + ret = EFAULT; + goto out_err; + } + + ret = accel_dev->hw_device->config_device(accel_dev); + if (ret) + goto out_err; + + ret = adf_dev_init(accel_dev); + if (ret) + goto out_dev_shutdown; + + ret = adf_dev_start(accel_dev); + if (ret) + goto out_dev_stop; + + cfg_dev = accel_dev->cfg->dev; + adf_cfg_device_clear(cfg_dev, accel_dev); + free(cfg_dev, M_QAT); + accel_dev->cfg->dev = NULL; + return ret; +out_dev_stop: + adf_dev_stop(accel_dev); +out_dev_shutdown: + adf_dev_shutdown(accel_dev); 
+out_err: + adf_cleanup_accel(accel_dev); + return ret; +} + +static int +adf_detach(device_t dev) +{ + struct adf_accel_dev *accel_dev = device_get_softc(dev); + + if (adf_dev_stop(accel_dev)) { + device_printf(dev, "Failed to stop QAT accel dev\n"); + return EBUSY; + } + + adf_dev_shutdown(accel_dev); + + adf_cleanup_accel(accel_dev); + return 0; +} + +static device_method_t adf_methods[] = { DEVMETHOD(device_probe, adf_probe), + DEVMETHOD(device_attach, adf_attach), + DEVMETHOD(device_detach, adf_detach), + + DEVMETHOD_END }; + +static driver_t adf_driver = { "qat", + adf_methods, + sizeof(struct adf_accel_dev) }; + +DRIVER_MODULE_ORDERED(qat_dh895xcc, + pci, + adf_driver, + NULL, + NULL, + SI_ORDER_THIRD); +MODULE_VERSION(qat_dh895xcc, 1); +MODULE_DEPEND(qat_dh895xcc, qat_common, 1, 1, 1); +MODULE_DEPEND(qat_dh895xcc, qat_api, 1, 1, 1); +MODULE_DEPEND(qat_dh895xcc, linuxkpi, 1, 1, 1); Index: sys/dev/qat/qat_hw15.c =================================================================== --- sys/dev/qat/qat_hw15.c +++ /dev/null @@ -1,965 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat_hw15.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. 
AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2007-2013 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#include -__FBSDID("$FreeBSD$"); -#if 0 -__KERNEL_RCSID(0, "$NetBSD: qat_hw15.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $"); -#endif - -#include -#include -#include -#include -#include - -#include - -#include - -#include -#include - -#include "qatreg.h" -#include "qat_hw15reg.h" -#include "qatvar.h" -#include "qat_hw15var.h" - -static int qat_adm_ring_init_ring_table(struct qat_softc *); -static void qat_adm_ring_build_slice_mask(uint16_t *, uint32_t, uint32_t); -static void qat_adm_ring_build_shram_mask(uint64_t *, uint32_t, uint32_t); -static int qat_adm_ring_build_ring_table(struct qat_softc *, uint32_t); -static int qat_adm_ring_build_init_msg(struct qat_softc *, - struct fw_init_req *, enum fw_init_cmd_id, uint32_t, - struct qat_accel_init_cb *); -static int qat_adm_ring_send_init_msg_sync(struct qat_softc *, - enum fw_init_cmd_id, uint32_t); -static int qat_adm_ring_send_init_msg(struct qat_softc *, - enum fw_init_cmd_id); -static int qat_adm_ring_intr(struct qat_softc *, void *, void *); - -void -qat_msg_req_type_populate(struct arch_if_req_hdr *msg, enum arch_if_req type, - uint32_t rxring) -{ - - memset(msg, 0, sizeof(struct arch_if_req_hdr)); - msg->flags = ARCH_IF_FLAGS_VALID_FLAG | - ARCH_IF_FLAGS_RESP_RING_TYPE_ET | ARCH_IF_FLAGS_RESP_TYPE_S; - msg->req_type = type; - msg->resp_pipe_id = rxring; -} - -void -qat_msg_cmn_hdr_populate(struct fw_la_bulk_req *msg, bus_addr_t desc_paddr, - uint8_t hdrsz, uint8_t hwblksz, 
uint16_t comn_req_flags, uint32_t flow_id) -{ - struct fw_comn_req_hdr *hdr = &msg->comn_hdr; - - hdr->comn_req_flags = comn_req_flags; - hdr->content_desc_params_sz = hwblksz; - hdr->content_desc_hdr_sz = hdrsz; - hdr->content_desc_addr = desc_paddr; - msg->flow_id = flow_id; -} - -void -qat_msg_service_cmd_populate(struct fw_la_bulk_req *msg, enum fw_la_cmd_id cmdid, - uint16_t cmd_flags) -{ - msg->comn_la_req.la_cmd_id = cmdid; - msg->comn_la_req.u.la_flags = cmd_flags; -} - -void -qat_msg_cmn_mid_populate(struct fw_comn_req_mid *msg, void *cookie, - uint64_t src, uint64_t dst) -{ - - msg->opaque_data = (uint64_t)(uintptr_t)cookie; - msg->src_data_addr = src; - if (dst == 0) - msg->dest_data_addr = src; - else - msg->dest_data_addr = dst; -} - -void -qat_msg_req_params_populate(struct fw_la_bulk_req *msg, - bus_addr_t req_params_paddr, uint8_t req_params_sz) -{ - msg->req_params_addr = req_params_paddr; - msg->comn_la_req.u1.req_params_blk_sz = req_params_sz / 8; -} - -void -qat_msg_cmn_footer_populate(union fw_comn_req_ftr *msg, uint64_t next_addr) -{ - msg->next_request_addr = next_addr; -} - -void -qat_msg_params_populate(struct fw_la_bulk_req *msg, - struct qat_crypto_desc *desc, uint8_t req_params_sz, - uint16_t service_cmd_flags, uint16_t comn_req_flags) -{ - qat_msg_cmn_hdr_populate(msg, desc->qcd_desc_paddr, - desc->qcd_hdr_sz, desc->qcd_hw_blk_sz, comn_req_flags, 0); - qat_msg_service_cmd_populate(msg, desc->qcd_cmd_id, service_cmd_flags); - qat_msg_cmn_mid_populate(&msg->comn_mid, NULL, 0, 0); - qat_msg_req_params_populate(msg, 0, req_params_sz); - qat_msg_cmn_footer_populate(&msg->comn_ftr, 0); -} - -static int -qat_adm_ring_init_ring_table(struct qat_softc *sc) -{ - struct qat_admin_rings *qadr = &sc->sc_admin_rings; - - if (sc->sc_ae_num == 1) { - qadr->qadr_cya_ring_tbl = - &qadr->qadr_master_ring_tbl[0]; - qadr->qadr_srv_mask[0] = QAT_SERVICE_CRYPTO_A; - } else if (sc->sc_ae_num == 2 || sc->sc_ae_num == 4) { - qadr->qadr_cya_ring_tbl = - 
&qadr->qadr_master_ring_tbl[0]; - qadr->qadr_srv_mask[0] = QAT_SERVICE_CRYPTO_A; - qadr->qadr_cyb_ring_tbl = - &qadr->qadr_master_ring_tbl[1]; - qadr->qadr_srv_mask[1] = QAT_SERVICE_CRYPTO_B; - } - - return 0; -} - -int -qat_adm_ring_init(struct qat_softc *sc) -{ - struct qat_admin_rings *qadr = &sc->sc_admin_rings; - int error, i, j; - - error = qat_alloc_dmamem(sc, &qadr->qadr_dma, 1, PAGE_SIZE, PAGE_SIZE); - if (error) - return error; - - qadr->qadr_master_ring_tbl = qadr->qadr_dma.qdm_dma_vaddr; - - MPASS(sc->sc_ae_num * - sizeof(struct fw_init_ring_table) <= PAGE_SIZE); - - /* Initialize the Master Ring Table */ - for (i = 0; i < sc->sc_ae_num; i++) { - struct fw_init_ring_table *firt = - &qadr->qadr_master_ring_tbl[i]; - - for (j = 0; j < INIT_RING_TABLE_SZ; j++) { - struct fw_init_ring_params *firp = - &firt->firt_bulk_rings[j]; - - firp->firp_reserved = 0; - firp->firp_curr_weight = QAT_DEFAULT_RING_WEIGHT; - firp->firp_init_weight = QAT_DEFAULT_RING_WEIGHT; - firp->firp_ring_pvl = QAT_DEFAULT_PVL; - } - memset(firt->firt_ring_mask, 0, sizeof(firt->firt_ring_mask)); - } - - error = qat_etr_setup_ring(sc, 0, RING_NUM_ADMIN_TX, - ADMIN_RING_SIZE, sc->sc_hw.qhw_fw_req_size, - NULL, NULL, "admin_tx", &qadr->qadr_admin_tx); - if (error) - return error; - - error = qat_etr_setup_ring(sc, 0, RING_NUM_ADMIN_RX, - ADMIN_RING_SIZE, sc->sc_hw.qhw_fw_resp_size, - qat_adm_ring_intr, qadr, "admin_rx", &qadr->qadr_admin_rx); - if (error) - return error; - - /* - * Finally set up the service indices into the Master Ring Table - * and convenient ring table pointers for each service enabled. - * Only the Admin rings are initialized. - */ - error = qat_adm_ring_init_ring_table(sc); - if (error) - return error; - - /* - * Calculate the number of active AEs per QAT - * needed for Shram partitioning. 
- */ - for (i = 0; i < sc->sc_ae_num; i++) { - if (qadr->qadr_srv_mask[i]) - qadr->qadr_active_aes_per_accel++; - } - - return 0; -} - -static void -qat_adm_ring_build_slice_mask(uint16_t *slice_mask, uint32_t srv_mask, - uint32_t init_shram) -{ - uint16_t shram = 0, comn_req = 0; - - if (init_shram) - shram = COMN_REQ_SHRAM_INIT_REQUIRED; - - if (srv_mask & QAT_SERVICE_CRYPTO_A) - comn_req |= COMN_REQ_CY0_ONLY(shram); - if (srv_mask & QAT_SERVICE_CRYPTO_B) - comn_req |= COMN_REQ_CY1_ONLY(shram); - - *slice_mask = comn_req; -} - -static void -qat_adm_ring_build_shram_mask(uint64_t *shram_mask, uint32_t active_aes, - uint32_t ae) -{ - *shram_mask = 0; - - if (active_aes == 1) { - *shram_mask = ~(*shram_mask); - } else if (active_aes == 2) { - if (ae == 1) - *shram_mask = ((~(*shram_mask)) & 0xffffffff); - else - *shram_mask = ((~(*shram_mask)) & 0xffffffff00000000ull); - } else if (active_aes == 3) { - if (ae == 0) - *shram_mask = ((~(*shram_mask)) & 0x7fffff); - else if (ae == 1) - *shram_mask = ((~(*shram_mask)) & 0x3fffff800000ull); - else - *shram_mask = ((~(*shram_mask)) & 0xffffc00000000000ull); - } else { - panic("Only three services are supported in current version"); - } -} - -static int -qat_adm_ring_build_ring_table(struct qat_softc *sc, uint32_t ae) -{ - struct qat_admin_rings *qadr = &sc->sc_admin_rings; - struct fw_init_ring_table *tbl; - struct fw_init_ring_params *param; - uint8_t srv_mask = sc->sc_admin_rings.qadr_srv_mask[ae]; - - if ((srv_mask & QAT_SERVICE_CRYPTO_A)) { - tbl = qadr->qadr_cya_ring_tbl; - } else if ((srv_mask & QAT_SERVICE_CRYPTO_B)) { - tbl = qadr->qadr_cyb_ring_tbl; - } else { - device_printf(sc->sc_dev, - "Invalid execution engine %d\n", ae); - return EINVAL; - } - - param = &tbl->firt_bulk_rings[sc->sc_hw.qhw_ring_sym_tx]; - param->firp_curr_weight = QAT_HI_PRIO_RING_WEIGHT; - param->firp_init_weight = QAT_HI_PRIO_RING_WEIGHT; - FW_INIT_RING_MASK_SET(tbl, sc->sc_hw.qhw_ring_sym_tx); - - return 0; -} - -static int 
-qat_adm_ring_build_init_msg(struct qat_softc *sc, - struct fw_init_req *initmsg, enum fw_init_cmd_id cmd, uint32_t ae, - struct qat_accel_init_cb *cb) -{ - struct fw_init_set_ae_info_hdr *aehdr; - struct fw_init_set_ae_info *aeinfo; - struct fw_init_set_ring_info_hdr *ringhdr; - struct fw_init_set_ring_info *ringinfo; - int init_shram = 0, tgt_id, cluster_id; - uint32_t srv_mask; - - srv_mask = sc->sc_admin_rings.qadr_srv_mask[ - ae % sc->sc_ae_num]; - - memset(initmsg, 0, sizeof(struct fw_init_req)); - - qat_msg_req_type_populate(&initmsg->comn_hdr.arch_if, - ARCH_IF_REQ_QAT_FW_INIT, - sc->sc_admin_rings.qadr_admin_rx->qr_ring_id); - - qat_msg_cmn_mid_populate(&initmsg->comn_mid, cb, 0, 0); - - switch (cmd) { - case FW_INIT_CMD_SET_AE_INFO: - if (ae % sc->sc_ae_num == 0) - init_shram = 1; - if (ae >= sc->sc_ae_num) { - tgt_id = 1; - cluster_id = 1; - } else { - cluster_id = 0; - if (sc->sc_ae_mask) - tgt_id = 0; - else - tgt_id = 1; - } - aehdr = &initmsg->u.set_ae_info; - aeinfo = &initmsg->u1.set_ae_info; - - aehdr->init_cmd_id = cmd; - /* XXX that does not support sparse ae_mask */ - aehdr->init_trgt_id = ae; - aehdr->init_ring_cluster_id = cluster_id; - aehdr->init_qat_id = tgt_id; - - qat_adm_ring_build_slice_mask(&aehdr->init_slice_mask, srv_mask, - init_shram); - - qat_adm_ring_build_shram_mask(&aeinfo->init_shram_mask, - sc->sc_admin_rings.qadr_active_aes_per_accel, - ae % sc->sc_ae_num); - - break; - case FW_INIT_CMD_SET_RING_INFO: - ringhdr = &initmsg->u.set_ring_info; - ringinfo = &initmsg->u1.set_ring_info; - - ringhdr->init_cmd_id = cmd; - /* XXX that does not support sparse ae_mask */ - ringhdr->init_trgt_id = ae; - - /* XXX */ - qat_adm_ring_build_ring_table(sc, - ae % sc->sc_ae_num); - - ringhdr->init_ring_tbl_sz = sizeof(struct fw_init_ring_table); - - ringinfo->init_ring_table_ptr = - sc->sc_admin_rings.qadr_dma.qdm_dma_seg.ds_addr + - ((ae % sc->sc_ae_num) * - sizeof(struct fw_init_ring_table)); - - break; - default: - return ENOTSUP; - } - - 
return 0; -} - -static int -qat_adm_ring_send_init_msg_sync(struct qat_softc *sc, - enum fw_init_cmd_id cmd, uint32_t ae) -{ - struct fw_init_req initmsg; - struct qat_accel_init_cb cb; - int error; - - error = qat_adm_ring_build_init_msg(sc, &initmsg, cmd, ae, &cb); - if (error) - return error; - - error = qat_etr_put_msg(sc, sc->sc_admin_rings.qadr_admin_tx, - (uint32_t *)&initmsg); - if (error) - return error; - - error = tsleep(&cb, PZERO, "qat_init", hz * 3 / 2); - if (error) { - device_printf(sc->sc_dev, - "Timed out initialization firmware: %d\n", error); - return error; - } - if (cb.qaic_status) { - device_printf(sc->sc_dev, "Failed to initialize firmware\n"); - return EIO; - } - - return error; -} - -static int -qat_adm_ring_send_init_msg(struct qat_softc *sc, - enum fw_init_cmd_id cmd) -{ - struct qat_admin_rings *qadr = &sc->sc_admin_rings; - uint32_t error, ae; - - for (ae = 0; ae < sc->sc_ae_num; ae++) { - uint8_t srv_mask = qadr->qadr_srv_mask[ae]; - switch (cmd) { - case FW_INIT_CMD_SET_AE_INFO: - case FW_INIT_CMD_SET_RING_INFO: - if (!srv_mask) - continue; - break; - case FW_INIT_CMD_TRNG_ENABLE: - case FW_INIT_CMD_TRNG_DISABLE: - if (!(srv_mask & QAT_SERVICE_CRYPTO_A)) - continue; - break; - default: - return ENOTSUP; - } - - error = qat_adm_ring_send_init_msg_sync(sc, cmd, ae); - if (error) - return error; - } - - return 0; -} - -int -qat_adm_ring_send_init(struct qat_softc *sc) -{ - int error; - - error = qat_adm_ring_send_init_msg(sc, FW_INIT_CMD_SET_AE_INFO); - if (error) - return error; - - error = qat_adm_ring_send_init_msg(sc, FW_INIT_CMD_SET_RING_INFO); - if (error) - return error; - - return 0; -} - -static int -qat_adm_ring_intr(struct qat_softc *sc, void *arg, void *msg) -{ - struct arch_if_resp_hdr *resp; - struct fw_init_resp *init_resp; - struct qat_accel_init_cb *init_cb; - int handled = 0; - - resp = (struct arch_if_resp_hdr *)msg; - - switch (resp->resp_type) { - case ARCH_IF_REQ_QAT_FW_INIT: - init_resp = (struct fw_init_resp 
*)msg; - init_cb = (struct qat_accel_init_cb *) - (uintptr_t)init_resp->comn_resp.opaque_data; - init_cb->qaic_status = - __SHIFTOUT(init_resp->comn_resp.comn_status, - COMN_RESP_INIT_ADMIN_STATUS); - wakeup(init_cb); - break; - default: - device_printf(sc->sc_dev, - "unknown resp type %d\n", resp->resp_type); - break; - } - - return handled; -} - -static inline uint16_t -qat_hw15_get_comn_req_flags(uint8_t ae) -{ - if (ae == 0) { - return COMN_REQ_ORD_STRICT | COMN_REQ_PTR_TYPE_SGL | - COMN_REQ_AUTH0_SLICE_REQUIRED | - COMN_REQ_CIPHER0_SLICE_REQUIRED; - } else { - return COMN_REQ_ORD_STRICT | COMN_REQ_PTR_TYPE_SGL | - COMN_REQ_AUTH1_SLICE_REQUIRED | - COMN_REQ_CIPHER1_SLICE_REQUIRED; - } -} - -static uint32_t -qat_hw15_crypto_setup_cipher_desc(struct qat_crypto_desc *desc, - struct qat_session *qs, struct fw_cipher_hdr *cipher_hdr, - uint32_t hw_blk_offset, enum fw_slice next_slice) -{ - desc->qcd_cipher_blk_sz = HW_AES_BLK_SZ; - - cipher_hdr->state_padding_sz = 0; - cipher_hdr->key_sz = qs->qs_cipher_klen / 8; - - cipher_hdr->state_sz = desc->qcd_cipher_blk_sz / 8; - - cipher_hdr->next_id = next_slice; - cipher_hdr->curr_id = FW_SLICE_CIPHER; - cipher_hdr->offset = hw_blk_offset / 8; - cipher_hdr->resrvd = 0; - - return sizeof(struct hw_cipher_config) + qs->qs_cipher_klen; -} - -static void -qat_hw15_crypto_setup_cipher_config(const struct qat_crypto_desc *desc, - const struct qat_session *qs, const struct cryptop *crp, - struct hw_cipher_config *cipher_config) -{ - const uint8_t *key; - uint8_t *cipher_key; - - cipher_config->val = qat_crypto_load_cipher_session(desc, qs); - cipher_config->reserved = 0; - - cipher_key = (uint8_t *)(cipher_config + 1); - if (crp != NULL && crp->crp_cipher_key != NULL) - key = crp->crp_cipher_key; - else - key = qs->qs_cipher_key; - memcpy(cipher_key, key, qs->qs_cipher_klen); -} - -static uint32_t -qat_hw15_crypto_setup_auth_desc(struct qat_crypto_desc *desc, - struct qat_session *qs, struct fw_auth_hdr *auth_hdr, - uint32_t 
ctrl_blk_offset, uint32_t hw_blk_offset, - enum fw_slice next_slice) -{ - const struct qat_sym_hash_def *hash_def; - - (void)qat_crypto_load_auth_session(desc, qs, &hash_def); - - auth_hdr->next_id = next_slice; - auth_hdr->curr_id = FW_SLICE_AUTH; - auth_hdr->offset = hw_blk_offset / 8; - auth_hdr->resrvd = 0; - - auth_hdr->hash_flags = FW_AUTH_HDR_FLAG_NO_NESTED; - auth_hdr->u.inner_prefix_sz = 0; - auth_hdr->outer_prefix_sz = 0; - auth_hdr->final_sz = hash_def->qshd_alg->qshai_digest_len; - auth_hdr->inner_state1_sz = - roundup(hash_def->qshd_qat->qshqi_state1_len, 8); - auth_hdr->inner_res_sz = hash_def->qshd_alg->qshai_digest_len; - auth_hdr->inner_state2_sz = - roundup(hash_def->qshd_qat->qshqi_state2_len, 8); - auth_hdr->inner_state2_off = auth_hdr->offset + - ((sizeof(struct hw_auth_setup) + auth_hdr->inner_state1_sz) / 8); - - auth_hdr->outer_config_off = 0; - auth_hdr->outer_state1_sz = 0; - auth_hdr->outer_res_sz = 0; - auth_hdr->outer_prefix_off = 0; - - desc->qcd_auth_sz = hash_def->qshd_alg->qshai_sah->hashsize; - desc->qcd_state_storage_sz = (sizeof(struct hw_auth_counter) + - roundup(hash_def->qshd_alg->qshai_state_size, 8)) / 8; - desc->qcd_gcm_aad_sz_offset1 = desc->qcd_auth_offset + - sizeof(struct hw_auth_setup) + auth_hdr->inner_state1_sz + - AES_BLOCK_LEN; - desc->qcd_gcm_aad_sz_offset2 = ctrl_blk_offset + - offsetof(struct fw_auth_hdr, u.aad_sz); - - return sizeof(struct hw_auth_setup) + auth_hdr->inner_state1_sz + - auth_hdr->inner_state2_sz; -} - -static void -qat_hw15_crypto_setup_auth_setup(const struct qat_crypto_desc *desc, - const struct qat_session *qs, const struct cryptop *crp, - struct hw_auth_setup *auth_setup) -{ - const struct qat_sym_hash_def *hash_def; - const uint8_t *key; - uint8_t *state1, *state2; - uint32_t state_sz, state1_sz, state2_sz, state1_pad_len, state2_pad_len; - - auth_setup->auth_config.config = qat_crypto_load_auth_session(desc, qs, - &hash_def); - auth_setup->auth_config.reserved = 0; - - 
auth_setup->auth_counter.counter = - htobe32(hash_def->qshd_qat->qshqi_auth_counter); - auth_setup->auth_counter.reserved = 0; - - state1 = (uint8_t *)(auth_setup + 1); - state2 = state1 + roundup(hash_def->qshd_qat->qshqi_state1_len, 8); - switch (qs->qs_auth_algo) { - case HW_AUTH_ALGO_GALOIS_128: - qat_crypto_gmac_precompute(desc, qs->qs_cipher_key, - qs->qs_cipher_klen, hash_def, state2); - break; - case HW_AUTH_ALGO_SHA1: - state_sz = hash_def->qshd_alg->qshai_state_size; - state1_sz = roundup(hash_def->qshd_qat->qshqi_state1_len, 8); - state2_sz = roundup(hash_def->qshd_qat->qshqi_state2_len, 8); - if (qs->qs_auth_mode == HW_AUTH_MODE1) { - state1_pad_len = state1_sz - state_sz; - state2_pad_len = state2_sz - state_sz; - if (state1_pad_len > 0) - memset(state1 + state_sz, 0, state1_pad_len); - if (state2_pad_len > 0) - memset(state2 + state_sz, 0, state2_pad_len); - } - /* FALLTHROUGH */ - case HW_AUTH_ALGO_SHA256: - case HW_AUTH_ALGO_SHA384: - case HW_AUTH_ALGO_SHA512: - switch (qs->qs_auth_mode) { - case HW_AUTH_MODE0: - memcpy(state1, hash_def->qshd_alg->qshai_init_state, - state1_sz); - /* Override for mode 0 hashes. 
*/ - auth_setup->auth_counter.counter = 0; - break; - case HW_AUTH_MODE1: - if (crp != NULL && crp->crp_auth_key != NULL) - key = crp->crp_auth_key; - else - key = qs->qs_auth_key; - if (key != NULL) { - qat_crypto_hmac_precompute(desc, key, - qs->qs_auth_klen, hash_def, state1, state2); - } - break; - default: - panic("%s: unhandled auth mode %d", __func__, - qs->qs_auth_mode); - } - break; - default: - panic("%s: unhandled auth algorithm %d", __func__, - qs->qs_auth_algo); - } -} - -void -qat_hw15_crypto_setup_desc(struct qat_crypto *qcy, struct qat_session *qs, - struct qat_crypto_desc *desc) -{ - struct fw_cipher_hdr *cipher_hdr; - struct fw_auth_hdr *auth_hdr; - struct fw_la_bulk_req *req_cache; - struct hw_auth_setup *auth_setup; - struct hw_cipher_config *cipher_config; - uint32_t ctrl_blk_sz, ctrl_blk_offset, hw_blk_offset; - int i; - uint16_t la_cmd_flags; - uint8_t req_params_sz; - uint8_t *ctrl_blk_ptr, *hw_blk_ptr; - - ctrl_blk_sz = 0; - if (qs->qs_cipher_algo != HW_CIPHER_ALGO_NULL) - ctrl_blk_sz += sizeof(struct fw_cipher_hdr); - if (qs->qs_auth_algo != HW_AUTH_ALGO_NULL) - ctrl_blk_sz += sizeof(struct fw_auth_hdr); - - ctrl_blk_ptr = desc->qcd_content_desc; - ctrl_blk_offset = 0; - hw_blk_ptr = ctrl_blk_ptr + ctrl_blk_sz; - hw_blk_offset = 0; - - la_cmd_flags = 0; - req_params_sz = 0; - for (i = 0; i < MAX_FW_SLICE; i++) { - switch (desc->qcd_slices[i]) { - case FW_SLICE_CIPHER: - cipher_hdr = (struct fw_cipher_hdr *)(ctrl_blk_ptr + - ctrl_blk_offset); - cipher_config = (struct hw_cipher_config *)(hw_blk_ptr + - hw_blk_offset); - desc->qcd_cipher_offset = ctrl_blk_sz + hw_blk_offset; - hw_blk_offset += qat_hw15_crypto_setup_cipher_desc(desc, - qs, cipher_hdr, hw_blk_offset, - desc->qcd_slices[i + 1]); - qat_hw15_crypto_setup_cipher_config(desc, qs, NULL, - cipher_config); - ctrl_blk_offset += sizeof(struct fw_cipher_hdr); - req_params_sz += sizeof(struct fw_la_cipher_req_params); - break; - case FW_SLICE_AUTH: - auth_hdr = (struct fw_auth_hdr 
*)(ctrl_blk_ptr + - ctrl_blk_offset); - auth_setup = (struct hw_auth_setup *)(hw_blk_ptr + - hw_blk_offset); - desc->qcd_auth_offset = ctrl_blk_sz + hw_blk_offset; - hw_blk_offset += qat_hw15_crypto_setup_auth_desc(desc, - qs, auth_hdr, ctrl_blk_offset, hw_blk_offset, - desc->qcd_slices[i + 1]); - qat_hw15_crypto_setup_auth_setup(desc, qs, NULL, - auth_setup); - ctrl_blk_offset += sizeof(struct fw_auth_hdr); - req_params_sz += sizeof(struct fw_la_auth_req_params); - la_cmd_flags |= LA_FLAGS_RET_AUTH_RES; - /* no digest verify */ - break; - case FW_SLICE_DRAM_WR: - i = MAX_FW_SLICE; /* end of chain */ - break; - default: - MPASS(0); - break; - } - } - - desc->qcd_hdr_sz = ctrl_blk_offset / 8; - desc->qcd_hw_blk_sz = hw_blk_offset / 8; - - req_cache = (struct fw_la_bulk_req *)desc->qcd_req_cache; - qat_msg_req_type_populate( - &req_cache->comn_hdr.arch_if, - ARCH_IF_REQ_QAT_FW_LA, 0); - - if (qs->qs_auth_algo == HW_AUTH_ALGO_GALOIS_128) - la_cmd_flags |= LA_FLAGS_PROTO_GCM | LA_FLAGS_GCM_IV_LEN_FLAG; - else - la_cmd_flags |= LA_FLAGS_PROTO_NO; - - qat_msg_params_populate(req_cache, desc, req_params_sz, - la_cmd_flags, 0); - - bus_dmamap_sync(qs->qs_desc_mem.qdm_dma_tag, - qs->qs_desc_mem.qdm_dma_map, BUS_DMASYNC_PREWRITE); -} - -static void -qat_hw15_crypto_req_setkey(const struct qat_crypto_desc *desc, - const struct qat_session *qs, struct qat_sym_cookie *qsc, - struct fw_la_bulk_req *bulk_req, struct cryptop *crp) -{ - struct hw_auth_setup *auth_setup; - struct hw_cipher_config *cipher_config; - uint8_t *cdesc; - int i; - - cdesc = qsc->qsc_content_desc; - memcpy(cdesc, desc->qcd_content_desc, CONTENT_DESC_MAX_SIZE); - for (i = 0; i < MAX_FW_SLICE; i++) { - switch (desc->qcd_slices[i]) { - case FW_SLICE_CIPHER: - cipher_config = (struct hw_cipher_config *) - (cdesc + desc->qcd_cipher_offset); - qat_hw15_crypto_setup_cipher_config(desc, qs, crp, - cipher_config); - break; - case FW_SLICE_AUTH: - auth_setup = (struct hw_auth_setup *) - (cdesc + 
desc->qcd_auth_offset); - qat_hw15_crypto_setup_auth_setup(desc, qs, crp, - auth_setup); - break; - case FW_SLICE_DRAM_WR: - i = MAX_FW_SLICE; /* end of chain */ - break; - default: - MPASS(0); - } - } - - bulk_req->comn_hdr.content_desc_addr = qsc->qsc_content_desc_paddr; -} - -void -qat_hw15_crypto_setup_req_params(struct qat_crypto_bank *qcb, - struct qat_session *qs, struct qat_crypto_desc const *desc, - struct qat_sym_cookie *qsc, struct cryptop *crp) -{ - struct qat_sym_bulk_cookie *qsbc; - struct fw_la_bulk_req *bulk_req; - struct fw_la_cipher_req_params *cipher_req; - struct fw_la_auth_req_params *auth_req; - bus_addr_t digest_paddr; - uint8_t *aad_szp2, *req_params_ptr; - uint32_t aad_sz, *aad_szp1; - enum fw_la_cmd_id cmd_id = desc->qcd_cmd_id; - enum fw_slice next_slice; - - qsbc = &qsc->qsc_bulk_cookie; - - bulk_req = (struct fw_la_bulk_req *)qsbc->qsbc_msg; - memcpy(bulk_req, &desc->qcd_req_cache, QAT_HW15_SESSION_REQ_CACHE_SIZE); - bulk_req->comn_hdr.arch_if.resp_pipe_id = qcb->qcb_sym_rx->qr_ring_id; - bulk_req->comn_hdr.comn_req_flags = - qat_hw15_get_comn_req_flags(qcb->qcb_bank % 2); - bulk_req->comn_mid.src_data_addr = qsc->qsc_buffer_list_desc_paddr; - if (CRYPTO_HAS_OUTPUT_BUFFER(crp)) { - bulk_req->comn_mid.dest_data_addr = - qsc->qsc_obuffer_list_desc_paddr; - } else { - bulk_req->comn_mid.dest_data_addr = - qsc->qsc_buffer_list_desc_paddr; - } - bulk_req->req_params_addr = qsc->qsc_bulk_req_params_buf_paddr; - bulk_req->comn_ftr.next_request_addr = 0; - bulk_req->comn_mid.opaque_data = (uint64_t)(uintptr_t)qsc; - if (__predict_false(crp->crp_cipher_key != NULL || - crp->crp_auth_key != NULL)) { - qat_hw15_crypto_req_setkey(desc, qs, qsc, bulk_req, crp); - } - - digest_paddr = 0; - if (desc->qcd_auth_sz != 0) - digest_paddr = qsc->qsc_auth_res_paddr; - - req_params_ptr = qsbc->qsbc_req_params_buf; - memset(req_params_ptr, 0, sizeof(qsbc->qsbc_req_params_buf)); - - /* - * The SG list layout is a bit different for GCM and GMAC, it's simpler - * 
to handle those cases separately. - */ - if (qs->qs_auth_algo == HW_AUTH_ALGO_GALOIS_128) { - cipher_req = (struct fw_la_cipher_req_params *)req_params_ptr; - auth_req = (struct fw_la_auth_req_params *) - (req_params_ptr + sizeof(struct fw_la_cipher_req_params)); - - cipher_req->cipher_state_sz = desc->qcd_cipher_blk_sz / 8; - cipher_req->curr_id = FW_SLICE_CIPHER; - if (cmd_id == FW_LA_CMD_HASH_CIPHER || cmd_id == FW_LA_CMD_AUTH) - cipher_req->next_id = FW_SLICE_DRAM_WR; - else - cipher_req->next_id = FW_SLICE_AUTH; - cipher_req->state_address = qsc->qsc_iv_buf_paddr; - - if (cmd_id != FW_LA_CMD_AUTH) { - /* - * Don't fill out the cipher block if we're doing GMAC - * only. - */ - cipher_req->cipher_off = 0; - cipher_req->cipher_len = crp->crp_payload_length; - } - auth_req->curr_id = FW_SLICE_AUTH; - if (cmd_id == FW_LA_CMD_HASH_CIPHER || cmd_id == FW_LA_CMD_AUTH) - auth_req->next_id = FW_SLICE_CIPHER; - else - auth_req->next_id = FW_SLICE_DRAM_WR; - - auth_req->auth_res_address = digest_paddr; - auth_req->auth_res_sz = desc->qcd_auth_sz; - - auth_req->auth_off = 0; - auth_req->auth_len = crp->crp_payload_length; - - auth_req->hash_state_sz = - roundup2(crp->crp_aad_length, QAT_AES_GCM_AAD_ALIGN) >> 3; - auth_req->u1.aad_addr = crp->crp_aad_length > 0 ? - qsc->qsc_gcm_aad_paddr : 0; - - /* - * Update the hash state block if necessary. This only occurs - * when the AAD length changes between requests in a session and - * is synchronized by qat_process(). 
- */ - aad_sz = htobe32(crp->crp_aad_length); - aad_szp1 = (uint32_t *)( - __DECONST(uint8_t *, desc->qcd_content_desc) + - desc->qcd_gcm_aad_sz_offset1); - aad_szp2 = __DECONST(uint8_t *, desc->qcd_content_desc) + - desc->qcd_gcm_aad_sz_offset2; - if (__predict_false(*aad_szp1 != aad_sz)) { - *aad_szp1 = aad_sz; - *aad_szp2 = (uint8_t)roundup2(crp->crp_aad_length, - QAT_AES_GCM_AAD_ALIGN); - bus_dmamap_sync(qs->qs_desc_mem.qdm_dma_tag, - qs->qs_desc_mem.qdm_dma_map, - BUS_DMASYNC_PREWRITE); - } - } else { - cipher_req = (struct fw_la_cipher_req_params *)req_params_ptr; - if (cmd_id != FW_LA_CMD_AUTH) { - if (cmd_id == FW_LA_CMD_CIPHER || - cmd_id == FW_LA_CMD_HASH_CIPHER) - next_slice = FW_SLICE_DRAM_WR; - else - next_slice = FW_SLICE_AUTH; - - cipher_req->cipher_state_sz = - desc->qcd_cipher_blk_sz / 8; - - cipher_req->curr_id = FW_SLICE_CIPHER; - cipher_req->next_id = next_slice; - - if (crp->crp_aad_length == 0) { - cipher_req->cipher_off = 0; - } else if (crp->crp_aad == NULL) { - cipher_req->cipher_off = - crp->crp_payload_start - crp->crp_aad_start; - } else { - cipher_req->cipher_off = crp->crp_aad_length; - } - cipher_req->cipher_len = crp->crp_payload_length; - cipher_req->state_address = qsc->qsc_iv_buf_paddr; - } - if (cmd_id != FW_LA_CMD_CIPHER) { - if (cmd_id == FW_LA_CMD_AUTH) - auth_req = (struct fw_la_auth_req_params *) - req_params_ptr; - else - auth_req = (struct fw_la_auth_req_params *) - (cipher_req + 1); - if (cmd_id == FW_LA_CMD_HASH_CIPHER) - next_slice = FW_SLICE_CIPHER; - else - next_slice = FW_SLICE_DRAM_WR; - - auth_req->curr_id = FW_SLICE_AUTH; - auth_req->next_id = next_slice; - - auth_req->auth_res_address = digest_paddr; - auth_req->auth_res_sz = desc->qcd_auth_sz; - - auth_req->auth_len = - crp->crp_payload_length + crp->crp_aad_length; - auth_req->auth_off = 0; - - auth_req->hash_state_sz = 0; - auth_req->u1.prefix_addr = desc->qcd_hash_state_paddr + - desc->qcd_state_storage_sz; - } - } -} Index: sys/dev/qat/qat_hw15reg.h 
=================================================================== --- sys/dev/qat/qat_hw15reg.h +++ /dev/null @@ -1,635 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat_hw15reg.h,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2007-2013 Intel Corporation. All rights reserved. 
- * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */ - -/* $FreeBSD$ */ - -#ifndef _DEV_PCI_QAT_HW15REG_H_ -#define _DEV_PCI_QAT_HW15REG_H_ - -/* Default message size in bytes */ -#define FW_REQ_DEFAULT_SZ_HW15 64 -#define FW_RESP_DEFAULT_SZ_HW15 64 - -#define ADMIN_RING_SIZE 256 -#define RING_NUM_ADMIN_TX 0 -#define RING_NUM_ADMIN_RX 1 - -/* -------------------------------------------------------------------------- */ -/* accel */ - -#define ARCH_IF_FLAGS_VALID_FLAG __BIT(7) -#define ARCH_IF_FLAGS_RESP_RING_TYPE __BITS(4, 3) -#define ARCH_IF_FLAGS_RESP_RING_TYPE_SHIFT 3 -#define ARCH_IF_FLAGS_RESP_RING_TYPE_SCRATCH (0 << ARCH_IF_FLAGS_RESP_RING_TYPE_SHIFT) -#define ARCH_IF_FLAGS_RESP_RING_TYPE_NN (1 << ARCH_IF_FLAGS_RESP_RING_TYPE_SHIFT) -#define ARCH_IF_FLAGS_RESP_RING_TYPE_ET (2 << ARCH_IF_FLAGS_RESP_RING_TYPE_SHIFT) -#define ARCH_IF_FLAGS_RESP_TYPE __BITS(2, 0) -#define ARCH_IF_FLAGS_RESP_TYPE_SHIFT 0 -#define ARCH_IF_FLAGS_RESP_TYPE_A (0 << ARCH_IF_FLAGS_RESP_TYPE_SHIFT) -#define ARCH_IF_FLAGS_RESP_TYPE_B (1 << ARCH_IF_FLAGS_RESP_TYPE_SHIFT) -#define ARCH_IF_FLAGS_RESP_TYPE_C (2 << ARCH_IF_FLAGS_RESP_TYPE_SHIFT) -#define ARCH_IF_FLAGS_RESP_TYPE_S (3 << ARCH_IF_FLAGS_RESP_TYPE_SHIFT) - -enum arch_if_req { - ARCH_IF_REQ_NULL, /* NULL request type */ - - /* QAT-AE Service Request Type IDs - 01 to 20 */ - ARCH_IF_REQ_QAT_FW_INIT, /* QAT-FW Initialization Request */ - ARCH_IF_REQ_QAT_FW_ADMIN, /* QAT-FW Administration Request */ - ARCH_IF_REQ_QAT_FW_PKE, /* QAT-FW PKE Request */ - ARCH_IF_REQ_QAT_FW_LA, /* QAT-FW Lookaside Request */ - ARCH_IF_REQ_QAT_FW_IPSEC, /* QAT-FW IPSec Request */ - ARCH_IF_REQ_QAT_FW_SSL, /* QAT-FW SSL Request */ - ARCH_IF_REQ_QAT_FW_DMA, /* QAT-FW DMA Request */ - ARCH_IF_REQ_QAT_FW_STORAGE, /* QAT-FW Storage Request */ - ARCH_IF_REQ_QAT_FW_COMPRESS, /* QAT-FW Compression Request */ - ARCH_IF_REQ_QAT_FW_PATMATCH, /* QAT-FW Pattern Matching Request */ - - /* IP Service (Range Match and Exception) Blocks Request Type IDs 21 - 30 */ - ARCH_IF_REQ_RM_FLOW_MISS = 21, /* RM flow miss request 
*/ - ARCH_IF_REQ_RM_FLOW_TIMER_EXP, /* RM flow timer exp Request */ - ARCH_IF_REQ_IP_SERVICES_RFC_LOOKUP_UPDATE, /* RFC Lookup request */ - ARCH_IF_REQ_IP_SERVICES_CONFIG_UPDATE, /* Config Update request */ - ARCH_IF_REQ_IP_SERVICES_FCT_CONFIG, /* FCT Config request */ - ARCH_IF_REQ_IP_SERVICES_NEXT_HOP_TIMER_EXPIRY, /* NH Timer expiry request */ - ARCH_IF_REQ_IP_SERVICES_EXCEPTION, /* Exception processing request */ - ARCH_IF_REQ_IP_SERVICES_STACK_DRIVER, /* Send to SD request */ - ARCH_IF_REQ_IP_SERVICES_ACTION_HANDLER, /* Send to AH request */ - ARCH_IF_REQ_IP_SERVICES_EVENT_HANDLER, /* Send to EH request */ - ARCH_IF_REQ_DELIMITER /* End delimiter */ -}; - -struct arch_if_req_hdr { - uint8_t resp_dest_id; - /* Opaque identifier passed from the request to response to allow the - * response handler to perform any further processing */ - uint8_t resp_pipe_id; - /* Response pipe to write the response associated with this request to */ - uint8_t req_type; - /* Definition of the service described by the request */ - uint8_t flags; - /* Request and response control flags */ -}; - -struct arch_if_resp_hdr { - uint8_t dest_id; - /* Opaque identifier passed from the request to response to allow the - * response handler to perform any further processing */ - uint8_t serv_id; - /* Definition of the service id generating the response */ - uint8_t resp_type; - /* Definition of the service described by the request */ - uint8_t flags; - /* Request and response control flags */ -}; - -struct fw_comn_req_hdr { - struct arch_if_req_hdr arch_if; - /* Common arch fields used by all ICP interface requests. Remaining - * fields are specific to the common QAT FW service. */ - uint16_t comn_req_flags; - /* Flags used to describe common processing required by the request and - * the meaning of parameters in it i.e. differentiating between a buffer - * descriptor and a flat buffer pointer in the source (src) and destination - * (dest) data address fields.
Full definition of the fields is given - * below */ - uint8_t content_desc_params_sz; - /* Size of the content descriptor parameters in quad words. These - * parameters describe the session setup configuration info for the - * slices that this request relies upon i.e. the configuration word and - * cipher key needed by the cipher slice if there is a request for cipher - * processing. The format of the parameters is contained in icp_qat_hw.h - * and varies depending on the algorithm and mode being used. It is the - * client's responsibility to ensure this structure is correctly packed */ - uint8_t content_desc_hdr_sz; - /* Size of the content descriptor header in quad words. This information - * is read into the QAT AE xfr registers */ - uint64_t content_desc_addr; - /* Address of the content descriptor containing both the content header - * the size of which is defined by content_desc_hdr_sz followed by the - * content parameters whose size is described by content_desc_params_sz - */ -}; - -struct fw_comn_req_mid { - uint64_t opaque_data; - /* Opaque data passed unmodified from the request to response messages - * by firmware (fw) */ - uint64_t src_data_addr; - /* Generic definition of the source data supplied to the QAT AE. The - * common flags are used to further describe the attributes of this - * field */ - uint64_t dest_data_addr; - /* Generic definition of the destination data supplied to the QAT AE.
- * The common flags are used to further describe the attributes of this - * field */ -}; - -union fw_comn_req_ftr { - uint64_t next_request_addr; - /* Overloaded field, for stateful requests, this field is the pointer to - next request descriptor */ - struct { - uint32_t src_length; - /* Length of source flat buffer in case src buffer type is flat */ - uint32_t dst_length; - /* Length of destination flat buffer in case dst buffer type is flat */ - } s; -}; - -union fw_comn_error { - struct { - uint8_t resrvd; /* 8 bit reserved field */ - uint8_t comn_err_code; /* 8 bit common error code */ - } s; - /* Structure which is used for non-compression responses */ - - struct { - uint8_t xlat_err_code; /* 8 bit translator error field */ - uint8_t cmp_err_code; /* 8 bit compression error field */ - } s1; - /* Structure which is used for compression responses */ -}; - -struct fw_comn_resp_hdr { - struct arch_if_resp_hdr arch_if; - /* Common arch fields used by all ICP interface response messages. The - * remaining fields are specific to the QAT FW */ - union fw_comn_error comn_error; - /* This field is overloaded to allow for one 8 bit common error field - * or two 8 bit error fields from compression and translator */ - uint8_t comn_status; - /* Status field which specifies which slice(s) report an error */ - uint8_t serv_cmd_id; - /* For services that define multiple commands this field represents the - * command.
If only 1 command is supported then this field will be 0 */ - uint64_t opaque_data; - /* Opaque data passed from the request to the response message */ -}; - - -#define RING_MASK_TABLE_ENTRY_LOG_SZ (5) - -#define FW_INIT_RING_MASK_SET(table, id) \ - table->firt_ring_mask[id >> RING_MASK_TABLE_ENTRY_LOG_SZ] =\ - table->firt_ring_mask[id >> RING_MASK_TABLE_ENTRY_LOG_SZ] | \ - (1 << (id & 0x1f)) - -struct fw_init_ring_params { - uint8_t firp_curr_weight; /* Current ring weight (working copy), - * has to be equal to init_weight */ - uint8_t firp_init_weight; /* Initial ring weight: -1 ... 0 - * -1 is equal to FF, -2 is equal to FE, - * the weighting uses negative logic - * where FF means poll the ring once, - * -2 is poll the ring twice, - * 0 is poll the ring 255 times */ - uint8_t firp_ring_pvl; /* Ring Privilege Level. */ - uint8_t firp_reserved; /* Reserved field which must be set - * to 0 by the client */ -}; - -#define INIT_RING_TABLE_SZ 128 -#define INIT_RING_TABLE_LW_SZ 4 - -struct fw_init_ring_table { - struct fw_init_ring_params firt_bulk_rings[INIT_RING_TABLE_SZ]; - /* array of ring parameters */ - uint32_t firt_ring_mask[INIT_RING_TABLE_LW_SZ]; - /* Structure to hold the bit masks for - * 128 rings. 
*/ -}; - -struct fw_init_set_ae_info_hdr { - uint16_t init_slice_mask; /* Init time flags to set the ownership of the slices */ - uint16_t resrvd; /* Reserved field and must be set to 0 by the client */ - uint8_t init_qat_id; /* Init time qat id described in the request */ - uint8_t init_ring_cluster_id; /* Init time ring cluster Id */ - uint8_t init_trgt_id; /* Init time target AE id described in the request */ - uint8_t init_cmd_id; /* Init time command that is described in the request */ -}; - -struct fw_init_set_ae_info { - uint64_t init_shram_mask; /* Init time shram mask to set the page ownership in page pool of AE*/ - uint64_t resrvd; /* Reserved field and must be set to 0 by the client */ -}; - -struct fw_init_set_ring_info_hdr { - uint32_t resrvd; /* Reserved field and must be set to 0 by the client */ - uint16_t init_ring_tbl_sz; /* Init time information to state size of the ring table */ - uint8_t init_trgt_id; /* Init time target AE id described in the request */ - uint8_t init_cmd_id; /* Init time command that is described in the request */ -}; - -struct fw_init_set_ring_info { - uint64_t init_ring_table_ptr; /* Pointer to weighting information for 128 rings */ - uint64_t resrvd; /* Reserved field and must be set to 0 by the client */ -}; - -struct fw_init_trng_hdr { - uint32_t resrvd; /* Reserved field and must be set to 0 by the client */ - union { - uint8_t resrvd; /* Reserved field set to 0 if cmd type is trng disable */ - uint8_t init_trng_cfg_sz; /* Size of the trng config word in QW*/ - } u; - uint8_t resrvd1; /* Reserved field and must be set to 0 by the client */ - uint8_t init_trgt_id; /* Init time target AE id described in the request */ - uint8_t init_cmd_id; /* Init time command that is described in the request */ -}; - -struct fw_init_trng { - union { - uint64_t resrvd; /* Reserved field set to 0 if cmd type is trng disable */ - uint64_t init_trng_cfg_ptr; /* Pointer to TRNG Slice config word*/ - } u; - uint64_t resrvd; /* Reserved field
and must be set to 0 by the client */ -}; - -struct fw_init_req { - struct fw_comn_req_hdr comn_hdr; /* Common request header */ - union { - struct fw_init_set_ae_info_hdr set_ae_info; - /* INIT SET_AE_INFO request header structure */ - struct fw_init_set_ring_info_hdr set_ring_info; - /* INIT SET_RING_INFO request header structure */ - struct fw_init_trng_hdr init_trng; - /* INIT TRNG ENABLE/DISABLE request header structure */ - } u; - struct fw_comn_req_mid comn_mid; /* Common request middle section */ - union { - struct fw_init_set_ae_info set_ae_info; - /* INIT SET_AE_INFO request data structure */ - struct fw_init_set_ring_info set_ring_info; - /* INIT SET_RING_INFO request data structure */ - struct fw_init_trng init_trng; - /* INIT TRNG ENABLE/DISABLE request data structure */ - } u1; -}; - -enum fw_init_cmd_id { - FW_INIT_CMD_SET_AE_INFO, /* Setup AE Info command type */ - FW_INIT_CMD_SET_RING_INFO, /* Setup Ring Info command type */ - FW_INIT_CMD_TRNG_ENABLE, /* TRNG Enable command type */ - FW_INIT_CMD_TRNG_DISABLE, /* TRNG Disable command type */ - FW_INIT_CMD_DELIMITER /* Delimiter type */ -}; - -struct fw_init_resp { - struct fw_comn_resp_hdr comn_resp; /* Common interface response */ - uint8_t resrvd[64 - sizeof(struct fw_comn_resp_hdr)]; - /* XXX FW_RESP_DEFAULT_SZ_HW15 */ - /* Reserved padding out to the default response size */ -}; - -/* -------------------------------------------------------------------------- */ -/* look aside */ - -#define COMN_REQ_ORD UINT16_C(0x8000) -#define COMN_REQ_ORD_SHIFT 15 -#define COMN_REQ_ORD_NONE (0 << COMN_REQ_ORD_SHIFT) -#define COMN_REQ_ORD_STRICT (1 << COMN_REQ_ORD_SHIFT) -#define COMN_REQ_PTR_TYPE UINT16_C(0x4000) -#define COMN_REQ_PTR_TYPE_SHIFT 14 -#define COMN_REQ_PTR_TYPE_FLAT (0 << COMN_REQ_PTR_TYPE_SHIFT) -#define COMN_REQ_PTR_TYPE_SGL (1 << COMN_REQ_PTR_TYPE_SHIFT) -#define COMN_REQ_RESERVED UINT16_C(0x2000) -#define COMN_REQ_SHRAM_INIT UINT16_C(0x1000) -#define COMN_REQ_SHRAM_INIT_SHIFT 12 -#define 
COMN_REQ_SHRAM_INIT_REQUIRED (1 << COMN_REQ_SHRAM_INIT_SHIFT) -#define COMN_REQ_REGEX_SLICE UINT16_C(0x0800) -#define COMN_REQ_REGEX_SLICE_SHIFT 11 -#define COMN_REQ_REGEX_SLICE_REQUIRED (1 << COMN_REQ_REGEX_SLICE_SHIFT) -#define COMN_REQ_XLAT_SLICE UINT16_C(0x0400) -#define COMN_REQ_XLAT_SLICE_SHIFT 10 -#define COMN_REQ_XLAT_SLICE_REQUIRED (1 << COMN_REQ_XLAT_SLICE_SHIFT) -#define COMN_REQ_CPR_SLICE UINT16_C(0x0200) -#define COMN_REQ_CPR_SLICE_SHIFT 9 -#define COMN_REQ_CPR_SLICE_REQUIRED (1 << COMN_REQ_CPR_SLICE_SHIFT) -#define COMN_REQ_BULK_SLICE UINT16_C(0x0100) -#define COMN_REQ_BULK_SLICE_SHIFT 8 -#define COMN_REQ_BULK_SLICE_REQUIRED (1 << COMN_REQ_BULK_SLICE_SHIFT) -#define COMN_REQ_STORAGE_SLICE UINT16_C(0x0080) -#define COMN_REQ_STORAGE_SLICE_SHIFT 7 -#define COMN_REQ_STORAGE_SLICE_REQUIRED (1 << COMN_REQ_STORAGE_SLICE_SHIFT) -#define COMN_REQ_RND_SLICE UINT16_C(0x0040) -#define COMN_REQ_RND_SLICE_SHIFT 6 -#define COMN_REQ_RND_SLICE_REQUIRED (1 << COMN_REQ_RND_SLICE_SHIFT) -#define COMN_REQ_PKE1_SLICE UINT16_C(0x0020) -#define COMN_REQ_PKE1_SLICE_SHIFT 5 -#define COMN_REQ_PKE1_SLICE_REQUIRED (1 << COMN_REQ_PKE1_SLICE_SHIFT) -#define COMN_REQ_PKE0_SLICE UINT16_C(0x0010) -#define COMN_REQ_PKE0_SLICE_SHIFT 4 -#define COMN_REQ_PKE0_SLICE_REQUIRED (1 << COMN_REQ_PKE0_SLICE_SHIFT) -#define COMN_REQ_AUTH1_SLICE UINT16_C(0x0008) -#define COMN_REQ_AUTH1_SLICE_SHIFT 3 -#define COMN_REQ_AUTH1_SLICE_REQUIRED (1 << COMN_REQ_AUTH1_SLICE_SHIFT) -#define COMN_REQ_AUTH0_SLICE UINT16_C(0x0004) -#define COMN_REQ_AUTH0_SLICE_SHIFT 2 -#define COMN_REQ_AUTH0_SLICE_REQUIRED (1 << COMN_REQ_AUTH0_SLICE_SHIFT) -#define COMN_REQ_CIPHER1_SLICE UINT16_C(0x0002) -#define COMN_REQ_CIPHER1_SLICE_SHIFT 1 -#define COMN_REQ_CIPHER1_SLICE_REQUIRED (1 << COMN_REQ_CIPHER1_SLICE_SHIFT) -#define COMN_REQ_CIPHER0_SLICE UINT16_C(0x0001) -#define COMN_REQ_CIPHER0_SLICE_SHIFT 0 -#define COMN_REQ_CIPHER0_SLICE_REQUIRED (1 << COMN_REQ_CIPHER0_SLICE_SHIFT) - -#define COMN_REQ_CY0_ONLY(shram) \ - 
COMN_REQ_ORD_STRICT | \ - COMN_REQ_PTR_TYPE_FLAT | \ - (shram) | \ - COMN_REQ_RND_SLICE_REQUIRED | \ - COMN_REQ_PKE0_SLICE_REQUIRED | \ - COMN_REQ_AUTH0_SLICE_REQUIRED | \ - COMN_REQ_CIPHER0_SLICE_REQUIRED; -#define COMN_REQ_CY1_ONLY(shram) \ - COMN_REQ_ORD_STRICT | \ - COMN_REQ_PTR_TYPE_FLAT | \ - (shram) | \ - COMN_REQ_PKE1_SLICE_REQUIRED | \ - COMN_REQ_AUTH1_SLICE_REQUIRED | \ - COMN_REQ_CIPHER1_SLICE_REQUIRED; - -#define COMN_RESP_CRYPTO_STATUS __BIT(7) -#define COMN_RESP_PKE_STATUS __BIT(6) -#define COMN_RESP_CMP_STATUS __BIT(5) -#define COMN_RESP_XLAT_STATUS __BIT(4) -#define COMN_RESP_PM_STATUS __BIT(3) -#define COMN_RESP_INIT_ADMIN_STATUS __BIT(2) - -#define COMN_STATUS_FLAG_OK 0 -#define COMN_STATUS_FLAG_ERROR 1 - -struct fw_la_ssl_tls_common { - uint8_t out_len; /* Number of bytes of key material to output. */ - uint8_t label_len; /* Number of bytes of label for SSL and bytes - * for TLS key generation */ -}; - -struct fw_la_mgf_common { - uint8_t hash_len; - /* Number of bytes of hash output by the QAT per iteration */ - uint8_t seed_len; - /* Number of bytes of seed provided in src buffer for MGF1 */ -}; - -struct fw_cipher_hdr { - uint8_t state_sz; - /* State size in quad words of the cipher algorithm used in this session. - * Set to zero if the algorithm doesn't provide any state */ - uint8_t offset; - /* Quad word offset from the content descriptor parameters address i.e. - * (content_address + (cd_hdr_sz << 3)) to the parameters for the cipher - * processing */ - uint8_t curr_id; - /* Initialised with the cipher slice type */ - uint8_t next_id; - /* Set to the next slice to pass the ciphered data through. - * Set to ICP_QAT_FW_SLICE_DRAM_WR if the data is not to go through - * anymore slices after cipher */ - uint16_t resrvd; - /* Reserved padding byte to bring the struct to the word boundary. MUST be - * set to 0 */ - uint8_t state_padding_sz; - /* State padding size in quad words. Set to 0 if no padding is required.
*/ - uint8_t key_sz; - /* Key size in quad words of the cipher algorithm used in this session */ -}; - -struct fw_auth_hdr { - uint8_t hash_flags; - /* General flags defining the processing to perform. 0 is normal processing - * and 1 means there is a nested hash processing loop to go through */ - uint8_t offset; - /* Quad word offset from the content descriptor parameters address to the - * parameters for the auth processing */ - uint8_t curr_id; - /* Initialised with the auth slice type */ - uint8_t next_id; - /* Set to the next slice to pass data through. - * Set to ICP_QAT_FW_SLICE_DRAM_WR if the data is not to go through - * anymore slices after auth */ - union { - uint8_t inner_prefix_sz; - /* Size in bytes of the inner prefix data */ - uint8_t aad_sz; - /* Size in bytes of padded AAD data to prefix to the packet for CCM - * or GCM processing */ - } u; - - uint8_t outer_prefix_sz; - /* Size in bytes of outer prefix data */ - uint8_t final_sz; - /* Size in bytes of digest to be returned to the client if requested */ - uint8_t inner_res_sz; - /* Size in bytes of the digest from the inner hash algorithm */ - uint8_t resrvd; - /* This field is unused, assumed value is zero. */ - uint8_t inner_state1_sz; - /* Size in bytes of inner hash state1 data. Must be a qword multiple */ - uint8_t inner_state2_off; - /* Quad word offset from the content descriptor parameters pointer to the - * inner state2 value */ - uint8_t inner_state2_sz; - /* Size in bytes of inner hash state2 data. Must be a qword multiple */ - uint8_t outer_config_off; - /* Quad word offset from the content descriptor parameters pointer to the - * outer configuration information */ - uint8_t outer_state1_sz; - /* Size in bytes of the outer state1 value */ - uint8_t outer_res_sz; - /* Size in bytes of digest from the outer auth algorithm */ - uint8_t outer_prefix_off; - /* Quad word offset from the start of the inner prefix data to the outer - * prefix information. 
Should equal the rounded inner prefix size, converted - * to qwords */ -}; - -#define FW_AUTH_HDR_FLAG_DO_NESTED 1 -#define FW_AUTH_HDR_FLAG_NO_NESTED 0 - -struct fw_la_comn_req { - union { - uint16_t la_flags; - /* Definition of the common LA processing flags used for the - * bulk processing */ - union { - struct fw_la_ssl_tls_common ssl_tls_common; - /* For TLS or SSL Key Generation, this field is - * overloaded with ssl_tls common information */ - struct fw_la_mgf_common mgf_common; - /* For MGF Key Generation, this field is overloaded with - mgf information */ - } u; - } u; - - union { - uint8_t resrvd; - /* If not used by a request this field must be set to 0 */ - uint8_t tls_seed_len; - /* Byte length of the TLS seed */ - uint8_t req_params_blk_sz; - /* For bulk processing this field represents the request - * parameters block size */ - uint8_t trng_cfg_sz; - /* This field is used for TRNG_ENABLE requests to indicate the - * size of the TRNG Slice configuration word. Size is in QW's */ - } u1; - uint8_t la_cmd_id; - /* Definition of the LA command defined by this request */ -}; - -#define LA_FLAGS_GCM_IV_LEN_FLAG __BIT(9) -#define LA_FLAGS_PROTO __BITS(8, 6) -#define LA_FLAGS_PROTO_SNOW_3G __SHIFTIN(4, LA_FLAGS_PROTO) -#define LA_FLAGS_PROTO_GCM __SHIFTIN(2, LA_FLAGS_PROTO) -#define LA_FLAGS_PROTO_CCM __SHIFTIN(1, LA_FLAGS_PROTO) -#define LA_FLAGS_PROTO_NO __SHIFTIN(0, LA_FLAGS_PROTO) -#define LA_FLAGS_DIGEST_IN_BUFFER __BIT(5) -#define LA_FLAGS_CMP_AUTH_RES __BIT(4) -#define LA_FLAGS_RET_AUTH_RES __BIT(3) -#define LA_FLAGS_UPDATE_STATE __BIT(2) -#define LA_FLAGS_PARTIAL __BITS(1, 0) - -struct fw_la_bulk_req { - struct fw_comn_req_hdr comn_hdr; - /* Common request header */ - uint32_t flow_id; - /* Field used by Firmware to limit the number of stateful requests - * for a session being processed at a given point of time */ - struct fw_la_comn_req comn_la_req; - /* Common LA request parameters */ - struct fw_comn_req_mid comn_mid; - /* Common request middle section
*/
-	uint64_t req_params_addr;
-	/* Memory address of the request parameters */
-	union fw_comn_req_ftr comn_ftr;
-	/* Common request footer */
-};
-
-struct fw_la_resp {
-	struct fw_comn_resp_hdr comn_resp;
-	uint8_t resrvd[64 - sizeof(struct fw_comn_resp_hdr)];
-	/* FW_RESP_DEFAULT_SZ_HW15 */
-};
-
-struct fw_la_cipher_req_params {
-	uint8_t resrvd;
-	/* Reserved field and assumed set to 0 */
-	uint8_t cipher_state_sz;
-	/* Number of quad words of state data for the cipher algorithm */
-	uint8_t curr_id;
-	/* Initialised with the cipher slice type */
-	uint8_t next_id;
-	/* Set to the next slice to pass the ciphered data through.
-	 * Set to ICP_QAT_FW_SLICE_DRAM_WR if the data is not to go through
-	 * anymore slices after cipher */
-	uint16_t resrvd1;
-	/* Reserved field, should be set to zero*/
-	uint8_t resrvd2;
-	/* Reserved field, should be set to zero*/
-	uint8_t next_offset;
-	/* Offset in bytes to the next request parameter block */
-	uint32_t cipher_off;
-	/* Byte offset from the start of packet to the cipher data region */
-	uint32_t cipher_len;
-	/* Byte length of the cipher data region */
-	uint64_t state_address;
-	/* Flat buffer address in memory of the cipher state information. Unused
-	 * if the state size is 0 */
-};
-
-struct fw_la_auth_req_params {
-	uint8_t auth_res_sz;
-	/* Size in quad words of digest information to validate */
-	uint8_t hash_state_sz;
-	/* Number of quad words of inner and outer hash prefix data to process */
-	uint8_t curr_id;
-	/* Initialised with the auth slice type */
-	uint8_t next_id;
-	/* Set to the next slice to pass the auth data through.
-	 * Set to ICP_QAT_FW_SLICE_NULL for in-place auth-only requests
-	 * Set to ICP_QAT_FW_SLICE_DRAM_WR for all other request types
-	 * if the data is not to go through anymore slices after auth */
-	union {
-		uint16_t resrvd;
-		/* Reserved field should be set to zero for bulk services */
-		uint16_t tls_secret_len;
-		/* Length of Secret information for TLS. */
-	} u;
-	uint8_t resrvd;
-	/* Reserved field, should be set to zero*/
-	uint8_t next_offset;
-	/* offset in bytes to the next request parameter block */
-	uint32_t auth_off;
-	/* Byte offset from the start of packet to the auth data region */
-	uint32_t auth_len;
-	/* Byte length of the auth data region */
-	union {
-		uint64_t prefix_addr;
-		/* Address of the prefix information */
-		uint64_t aad_addr;
-		/* Address of the AAD info in DRAM. Used for the CCM and GCM
-		 * protocols */
-	} u1;
-	uint64_t auth_res_address;
-	/* Address of the auth result information to validate or the location to
-	 * writeback the digest information to */
-};
-
-#endif
Index: sys/dev/qat/qat_hw15var.h
===================================================================
--- sys/dev/qat/qat_hw15var.h
+++ /dev/null
@@ -1,105 +0,0 @@
-/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */
-/* $NetBSD: qat_hw15var.h,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */
-
-/*
- * Copyright (c) 2019 Internet Initiative Japan, Inc.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
- * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
- * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
- * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- */
-
-/*
- * Copyright(c) 2007-2013 Intel Corporation. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *   * Redistributions of source code must retain the above copyright
- *     notice, this list of conditions and the following disclaimer.
- *   * Redistributions in binary form must reproduce the above copyright
- *     notice, this list of conditions and the following disclaimer in
- *     the documentation and/or other materials provided with the
- *     distribution.
- *   * Neither the name of Intel Corporation nor the names of its
- *     contributors may be used to endorse or promote products derived
- *     from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-/* $FreeBSD$ */
-
-#ifndef _DEV_PCI_QAT_HW15VAR_H_
-#define _DEV_PCI_QAT_HW15VAR_H_
-
-CTASSERT(HASH_CONTENT_DESC_SIZE >=
-    sizeof(struct fw_auth_hdr) + MAX_HASH_SETUP_BLK_SZ);
-CTASSERT(CIPHER_CONTENT_DESC_SIZE >=
-    sizeof(struct fw_cipher_hdr) + MAX_CIPHER_SETUP_BLK_SZ);
-CTASSERT(CONTENT_DESC_MAX_SIZE >=
-    roundup(HASH_CONTENT_DESC_SIZE + CIPHER_CONTENT_DESC_SIZE,
-    QAT_OPTIMAL_ALIGN));
-CTASSERT(QAT_SYM_REQ_PARAMS_SIZE_PADDED >=
-    roundup(sizeof(struct fw_la_cipher_req_params) +
-    sizeof(struct fw_la_auth_req_params), QAT_OPTIMAL_ALIGN));
-
-/* length of the 5 long words of the request that are stored in the session
- * This is rounded up to 32 in order to use the fast memcopy function */
-#define QAT_HW15_SESSION_REQ_CACHE_SIZE (32)
-
-void qat_msg_req_type_populate(struct arch_if_req_hdr *,
-    enum arch_if_req, uint32_t);
-void qat_msg_cmn_hdr_populate(struct fw_la_bulk_req *, bus_addr_t,
-    uint8_t, uint8_t, uint16_t, uint32_t);
-void qat_msg_service_cmd_populate(struct fw_la_bulk_req *,
-    enum fw_la_cmd_id, uint16_t);
-void qat_msg_cmn_mid_populate(struct fw_comn_req_mid *, void *,
-    uint64_t , uint64_t);
-void qat_msg_req_params_populate(struct fw_la_bulk_req *, bus_addr_t,
-    uint8_t);
-void qat_msg_cmn_footer_populate(union fw_comn_req_ftr *, uint64_t);
-void qat_msg_params_populate(struct fw_la_bulk_req *,
-    struct qat_crypto_desc *, uint8_t, uint16_t,
-    uint16_t);
-
-
-int qat_adm_ring_init(struct qat_softc *);
-int qat_adm_ring_send_init(struct qat_softc *);
-
-void qat_hw15_crypto_setup_desc(struct qat_crypto *,
-    struct qat_session *, struct qat_crypto_desc *);
-void qat_hw15_crypto_setup_req_params(struct qat_crypto_bank *,
-    struct qat_session *, struct qat_crypto_desc const *,
-    struct qat_sym_cookie *, struct cryptop *);
-
-#endif
Index: sys/dev/qat/qat_hw17.c
===================================================================
--- sys/dev/qat/qat_hw17.c
+++ /dev/null
@@ -1,674 +0,0 @@
-/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */
-/* $NetBSD: qat_hw17.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */
-
-/*
- * Copyright (c) 2019 Internet Initiative Japan, Inc.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
- * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
- * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
- * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- */
-
-/*
- * Copyright(c) 2014 Intel Corporation.
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *   * Redistributions of source code must retain the above copyright
- *     notice, this list of conditions and the following disclaimer.
- *   * Redistributions in binary form must reproduce the above copyright
- *     notice, this list of conditions and the following disclaimer in
- *     the documentation and/or other materials provided with the
- *     distribution.
- *   * Neither the name of Intel Corporation nor the names of its
- *     contributors may be used to endorse or promote products derived
- *     from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include
-__FBSDID("$FreeBSD$");
-#if 0
-__KERNEL_RCSID(0, "$NetBSD: qat_hw17.c,v 1.1 2019/11/20 09:37:46 hikaru Exp $");
-#endif
-
-#include
-#include
-#include
-#include
-
-#include
-
-#include
-
-#include
-#include
-
-#include "qatreg.h"
-#include "qat_hw17reg.h"
-#include "qatvar.h"
-#include "qat_hw17var.h"
-
-int qat_adm_mailbox_put_msg_sync(struct qat_softc *, uint32_t,
-    void *, void *);
-int qat_adm_mailbox_send(struct qat_softc *,
-    struct fw_init_admin_req *, struct fw_init_admin_resp *);
-int qat_adm_mailbox_send_init_me(struct qat_softc *);
-int qat_adm_mailbox_send_hb_timer(struct qat_softc *);
-int qat_adm_mailbox_send_fw_status(struct qat_softc *);
-int qat_adm_mailbox_send_constants(struct qat_softc *);
-
-int
-qat_adm_mailbox_init(struct qat_softc *sc)
-{
-	uint64_t addr;
-	int error;
-	struct qat_dmamem *qdm;
-
-	error = qat_alloc_dmamem(sc, &sc->sc_admin_comms.qadc_dma, 1,
-	    PAGE_SIZE, PAGE_SIZE);
-	if (error)
-		return error;
-
-	qdm = &sc->sc_admin_comms.qadc_const_tbl_dma;
-	error = qat_alloc_dmamem(sc, qdm, 1, PAGE_SIZE, PAGE_SIZE);
-	if (error)
-		return error;
-
-	memcpy(qdm->qdm_dma_vaddr,
-	    mailbox_const_tab, sizeof(mailbox_const_tab));
-
-	bus_dmamap_sync(qdm->qdm_dma_tag, qdm->qdm_dma_map,
-	    BUS_DMASYNC_PREWRITE);
-
-	error = qat_alloc_dmamem(sc, &sc->sc_admin_comms.qadc_hb_dma, 1,
-	    PAGE_SIZE, PAGE_SIZE);
-	if (error)
-		return error;
-
-	addr = (uint64_t)sc->sc_admin_comms.qadc_dma.qdm_dma_seg.ds_addr;
-	qat_misc_write_4(sc, ADMINMSGUR, addr >> 32);
-	qat_misc_write_4(sc, ADMINMSGLR, addr);
-
-	return 0;
-}
-
-int
-qat_adm_mailbox_put_msg_sync(struct qat_softc *sc, uint32_t ae,
-    void *in, void *out)
-{
-	struct qat_dmamem *qdm;
-	uint32_t mailbox;
-	bus_size_t mb_offset = MAILBOX_BASE + (ae * MAILBOX_STRIDE);
-	int offset = ae * ADMINMSG_LEN * 2;
-	int times, received;
-	uint8_t *buf = (uint8_t *)sc->sc_admin_comms.qadc_dma.qdm_dma_vaddr + offset;
-
-	mailbox = qat_misc_read_4(sc, mb_offset);
-	if (mailbox == 1)
-		return EAGAIN;
-
-	qdm = &sc->sc_admin_comms.qadc_dma;
-	memcpy(buf, in, ADMINMSG_LEN);
-	bus_dmamap_sync(qdm->qdm_dma_tag, qdm->qdm_dma_map,
-	    BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
-	qat_misc_write_4(sc, mb_offset, 1);
-
-	received = 0;
-	for (times = 0; times < 50; times++) {
-		DELAY(20000);
-		if (qat_misc_read_4(sc, mb_offset) == 0) {
-			received = 1;
-			break;
-		}
-	}
-	if (received) {
-		bus_dmamap_sync(qdm->qdm_dma_tag, qdm->qdm_dma_map,
-		    BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
-		memcpy(out, buf + ADMINMSG_LEN, ADMINMSG_LEN);
-	} else {
-		device_printf(sc->sc_dev,
-		    "Failed to send admin msg to accelerator\n");
-	}
-
-	return received ? 0 : EFAULT;
-}
-
-int
-qat_adm_mailbox_send(struct qat_softc *sc,
-    struct fw_init_admin_req *req, struct fw_init_admin_resp *resp)
-{
-	int error;
-	uint32_t mask;
-	uint8_t ae;
-
-	for (ae = 0, mask = sc->sc_ae_mask; mask; ae++, mask >>= 1) {
-		if (!(mask & 1))
-			continue;
-
-		error = qat_adm_mailbox_put_msg_sync(sc, ae, req, resp);
-		if (error)
-			return error;
-		if (resp->init_resp_hdr.status) {
-			device_printf(sc->sc_dev,
-			    "Failed to send admin msg: cmd %d\n",
-			    req->init_admin_cmd_id);
-			return EFAULT;
-		}
-	}
-
-	return 0;
-}
-
-int
-qat_adm_mailbox_send_init_me(struct qat_softc *sc)
-{
-	struct fw_init_admin_req req;
-	struct fw_init_admin_resp resp;
-
-	memset(&req, 0, sizeof(req));
-	req.init_admin_cmd_id = FW_INIT_ME;
-
-	return qat_adm_mailbox_send(sc, &req, &resp);
-}
-
-int
-qat_adm_mailbox_send_hb_timer(struct qat_softc *sc)
-{
-	struct fw_init_admin_req req;
-	struct fw_init_admin_resp resp;
-
-	memset(&req, 0, sizeof(req));
-	req.init_admin_cmd_id = FW_HEARTBEAT_TIMER_SET;
-
-	req.init_cfg_ptr = sc->sc_admin_comms.qadc_hb_dma.qdm_dma_seg.ds_addr;
-	req.heartbeat_ticks =
-	    sc->sc_hw.qhw_clock_per_sec / 1000 * QAT_HB_INTERVAL;
-
-	return qat_adm_mailbox_send(sc, &req, &resp);
-}
-
-int
-qat_adm_mailbox_send_fw_status(struct qat_softc *sc)
-{
-	int error;
-	struct fw_init_admin_req req;
-	struct fw_init_admin_resp resp;
-
-	memset(&req, 0, sizeof(req));
-	req.init_admin_cmd_id = FW_STATUS_GET;
-
-	error = qat_adm_mailbox_send(sc, &req, &resp);
-	if (error)
-		return error;
-
-	return 0;
-}
-
-int
-qat_adm_mailbox_send_constants(struct qat_softc *sc)
-{
-	struct fw_init_admin_req req;
-	struct fw_init_admin_resp resp;
-
-	memset(&req, 0, sizeof(req));
-	req.init_admin_cmd_id = FW_CONSTANTS_CFG;
-
-	req.init_cfg_sz = 1024;
-	req.init_cfg_ptr =
-	    sc->sc_admin_comms.qadc_const_tbl_dma.qdm_dma_seg.ds_addr;
-
-	return qat_adm_mailbox_send(sc, &req, &resp);
-}
-
-int
-qat_adm_mailbox_send_init(struct qat_softc *sc)
-{
-	int error;
-
-	error = qat_adm_mailbox_send_init_me(sc);
-	if (error)
-		return error;
-
-	error = qat_adm_mailbox_send_hb_timer(sc);
-	if (error)
-		return error;
-
-	error = qat_adm_mailbox_send_fw_status(sc);
-	if (error)
-		return error;
-
-	return qat_adm_mailbox_send_constants(sc);
-}
-
-int
-qat_arb_init(struct qat_softc *sc)
-{
-	uint32_t arb_cfg = 0x1 << 31 | 0x4 << 4 | 0x1;
-	uint32_t arb, i;
-	const uint32_t *thd_2_arb_cfg;
-
-	/* Service arb configured for 32 bytes responses and
-	 * ring flow control check enabled. */
-	for (arb = 0; arb < MAX_ARB; arb++)
-		qat_arb_sarconfig_write_4(sc, arb, arb_cfg);
-
-	/* Map worker threads to service arbiters */
-	sc->sc_hw.qhw_get_arb_mapping(sc, &thd_2_arb_cfg);
-
-	if (!thd_2_arb_cfg)
-		return EINVAL;
-
-	for (i = 0; i < sc->sc_hw.qhw_num_engines; i++)
-		qat_arb_wrk_2_ser_map_write_4(sc, i, *(thd_2_arb_cfg + i));
-
-	return 0;
-}
-
-int
-qat_set_ssm_wdtimer(struct qat_softc *sc)
-{
-	uint32_t timer;
-	u_int mask;
-	int i;
-
-	timer = sc->sc_hw.qhw_clock_per_sec / 1000 * QAT_SSM_WDT;
-	for (i = 0, mask = sc->sc_accel_mask; mask; i++, mask >>= 1) {
-		if (!(mask & 1))
-			continue;
-		qat_misc_write_4(sc, SSMWDT(i), timer);
-		qat_misc_write_4(sc, SSMWDTPKE(i), timer);
-	}
-
-	return 0;
-}
-
-int
-qat_check_slice_hang(struct qat_softc *sc)
-{
-	int handled = 0;
-
-	return handled;
-}
-
-static uint32_t
-qat_hw17_crypto_setup_cipher_ctrl(struct qat_crypto_desc *desc,
-    struct qat_session *qs, uint32_t cd_blk_offset,
-    struct fw_la_bulk_req *req_tmpl, enum fw_slice next_slice)
-{
-	struct fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl =
-	    (struct fw_cipher_cd_ctrl_hdr *)&req_tmpl->cd_ctrl;
-
-	desc->qcd_cipher_blk_sz = HW_AES_BLK_SZ;
-	desc->qcd_cipher_offset = cd_blk_offset;
-
-	cipher_cd_ctrl->cipher_state_sz = desc->qcd_cipher_blk_sz >> 3;
-	cipher_cd_ctrl->cipher_key_sz = qs->qs_cipher_klen >> 3;
-	cipher_cd_ctrl->cipher_cfg_offset = cd_blk_offset >> 3;
-	FW_COMN_CURR_ID_SET(cipher_cd_ctrl, FW_SLICE_CIPHER);
-	FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, next_slice);
-
-	return roundup(sizeof(struct hw_cipher_config) + qs->qs_cipher_klen, 8);
-}
-
-static void
-qat_hw17_crypto_setup_cipher_cdesc(const struct qat_crypto_desc *desc,
-    const struct qat_session *qs, const struct cryptop *crp,
-    union hw_cipher_algo_blk *cipher)
-{
-	const uint8_t *key;
-
-	cipher->max.cipher_config.val =
-	    qat_crypto_load_cipher_session(desc, qs);
-	if (crp != NULL && crp->crp_cipher_key != NULL)
-		key = crp->crp_cipher_key;
-	else
-		key = qs->qs_cipher_key;
-	memcpy(cipher->max.key, key, qs->qs_cipher_klen);
-}
-
-static uint32_t
-qat_hw17_crypto_setup_auth_ctrl(struct qat_crypto_desc *desc,
-    struct qat_session *qs, uint32_t cd_blk_offset,
-    struct fw_la_bulk_req *req_tmpl, enum fw_slice next_slice)
-{
-	struct fw_auth_cd_ctrl_hdr *auth_cd_ctrl =
-	    (struct fw_auth_cd_ctrl_hdr *)&req_tmpl->cd_ctrl;
-	struct qat_sym_hash_def const *hash_def;
-
-	(void)qat_crypto_load_auth_session(desc, qs, &hash_def);
-
-	auth_cd_ctrl->hash_cfg_offset = cd_blk_offset >> 3;
-	auth_cd_ctrl->hash_flags = FW_AUTH_HDR_FLAG_NO_NESTED;
-	auth_cd_ctrl->inner_res_sz = hash_def->qshd_alg->qshai_digest_len;
-	auth_cd_ctrl->final_sz = hash_def->qshd_alg->qshai_sah->hashsize;
-
-	auth_cd_ctrl->inner_state1_sz =
-	    roundup(hash_def->qshd_qat->qshqi_state1_len, 8);
-	auth_cd_ctrl->inner_state2_sz =
-	    roundup(hash_def->qshd_qat->qshqi_state2_len, 8);
-	auth_cd_ctrl->inner_state2_offset =
-	    auth_cd_ctrl->hash_cfg_offset +
-	    ((sizeof(struct hw_auth_setup) +
-	    auth_cd_ctrl->inner_state1_sz) >> 3);
-
-	FW_COMN_CURR_ID_SET(auth_cd_ctrl, FW_SLICE_AUTH);
-	FW_COMN_NEXT_ID_SET(auth_cd_ctrl, next_slice);
-
-	desc->qcd_auth_sz = auth_cd_ctrl->final_sz;
-	desc->qcd_auth_offset = cd_blk_offset;
-	desc->qcd_gcm_aad_sz_offset1 =
-	    cd_blk_offset + offsetof(union hw_auth_algo_blk, max.state1) +
-	    auth_cd_ctrl->inner_state1_sz + AES_BLOCK_LEN;
-
-	return roundup(auth_cd_ctrl->inner_state1_sz +
-	    auth_cd_ctrl->inner_state2_sz +
-	    sizeof(struct hw_auth_setup), 8);
-}
-
-static void
-qat_hw17_crypto_setup_auth_cdesc(const struct qat_crypto_desc *desc,
-    const struct qat_session *qs, const struct cryptop *crp,
-    union hw_auth_algo_blk *auth)
-{
-	struct qat_sym_hash_def const *hash_def;
-	uint8_t inner_state1_sz, *state1, *state2;
-	const uint8_t *key;
-
-	auth->max.inner_setup.auth_config.config =
-	    qat_crypto_load_auth_session(desc, qs, &hash_def);
-	auth->max.inner_setup.auth_counter.counter =
-	    htobe32(hash_def->qshd_qat->qshqi_auth_counter);
-	inner_state1_sz = roundup(hash_def->qshd_qat->qshqi_state1_len, 8);
-
-	state1 = auth->max.state1;
-	state2 = auth->max.state1 + inner_state1_sz;
-	switch (qs->qs_auth_algo) {
-	case HW_AUTH_ALGO_GALOIS_128:
-		key = NULL;
-		if (crp != NULL && crp->crp_cipher_key != NULL)
-			key = crp->crp_cipher_key;
-		else if (qs->qs_cipher_key != NULL)
-			key = qs->qs_cipher_key;
-		if (key != NULL) {
-			qat_crypto_gmac_precompute(desc, key,
-			    qs->qs_cipher_klen, hash_def, state2);
-		}
-		break;
-	case HW_AUTH_ALGO_SHA1:
-	case HW_AUTH_ALGO_SHA256:
-	case HW_AUTH_ALGO_SHA384:
-	case HW_AUTH_ALGO_SHA512:
-		switch (qs->qs_auth_mode) {
-		case HW_AUTH_MODE0:
-			memcpy(state1, hash_def->qshd_alg->qshai_init_state,
-			    inner_state1_sz);
-			/* Override for mode 0 hashes. */
-			auth->max.inner_setup.auth_counter.counter = 0;
-			break;
-		case HW_AUTH_MODE1:
-			if (crp != NULL && crp->crp_auth_key != NULL)
-				key = crp->crp_auth_key;
-			else
-				key = qs->qs_auth_key;
-			if (key != NULL) {
-				qat_crypto_hmac_precompute(desc, key,
-				    qs->qs_auth_klen, hash_def, state1, state2);
-			}
-			break;
-		default:
-			panic("%s: unhandled auth mode %d", __func__,
-			    qs->qs_auth_mode);
-		}
-		break;
-	default:
-		panic("%s: unhandled auth algorithm %d", __func__,
-		    qs->qs_auth_algo);
-	}
-}
-
-static void
-qat_hw17_init_comn_req_hdr(struct qat_crypto_desc *desc,
-    struct fw_la_bulk_req *req)
-{
-	union fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
-	struct fw_comn_req_hdr *req_hdr = &req->comn_hdr;
-
-	req_hdr->service_cmd_id = desc->qcd_cmd_id;
-	req_hdr->hdr_flags = FW_COMN_VALID;
-	req_hdr->service_type = FW_COMN_REQ_CPM_FW_LA;
-	req_hdr->comn_req_flags = FW_COMN_FLAGS_BUILD(
-	    COMN_CD_FLD_TYPE_64BIT_ADR, COMN_PTR_TYPE_SGL);
-	req_hdr->serv_specif_flags = 0;
-	cd_pars->s.content_desc_addr = desc->qcd_desc_paddr;
-}
-
-void
-qat_hw17_crypto_setup_desc(struct qat_crypto *qcy, struct qat_session *qs,
-    struct qat_crypto_desc *desc)
-{
-	union hw_cipher_algo_blk *cipher;
-	union hw_auth_algo_blk *auth;
-	struct fw_la_bulk_req *req_tmpl;
-	struct fw_comn_req_hdr *req_hdr;
-	uint32_t cd_blk_offset = 0;
-	int i;
-	uint8_t *cd_blk_ptr;
-
-	req_tmpl = (struct fw_la_bulk_req *)desc->qcd_req_cache;
-	req_hdr = &req_tmpl->comn_hdr;
-	cd_blk_ptr = desc->qcd_content_desc;
-
-	memset(req_tmpl, 0, sizeof(struct fw_la_bulk_req));
-	qat_hw17_init_comn_req_hdr(desc, req_tmpl);
-
-	for (i = 0; i < MAX_FW_SLICE; i++) {
-		switch (desc->qcd_slices[i]) {
-		case FW_SLICE_CIPHER:
-			cipher = (union hw_cipher_algo_blk *)(cd_blk_ptr +
-			    cd_blk_offset);
-			cd_blk_offset += qat_hw17_crypto_setup_cipher_ctrl(desc,
-			    qs, cd_blk_offset, req_tmpl,
-			    desc->qcd_slices[i + 1]);
-			qat_hw17_crypto_setup_cipher_cdesc(desc, qs, NULL,
-			    cipher);
-			break;
-		case FW_SLICE_AUTH:
-			auth = (union hw_auth_algo_blk *)(cd_blk_ptr +
-			    cd_blk_offset);
-			cd_blk_offset += qat_hw17_crypto_setup_auth_ctrl(desc,
-			    qs, cd_blk_offset, req_tmpl,
-			    desc->qcd_slices[i + 1]);
-			qat_hw17_crypto_setup_auth_cdesc(desc, qs, NULL, auth);
-			req_hdr->serv_specif_flags |= FW_LA_RET_AUTH_RES;
-			break;
-		case FW_SLICE_DRAM_WR:
-			i = MAX_FW_SLICE; /* end of chain */
-			break;
-		default:
-			MPASS(0);
-			break;
-		}
-	}
-
-	req_tmpl->cd_pars.s.content_desc_params_sz =
-	    roundup(cd_blk_offset, QAT_OPTIMAL_ALIGN) >> 3;
-	if (qs->qs_auth_algo == HW_AUTH_ALGO_GALOIS_128)
-		req_hdr->serv_specif_flags |=
-		    FW_LA_PROTO_GCM | FW_LA_GCM_IV_LEN_12_OCTETS;
-
-	bus_dmamap_sync(qs->qs_desc_mem.qdm_dma_tag,
-	    qs->qs_desc_mem.qdm_dma_map, BUS_DMASYNC_PREWRITE);
-}
-
-static void
-qat_hw17_crypto_req_setkey(const struct qat_crypto_desc *desc,
-    const struct qat_session *qs, struct qat_sym_cookie *qsc,
-    struct fw_la_bulk_req *bulk_req, const struct cryptop *crp)
-{
-	union hw_auth_algo_blk *auth;
-	union hw_cipher_algo_blk *cipher;
-	uint8_t *cdesc;
-	int i;
-
-	cdesc = qsc->qsc_content_desc;
-	memcpy(cdesc, desc->qcd_content_desc, CONTENT_DESC_MAX_SIZE);
-	for (i = 0; i < MAX_FW_SLICE; i++) {
-		switch (desc->qcd_slices[i]) {
-		case FW_SLICE_CIPHER:
-			cipher = (union hw_cipher_algo_blk *)
-			    (cdesc + desc->qcd_cipher_offset);
-			qat_hw17_crypto_setup_cipher_cdesc(desc, qs, crp,
-			    cipher);
-			break;
-		case FW_SLICE_AUTH:
-			auth = (union hw_auth_algo_blk *)
-			    (cdesc + desc->qcd_auth_offset);
-			qat_hw17_crypto_setup_auth_cdesc(desc, qs, crp, auth);
-			break;
-		case FW_SLICE_DRAM_WR:
-			i = MAX_FW_SLICE; /* end of chain */
-			break;
-		default:
-			MPASS(0);
-		}
-	}
-
-	bulk_req->cd_pars.s.content_desc_addr = qsc->qsc_content_desc_paddr;
-}
-
-void
-qat_hw17_crypto_setup_req_params(struct qat_crypto_bank *qcb __unused,
-    struct qat_session *qs, const struct qat_crypto_desc *desc,
-    struct qat_sym_cookie *qsc, struct cryptop *crp)
-{
-	struct qat_sym_bulk_cookie *qsbc;
-	struct fw_la_bulk_req *bulk_req;
-	struct fw_la_cipher_req_params *cipher_param;
-	struct fw_la_auth_req_params *auth_param;
-	bus_addr_t digest_paddr;
-	uint32_t aad_sz, *aad_szp;
-	uint8_t *req_params_ptr;
-	enum fw_la_cmd_id cmd_id = desc->qcd_cmd_id;
-
-	qsbc = &qsc->qsc_bulk_cookie;
-	bulk_req = (struct fw_la_bulk_req *)qsbc->qsbc_msg;
-
-	memcpy(bulk_req, desc->qcd_req_cache, sizeof(struct fw_la_bulk_req));
-	bulk_req->comn_mid.opaque_data = (uint64_t)(uintptr_t)qsc;
-	bulk_req->comn_mid.src_data_addr = qsc->qsc_buffer_list_desc_paddr;
-	if (CRYPTO_HAS_OUTPUT_BUFFER(crp)) {
-		bulk_req->comn_mid.dest_data_addr =
-		    qsc->qsc_obuffer_list_desc_paddr;
-	} else {
-		bulk_req->comn_mid.dest_data_addr =
-		    qsc->qsc_buffer_list_desc_paddr;
-	}
-	if (__predict_false(crp->crp_cipher_key != NULL ||
-	    crp->crp_auth_key != NULL))
-		qat_hw17_crypto_req_setkey(desc, qs, qsc, bulk_req, crp);
-
-	digest_paddr = 0;
-	if (desc->qcd_auth_sz != 0)
-		digest_paddr = qsc->qsc_auth_res_paddr;
-
-	req_params_ptr = (uint8_t *)&bulk_req->serv_specif_rqpars;
-	cipher_param = (struct fw_la_cipher_req_params *)req_params_ptr;
-	auth_param = (struct fw_la_auth_req_params *)
-	    (req_params_ptr + sizeof(struct fw_la_cipher_req_params));
-
-	cipher_param->u.s.cipher_IV_ptr = qsc->qsc_iv_buf_paddr;
-
-	/*
-	 * The SG list layout is a bit different for GCM and GMAC, it's simpler
-	 * to handle those cases separately.
-	 */
-	if (qs->qs_auth_algo == HW_AUTH_ALGO_GALOIS_128) {
-		if (cmd_id != FW_LA_CMD_AUTH) {
-			/*
-			 * Don't fill out the cipher block if we're doing GMAC
-			 * only.
-			 */
-			cipher_param->cipher_offset = 0;
-			cipher_param->cipher_length = crp->crp_payload_length;
-		}
-		auth_param->auth_off = 0;
-		auth_param->auth_len = crp->crp_payload_length;
-		auth_param->auth_res_addr = digest_paddr;
-		auth_param->auth_res_sz = desc->qcd_auth_sz;
-		auth_param->u1.aad_adr =
-		    crp->crp_aad_length > 0 ? qsc->qsc_gcm_aad_paddr : 0;
-		auth_param->u2.aad_sz =
-		    roundup2(crp->crp_aad_length, QAT_AES_GCM_AAD_ALIGN);
-		auth_param->hash_state_sz = auth_param->u2.aad_sz >> 3;
-
-		/*
-		 * Update the hash state block if necessary.  This only occurs
-		 * when the AAD length changes between requests in a session and
-		 * is synchronized by qat_process().
-		 */
-		aad_sz = htobe32(crp->crp_aad_length);
-		aad_szp = (uint32_t *)(
-		    __DECONST(uint8_t *, desc->qcd_content_desc) +
-		    desc->qcd_gcm_aad_sz_offset1);
-		if (__predict_false(*aad_szp != aad_sz)) {
-			*aad_szp = aad_sz;
-			bus_dmamap_sync(qs->qs_desc_mem.qdm_dma_tag,
-			    qs->qs_desc_mem.qdm_dma_map,
-			    BUS_DMASYNC_PREWRITE);
-		}
-	} else {
-		if (cmd_id != FW_LA_CMD_AUTH) {
-			if (crp->crp_aad_length == 0) {
-				cipher_param->cipher_offset = 0;
-			} else if (crp->crp_aad == NULL) {
-				cipher_param->cipher_offset =
-				    crp->crp_payload_start - crp->crp_aad_start;
-			} else {
-				cipher_param->cipher_offset =
-				    crp->crp_aad_length;
-			}
-			cipher_param->cipher_length = crp->crp_payload_length;
-		}
-		if (cmd_id != FW_LA_CMD_CIPHER) {
-			auth_param->auth_off = 0;
-			auth_param->auth_len =
-			    crp->crp_payload_length + crp->crp_aad_length;
-			auth_param->auth_res_addr = digest_paddr;
-			auth_param->auth_res_sz = desc->qcd_auth_sz;
-			auth_param->u1.aad_adr = 0;
-			auth_param->u2.aad_sz = 0;
-			auth_param->hash_state_sz = 0;
-		}
-	}
-}
Index: sys/dev/qat/qat_hw17reg.h
===================================================================
--- sys/dev/qat/qat_hw17reg.h
+++ /dev/null
@@ -1,2460 +0,0 @@
-/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */
-/* $NetBSD: qat_hw17reg.h,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */
-
-/*
- * Copyright (c) 2019 Internet Initiative Japan, Inc.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
- * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
- * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
- * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- */
-
-/*
- * Copyright(c) 2014 Intel Corporation.
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *   * Redistributions of source code must retain the above copyright
- *     notice, this list of conditions and the following disclaimer.
- *   * Redistributions in binary form must reproduce the above copyright
- *     notice, this list of conditions and the following disclaimer in
- *     the documentation and/or other materials provided with the
- *     distribution.
- *   * Neither the name of Intel Corporation nor the names of its
- *     contributors may be used to endorse or promote products derived
- *     from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-/* $FreeBSD$ */
-
-#ifndef _DEV_PCI_QAT_HW17REG_H_
-#define _DEV_PCI_QAT_HW17REG_H_
-
-/* Default message size in bytes */
-#define FW_REQ_DEFAULT_SZ_HW17 128
-#define FW_RESP_DEFAULT_SZ_HW17 32
-
-/* -------------------------------------------------------------------------- */
-/* accel */
-
-enum fw_init_admin_cmd_id {
-	FW_INIT_ME = 0,
-	FW_TRNG_ENABLE = 1,
-	FW_TRNG_DISABLE = 2,
-	FW_CONSTANTS_CFG = 3,
-	FW_STATUS_GET = 4,
-	FW_COUNTERS_GET = 5,
-	FW_LOOPBACK = 6,
-	FW_HEARTBEAT_SYNC = 7,
-	FW_HEARTBEAT_GET = 8,
-	FW_COMP_CAPABILITY_GET = 9,
-	FW_CRYPTO_CAPABILITY_GET = 10,
-	FW_HEARTBEAT_TIMER_SET = 13,
-};
-
-enum fw_init_admin_resp_status {
-	FW_INIT_RESP_STATUS_SUCCESS = 0,
-	FW_INIT_RESP_STATUS_FAIL = 1,
-	FW_INIT_RESP_STATUS_UNSUPPORTED = 4
-};
-
-struct fw_init_admin_req {
-	uint16_t init_cfg_sz;
-	uint8_t resrvd1;
-	uint8_t init_admin_cmd_id;
-	uint32_t resrvd2;
-	uint64_t opaque_data;
-	uint64_t init_cfg_ptr;
-
-	union {
-		struct {
-			uint16_t ibuf_size_in_kb;
-			uint16_t resrvd3;
-		};
-		uint32_t heartbeat_ticks;
-	};
-
-	uint32_t resrvd4;
-};
-
-struct fw_init_admin_resp_hdr {
-	uint8_t flags;
-	uint8_t resrvd1;
-	uint8_t status;
-	uint8_t init_admin_cmd_id;
-};
-
-enum fw_init_admin_init_flag {
-	FW_INIT_FLAG_PKE_DISABLED = 0
-};
-
-struct fw_init_admin_fw_capability_resp_hdr {
-	uint16_t reserved;
-	uint8_t status;
-	uint8_t init_admin_cmd_id;
-};
-
-struct fw_init_admin_capability_resp {
-	struct fw_init_admin_fw_capability_resp_hdr init_resp_hdr;
-	uint32_t extended_features;
-	uint64_t opaque_data;
-	union {
-		struct {
-			uint16_t compression_algos;
-			uint16_t checksum_algos;
-			uint32_t deflate_capabilities;
-			uint32_t resrvd1;
-			uint32_t lzs_capabilities;
-		} compression;
-		struct {
-			uint32_t cipher_algos;
-			uint32_t hash_algos;
-			uint16_t keygen_algos;
-			uint16_t other;
-			uint16_t public_key_algos;
-			uint16_t prime_algos;
-		} crypto;
-	};
-};
-
-struct fw_init_admin_resp_pars {
-	union {
-		uint32_t resrvd1[4];
-		struct {
-			uint32_t version_patch_num;
-			uint8_t context_id;
-			uint8_t ae_id;
-			uint16_t resrvd1;
-			uint64_t resrvd2;
-		} s1;
-		struct {
-			uint64_t req_rec_count;
-			uint64_t resp_sent_count;
-		} s2;
-	} u;
-};
-
-struct fw_init_admin_hb_cnt {
-	uint16_t resp_heartbeat_cnt;
-	uint16_t req_heartbeat_cnt;
-};
-
-#define QAT_NUM_THREADS 8
-
-struct fw_init_admin_hb_stats {
-	struct fw_init_admin_hb_cnt stats[QAT_NUM_THREADS];
-};
-
-struct fw_init_admin_resp {
-	struct fw_init_admin_resp_hdr init_resp_hdr;
-	union {
-		uint32_t resrvd2;
-		struct {
-			uint16_t version_minor_num;
-			uint16_t version_major_num;
-		} s;
-	} u;
-	uint64_t opaque_data;
-	struct fw_init_admin_resp_pars init_resp_pars;
-};
-
-#define FW_COMN_HEARTBEAT_OK 0
-#define FW_COMN_HEARTBEAT_BLOCKED 1
-#define FW_COMN_HEARTBEAT_FLAG_BITPOS 0
-#define FW_COMN_HEARTBEAT_FLAG_MASK 0x1
-#define FW_COMN_STATUS_RESRVD_FLD_MASK 0xFE
-#define FW_COMN_HEARTBEAT_HDR_FLAG_GET(hdr_t) \
-	FW_COMN_HEARTBEAT_FLAG_GET(hdr_t.flags)
-
-#define FW_COMN_HEARTBEAT_HDR_FLAG_SET(hdr_t, val) \
-	FW_COMN_HEARTBEAT_FLAG_SET(hdr_t, val)
-
-#define FW_COMN_HEARTBEAT_FLAG_GET(flags) \
-	QAT_FIELD_GET(flags, \
-	    FW_COMN_HEARTBEAT_FLAG_BITPOS, \
-	    FW_COMN_HEARTBEAT_FLAG_MASK)
-
-/* -------------------------------------------------------------------------- */
-
-/* Big assumptions that both bitpos and mask are constants */
-#define FIELD_SET(flags, val, bitpos, mask) \
-	(flags) = \
-	    (((flags) & (~((mask) << (bitpos)))) | (((val) & (mask)) << (bitpos)))
-
-#define FIELD_GET(flags, bitpos, mask) (((flags) >> (bitpos)) & (mask))
-
-#define FLAG_SET(flags, bitpos) (flags) = ((flags) | (1 << (bitpos)))
-
-#define FLAG_CLEAR(flags, bitpos) (flags) = ((flags) & (~(1 << (bitpos))))
-
-#define FLAG_GET(flags, bitpos) (((flags) >> (bitpos)) & 1)
-
-/* Default request and response ring size in bytes */
-#define FW_REQ_DEFAULT_SZ 128
-#define FW_RESP_DEFAULT_SZ 32
-
-#define FW_COMN_ONE_BYTE_SHIFT 8
-#define FW_COMN_SINGLE_BYTE_MASK 0xFF
-
-/* Common Request - Block sizes definitions in multiples of individual long
- * words */
-#define FW_NUM_LONGWORDS_1 1
-#define FW_NUM_LONGWORDS_2 2
-#define FW_NUM_LONGWORDS_3 3
-#define FW_NUM_LONGWORDS_4 4
-#define FW_NUM_LONGWORDS_5 5
-#define FW_NUM_LONGWORDS_6 6
-#define FW_NUM_LONGWORDS_7 7
-#define FW_NUM_LONGWORDS_10 10
-#define FW_NUM_LONGWORDS_13 13
-
-/* Definition of the associated service Id for NULL service type.
-   Note: the response is expected to use FW_COMN_RESP_SERV_CPM_FW */
-#define FW_NULL_REQ_SERV_ID 1
-
-/*
- * Definition of the firmware interface service users, for
- * responses.
- * Enumeration which is used to indicate the ids of the services
- * for responses using the external firmware interfaces.
- */
-
-enum fw_comn_resp_serv_id {
-	FW_COMN_RESP_SERV_NULL,      /* NULL service id type */
-	FW_COMN_RESP_SERV_CPM_FW,    /* CPM FW Service ID */
-	FW_COMN_RESP_SERV_DELIMITER  /* Delimiter service id type */
-};
-
-/*
- * Definition of the request types
- * Enumeration which is used to indicate the ids of the request
- * types used in each of the external firmware interfaces
- */
-
-enum fw_comn_request_id {
-	FW_COMN_REQ_NULL = 0,        /* NULL request type */
-	FW_COMN_REQ_CPM_FW_PKE = 3,  /* CPM FW PKE Request */
-	FW_COMN_REQ_CPM_FW_LA = 4,   /* CPM FW Lookaside Request */
-	FW_COMN_REQ_CPM_FW_DMA = 7,  /* CPM FW DMA Request */
-	FW_COMN_REQ_CPM_FW_COMP = 9, /* CPM FW Compression Request */
-	FW_COMN_REQ_DELIMITER        /* End delimiter */
-
-};
-
-/*
- * Definition of the common QAT FW request content descriptor field -
- * points to the content descriptor parameters or itself contains service-
- * specific data. Also specifies content descriptor parameter size.
- * Contains reserved fields.
- * Common section of the request used across all of the services exposed
- * by the QAT FW. Each of the services inherit these common fields
- */
-union fw_comn_req_hdr_cd_pars {
-	/* LWs 2-5 */
-	struct
-	{
-		uint64_t content_desc_addr;
-		/* Address of the content descriptor */
-
-		uint16_t content_desc_resrvd1;
-		/* Content descriptor reserved field */
-
-		uint8_t content_desc_params_sz;
-		/* Size of the content descriptor parameters in quad words. These
-		 * parameters describe the session setup configuration info for the
-		 * slices that this request relies upon i.e. the configuration word and
-		 * cipher key needed by the cipher slice if there is a request for
-		 * cipher processing. */
-
-		uint8_t content_desc_hdr_resrvd2;
-		/* Content descriptor reserved field */
-
-		uint32_t content_desc_resrvd3;
-		/* Content descriptor reserved field */
-	} s;
-
-	struct
-	{
-		uint32_t serv_specif_fields[FW_NUM_LONGWORDS_4];
-
-	} s1;
-
-};
-
-/*
- * Definition of the common QAT FW request middle block.
- * Common section of the request used across all of the services exposed
- * by the QAT FW. Each of the services inherits these common fields
- */
-struct fw_comn_req_mid
-{
-	/* LWs 6-13 */
-	uint64_t opaque_data;
-	/* Opaque data passed unmodified from the request to response messages
-	 * by firmware (fw) */
-
-	uint64_t src_data_addr;
-	/* Generic definition of the source data supplied to the QAT AE. The
-	 * common flags are used to further describe the attributes of this
-	 * field */
-
-	uint64_t dest_data_addr;
-	/* Generic definition of the destination data supplied to the QAT AE.
-	 * The common flags are used to further describe the attributes of this
-	 * field */
-
-	uint32_t src_length;
-	/* Length of source flat buffer in case the src buffer
-	 * type is flat */
-
-	uint32_t dst_length;
-	/* Length of destination flat buffer in case the dst buffer
-	 * type is flat */
-
-};
-
-/*
- * Definition of the common QAT FW request content descriptor control
- * block.
- *
- * Service specific section of the request used across all of the services
- * exposed by the QAT FW. Each of the services populates this block
- * uniquely. Refer to the service-specific header structures e.g.
- * 'fw_cipher_hdr_s' (for Cipher) etc.
- */
-struct fw_comn_req_cd_ctrl
-{
-	/* LWs 27-31 */
-	uint32_t content_desc_ctrl_lw[FW_NUM_LONGWORDS_5];
-
-};
-
-/*
- * Definition of the common QAT FW request header.
- * Common section of the request used across all of the services exposed
- * by the QAT FW. Each of the services inherits these common fields. The
- * reserved field of 7 bits and the service command Id field are all
- * service-specific fields, along with the service specific flags.
- */
-struct fw_comn_req_hdr
-{
-	/* LW0 */
-	uint8_t resrvd1;
-	/* reserved field */
-
-	uint8_t service_cmd_id;
-	/* Service Command Id - this field is service-specific
-	 * Please use a service-specific command Id here e.g. Crypto Command Id
-	 * or Compression Command Id etc.
*/ - - uint8_t service_type; - /* Service type */ - - uint8_t hdr_flags; - /* This represents a flags field for the Service Request. - * The most significant bit is the 'valid' flag and the only - * one used. All remaining bit positions are unused and - * are therefore reserved and need to be set to 0. */ - - /* LW1 */ - uint16_t serv_specif_flags; - /* Common Request service-specific flags - * e.g. Symmetric Crypto Command Flags */ - - uint16_t comn_req_flags; - /* Common Request Flags consisting of - * - 14 reserved bits, - * - 1 Content Descriptor field type bit and - * - 1 Source/destination pointer type bit */ - -}; - -/* - * Definition of the common QAT FW request parameter field. - * - * Service specific section of the request used across all of the services - * exposed by the QAT FW. Each of the services populates this block - * uniquely. Refer to service-specific header structures e.g. - * 'fw_comn_req_cipher_rqpars_s' (for Cipher) etc. - * - */ -struct fw_comn_req_rqpars -{ - /* LWs 14-26 */ - uint32_t serv_specif_rqpars_lw[FW_NUM_LONGWORDS_13]; - -}; - -/* - * Definition of the common request structure with service specific - * fields - * This is a definition of the full qat request structure used by all - * services. Each service is free to use the service fields in its own - * way. This struct is useful as a message passing argument before the - * service contained within the request is determined. - */ -struct fw_comn_req -{ - /* LWs 0-1 */ - struct fw_comn_req_hdr comn_hdr; - /* Common request header */ - - /* LWs 2-5 */ - union fw_comn_req_hdr_cd_pars cd_pars; - /* Common Request content descriptor field which points either to a - * content descriptor - * parameter block or contains the service-specific data itself. 
*/ - - /* LWs 6-13 */ - struct fw_comn_req_mid comn_mid; - /* Common request middle section */ - - /* LWs 14-26 */ - struct fw_comn_req_rqpars serv_specif_rqpars; - /* Common request service-specific parameter field */ - - /* LWs 27-31 */ - struct fw_comn_req_cd_ctrl cd_ctrl; - /* Common request content descriptor control block - - * this field is service-specific */ - -}; - -/* - * Error code field - * - * Overloaded field with 8 bit common error field or two - * 8 bit compression error fields for compression and translator slices - */ -union fw_comn_error { - struct - { - uint8_t resrvd; - /* 8 bit reserved field */ - - uint8_t comn_err_code; - /* 8 bit common error code */ - - } s; - /* Structure which is used for non-compression responses */ - - struct - { - uint8_t xlat_err_code; - /* 8 bit translator error field */ - - uint8_t cmp_err_code; - /* 8 bit compression error field */ - - } s1; - /* Structure which is used for compression responses */ - -}; - -/* - * Definition of the common QAT FW response header. - * This section of the response is common across all of the services - * that generate a firmware interface response - */ -struct fw_comn_resp_hdr -{ - /* LW0 */ - uint8_t resrvd1; - /* Reserved field - this field is service-specific - - * Note: The Response Destination Id has been removed - * from first QWord */ - - uint8_t service_id; - /* Service Id returned by service block */ - - uint8_t response_type; - /* Response type - copied from the request to - * the response message */ - - uint8_t hdr_flags; - /* This represents a flags field for the Response. - * Bit<7> = 'valid' flag - * Bit<6> = 'CNV' flag indicating that CNV was executed - * on the current request - * Bit<5> = 'CNVNR' flag indicating that a recovery happened - * on the current request following a CNV error - * All remaining bits are unused and are therefore reserved. - * They must to be set to 0. 
- */ - - /* LW 1 */ - union fw_comn_error comn_error; - /* This field is overloaded to allow for one 8 bit common error field - * or two 8 bit error fields from compression and translator */ - - uint8_t comn_status; - /* Status field which specifies which slice(s) report an error */ - - uint8_t cmd_id; - /* Command Id - passed from the request to the response message */ - -}; - -/* - * Definition of the common response structure with service specific - * fields - * This is a definition of the full qat response structure used by all - * services. - */ -struct fw_comn_resp -{ - /* LWs 0-1 */ - struct fw_comn_resp_hdr comn_hdr; - /* Common header fields */ - - /* LWs 2-3 */ - uint64_t opaque_data; - /* Opaque data passed from the request to the response message */ - - /* LWs 4-7 */ - uint32_t resrvd[FW_NUM_LONGWORDS_4]; - /* Reserved */ - -}; - -/* Common QAT FW request header - structure of LW0 - * + ===== + ---- + ----------- + ----------- + ----------- + ----------- + - * | Bit | 31 | 30 - 24 | 21 - 16 | 15 - 8 | 7 - 0 | - * + ===== + ---- + ----------- + ----------- + ----------- + ----------- + - * | Flags | V | Reserved | Serv Type | Serv Cmd Id | Reserved | - * + ===== + ---- + ----------- + ----------- + ----------- + ----------- + - */ - -#define FW_COMN_VALID __BIT(7) - -/* Common QAT FW response header - structure of LW0 - * + ===== + --- + --- + ----- + ----- + --------- + ----------- + ----- + - * | Bit | 31 | 30 | 29 | 28-24 | 21 - 16 | 15 - 8 | 7-0 | - * + ===== + --- + ----+ ----- + ----- + --------- + ----------- + ----- + - * | Flags | V | CNV | CNVNR | Rsvd | Serv Type | Serv Cmd Id | Rsvd | - * + ===== + --- + --- + ----- + ----- + --------- + ----------- + ----- + */ -/* Macros defining the bit position and mask of 'CNV' flag - * within the hdr_flags field of LW0 (service response only) */ -#define FW_COMN_CNV_FLAG_BITPOS 6 -#define FW_COMN_CNV_FLAG_MASK 0x1 - -/* Macros defining the bit position and mask of CNVNR flag - * within the hdr_flags 
field of LW0 (service response only) */ -#define FW_COMN_CNVNR_FLAG_BITPOS 5 -#define FW_COMN_CNVNR_FLAG_MASK 0x1 - -/* - * Macro for extraction of Service Type Field - * - * struct fw_comn_req_hdr Structure 'fw_comn_req_hdr_t' - * to extract the Service Type Field - */ -#define FW_COMN_OV_SRV_TYPE_GET(fw_comn_req_hdr_t) \ - fw_comn_req_hdr_t.service_type - -/* - * Macro for setting of Service Type Field - * - * 'fw_comn_req_hdr_t' structure to set the Service - * Type Field - * val Value of the Service Type Field - */ -#define FW_COMN_OV_SRV_TYPE_SET(fw_comn_req_hdr_t, val) \ - fw_comn_req_hdr_t.service_type = val - -/* - * Macro for extraction of Service Command Id Field - * - * struct fw_comn_req_hdr Structure 'fw_comn_req_hdr_t' - * to extract the Service Command Id Field - */ -#define FW_COMN_OV_SRV_CMD_ID_GET(fw_comn_req_hdr_t) \ - fw_comn_req_hdr_t.service_cmd_id - -/* - * Macro for setting of Service Command Id Field - * - * 'fw_comn_req_hdr_t' structure to set the - * Service Command Id Field - * val Value of the Service Command Id Field - */ -#define FW_COMN_OV_SRV_CMD_ID_SET(fw_comn_req_hdr_t, val) \ - fw_comn_req_hdr_t.service_cmd_id = val - -/* - * Extract the valid flag from the request or response's header flags. - * - * hdr_t Request or Response 'hdr_t' structure to extract the valid bit - * from the 'hdr_flags' field. - */ -#define FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \ - FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags) - -/* - * Extract the CNVNR flag from the header flags in the response only. - * - * hdr_t Response 'hdr_t' structure to extract the CNVNR bit - * from the 'hdr_flags' field. - */ -#define FW_COMN_HDR_CNVNR_FLAG_GET(hdr_flags) \ - FIELD_GET(hdr_flags, \ - FW_COMN_CNVNR_FLAG_BITPOS, \ - FW_COMN_CNVNR_FLAG_MASK) - -/* - * Extract the CNV flag from the header flags in the response only. - * - * hdr_t Response 'hdr_t' structure to extract the CNV bit - * from the 'hdr_flags' field. 
- */ -#define FW_COMN_HDR_CNV_FLAG_GET(hdr_flags) \ - FIELD_GET(hdr_flags, \ - FW_COMN_CNV_FLAG_BITPOS, \ - FW_COMN_CNV_FLAG_MASK) - -/* - * Set the valid bit in the request's header flags. - * - * hdr_t Request or Response 'hdr_t' structure to set the valid bit - * val Value of the valid bit flag. - */ -#define FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \ - FW_COMN_VALID_FLAG_SET(hdr_t, val) - -/* - * Common macro to extract the valid flag from the header flags field - * within the header structure (request or response). - * - * hdr_t Structure (request or response) to extract the - * valid bit from the 'hdr_flags' field. - */ -#define FW_COMN_VALID_FLAG_GET(hdr_flags) \ - FIELD_GET(hdr_flags, \ - FW_COMN_VALID_FLAG_BITPOS, \ - FW_COMN_VALID_FLAG_MASK) - -/* - * Common macro to extract the remaining reserved flags from the header - * flags field within the header structure (request or response). - * - * hdr_t Structure (request or response) to extract the - * remaining bits from the 'hdr_flags' field (excluding the - * valid flag). - */ -#define FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \ - (hdr_flags & FW_COMN_HDR_RESRVD_FLD_MASK) - -/* - * Common macro to set the valid bit in the header flags field within - * the header structure (request or response). - * - * hdr_t Structure (request or response) containing the header - * flags field, to allow the valid bit to be set. - * val Value of the valid bit flag. - */ -#define FW_COMN_VALID_FLAG_SET(hdr_t, val) \ - FIELD_SET((hdr_t.hdr_flags), \ - (val), \ - FW_COMN_VALID_FLAG_BITPOS, \ - FW_COMN_VALID_FLAG_MASK) - -/* - * Macro that must be used when building the common header flags. - * Note that all bits reserved field bits 0-6 (LW0) need to be forced to 0. - * - * ptr Value of the valid flag - */ - -#define FW_COMN_HDR_FLAGS_BUILD(valid) \ - (((valid)&FW_COMN_VALID_FLAG_MASK) \ - << FW_COMN_VALID_FLAG_BITPOS) - -/* - * Common Request Flags Definition - * The bit offsets below are within the flags field. 
These are NOT relative to - * the memory word. Unused fields e.g. reserved bits, must be zeroed. - * - * + ===== + ------ + --- + --- + --- + --- + --- + --- + --- + --- + - * | Bits [15:8] | 15 | 14 | 13 | 12 | 11 | 10 | 9 | 8 | - * + ===== + ------ + --- + --- + --- + --- + --- + --- + --- + --- + - * | Flags[15:8] | Rsv | Rsv | Rsv | Rsv | Rsv | Rsv | Rsv | Rsv | - * + ===== + ------ + --- + --- + --- + --- + --- + --- + --- + --- + - * | Bits [7:0] | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | - * + ===== + ------ + --- + --- + --- + --- + --- + --- + --- + --- + - * | Flags [7:0] | Rsv | Rsv | Rsv | Rsv | Rsv | BnP | Cdt | Ptr | - * + ===== + ------ + --- + --- + --- + --- + --- + --- + --- + --- + - */ - -#define COMN_PTR_TYPE_BITPOS 0 -/* Common Request Flags - Starting bit position indicating - * Src&Dst Buffer Pointer type */ - -#define COMN_PTR_TYPE_MASK 0x1 -/* Common Request Flags - One bit mask used to determine - * Src&Dst Buffer Pointer type */ - -#define COMN_CD_FLD_TYPE_BITPOS 1 -/* Common Request Flags - Starting bit position indicating - * CD Field type */ - -#define COMN_CD_FLD_TYPE_MASK 0x1 -/* Common Request Flags - One bit mask used to determine - * CD Field type */ - -#define COMN_BNP_ENABLED_BITPOS 2 -/* Common Request Flags - Starting bit position indicating - * the source buffer contains batch of requests. if this - * bit is set, source buffer is type of Batch And Pack OpData List - * and the Ptr Type Bit only applies to Destination buffer. */ - -#define COMN_BNP_ENABLED_MASK 0x1 -/* Batch And Pack Enabled Flag Mask - One bit mask used to determine - * the source buffer is in Batch and Pack OpData Link List Mode. 
*/ - -/* ========================================================================= */ -/* Pointer Type Flag definitions */ -/* ========================================================================= */ -#define COMN_PTR_TYPE_FLAT 0x0 -/* Constant value indicating Src&Dst Buffer Pointer type is flat - * If Batch and Pack mode is enabled, only applies to Destination buffer. */ - -#define COMN_PTR_TYPE_SGL 0x1 -/* Constant value indicating Src&Dst Buffer Pointer type is SGL type - * If Batch and Pack mode is enabled, only applies to Destination buffer. */ - -#define COMN_PTR_TYPE_BATCH 0x2 -/* Constant value indicating Src is a batch request - * and Dst Buffer Pointer type is SGL type */ - -/* ========================================================================= */ -/* CD Field Flag definitions */ -/* ========================================================================= */ -#define COMN_CD_FLD_TYPE_64BIT_ADR 0x0 -/* Constant value indicating CD Field contains 64-bit address */ - -#define COMN_CD_FLD_TYPE_16BYTE_DATA 0x1 -/* Constant value indicating CD Field contains 16 bytes of setup data */ - -/* ========================================================================= */ -/* Batch And Pack Enable/Disable Definitions */ -/* ========================================================================= */ -#define COMN_BNP_ENABLED 0x1 -/* Constant value indicating Source buffer will point to Batch And Pack OpData - * List */ - -#define COMN_BNP_DISABLED 0x0 -/* Constant value indicating Source buffer will point to Batch And Pack OpData - * List */ - -/* - * Macro that must be used when building the common request flags (for all - * requests but comp BnP). - * Note that all bits reserved field bits 2-15 (LW1) need to be forced to 0. 
- * - * ptr Value of the pointer type flag - * cdt Value of the cd field type flag -*/ -#define FW_COMN_FLAGS_BUILD(cdt, ptr) \ - ((((cdt)&COMN_CD_FLD_TYPE_MASK) << COMN_CD_FLD_TYPE_BITPOS) | \ - (((ptr)&COMN_PTR_TYPE_MASK) << COMN_PTR_TYPE_BITPOS)) - -/* - * Macro that must be used when building the common request flags for comp - * BnP service. - * Note that all bits reserved field bits 3-15 (LW1) need to be forced to 0. - * - * ptr Value of the pointer type flag - * cdt Value of the cd field type flag - * bnp Value of the bnp enabled flag - */ -#define FW_COMN_FLAGS_BUILD_BNP(cdt, ptr, bnp) \ - ((((cdt)&COMN_CD_FLD_TYPE_MASK) << COMN_CD_FLD_TYPE_BITPOS) | \ - (((ptr)&COMN_PTR_TYPE_MASK) << COMN_PTR_TYPE_BITPOS) | \ - (((bnp)&COMN_BNP_ENABLED_MASK) << COMN_BNP_ENABLED_BITPOS)) - -/* - * Macro for extraction of the pointer type bit from the common flags - * - * flags Flags to extract the pointer type bit from - */ -#define FW_COMN_PTR_TYPE_GET(flags) \ - FIELD_GET(flags, COMN_PTR_TYPE_BITPOS, COMN_PTR_TYPE_MASK) - -/* - * Macro for extraction of the cd field type bit from the common flags - * - * flags Flags to extract the cd field type type bit from - */ -#define FW_COMN_CD_FLD_TYPE_GET(flags) \ - FIELD_GET(flags, COMN_CD_FLD_TYPE_BITPOS, COMN_CD_FLD_TYPE_MASK) - -/* - * Macro for extraction of the bnp field type bit from the common flags - * - * flags Flags to extract the bnp field type type bit from - * - */ -#define FW_COMN_BNP_ENABLED_GET(flags) \ - FIELD_GET(flags, COMN_BNP_ENABLED_BITPOS, COMN_BNP_ENABLED_MASK) - -/* - * Macro for setting the pointer type bit in the common flags - * - * flags Flags in which Pointer Type bit will be set - * val Value of the bit to be set in flags - * - */ -#define FW_COMN_PTR_TYPE_SET(flags, val) \ - FIELD_SET(flags, val, COMN_PTR_TYPE_BITPOS, COMN_PTR_TYPE_MASK) - -/* - * Macro for setting the cd field type bit in the common flags - * - * flags Flags in which Cd Field Type bit will be set - * val Value of the bit to be set 
in flags - * - */ -#define FW_COMN_CD_FLD_TYPE_SET(flags, val) \ - FIELD_SET( \ - flags, val, COMN_CD_FLD_TYPE_BITPOS, COMN_CD_FLD_TYPE_MASK) - -/* - * Macro for setting the bnp field type bit in the common flags - * - * flags Flags in which Bnp Field Type bit will be set - * val Value of the bit to be set in flags - * - */ -#define FW_COMN_BNP_ENABLE_SET(flags, val) \ - FIELD_SET( \ - flags, val, COMN_BNP_ENABLED_BITPOS, COMN_BNP_ENABLED_MASK) - -/* - * Macros using the bit position and mask to set/extract the next - * and current id nibbles within the next_curr_id field of the - * content descriptor header block. Note that these are defined - * in the common header file, as they are used by compression, cipher - * and authentication. - * - * cd_ctrl_hdr_t Content descriptor control block header pointer. - * val Value of the field being set. - */ -#define FW_COMN_NEXT_ID_BITPOS 4 -#define FW_COMN_NEXT_ID_MASK 0xF0 -#define FW_COMN_CURR_ID_BITPOS 0 -#define FW_COMN_CURR_ID_MASK 0x0F - -#define FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \ - ((((cd_ctrl_hdr_t)->next_curr_id) & FW_COMN_NEXT_ID_MASK) >> \ - (FW_COMN_NEXT_ID_BITPOS)) - -#define FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \ - ((cd_ctrl_hdr_t)->next_curr_id) = \ - ((((cd_ctrl_hdr_t)->next_curr_id) & FW_COMN_CURR_ID_MASK) | \ - ((val << FW_COMN_NEXT_ID_BITPOS) & \ - FW_COMN_NEXT_ID_MASK)) - -#define FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \ - (((cd_ctrl_hdr_t)->next_curr_id) & FW_COMN_CURR_ID_MASK) - -#define FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \ - ((cd_ctrl_hdr_t)->next_curr_id) = \ - ((((cd_ctrl_hdr_t)->next_curr_id) & FW_COMN_NEXT_ID_MASK) | \ - ((val)&FW_COMN_CURR_ID_MASK)) - -/* - * Common Status Field Definition The bit offsets below are within the COMMON - * RESPONSE status field, assumed to be 8 bits wide. In the case of the PKE - * response (which follows the CPM 1.5 message format), the status field is 16 - * bits wide. 
- * The status flags are contained within the most significant byte and align - * with the diagram below. Please therefore refer to the service-specific PKE - * header file for the appropriate macro definition to extract the PKE status - * flag from the PKE response, which assumes that a word is passed to the - * macro. - * + ===== + ------ + --- + --- + ---- + ---- + -------- + ---- + ---------- + - * | Bit | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | - * + ===== + ------ + --- + --- + ---- + ---- + -------- + ---- + ---------- + - * | Flags | Crypto | Pke | Cmp | Xlat | EOLB | UnSupReq | Rsvd | XltWaApply | - * + ===== + ------ + --- + --- + ---- + ---- + -------- + ---- + ---------- + - * Note: - * For the service specific status bit definitions refer to service header files - * Eg. Crypto Status bit refers to Symmetric Crypto, Key Generation, and NRBG - * Requests' Status. Unused bits e.g. reserved bits need to have been forced to - * 0. - */ - -#define COMN_RESP_CRYPTO_STATUS_BITPOS 7 -/* Starting bit position indicating Response for Crypto service Flag */ - -#define COMN_RESP_CRYPTO_STATUS_MASK 0x1 -/* One bit mask used to determine Crypto status mask */ - -#define COMN_RESP_PKE_STATUS_BITPOS 6 -/* Starting bit position indicating Response for PKE service Flag */ - -#define COMN_RESP_PKE_STATUS_MASK 0x1 -/* One bit mask used to determine PKE status mask */ - -#define COMN_RESP_CMP_STATUS_BITPOS 5 -/* Starting bit position indicating Response for Compression service Flag */ - -#define COMN_RESP_CMP_STATUS_MASK 0x1 -/* One bit mask used to determine Compression status mask */ - -#define COMN_RESP_XLAT_STATUS_BITPOS 4 -/* Starting bit position indicating Response for Xlat service Flag */ - -#define COMN_RESP_XLAT_STATUS_MASK 0x1 -/* One bit mask used to determine Translator status mask */ - -#define COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3 -/* Starting bit position indicating the last block in a deflate stream for - the compression service Flag */ - -#define 
COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
-/* One bit mask used to determine the last block in a deflate stream
-   status mask */
-
-#define COMN_RESP_UNSUPPORTED_REQUEST_BITPOS 2
-/* Starting bit position indicating an unsupported service request Flag */
-
-#define COMN_RESP_UNSUPPORTED_REQUEST_MASK 0x1
-/* One bit mask used to determine the unsupported service request status mask */
-
-#define COMN_RESP_XLT_WA_APPLIED_BITPOS 0
-/* Bit position indicating a firmware workaround was applied to translation */
-
-#define COMN_RESP_XLT_WA_APPLIED_MASK 0x1
-/* One bit mask */
-
-/*
- * Macro that must be used when building the status
- * for the common response
- *
- * crypto Value of the Crypto Service status flag
- * pke Value of the PKE Service status flag
- * comp Value of the Compression Service Status flag
- * xlat Value of the Xlator Status flag
- * eolb Value of the Compression End of Last Block Status flag
- * unsupp Value of the Unsupported Request flag
- * xlt_wa Value of the Translation WA marker
- */
-#define FW_COMN_RESP_STATUS_BUILD( \
-    crypto, pke, comp, xlat, eolb, unsupp, xlt_wa) \
-	((((crypto)&COMN_RESP_CRYPTO_STATUS_MASK) \
-	    << COMN_RESP_CRYPTO_STATUS_BITPOS) | \
-	    (((pke)&COMN_RESP_PKE_STATUS_MASK) \
-		<< COMN_RESP_PKE_STATUS_BITPOS) | \
-	    (((xlt_wa)&COMN_RESP_XLT_WA_APPLIED_MASK) \
-		<< COMN_RESP_XLT_WA_APPLIED_BITPOS) | \
-	    (((comp)&COMN_RESP_CMP_STATUS_MASK) \
-		<< COMN_RESP_CMP_STATUS_BITPOS) | \
-	    (((xlat)&COMN_RESP_XLAT_STATUS_MASK) \
-		<< COMN_RESP_XLAT_STATUS_BITPOS) | \
-	    (((eolb)&COMN_RESP_CMP_END_OF_LAST_BLK_MASK) \
-		<< COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS) | \
-	    (((unsupp)&COMN_RESP_UNSUPPORTED_REQUEST_MASK) \
-		<< COMN_RESP_UNSUPPORTED_REQUEST_BITPOS))
-
-/*
- * Macro for extraction of the Crypto bit from the status
- *
- * status Status to extract the status bit from
- */
-#define FW_COMN_RESP_CRYPTO_STAT_GET(status) \
-	FIELD_GET(status, \
-	    COMN_RESP_CRYPTO_STATUS_BITPOS, \
-	    COMN_RESP_CRYPTO_STATUS_MASK)
-
-/*
- * Macro for extraction of the PKE bit from the status
- *
-
* status Status to extract the status bit from - */ -#define FW_COMN_RESP_PKE_STAT_GET(status) \ - FIELD_GET(status, \ - COMN_RESP_PKE_STATUS_BITPOS, \ - COMN_RESP_PKE_STATUS_MASK) - -/* - * Macro for extraction of the Compression bit from the status - * - * status Status to extract the status bit from - */ -#define FW_COMN_RESP_CMP_STAT_GET(status) \ - FIELD_GET(status, \ - COMN_RESP_CMP_STATUS_BITPOS, \ - COMN_RESP_CMP_STATUS_MASK) - -/* - * Macro for extraction of the Translator bit from the status - * - * status Status to extract the status bit from - */ -#define FW_COMN_RESP_XLAT_STAT_GET(status) \ - FIELD_GET(status, \ - COMN_RESP_XLAT_STATUS_BITPOS, \ - COMN_RESP_XLAT_STATUS_MASK) - -/* - * Macro for extraction of the Translation Workaround Applied bit from the - * status - * - * status Status to extract the status bit from - */ -#define FW_COMN_RESP_XLT_WA_APPLIED_GET(status) \ - FIELD_GET(status, \ - COMN_RESP_XLT_WA_APPLIED_BITPOS, \ - COMN_RESP_XLT_WA_APPLIED_MASK) - -/* - * Macro for extraction of the end of compression block bit from the - * status - * - * status - * Status to extract the status bit from - */ -#define FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \ - FIELD_GET(status, \ - COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \ - COMN_RESP_CMP_END_OF_LAST_BLK_MASK) - -/* - * Macro for extraction of the Unsupported request from the status - * - * status - * Status to extract the status bit from - */ -#define FW_COMN_RESP_UNSUPPORTED_REQUEST_STAT_GET(status) \ - FIELD_GET(status, \ - COMN_RESP_UNSUPPORTED_REQUEST_BITPOS, \ - COMN_RESP_UNSUPPORTED_REQUEST_MASK) - -#define FW_COMN_STATUS_FLAG_OK 0 -/* Definition of successful processing of a request */ - -#define FW_COMN_STATUS_FLAG_ERROR 1 -/* Definition of erroneous processing of a request */ - -#define FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0 -/* Final Deflate block of a compression request not completed */ - -#define FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1 -/* Final Deflate block of a 
compression request completed */ - -#define ERR_CODE_NO_ERROR 0 -/* Error Code constant value for no error */ - -#define ERR_CODE_INVALID_BLOCK_TYPE -1 -/* Invalid block type (type == 3)*/ - -#define ERR_CODE_NO_MATCH_ONES_COMP -2 -/* Stored block length does not match one's complement */ - -#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3 -/* Too many length or distance codes */ - -#define ERR_CODE_INCOMPLETE_LEN -4 -/* Code lengths codes incomplete */ - -#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5 -/* Repeat lengths with no first length */ - -#define ERR_CODE_RPT_GT_SPEC_LEN -6 -/* Repeat more than specified lengths */ - -#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7 -/* Invalid lit/len code lengths */ - -#define ERR_CODE_INV_DIS_CODE_LEN -8 -/* Invalid distance code lengths */ - -#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9 -/* Invalid lit/len or distance code in fixed/dynamic block */ - -#define ERR_CODE_DIS_TOO_FAR_BACK -10 -/* Distance too far back in fixed or dynamic block */ - -/* Common Error code definitions */ -#define ERR_CODE_OVERFLOW_ERROR -11 -/* Error Code constant value for overflow error */ - -#define ERR_CODE_SOFT_ERROR -12 -/* Error Code constant value for soft error */ - -#define ERR_CODE_FATAL_ERROR -13 -/* Error Code constant value for hard/fatal error */ - -#define ERR_CODE_COMP_OUTPUT_CORRUPTION -14 -/* Error Code constant for compression output corruption */ - -#define ERR_CODE_HW_INCOMPLETE_FILE -15 -/* Error Code constant value for incomplete file hardware error */ - -#define ERR_CODE_SSM_ERROR -16 -/* Error Code constant value for error detected by SSM e.g. slice hang */ - -#define ERR_CODE_ENDPOINT_ERROR -17 -/* Error Code constant value for error detected by PCIe Endpoint, e.g. 
push
- * data error */
-
-#define ERR_CODE_CNV_ERROR -18
-/* Error Code constant value for cnv failure */
-
-#define ERR_CODE_EMPTY_DYM_BLOCK -19
-/* Error Code constant value for submission of empty dynamic stored block to
- * slice */
-
-#define ERR_CODE_KPT_CRYPTO_SERVICE_FAIL_INVALID_HANDLE -20
-/* Error Code constant for invalid handle in kpt crypto service */
-
-#define ERR_CODE_KPT_CRYPTO_SERVICE_FAIL_HMAC_FAILED -21
-/* Error Code constant for failed hmac in kpt crypto service */
-
-#define ERR_CODE_KPT_CRYPTO_SERVICE_FAIL_INVALID_WRAPPING_ALGO -22
-/* Error Code constant for invalid wrapping algo in kpt crypto service */
-
-#define ERR_CODE_KPT_DRNG_SEED_NOT_LOAD -23
-/* Error Code constant for drng seed not loaded in the kpt ecdsa signrs
- * service */
-
-#define FW_LA_ICV_VER_STATUS_PASS FW_COMN_STATUS_FLAG_OK
-/* Status flag indicating that the ICV verification passed */
-
-#define FW_LA_ICV_VER_STATUS_FAIL FW_COMN_STATUS_FLAG_ERROR
-/* Status flag indicating that the ICV verification failed */
-
-#define FW_LA_TRNG_STATUS_PASS FW_COMN_STATUS_FLAG_OK
-/* Status flag indicating that the TRNG returned valid entropy data */
-
-#define FW_LA_TRNG_STATUS_FAIL FW_COMN_STATUS_FLAG_ERROR
-/* Status flag indicating that the TRNG Command Failed. */
-
-/* -------------------------------------------------------------------------- */
-
-/*
- * Definition of the full bulk processing request structure.
- * Used for hash, cipher, hash-cipher and authentication-encryption
- * requests etc.
- */
-struct fw_la_bulk_req
-{
-	/* LWs 0-1 */
-	struct fw_comn_req_hdr comn_hdr;
-	/* Common request header - for Service Command Id,
-	 * use service-specific Crypto Command Id.
- * Service Specific Flags - use Symmetric Crypto Command Flags - * (all of cipher, auth, SSL3, TLS and MGF, - * excluding TRNG - field unused) */ - - /* LWs 2-5 */ - union fw_comn_req_hdr_cd_pars cd_pars; - /* Common Request content descriptor field which points either to a - * content descriptor - * parameter block or contains the service-specific data itself. */ - - /* LWs 6-13 */ - struct fw_comn_req_mid comn_mid; - /* Common request middle section */ - - /* LWs 14-26 */ - struct fw_comn_req_rqpars serv_specif_rqpars; - /* Common request service-specific parameter field */ - - /* LWs 27-31 */ - struct fw_comn_req_cd_ctrl cd_ctrl; - /* Common request content descriptor control block - - * this field is service-specific */ - -}; - -/* clang-format off */ - -/* - * LA BULK (SYMMETRIC CRYPTO) COMMAND FLAGS - * - * + ===== + ---------- + ----- + ----- + ----- + ----- + ----- + ----- + ----- + ----- + ----- + ----- + - * | Bit | [15:13] | 12 | 11 | 10 | 7-9 | 6 | 5 | 4 | 3 | 2 | 1-0 | - * + ===== + ---------- + ----- + ----- + ----- + ----- + ----- + ----- + ----- + ----- + ------+ ----- + - * | Flags | Resvd Bits | ZUC | GcmIV |Digest | Prot | Cmp | Rtn | Upd | Ciph/ | CiphIV| Part- | - * | | =0 | Prot | Len | In Buf| flgs | Auth | Auth | State | Auth | Field | ial | - * + ===== + ---------- + ----- + ----- + ----- + ----- + ----- + ----- + ----- + ----- + ------+ ----- + - */ - -/* clang-format on */ - -/* Private defines */ - -#define FW_LA_ZUC_3G_PROTO __BIT(12) -/* Indicating ZUC processing for a encrypt command - * Must be set for Cipher-only, Cipher + Auth and Auth-only */ - -#define FW_LA_GCM_IV_LEN_12_OCTETS __BIT(11) -/* Indicates the IV Length for GCM protocol is 96 Bits (12 Octets) - * If set FW does the padding to compute CTR0 */ - -#define FW_LA_DIGEST_IN_BUFFER __BIT(10) -/* Flag representing that authentication digest is stored or is extracted - * from the source buffer. Auth Result Pointer will be ignored in this case. 
*/ - -#define FW_LA_PROTO __BITS(7, 9) -#define FW_LA_PROTO_SNOW_3G __BIT(9) -/* Indicates SNOW_3G processing for a encrypt command */ -#define FW_LA_PROTO_GCM __BIT(8) -/* Indicates GCM processing for a auth_encrypt command */ -#define FW_LA_PROTO_CCM __BIT(7) -/* Indicates CCM processing for a auth_encrypt command */ -#define FW_LA_PROTO_NONE 0 -/* Indicates no specific protocol processing for the command */ - -#define FW_LA_CMP_AUTH_RES __BIT(6) -/* Flag representing the need to compare the auth result data to the expected - * value in DRAM at the auth_address. */ - -#define FW_LA_RET_AUTH_RES __BIT(5) -/* Flag representing the need to return the auth result data to dram after the - * request processing is complete */ - -#define FW_LA_UPDATE_STATE __BIT(4) -/* Flag representing the need to update the state data in dram after the - * request processing is complete */ - -#define FW_CIPH_AUTH_CFG_OFFSET_IN_SHRAM_CP __BIT(3) -/* Flag representing Cipher/Auth Config Offset Type, where the offset - * is contained in SHRAM constants page. When the SHRAM constants page - * is not used for cipher/auth configuration, then the Content Descriptor - * pointer field must be a pointer (as opposed to a 16-byte key), since - * the block pointed to must contain both the slice config and the key */ - -#define FW_CIPH_IV_16BYTE_DATA __BIT(2) -/* Flag representing Cipher IV field contents as 16-byte data array - * Otherwise Cipher IV field contents via 64-bit pointer */ - -#define FW_LA_PARTIAL __BITS(0, 1) -#define FW_LA_PARTIAL_NONE 0 -/* Flag representing no need for partial processing condition i.e. 
- * entire packet processed in the current command */ -#define FW_LA_PARTIAL_START 1 -/* Flag representing the first chunk of the partial packet */ -#define FW_LA_PARTIAL_MID 3 -/* Flag representing a middle chunk of the partial packet */ -#define FW_LA_PARTIAL_END 2 -/* Flag representing the final/end chunk of the partial packet */ - -/* The table below defines the meaning of the prefix_addr & hash_state_sz in - * the case of partial processing. See the HLD for further details - * - * + ====== + ------------------------- + ----------------------- + - * | Parial | Prefix Addr | Hash State Sz | - * | State | | | - * + ====== + ------------------------- + ----------------------- + - * | FULL | Points to the prefix data | Prefix size as below. | - * | | | No update of state | - * + ====== + ------------------------- + ----------------------- + - * | SOP | Points to the prefix | = inner prefix rounded | - * | | data. State is updated | to qwrds + outer prefix | - * | | at prefix_addr - state_sz | rounded to qwrds. The | - * | | - 8 (counter size) | writeback state sz | - * | | | comes from the CD | - * + ====== + ------------------------- + ----------------------- + - * | MOP | Points to the state data | State size rounded to | - * | | Updated state written to | num qwrds + 8 (for the | - * | | same location | counter) + inner prefix | - * | | | rounded to qwrds + | - * | | | outer prefix rounded to | - * | | | qwrds. | - * + ====== + ------------------------- + ----------------------- + - * | EOP | Points to the state data | State size rounded to | - * | | | num qwrds + 8 (for the | - * | | | counter) + inner prefix | - * | | | rounded to qwrds + | - * | | | outer prefix rounded to | - * | | | qwrds. | - * + ====== + ------------------------- + ----------------------- + - * - * Notes: - * - * - If the EOP is set it is assumed that no state update is to be performed. - * However it is the clients responsibility to set the update_state flag - * correctly i.e. 
not set for EOP or Full packet cases. Only set for SOP and - * MOP with no EOP flag - * - The SOP take precedence over the MOP and EOP i.e. in the calculation of - * the address to writeback the state. - * - The prefix address must be on at least the 8 byte boundary - */ - -/* Macros for extracting field bits */ -/* - * Macro for extraction of the Cipher IV field contents (bit 2) - * - * flags Flags to extract the Cipher IV field contents - * - */ -#define FW_LA_CIPH_IV_FLD_FLAG_GET(flags) \ - FIELD_GET(flags, LA_CIPH_IV_FLD_BITPOS, LA_CIPH_IV_FLD_MASK) - -/* - * Macro for extraction of the Cipher/Auth Config - * offset type (bit 3) - * - * flags Flags to extract the Cipher/Auth Config offset type - * - */ -#define FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_GET(flags) \ - FIELD_GET(flags, \ - LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \ - LA_CIPH_AUTH_CFG_OFFSET_MASK) - -/* - * Macro for extraction of the ZUC protocol bit - * information (bit 11) - * - * flags Flags to extract the ZUC protocol bit - */ -#define FW_LA_ZUC_3G_PROTO_FLAG_GET(flags) \ - FIELD_GET(flags, \ - FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \ - FW_LA_ZUC_3G_PROTO_FLAG_MASK) - -/* - * Macro for extraction of the GCM IV Len is 12 Octets / 96 Bits - * information (bit 11) - * - * flags Flags to extract the GCM IV length - */ -#define FW_LA_GCM_IV_LEN_FLAG_GET(flags) \ - FIELD_GET( \ - flags, LA_GCM_IV_LEN_FLAG_BITPOS, LA_GCM_IV_LEN_FLAG_MASK) - -/* - * Macro for extraction of the LA protocol state (bits 9-7) - * - * flags Flags to extract the protocol state - */ -#define FW_LA_PROTO_GET(flags) \ - FIELD_GET(flags, LA_PROTO_BITPOS, LA_PROTO_MASK) - -/* - * Macro for extraction of the "compare auth" state (bit 6) - * - * flags Flags to extract the compare auth result state - * - */ -#define FW_LA_CMP_AUTH_GET(flags) \ - FIELD_GET(flags, LA_CMP_AUTH_RES_BITPOS, LA_CMP_AUTH_RES_MASK) - -/* - * Macro for extraction of the "return auth" state (bit 5) - * - * flags Flags to extract the return auth result state - * - */ -#define 
FW_LA_RET_AUTH_GET(flags) \ - FIELD_GET(flags, LA_RET_AUTH_RES_BITPOS, LA_RET_AUTH_RES_MASK) - -/* - * Macro for extraction of the "digest in buffer" state (bit 10) - * - * flags Flags to extract the digest in buffer state - * - */ -#define FW_LA_DIGEST_IN_BUFFER_GET(flags) \ - FIELD_GET( \ - flags, LA_DIGEST_IN_BUFFER_BITPOS, LA_DIGEST_IN_BUFFER_MASK) - -/* - * Macro for extraction of the update content state value. (bit 4) - * - * flags Flags to extract the update content state bit - */ -#define FW_LA_UPDATE_STATE_GET(flags) \ - FIELD_GET(flags, LA_UPDATE_STATE_BITPOS, LA_UPDATE_STATE_MASK) - -/* - * Macro for extraction of the "partial" packet state (bits 1-0) - * - * flags Flags to extract the partial state - */ -#define FW_LA_PARTIAL_GET(flags) \ - FIELD_GET(flags, LA_PARTIAL_BITPOS, LA_PARTIAL_MASK) - -/* Macros for setting field bits */ -/* - * Macro for setting the Cipher IV field contents - * - * flags Flags to set with the Cipher IV field contents - * val Field contents indicator value - */ -#define FW_LA_CIPH_IV_FLD_FLAG_SET(flags, val) \ - FIELD_SET( \ - flags, val, LA_CIPH_IV_FLD_BITPOS, LA_CIPH_IV_FLD_MASK) - -/* - * Macro for setting the Cipher/Auth Config - * offset type - * - * flags Flags to set the Cipher/Auth Config offset type - * val Offset type value - */ -#define FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_SET(flags, val) \ - FIELD_SET(flags, \ - val, \ - LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \ - LA_CIPH_AUTH_CFG_OFFSET_MASK) - -/* - * Macro for setting the ZUC protocol flag - * - * flags Flags to set the ZUC protocol flag - * val Protocol value - */ -#define FW_LA_ZUC_3G_PROTO_FLAG_SET(flags, val) \ - FIELD_SET(flags, \ - val, \ - FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \ - FW_LA_ZUC_3G_PROTO_FLAG_MASK) - -/* - * Macro for setting the GCM IV length flag state - * - * flags Flags to set the GCM IV length flag state - * val Protocol value - */ -#define FW_LA_GCM_IV_LEN_FLAG_SET(flags, val) \ - FIELD_SET(flags, \ - val, \ - LA_GCM_IV_LEN_FLAG_BITPOS, \ - 
LA_GCM_IV_LEN_FLAG_MASK) - -/* - * Macro for setting the LA protocol flag state - * - * flags Flags to set the protocol state - * val Protocol value - */ -#define FW_LA_PROTO_SET(flags, val) \ - FIELD_SET(flags, val, LA_PROTO_BITPOS, LA_PROTO_MASK) - -/* - * Macro for setting the "compare auth" flag state - * - * flags Flags to set the compare auth result state - * val Compare Auth value - */ -#define FW_LA_CMP_AUTH_SET(flags, val) \ - FIELD_SET( \ - flags, val, LA_CMP_AUTH_RES_BITPOS, LA_CMP_AUTH_RES_MASK) - -/* - * Macro for setting the "return auth" flag state - * - * flags Flags to set the return auth result state - * val Return Auth value - */ -#define FW_LA_RET_AUTH_SET(flags, val) \ - FIELD_SET( \ - flags, val, LA_RET_AUTH_RES_BITPOS, LA_RET_AUTH_RES_MASK) - -/* - * Macro for setting the "digest in buffer" flag state - * - * flags Flags to set the digest in buffer state - * val Digest in buffer value - */ -#define FW_LA_DIGEST_IN_BUFFER_SET(flags, val) \ - FIELD_SET(flags, \ - val, \ - LA_DIGEST_IN_BUFFER_BITPOS, \ - LA_DIGEST_IN_BUFFER_MASK) - -/* - * Macro for setting the "update state" flag value - * - * flags Flags to set the update content state - * val Update Content State flag value - */ -#define FW_LA_UPDATE_STATE_SET(flags, val) \ - FIELD_SET( \ - flags, val, LA_UPDATE_STATE_BITPOS, LA_UPDATE_STATE_MASK) - -/* - * Macro for setting the "partial" packet flag state - * - * flags Flags to set the partial state - * val Partial state value - */ -#define FW_LA_PARTIAL_SET(flags, val) \ - FIELD_SET(flags, val, LA_PARTIAL_BITPOS, LA_PARTIAL_MASK) - -/* - * Definition of the Cipher header Content Descriptor pars block - * Definition of the cipher processing header cd pars block. - * The structure is a service-specific implementation of the common - * 'fw_comn_req_hdr_cd_pars_s' structure. 
- */ -union fw_cipher_req_hdr_cd_pars { - /* LWs 2-5 */ - struct - { - uint64_t content_desc_addr; - /* Address of the content descriptor */ - - uint16_t content_desc_resrvd1; - /* Content descriptor reserved field */ - - uint8_t content_desc_params_sz; - /* Size of the content descriptor parameters in quad words. These - * parameters describe the session setup configuration info for the - * slices that this request relies upon i.e. the configuration word and - * cipher key needed by the cipher slice if there is a request for - * cipher processing. */ - - uint8_t content_desc_hdr_resrvd2; - /* Content descriptor reserved field */ - - uint32_t content_desc_resrvd3; - /* Content descriptor reserved field */ - } s; - - struct - { - uint32_t cipher_key_array[FW_NUM_LONGWORDS_4]; - /* Cipher Key Array */ - - } s1; - -}; - -/* - * Definition of the Authentication header Content Descriptor pars block - * Definition of the authentication processing header cd pars block. - */ -/* Note: Authentication uses the common 'fw_comn_req_hdr_cd_pars_s' - * structure - similarly, it is also used by SSL3, TLS and MGF. Only cipher - * and cipher + authentication require service-specific implementations of - * the structure */ - -/* - * Definition of the Cipher + Auth header Content Descriptor pars block - * Definition of the cipher + auth processing header cd pars block. - * The structure is a service-specific implementation of the common - * 'fw_comn_req_hdr_cd_pars_s' structure. - */ -union fw_cipher_auth_req_hdr_cd_pars { - /* LWs 2-5 */ - struct - { - uint64_t content_desc_addr; - /* Address of the content descriptor */ - - uint16_t content_desc_resrvd1; - /* Content descriptor reserved field */ - - uint8_t content_desc_params_sz; - /* Size of the content descriptor parameters in quad words. These - * parameters describe the session setup configuration info for the - * slices that this request relies upon i.e. 
the configuration word and - * cipher key needed by the cipher slice if there is a request for - * cipher processing. */ - - uint8_t content_desc_hdr_resrvd2; - /* Content descriptor reserved field */ - - uint32_t content_desc_resrvd3; - /* Content descriptor reserved field */ - } s; - - struct - { - uint32_t cipher_key_array[FW_NUM_LONGWORDS_4]; - /* Cipher Key Array */ - - } sl; - -}; - -/* - * Cipher content descriptor control block (header) - * Definition of the service-specific cipher control block header - * structure. This header forms part of the content descriptor - * block incorporating LWs 27-31, as defined by the common base - * parameters structure. - */ -struct fw_cipher_cd_ctrl_hdr -{ - /* LW 27 */ - uint8_t cipher_state_sz; - /* State size in quad words of the cipher algorithm used in this session. - * Set to zero if the algorithm doesnt provide any state */ - - uint8_t cipher_key_sz; - /* Key size in quad words of the cipher algorithm used in this session */ - - uint8_t cipher_cfg_offset; - /* Quad word offset from the content descriptor parameters address i.e. - * (content_address + (cd_hdr_sz << 3)) to the parameters for the cipher - * processing */ - - uint8_t next_curr_id; - /* This field combines the next and current id (each four bits) - - * the next id is the most significant nibble. - * Next Id: Set to the next slice to pass the ciphered data through. - * Set to FW_SLICE_DRAM_WR if the data is not to go through - * any more slices after cipher. - * Current Id: Initialised with the cipher slice type */ - - /* LW 28 */ - uint8_t cipher_padding_sz; - /* State padding size in quad words. Set to 0 if no padding is required. - */ - - uint8_t resrvd1; - uint16_t resrvd2; - /* Reserved bytes to bring the struct to the word boundary, used by - * authentication. MUST be set to 0 */ - - /* LWs 29-31 */ - uint32_t resrvd3[FW_NUM_LONGWORDS_3]; - /* Reserved bytes used by authentication. 
MUST be set to 0 */ - -}; - -/* - * Authentication content descriptor control block (header) - * Definition of the service-specific authentication control block - * header structure. This header forms part of the content descriptor - * block incorporating LWs 27-31, as defined by the common base - * parameters structure, the first portion of which is reserved for - * cipher. - */ -struct fw_auth_cd_ctrl_hdr -{ - /* LW 27 */ - uint32_t resrvd1; - /* Reserved bytes, used by cipher only. MUST be set to 0 */ - - /* LW 28 */ - uint8_t resrvd2; - /* Reserved byte, used by cipher only. MUST be set to 0 */ - - uint8_t hash_flags; - /* General flags defining the processing to perform. 0 is normal - * processing - * and 1 means there is a nested hash processing loop to go through */ - - uint8_t hash_cfg_offset; - /* Quad word offset from the content descriptor parameters address to the - * parameters for the auth processing */ - - uint8_t next_curr_id; - /* This field combines the next and current id (each four bits) - - * the next id is the most significant nibble. - * Next Id: Set to the next slice to pass the authentication data through. - * Set to FW_SLICE_DRAM_WR if the data is not to go through - * any more slices after authentication. - * Current Id: Initialised with the authentication slice type */ - - /* LW 29 */ - uint8_t resrvd3; - /* Now a reserved field. MUST be set to 0 */ - - uint8_t outer_prefix_sz; - /* Size in bytes of outer prefix data */ - - uint8_t final_sz; - /* Size in bytes of digest to be returned to the client if requested */ - - uint8_t inner_res_sz; - /* Size in bytes of the digest from the inner hash algorithm */ - - /* LW 30 */ - uint8_t resrvd4; - /* Now a reserved field. MUST be set to zero. */ - - uint8_t inner_state1_sz; - /* Size in bytes of inner hash state1 data. 
Must be a qword multiple */ - - uint8_t inner_state2_offset; - /* Quad word offset from the content descriptor parameters pointer to the - * inner state2 value */ - - uint8_t inner_state2_sz; - /* Size in bytes of inner hash state2 data. Must be a qword multiple */ - - /* LW 31 */ - uint8_t outer_config_offset; - /* Quad word offset from the content descriptor parameters pointer to the - * outer configuration information */ - - uint8_t outer_state1_sz; - /* Size in bytes of the outer state1 value */ - - uint8_t outer_res_sz; - /* Size in bytes of digest from the outer auth algorithm */ - - uint8_t outer_prefix_offset; - /* Quad word offset from the start of the inner prefix data to the outer - * prefix information. Should equal the rounded inner prefix size, converted - * to qwords */ - -}; - -/* - * Cipher + Authentication content descriptor control block header - * Definition of both service-specific cipher + authentication control - * block header structures. This header forms part of the content - * descriptor block incorporating LWs 27-31, as defined by the common - * base parameters structure. - */ -struct fw_cipher_auth_cd_ctrl_hdr -{ - /* LW 27 */ - uint8_t cipher_state_sz; - /* State size in quad words of the cipher algorithm used in this session. - * Set to zero if the algorithm doesnt provide any state */ - - uint8_t cipher_key_sz; - /* Key size in quad words of the cipher algorithm used in this session */ - - uint8_t cipher_cfg_offset; - /* Quad word offset from the content descriptor parameters address i.e. - * (content_address + (cd_hdr_sz << 3)) to the parameters for the cipher - * processing */ - - uint8_t next_curr_id_cipher; - /* This field combines the next and current id (each four bits) - - * the next id is the most significant nibble. - * Next Id: Set to the next slice to pass the ciphered data through. - * Set to FW_SLICE_DRAM_WR if the data is not to go through - * any more slices after cipher. 
- * Current Id: Initialised with the cipher slice type */ - - /* LW 28 */ - uint8_t cipher_padding_sz; - /* State padding size in quad words. Set to 0 if no padding is required. - */ - - uint8_t hash_flags; - /* General flags defining the processing to perform. 0 is normal - * processing - * and 1 means there is a nested hash processing loop to go through */ - - uint8_t hash_cfg_offset; - /* Quad word offset from the content descriptor parameters address to the - * parameters for the auth processing */ - - uint8_t next_curr_id_auth; - /* This field combines the next and current id (each four bits) - - * the next id is the most significant nibble. - * Next Id: Set to the next slice to pass the authentication data through. - * Set to FW_SLICE_DRAM_WR if the data is not to go through - * any more slices after authentication. - * Current Id: Initialised with the authentication slice type */ - - /* LW 29 */ - uint8_t resrvd1; - /* Reserved field. MUST be set to 0 */ - - uint8_t outer_prefix_sz; - /* Size in bytes of outer prefix data */ - - uint8_t final_sz; - /* Size in bytes of digest to be returned to the client if requested */ - - uint8_t inner_res_sz; - /* Size in bytes of the digest from the inner hash algorithm */ - - /* LW 30 */ - uint8_t resrvd2; - /* Now a reserved field. MUST be set to zero. */ - - uint8_t inner_state1_sz; - /* Size in bytes of inner hash state1 data. Must be a qword multiple */ - - uint8_t inner_state2_offset; - /* Quad word offset from the content descriptor parameters pointer to the - * inner state2 value */ - - uint8_t inner_state2_sz; - /* Size in bytes of inner hash state2 data. 
Must be a qword multiple */ - - /* LW 31 */ - uint8_t outer_config_offset; - /* Quad word offset from the content descriptor parameters pointer to the - * outer configuration information */ - - uint8_t outer_state1_sz; - /* Size in bytes of the outer state1 value */ - - uint8_t outer_res_sz; - /* Size in bytes of digest from the outer auth algorithm */ - - uint8_t outer_prefix_offset; - /* Quad word offset from the start of the inner prefix data to the outer - * prefix information. Should equal the rounded inner prefix size, converted - * to qwords */ - -}; - -#define FW_AUTH_HDR_FLAG_DO_NESTED 1 -/* Definition of the hash_flags bit of the auth_hdr to indicate the request - * requires nested hashing */ - -#define FW_AUTH_HDR_FLAG_NO_NESTED 0 -/* Definition of the hash_flags bit of the auth_hdr for no nested hashing - * required */ - -#define FW_CCM_GCM_AAD_SZ_MAX 240 -/* Maximum size of AAD data allowed for CCM or GCM processing. AAD data size90 - - * is stored in 8-bit field and must be multiple of hash block size. 240 is - * largest value which satisfy both requirements.AAD_SZ_MAX is in byte units */ - -/* - * request parameter #defines - */ -#define FW_HASH_REQUEST_PARAMETERS_OFFSET \ - (sizeof(fw_la_cipher_req_params_t)) -/* Offset in bytes from the start of the request parameters block to the hash - * (auth) request parameters */ - -#define FW_CIPHER_REQUEST_PARAMETERS_OFFSET (0) -/* Offset in bytes from the start of the request parameters block to the cipher - * request parameters */ - -/* - * Definition of the cipher request parameters block - * - * Definition of the cipher processing request parameters block - * structure, which forms part of the block incorporating LWs 14-26, - * as defined by the common base parameters structure. - * Unused fields must be set to 0. - */ -struct fw_la_cipher_req_params { - /* LW 14 */ - uint32_t cipher_offset; - /* Cipher offset long word. */ - - /* LW 15 */ - uint32_t cipher_length; - /* Cipher length long word. 
*/ - - /* LWs 16-19 */ - union { - uint32_t cipher_IV_array[FW_NUM_LONGWORDS_4]; - /* Cipher IV array */ - - struct - { - uint64_t cipher_IV_ptr; - /* Cipher IV pointer or Partial State Pointer */ - - uint64_t resrvd1; - /* reserved */ - - } s; - - } u; - -}; - -/* - * Definition of the auth request parameters block - * Definition of the authentication processing request parameters block - * structure, which forms part of the block incorporating LWs 14-26, - * as defined by the common base parameters structure. Note: - * This structure is used by TLS only. - */ -struct fw_la_auth_req_params { - /* LW 20 */ - uint32_t auth_off; - /* Byte offset from the start of packet to the auth data region */ - - /* LW 21 */ - uint32_t auth_len; - /* Byte length of the auth data region */ - - /* LWs 22-23 */ - union { - uint64_t auth_partial_st_prefix; - /* Address of the authentication partial state prefix - * information */ - - uint64_t aad_adr; - /* Address of the AAD info in DRAM. Used for the CCM and GCM - * protocols */ - - } u1; - - /* LWs 24-25 */ - uint64_t auth_res_addr; - /* Address of the authentication result information to validate or - * the location to which the digest information can be written back to */ - - /* LW 26 */ - union { - uint8_t inner_prefix_sz; - /* Size in bytes of the inner prefix data */ - - uint8_t aad_sz; - /* Size in bytes of padded AAD data to prefix to the packet for CCM - * or GCM processing */ - } u2; - - uint8_t resrvd1; - /* reserved */ - - uint8_t hash_state_sz; - /* Number of quad words of inner and outer hash prefix data to process - * Maximum size is 240 */ - - uint8_t auth_res_sz; - /* Size in bytes of the authentication result */ - -} __packed; - -/* - * Definition of the auth request parameters block - * Definition of the authentication processing request parameters block - * structure, which forms part of the block incorporating LWs 14-26, - * as defined by the common base parameters structure. 
Note: - * This structure is used by SSL3 and MGF1 only. All fields other than - * inner prefix/ AAD size are unused and therefore reserved. - */ -struct fw_la_auth_req_params_resrvd_flds { - /* LWs 20-25 */ - uint32_t resrvd[FW_NUM_LONGWORDS_6]; - - /* LW 26 */ - union { - uint8_t inner_prefix_sz; - /* Size in bytes of the inner prefix data */ - - uint8_t aad_sz; - /* Size in bytes of padded AAD data to prefix to the packet for CCM - * or GCM processing */ - } u2; - - uint8_t resrvd1; - /* reserved */ - - uint16_t resrvd2; - /* reserved */ -}; - -/* - * Definition of the shared fields within the parameter block - * containing SSL, TLS or MGF information. - * This structure defines the shared fields for SSL, TLS or MGF - * within the parameter block incorporating LWs 14-26, as defined - * by the common base parameters structure. - * Unused fields must be set to 0. - */ -struct fw_la_key_gen_common { - /* LW 14 */ - union { - /* SSL3 */ - uint16_t secret_lgth_ssl; - /* Length of Secret information for SSL. 
In the case of TLS the - * secret is supplied in the content descriptor */ - - /* MGF */ - uint16_t mask_length; - /* Size in bytes of the desired output mask for MGF1*/ - - /* TLS */ - uint16_t secret_lgth_tls; - /* TLS Secret length */ - - } u; - - union { - /* SSL3 */ - struct - { - uint8_t output_lgth_ssl; - /* Output length */ - - uint8_t label_lgth_ssl; - /* Label length */ - - } s1; - - /* MGF */ - struct - { - uint8_t hash_length; - /* Hash length */ - - uint8_t seed_length; - /* Seed length */ - - } s2; - - /* TLS */ - struct - { - uint8_t output_lgth_tls; - /* Output length */ - - uint8_t label_lgth_tls; - /* Label length */ - - } s3; - - } u1; - - /* LW 15 */ - union { - /* SSL3 */ - uint8_t iter_count; - /* Iteration count used by the SSL key gen request */ - - /* TLS */ - uint8_t tls_seed_length; - /* TLS Seed length */ - - uint8_t resrvd1; - /* Reserved field set to 0 for MGF1 */ - - } u2; - - uint8_t resrvd2; - uint16_t resrvd3; - /* Reserved space - unused */ - -}; - -/* - * Definition of the SSL3 request parameters block - * This structure contains the the SSL3 processing request parameters - * incorporating LWs 14-26, as defined by the common base - * parameters structure. Unused fields must be set to 0. 
- */ -struct fw_la_ssl3_req_params { - /* LWs 14-15 */ - struct fw_la_key_gen_common keygen_comn; - /* For other key gen processing these field holds ssl, tls or mgf - * parameters */ - - /* LW 16-25 */ - uint32_t resrvd[FW_NUM_LONGWORDS_10]; - /* Reserved */ - - /* LW 26 */ - union { - uint8_t inner_prefix_sz; - /* Size in bytes of the inner prefix data */ - - uint8_t aad_sz; - /* Size in bytes of padded AAD data to prefix to the packet for CCM - * or GCM processing */ - } u2; - - uint8_t resrvd1; - /* reserved */ - - uint16_t resrvd2; - /* reserved */ - -}; - -/* - * Definition of the MGF request parameters block - * This structure contains the the MGF processing request parameters - * incorporating LWs 14-26, as defined by the common base parameters - * structure. Unused fields must be set to 0. - */ -struct fw_la_mgf_req_params { - /* LWs 14-15 */ - struct fw_la_key_gen_common keygen_comn; - /* For other key gen processing these field holds ssl or mgf - * parameters */ - - /* LW 16-25 */ - uint32_t resrvd[FW_NUM_LONGWORDS_10]; - /* Reserved */ - - /* LW 26 */ - union { - uint8_t inner_prefix_sz; - /* Size in bytes of the inner prefix data */ - - uint8_t aad_sz; - /* Size in bytes of padded AAD data to prefix to the packet for CCM - * or GCM processing */ - } u2; - - uint8_t resrvd1; - /* reserved */ - - uint16_t resrvd2; - /* reserved */ - -}; - -/* - * Definition of the TLS request parameters block - * This structure contains the the TLS processing request parameters - * incorporating LWs 14-26, as defined by the common base parameters - * structure. Unused fields must be set to 0. - */ -struct fw_la_tls_req_params { - /* LWs 14-15 */ - struct fw_la_key_gen_common keygen_comn; - /* For other key gen processing these field holds ssl, tls or mgf - * parameters */ - - /* LW 16-19 */ - uint32_t resrvd[FW_NUM_LONGWORDS_4]; - /* Reserved */ - -}; - -/* - * Definition of the common QAT FW request middle block for TRNG. 
- * Common section of the request used across all of the services exposed - * by the QAT FW. Each of the services inherit these common fields. TRNG - * requires a specific implementation. - */ -struct fw_la_trng_req_mid { - /* LWs 6-13 */ - uint64_t opaque_data; - /* Opaque data passed unmodified from the request to response messages by - * firmware (fw) */ - - uint64_t resrvd1; - /* Reserved, unused for TRNG */ - - uint64_t dest_data_addr; - /* Generic definition of the destination data supplied to the QAT AE. The - * common flags are used to further describe the attributes of this - * field */ - - uint32_t resrvd2; - /* Reserved, unused for TRNG */ - - uint32_t entropy_length; - /* Size of the data in bytes to process. Used by the get_random - * command. Set to 0 for commands that dont need a length parameter */ - -}; - -/* - * Definition of the common LA QAT FW TRNG request - * Definition of the TRNG processing request type - */ -struct fw_la_trng_req { - /* LWs 0-1 */ - struct fw_comn_req_hdr comn_hdr; - /* Common request header */ - - /* LWs 2-5 */ - union fw_comn_req_hdr_cd_pars cd_pars; - /* Common Request content descriptor field which points either to a - * content descriptor - * parameter block or contains the service-specific data itself. 
*/ - - /* LWs 6-13 */ - struct fw_la_trng_req_mid comn_mid; - /* TRNG request middle section - differs from the common mid-section */ - - /* LWs 14-26 */ - uint32_t resrvd1[FW_NUM_LONGWORDS_13]; - - /* LWs 27-31 */ - uint32_t resrvd2[FW_NUM_LONGWORDS_5]; - -}; - -/* - * Definition of the Lookaside Eagle Tail Response - * This is the response delivered to the ET rings by the Lookaside - * QAT FW service for all commands - */ -struct fw_la_resp { - /* LWs 0-1 */ - struct fw_comn_resp_hdr comn_resp; - /* Common interface response format see fw.h */ - - /* LWs 2-3 */ - uint64_t opaque_data; - /* Opaque data passed from the request to the response message */ - - /* LWs 4-7 */ - uint32_t resrvd[FW_NUM_LONGWORDS_4]; - /* Reserved */ - -}; - -/* - * Definition of the Lookaside TRNG Test Status Structure - * As an addition to FW_LA_TRNG_STATUS Pass or Fail information - * in common response fields, as a response to TRNG_TEST request, Test - * status, Counter for failed tests and 4 entropy counter values are - * sent - * Status of test status and the fail counts. - */ -struct fw_la_trng_test_result { - uint32_t test_status_info; - /* TRNG comparator health test status& Validity information - see Test Status Bit Fields below. 
*/ - - uint32_t test_status_fail_count; - /* TRNG comparator health test status, 32bit fail counter */ - - uint64_t r_ent_ones_cnt; - /* Raw Entropy ones counter */ - - uint64_t r_ent_zeros_cnt; - /* Raw Entropy zeros counter */ - - uint64_t c_ent_ones_cnt; - /* Conditioned Entropy ones counter */ - - uint64_t c_ent_zeros_cnt; - /* Conditioned Entropy zeros counter */ - - uint64_t resrvd; - /* Reserved field must be set to zero */ - -}; - -/* - * Definition of the Lookaside SSL Key Material Input - * This struct defines the layout of input parameters for the - * SSL3 key generation (source flat buffer format) - */ -struct fw_la_ssl_key_material_input { - uint64_t seed_addr; - /* Pointer to seed */ - - uint64_t label_addr; - /* Pointer to label(s) */ - - uint64_t secret_addr; - /* Pointer to secret */ - -}; - -/* - * Definition of the Lookaside TLS Key Material Input - * This struct defines the layout of input parameters for the - * TLS key generation (source flat buffer format) - * NOTE: - * Secret state value (S split into S1 and S2 parts) is supplied via - * Content Descriptor. S1 is placed in an outer prefix buffer, and S2 - * inside the inner prefix buffer. - */ -struct fw_la_tls_key_material_input { - uint64_t seed_addr; - /* Pointer to seed */ - - uint64_t label_addr; - /* Pointer to label(s) */ - -}; - -/* - * Macros using the bit position and mask to set/extract the next - * and current id nibbles within the next_curr_id field of the - * content descriptor header block, ONLY FOR CIPHER + AUTH COMBINED. - * Note that for cipher only or authentication only, the common macros - * need to be used. These are defined in the 'fw.h' common header - * file, as they are used by compression, cipher and authentication. - * - * cd_ctrl_hdr_t Content descriptor control block header. - * val Value of the field being set. 
- */ -/* Cipher fields within Cipher + Authentication structure */ -#define FW_CIPHER_NEXT_ID_GET(cd_ctrl_hdr_t) \ - ((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \ - FW_COMN_NEXT_ID_MASK) >> \ - (FW_COMN_NEXT_ID_BITPOS)) - -#define FW_CIPHER_NEXT_ID_SET(cd_ctrl_hdr_t, val) \ - (cd_ctrl_hdr_t)->next_curr_id_cipher = \ - ((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \ - FW_COMN_CURR_ID_MASK) | \ - ((val << FW_COMN_NEXT_ID_BITPOS) & \ - FW_COMN_NEXT_ID_MASK)) - -#define FW_CIPHER_CURR_ID_GET(cd_ctrl_hdr_t) \ - (((cd_ctrl_hdr_t)->next_curr_id_cipher) & FW_COMN_CURR_ID_MASK) - -#define FW_CIPHER_CURR_ID_SET(cd_ctrl_hdr_t, val) \ - (cd_ctrl_hdr_t)->next_curr_id_cipher = \ - ((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \ - FW_COMN_NEXT_ID_MASK) | \ - ((val)&FW_COMN_CURR_ID_MASK)) - -/* Authentication fields within Cipher + Authentication structure */ -#define FW_AUTH_NEXT_ID_GET(cd_ctrl_hdr_t) \ - ((((cd_ctrl_hdr_t)->next_curr_id_auth) & FW_COMN_NEXT_ID_MASK) >> \ - (FW_COMN_NEXT_ID_BITPOS)) - -#define FW_AUTH_NEXT_ID_SET(cd_ctrl_hdr_t, val) \ - (cd_ctrl_hdr_t)->next_curr_id_auth = \ - ((((cd_ctrl_hdr_t)->next_curr_id_auth) & \ - FW_COMN_CURR_ID_MASK) | \ - ((val << FW_COMN_NEXT_ID_BITPOS) & \ - FW_COMN_NEXT_ID_MASK)) - -#define FW_AUTH_CURR_ID_GET(cd_ctrl_hdr_t) \ - (((cd_ctrl_hdr_t)->next_curr_id_auth) & FW_COMN_CURR_ID_MASK) - -#define FW_AUTH_CURR_ID_SET(cd_ctrl_hdr_t, val) \ - (cd_ctrl_hdr_t)->next_curr_id_auth = \ - ((((cd_ctrl_hdr_t)->next_curr_id_auth) & \ - FW_COMN_NEXT_ID_MASK) | \ - ((val)&FW_COMN_CURR_ID_MASK)) - -/* Definitions of the bits in the test_status_info of the TRNG_TEST response. - * The values returned by the Lookaside service are given below - * The Test result and Test Fail Count values are only valid if the Test - * Results Valid (Tv) is set. 
- * - * TRNG Test Status Info - * + ===== + ------------------------------------------------ + --- + --- + - * | Bit | 31 - 2 | 1 | 0 | - * + ===== + ------------------------------------------------ + --- + --- + - * | Flags | RESERVED = 0 | Tv | Ts | - * + ===== + ------------------------------------------------------------ + - */ -/* - * Definition of the Lookaside TRNG Test Status Information received as - * a part of fw_la_trng_test_result_t - * - */ -#define FW_LA_TRNG_TEST_STATUS_TS_BITPOS 0 -/* TRNG Test Result t_status field bit pos definition. */ - -#define FW_LA_TRNG_TEST_STATUS_TS_MASK 0x1 -/* TRNG Test Result t_status field mask definition. */ - -#define FW_LA_TRNG_TEST_STATUS_TV_BITPOS 1 -/* TRNG Test Result test results valid field bit pos definition. */ - -#define FW_LA_TRNG_TEST_STATUS_TV_MASK 0x1 -/* TRNG Test Result test results valid field mask definition. */ - -/* - * Definition of the Lookaside TRNG test_status values. - * - * - */ -#define FW_LA_TRNG_TEST_STATUS_TV_VALID 1 -/* TRNG TEST Response Test Results Valid Value. */ - -#define FW_LA_TRNG_TEST_STATUS_TV_NOT_VALID 0 -/* TRNG TEST Response Test Results are NOT Valid Value. */ - -#define FW_LA_TRNG_TEST_STATUS_TS_NO_FAILS 1 -/* Value for TRNG Test status tests have NO FAILs Value. */ - -#define FW_LA_TRNG_TEST_STATUS_TS_HAS_FAILS 0 -/* Value for TRNG Test status tests have one or more FAILS Value. */ - -/* - * Macro for extraction of the Test Status Field returned in the response - * to TRNG TEST command. - * - * test_status 8 bit test_status value to extract the status bit - */ -#define FW_LA_TRNG_TEST_STATUS_TS_FLD_GET(test_status) \ - FIELD_GET(test_status, \ - FW_LA_TRNG_TEST_STATUS_TS_BITPOS, \ - FW_LA_TRNG_TEST_STATUS_TS_MASK) -/* - * Macro for extraction of the Test Results Valid Field returned in the - * response to TRNG TEST command. 
- * - * test_status 8 bit test_status value to extract the Tests - * Results valid bit - */ -#define FW_LA_TRNG_TEST_STATUS_TV_FLD_GET(test_status) \ - FIELD_GET(test_status, \ - FW_LA_TRNG_TEST_STATUS_TV_BITPOS, \ - FW_LA_TRNG_TEST_STATUS_TV_MASK) - -/* - * MGF Max supported input parameters - */ -#define FW_LA_MGF_SEED_LEN_MAX 255 -/* Maximum seed length for MGF1 request in bytes - * Typical values may be 48, 64, 128 bytes (or any). */ - -#define FW_LA_MGF_MASK_LEN_MAX 65528 -/* Maximum mask length for MGF1 request in bytes - * Typical values may be 8 (64-bit), 16 (128-bit). MUST be quad word multiple */ - -/* - * SSL Max supported input parameters - */ -#define FW_LA_SSL_SECRET_LEN_MAX 512 -/* Maximum secret length for SSL3 Key Gen request (bytes) */ - -#define FW_LA_SSL_ITERATES_LEN_MAX 16 -/* Maximum iterations for SSL3 Key Gen request (integer) */ - -#define FW_LA_SSL_LABEL_LEN_MAX 136 -/* Maximum label length for SSL3 Key Gen request (bytes) */ - -#define FW_LA_SSL_SEED_LEN_MAX 64 -/* Maximum seed length for SSL3 Key Gen request (bytes) */ - -#define FW_LA_SSL_OUTPUT_LEN_MAX 248 -/* Maximum output length for SSL3 Key Gen request (bytes) */ - -/* - * TLS Max supported input parameters - */ -#define FW_LA_TLS_SECRET_LEN_MAX 128 -/* Maximum secret length for TLS Key Gen request (bytes) */ - -#define FW_LA_TLS_V1_1_SECRET_LEN_MAX 128 -/* Maximum secret length for TLS Key Gen request (bytes) */ - -#define FW_LA_TLS_V1_2_SECRET_LEN_MAX 64 -/* Maximum secret length for TLS Key Gen request (bytes) */ - -#define FW_LA_TLS_LABEL_LEN_MAX 255 -/* Maximum label length for TLS Key Gen request (bytes) */ - -#define FW_LA_TLS_SEED_LEN_MAX 64 -/* Maximum seed length for TLS Key Gen request (bytes) */ - -#define FW_LA_TLS_OUTPUT_LEN_MAX 248 -/* Maximum output length for TLS Key Gen request (bytes) */ - -#endif Index: sys/dev/qat/qat_hw17var.h =================================================================== --- sys/dev/qat/qat_hw17var.h +++ /dev/null @@ -1,80 +0,0 @@ -/* 
SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qat_hw17var.h,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2014 Intel Corporation. - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. 
- * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */ - -/* $FreeBSD$ */ - -#ifndef _DEV_PCI_QAT_HW17VAR_H_ -#define _DEV_PCI_QAT_HW17VAR_H_ - -CTASSERT(CONTENT_DESC_MAX_SIZE >= - roundup(sizeof(union hw_cipher_algo_blk), 8) + - roundup(sizeof(union hw_auth_algo_blk), 8)); - -int qat_adm_mailbox_init(struct qat_softc *); -int qat_adm_mailbox_send_init(struct qat_softc *); -int qat_arb_init(struct qat_softc *); -int qat_set_ssm_wdtimer(struct qat_softc *); -int qat_check_slice_hang(struct qat_softc *); - -void qat_hw17_crypto_setup_desc(struct qat_crypto *, - struct qat_session *, struct qat_crypto_desc *); -void qat_hw17_crypto_setup_req_params(struct qat_crypto_bank *, - struct qat_session *, struct qat_crypto_desc const *, - struct qat_sym_cookie *, struct cryptop *); - -#endif Index: sys/dev/qat/qatreg.h =================================================================== --- sys/dev/qat/qatreg.h +++ /dev/null @@ -1,1585 +0,0 @@ -/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */ -/* $NetBSD: qatreg.h,v 1.1 2019/11/20 09:37:46 hikaru Exp $ */ - -/* - * Copyright (c) 2019 Internet Initiative Japan, Inc. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS - * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED - * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS - * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * Copyright(c) 2007-2019 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -/* $FreeBSD$ */ - -#ifndef _DEV_PCI_QATREG_H_ -#define _DEV_PCI_QATREG_H_ - -#define __BIT(__n) \ - (((uintmax_t)(__n) >= NBBY * sizeof(uintmax_t)) ? 0 : \ - ((uintmax_t)1 << (uintmax_t)((__n) & (NBBY * sizeof(uintmax_t) - 1)))) -#define __BITS(__m, __n) \ - ((__BIT(MAX((__m), (__n)) + 1) - 1) ^ (__BIT(MIN((__m), (__n))) - 1)) - -#define __LOWEST_SET_BIT(__mask) ((((__mask) - 1) & (__mask)) ^ (__mask)) -#define __SHIFTOUT(__x, __mask) (((__x) & (__mask)) / __LOWEST_SET_BIT(__mask)) -#define __SHIFTIN(__x, __mask) ((__x) * __LOWEST_SET_BIT(__mask)) - -/* Limits */ -#define MAX_NUM_AE 0x10 -#define MAX_NUM_ACCEL 6 -#define MAX_AE 0x18 -#define MAX_AE_CTX 8 -#define MAX_ARB 4 - -#define MAX_USTORE_PER_SEG 0x8000 /* 16k * 2 */ -#define MAX_USTORE MAX_USTORE_PER_SEG - -#define MAX_AE_PER_ACCEL 4 /* XXX */ -#define MAX_BANK_PER_ACCEL 16 /* XXX */ -#define MAX_RING_PER_BANK 16 - -#define MAX_XFER_REG 128 -#define MAX_GPR_REG 128 -#define MAX_NN_REG 128 -#define MAX_LMEM_REG 1024 -#define MAX_INP_STATE 16 -#define MAX_CAM_REG 16 -#define MAX_FIFO_QWADDR 160 - -#define MAX_EXEC_INST 100 -#define UWORD_CPYBUF_SIZE 1024 /* micro-store copy buffer (bytes) */ -#define INVLD_UWORD 0xffffffffffull /* invalid micro-instruction */ -#define AEV2_PACKED_UWORD_BYTES 6 /* version 2 packed uword size */ -#define UWORD_MASK 0xbffffffffffull /* micro-word mask without parity */ - -#define AE_ALL_CTX 0xff - -/* PCIe configuration space parameter */ 
-#define NO_PCI_REG (-1) -#define NO_REG_OFFSET 0 - -#define MAX_BARS 3 - -/* Fuse Control */ -#define FUSECTL_REG 0x40 -#define FUSECTL_MASK __BIT(31) - -#define LEGFUSE_REG 0x4c -#define LEGFUSE_ACCEL_MASK_CIPHER_SLICE __BIT(0) -#define LEGFUSE_ACCEL_MASK_AUTH_SLICE __BIT(1) -#define LEGFUSE_ACCEL_MASK_PKE_SLICE __BIT(2) -#define LEGFUSE_ACCEL_MASK_COMPRESS_SLICE __BIT(3) -#define LEGFUSE_ACCEL_MASK_LZS_SLICE __BIT(4) -#define LEGFUSE_ACCEL_MASK_EIA3_SLICE __BIT(5) -#define LEGFUSE_ACCEL_MASK_SHA3_SLICE __BIT(6) - -/* -------------------------------------------------------------------------- */ -/* PETRINGCSR region */ - -/* ETR parameters */ -#define ETR_MAX_RINGS_PER_BANK 16 - -/* ETR registers */ -#define ETR_RING_CONFIG 0x0000 -#define ETR_RING_LBASE 0x0040 -#define ETR_RING_UBASE 0x0080 -#define ETR_RING_HEAD_OFFSET 0x00C0 -#define ETR_RING_TAIL_OFFSET 0x0100 -#define ETR_RING_STAT 0x0140 -#define ETR_UO_STAT 0x0148 -#define ETR_E_STAT 0x014C -#define ETR_NE_STAT 0x0150 -#define ETR_NF_STAT 0x0154 -#define ETR_F_STAT 0x0158 -#define ETR_C_STAT 0x015C -#define ETR_INT_EN 0x016C -#define ETR_INT_REG 0x0170 -#define ETR_INT_SRCSEL 0x0174 -#define ETR_INT_SRCSEL_2 0x0178 -#define ETR_INT_COL_EN 0x017C -#define ETR_INT_COL_CTL 0x0180 -#define ETR_AP_NF_MASK 0x2000 -#define ETR_AP_NF_DEST 0x2020 -#define ETR_AP_NE_MASK 0x2040 -#define ETR_AP_NE_DEST 0x2060 -#define ETR_AP_DELAY 0x2080 - -/* ARB registers */ -#define ARB_OFFSET 0x30000 -#define ARB_REG_SIZE 0x4 -#define ARB_WTR_SIZE 0x20 -#define ARB_REG_SLOT 0x1000 -#define ARB_WTR_OFFSET 0x010 -#define ARB_RO_EN_OFFSET 0x090 -#define ARB_WRK_2_SER_MAP_OFFSET 0x180 -#define ARB_RINGSRVARBEN_OFFSET 0x19c - -/* Ring Config */ -#define ETR_RING_CONFIG_LATE_HEAD_POINTER_MODE __BIT(31) -#define ETR_RING_CONFIG_NEAR_FULL_WM __BITS(14, 10) -#define ETR_RING_CONFIG_NEAR_EMPTY_WM __BITS(9, 5) -#define ETR_RING_CONFIG_RING_SIZE __BITS(4, 0) - -#define ETR_RING_CONFIG_NEAR_WM_0 0x00 -#define ETR_RING_CONFIG_NEAR_WM_4 0x01 
-#define ETR_RING_CONFIG_NEAR_WM_8 0x02 -#define ETR_RING_CONFIG_NEAR_WM_16 0x03 -#define ETR_RING_CONFIG_NEAR_WM_32 0x04 -#define ETR_RING_CONFIG_NEAR_WM_64 0x05 -#define ETR_RING_CONFIG_NEAR_WM_128 0x06 -#define ETR_RING_CONFIG_NEAR_WM_256 0x07 -#define ETR_RING_CONFIG_NEAR_WM_512 0x08 -#define ETR_RING_CONFIG_NEAR_WM_1K 0x09 -#define ETR_RING_CONFIG_NEAR_WM_2K 0x0A -#define ETR_RING_CONFIG_NEAR_WM_4K 0x0B -#define ETR_RING_CONFIG_NEAR_WM_8K 0x0C -#define ETR_RING_CONFIG_NEAR_WM_16K 0x0D -#define ETR_RING_CONFIG_NEAR_WM_32K 0x0E -#define ETR_RING_CONFIG_NEAR_WM_64K 0x0F -#define ETR_RING_CONFIG_NEAR_WM_128K 0x10 -#define ETR_RING_CONFIG_NEAR_WM_256K 0x11 -#define ETR_RING_CONFIG_NEAR_WM_512K 0x12 -#define ETR_RING_CONFIG_NEAR_WM_1M 0x13 -#define ETR_RING_CONFIG_NEAR_WM_2M 0x14 -#define ETR_RING_CONFIG_NEAR_WM_4M 0x15 - -#define ETR_RING_CONFIG_SIZE_64 0x00 -#define ETR_RING_CONFIG_SIZE_128 0x01 -#define ETR_RING_CONFIG_SIZE_256 0x02 -#define ETR_RING_CONFIG_SIZE_512 0x03 -#define ETR_RING_CONFIG_SIZE_1K 0x04 -#define ETR_RING_CONFIG_SIZE_2K 0x05 -#define ETR_RING_CONFIG_SIZE_4K 0x06 -#define ETR_RING_CONFIG_SIZE_8K 0x07 -#define ETR_RING_CONFIG_SIZE_16K 0x08 -#define ETR_RING_CONFIG_SIZE_32K 0x09 -#define ETR_RING_CONFIG_SIZE_64K 0x0A -#define ETR_RING_CONFIG_SIZE_128K 0x0B -#define ETR_RING_CONFIG_SIZE_256K 0x0C -#define ETR_RING_CONFIG_SIZE_512K 0x0D -#define ETR_RING_CONFIG_SIZE_1M 0x0E -#define ETR_RING_CONFIG_SIZE_2M 0x0F -#define ETR_RING_CONFIG_SIZE_4M 0x10 - -/* Default Ring Config is Nearly Full = Full and Nearly Empty = Empty */ -#define ETR_RING_CONFIG_BUILD(size) \ - (__SHIFTIN(ETR_RING_CONFIG_NEAR_WM_0, \ - ETR_RING_CONFIG_NEAR_FULL_WM) | \ - __SHIFTIN(ETR_RING_CONFIG_NEAR_WM_0, \ - ETR_RING_CONFIG_NEAR_EMPTY_WM) | \ - __SHIFTIN((size), ETR_RING_CONFIG_RING_SIZE)) - -/* Response Ring Configuration */ -#define ETR_RING_CONFIG_BUILD_RESP(size, wm_nf, wm_ne) \ - (__SHIFTIN((wm_nf), ETR_RING_CONFIG_NEAR_FULL_WM) | \ - __SHIFTIN((wm_ne), 
ETR_RING_CONFIG_NEAR_EMPTY_WM) | \ - __SHIFTIN((size), ETR_RING_CONFIG_RING_SIZE)) - -/* Ring Base */ -#define ETR_RING_BASE_BUILD(addr, size) \ - (((addr) >> 6) & (0xFFFFFFFFFFFFFFFFULL << (size))) - -#define ETR_INT_REG_CLEAR_MASK 0xffff - -/* Initial bank Interrupt Source mask */ -#define ETR_INT_SRCSEL_MASK 0x44444444UL - -#define ETR_INT_SRCSEL_NEXT_OFFSET 4 - -#define ETR_RINGS_PER_INT_SRCSEL 8 - -#define ETR_INT_COL_CTL_ENABLE __BIT(31) - -#define ETR_AP_NF_MASK_INIT 0xAAAAAAAA -#define ETR_AP_NE_MASK_INIT 0x55555555 - -/* Autopush destination AE bit */ -#define ETR_AP_DEST_ENABLE __BIT(7) -#define ETR_AP_DEST_AE __BITS(6, 2) -#define ETR_AP_DEST_MAILBOX __BITS(1, 0) - -/* Autopush destination enable bit */ - -/* Autopush CSR Offset */ -#define ETR_AP_BANK_OFFSET 4 - -/* Autopush maximum rings per bank */ -#define ETR_MAX_RINGS_PER_AP_BANK 32 - -/* Maximum mailbox per acclerator */ -#define ETR_MAX_MAILBOX_PER_ACCELERATOR 4 - -/* Maximum AEs per mailbox */ -#define ETR_MAX_AE_PER_MAILBOX 4 - -/* Macro to get the ring's autopush bank number */ -#define ETR_RING_AP_BANK_NUMBER(ring) ((ring) >> 5) - -/* Macro to get the ring's autopush mailbox number */ -#define ETR_RING_AP_MAILBOX_NUMBER(ring) \ - (ETR_RING_AP_BANK_NUMBER(ring) % ETR_MAX_MAILBOX_PER_ACCELERATOR) - -/* Macro to get the ring number in the autopush bank */ -#define ETR_RING_NUMBER_IN_AP_BANK(ring) \ - ((ring) % ETR_MAX_RINGS_PER_AP_BANK) - -#define ETR_RING_EMPTY_ENTRY_SIG (0x7F7F7F7F) - -/* -------------------------------------------------------------------------- */ -/* CAP_GLOBAL_CTL region */ - -#define FCU_CTRL 0x8c0 -#define FCU_CTRL_CMD_NOOP 0 -#define FCU_CTRL_CMD_AUTH 1 -#define FCU_CTRL_CMD_LOAD 2 -#define FCU_CTRL_CMD_START 3 -#define FCU_CTRL_AE __BITS(8, 31) - -#define FCU_STATUS 0x8c4 -#define FCU_STATUS_STS __BITS(0, 2) -#define FCU_STATUS_STS_NO 0 -#define FCU_STATUS_STS_VERI_DONE 1 -#define FCU_STATUS_STS_LOAD_DONE 2 -#define FCU_STATUS_STS_VERI_FAIL 3 -#define 
FCU_STATUS_STS_LOAD_FAIL 4 -#define FCU_STATUS_STS_BUSY 5 -#define FCU_STATUS_AUTHFWLD __BIT(8) -#define FCU_STATUS_DONE __BIT(9) -#define FCU_STATUS_LOADED_AE __BITS(22, 31) - -#define FCU_STATUS1 0x8c8 - -#define FCU_DRAM_ADDR_LO 0x8cc -#define FCU_DRAM_ADDR_HI 0x8d0 -#define FCU_RAMBASE_ADDR_HI 0x8d4 -#define FCU_RAMBASE_ADDR_LO 0x8d8 - -#define FW_AUTH_WAIT_PERIOD 10 -#define FW_AUTH_MAX_RETRY 300 - -#define CAP_GLOBAL_CTL_BASE 0xa00 -#define CAP_GLOBAL_CTL_MISC CAP_GLOBAL_CTL_BASE + 0x04 -#define CAP_GLOBAL_CTL_MISC_TIMESTAMP_EN __BIT(7) -#define CAP_GLOBAL_CTL_RESET CAP_GLOBAL_CTL_BASE + 0x0c -#define CAP_GLOBAL_CTL_RESET_MASK __BITS(31, 26) -#define CAP_GLOBAL_CTL_RESET_ACCEL_MASK __BITS(25, 20) -#define CAP_GLOBAL_CTL_RESET_AE_MASK __BITS(19, 0) -#define CAP_GLOBAL_CTL_CLK_EN CAP_GLOBAL_CTL_BASE + 0x50 -#define CAP_GLOBAL_CTL_CLK_EN_ACCEL_MASK __BITS(25, 20) -#define CAP_GLOBAL_CTL_CLK_EN_AE_MASK __BITS(19, 0) - -/* -------------------------------------------------------------------------- */ -/* AE region */ -#define UPC_MASK 0x1ffff -#define USTORE_SIZE QAT_16K - -#define AE_LOCAL_AE_MASK __BITS(31, 12) -#define AE_LOCAL_CSR_MASK __BITS(9, 0) - -/* AE_LOCAL registers */ -/* Control Store Address Register */ -#define USTORE_ADDRESS 0x000 -#define USTORE_ADDRESS_ECS __BIT(31) - -#define USTORE_ECC_BIT_0 44 -#define USTORE_ECC_BIT_1 45 -#define USTORE_ECC_BIT_2 46 -#define USTORE_ECC_BIT_3 47 -#define USTORE_ECC_BIT_4 48 -#define USTORE_ECC_BIT_5 49 -#define USTORE_ECC_BIT_6 50 - -/* Control Store Data Lower Register */ -#define USTORE_DATA_LOWER 0x004 -/* Control Store Data Upper Register */ -#define USTORE_DATA_UPPER 0x008 -/* Control Store Error Status Register */ -#define USTORE_ERROR_STATUS 0x00c -/* Arithmetic Logic Unit Output Register */ -#define ALU_OUT 0x010 -/* Context Arbiter Control Register */ -#define CTX_ARB_CNTL 0x014 -#define CTX_ARB_CNTL_INIT 0x00000000 -/* Context Enables Register */ -#define CTX_ENABLES 0x018 -#define CTX_ENABLES_INIT 0 
-#define CTX_ENABLES_INUSE_CONTEXTS __BIT(31) -#define CTX_ENABLES_CNTL_STORE_PARITY_ERROR __BIT(29) -#define CTX_ENABLES_CNTL_STORE_PARITY_ENABLE __BIT(28) -#define CTX_ENABLES_BREAKPOINT __BIT(27) -#define CTX_ENABLES_PAR_ERR __BIT(25) -#define CTX_ENABLES_NN_MODE __BIT(20) -#define CTX_ENABLES_NN_RING_EMPTY __BIT(18) -#define CTX_ENABLES_LMADDR_1_GLOBAL __BIT(17) -#define CTX_ENABLES_LMADDR_0_GLOBAL __BIT(16) -#define CTX_ENABLES_ENABLE __BITS(15,8) - -#define CTX_ENABLES_IGNORE_W1C_MASK \ - (~(CTX_ENABLES_PAR_ERR | \ - CTX_ENABLES_BREAKPOINT | \ - CTX_ENABLES_CNTL_STORE_PARITY_ERROR)) - -/* cycles from CTX_ENABLE high to CTX entering executing state */ -#define CYCLES_FROM_READY2EXE 8 - -/* Condition Code Enable Register */ -#define CC_ENABLE 0x01c -#define CC_ENABLE_INIT 0x2000 - -/* CSR Context Pointer Register */ -#define CSR_CTX_POINTER 0x020 -#define CSR_CTX_POINTER_CONTEXT __BITS(2,0) -/* Register Error Status Register */ -#define REG_ERROR_STATUS 0x030 -/* Indirect Context Status Register */ -#define CTX_STS_INDIRECT 0x040 -#define CTX_STS_INDIRECT_UPC_INIT 0x00000000 - -/* Active Context Status Register */ -#define ACTIVE_CTX_STATUS 0x044 -#define ACTIVE_CTX_STATUS_ABO __BIT(31) -#define ACTIVE_CTX_STATUS_ACNO __BITS(0, 2) -/* Indirect Context Signal Events Register */ -#define CTX_SIG_EVENTS_INDIRECT 0x048 -#define CTX_SIG_EVENTS_INDIRECT_INIT 0x00000001 -/* Active Context Signal Events Register */ -#define CTX_SIG_EVENTS_ACTIVE 0x04c -/* Indirect Context Wakeup Events Register */ -#define CTX_WAKEUP_EVENTS_INDIRECT 0x050 -#define CTX_WAKEUP_EVENTS_INDIRECT_VOLUNTARY 0x00000001 -#define CTX_WAKEUP_EVENTS_INDIRECT_SLEEP 0x00010000 - -#define CTX_WAKEUP_EVENTS_INDIRECT_INIT 0x00000001 - -/* Active Context Wakeup Events Register */ -#define CTX_WAKEUP_EVENTS_ACTIVE 0x054 -/* Indirect Context Future Count Register */ -#define CTX_FUTURE_COUNT_INDIRECT 0x058 -/* Active Context Future Count Register */ -#define CTX_FUTURE_COUNT_ACTIVE 0x05c -/* Indirect 
Local Memory Address 0 Register */ -#define LM_ADDR_0_INDIRECT 0x060 -/* Active Local Memory Address 0 Register */ -#define LM_ADDR_0_ACTIVE 0x064 -/* Indirect Local Memory Address 1 Register */ -#define LM_ADDR_1_INDIRECT 0x068 -/* Active Local Memory Address 1 Register */ -#define LM_ADDR_1_ACTIVE 0x06c -/* Byte Index Register */ -#define BYTE_INDEX 0x070 -/* Indirect Local Memory Address 0 Byte Index Register */ -#define INDIRECT_LM_ADDR_0_BYTE_INDEX 0x0e0 -/* Active Local Memory Address 0 Byte Index Register */ -#define ACTIVE_LM_ADDR_0_BYTE_INDEX 0x0e4 -/* Indirect Local Memory Address 1 Byte Index Register */ -#define INDIRECT_LM_ADDR_1_BYTE_INDEX 0x0e8 -/* Active Local Memory Address 1 Byte Index Register */ -#define ACTIVE_LM_ADDR_1_BYTE_INDEX 0x0ec -/* Transfer Index Concatenated with Byte Index Register */ -#define T_INDEX_BYTE_INDEX 0x0f4 -/* Transfer Index Register */ -#define T_INDEX 0x074 -/* Indirect Future Count Signal Signal Register */ -#define FUTURE_COUNT_SIGNAL_INDIRECT 0x078 -/* Active Context Future Count Register */ -#define FUTURE_COUNT_SIGNAL_ACTIVE 0x07c -/* Next Neighbor Put Register */ -#define NN_PUT 0x080 -/* Next Neighbor Get Register */ -#define NN_GET 0x084 -/* Timestamp Low Register */ -#define TIMESTAMP_LOW 0x0c0 -/* Timestamp High Register */ -#define TIMESTAMP_HIGH 0x0c4 -/* Next Neighbor Signal Register */ -#define NEXT_NEIGHBOR_SIGNAL 0x100 -/* Previous Neighbor Signal Register */ -#define PREV_NEIGHBOR_SIGNAL 0x104 -/* Same AccelEngine Signal Register */ -#define SAME_AE_SIGNAL 0x108 -/* Cyclic Redundancy Check Remainder Register */ -#define CRC_REMAINDER 0x140 -/* Profile Count Register */ -#define PROFILE_COUNT 0x144 -/* Pseudorandom Number Register */ -#define PSEUDO_RANDOM_NUMBER 0x148 -/* Signature Enable Register */ -#define SIGNATURE_ENABLE 0x150 -/* Miscellaneous Control Register */ -#define AE_MISC_CONTROL 0x160 -#define AE_MISC_CONTROL_PARITY_ENABLE __BIT(24) -#define AE_MISC_CONTROL_FORCE_BAD_PARITY __BIT(23) 
-#define AE_MISC_CONTROL_ONE_CTX_RELOAD __BIT(22) -#define AE_MISC_CONTROL_CS_RELOAD __BITS(21, 20) -#define AE_MISC_CONTROL_SHARE_CS __BIT(2) -/* Control Store Address 1 Register */ -#define USTORE_ADDRESS1 0x158 -/* Local CSR Status Register */ -#define LOCAL_CSR_STATUS 0x180 -#define LOCAL_CSR_STATUS_STATUS 0x1 -/* NULL Register */ -#define NULL_CSR 0x3fc - -/* AE_XFER macros */ -#define AE_XFER_AE_MASK __BITS(31, 12) -#define AE_XFER_CSR_MASK __BITS(9, 2) - -#define AEREG_BAD_REGADDR 0xffff /* bad register address */ - -/* -------------------------------------------------------------------------- */ - -#define SSMWDT(i) ((i) * 0x4000 + 0x54) -#define SSMWDTPKE(i) ((i) * 0x4000 + 0x58) -#define INTSTATSSM(i) ((i) * 0x4000 + 0x04) -#define INTSTATSSM_SHANGERR __BIT(13) -#define PPERR(i) ((i) * 0x4000 + 0x08) -#define PPERRID(i) ((i) * 0x4000 + 0x0C) -#define CERRSSMSH(i) ((i) * 0x4000 + 0x10) -#define UERRSSMSH(i) ((i) * 0x4000 + 0x18) -#define UERRSSMSHAD(i) ((i) * 0x4000 + 0x1C) -#define SLICEHANGSTATUS(i) ((i) * 0x4000 + 0x4C) -#define SLICE_HANG_AUTH0_MASK __BIT(0) -#define SLICE_HANG_AUTH1_MASK __BIT(1) -#define SLICE_HANG_CPHR0_MASK __BIT(4) -#define SLICE_HANG_CPHR1_MASK __BIT(5) -#define SLICE_HANG_CMP0_MASK __BIT(8) -#define SLICE_HANG_CMP1_MASK __BIT(9) -#define SLICE_HANG_XLT0_MASK __BIT(12) -#define SLICE_HANG_XLT1_MASK __BIT(13) -#define SLICE_HANG_MMP0_MASK __BIT(16) -#define SLICE_HANG_MMP1_MASK __BIT(17) -#define SLICE_HANG_MMP2_MASK __BIT(18) -#define SLICE_HANG_MMP3_MASK __BIT(19) -#define SLICE_HANG_MMP4_MASK __BIT(20) - -#define SHINTMASKSSM(i) ((i) * 0x4000 + 0x1018) -#define ENABLE_SLICE_HANG 0x000000 -#define MAX_MMP (5) -#define MMP_BASE(i) ((i) * 0x1000 % 0x3800) -#define CERRSSMMMP(i, n) ((i) * 0x4000 + MMP_BASE(n) + 0x380) -#define UERRSSMMMP(i, n) ((i) * 0x4000 + MMP_BASE(n) + 0x388) -#define UERRSSMMMPAD(i, n) ((i) * 0x4000 + MMP_BASE(n) + 0x38C) - -#define CPP_CFC_ERR_STATUS (0x30000 + 0xC04) -#define CPP_CFC_ERR_PPID (0x30000 + 
0xC08) - -#define ERRSOU0 (0x3A000 + 0x00) -#define ERRSOU1 (0x3A000 + 0x04) -#define ERRSOU2 (0x3A000 + 0x08) -#define ERRSOU3 (0x3A000 + 0x0C) -#define ERRSOU4 (0x3A000 + 0xD0) -#define ERRSOU5 (0x3A000 + 0xD8) -#define ERRMSK0 (0x3A000 + 0x10) -#define ERRMSK1 (0x3A000 + 0x14) -#define ERRMSK2 (0x3A000 + 0x18) -#define ERRMSK3 (0x3A000 + 0x1C) -#define ERRMSK4 (0x3A000 + 0xD4) -#define ERRMSK5 (0x3A000 + 0xDC) -#define EMSK3_CPM0_MASK __BIT(2) -#define EMSK3_CPM1_MASK __BIT(3) -#define EMSK5_CPM2_MASK __BIT(16) -#define EMSK5_CPM3_MASK __BIT(17) -#define EMSK5_CPM4_MASK __BIT(18) -#define RICPPINTSTS (0x3A000 + 0x114) -#define RIERRPUSHID (0x3A000 + 0x118) -#define RIERRPULLID (0x3A000 + 0x11C) - -#define TICPPINTSTS (0x3A400 + 0x13C) -#define TIERRPUSHID (0x3A400 + 0x140) -#define TIERRPULLID (0x3A400 + 0x144) -#define SECRAMUERR (0x3AC00 + 0x04) -#define SECRAMUERRAD (0x3AC00 + 0x0C) -#define CPPMEMTGTERR (0x3AC00 + 0x10) -#define ERRPPID (0x3AC00 + 0x14) - -#define ADMINMSGUR 0x3a574 -#define ADMINMSGLR 0x3a578 -#define MAILBOX_BASE 0x20970 -#define MAILBOX_STRIDE 0x1000 -#define ADMINMSG_LEN 32 - -/* -------------------------------------------------------------------------- */ -static const uint8_t mailbox_const_tab[1024] __aligned(1024) = { -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x11, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x11, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x21, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 
0x03, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x01, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x13, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x13, 0x02, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x13, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x13, -0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x23, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x33, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef, 0xfe, 0xdc, 0xba, 0x98, 0x76, -0x54, 0x32, 0x10, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x67, 0x45, 0x23, 0x01, 0xef, 0xcd, 0xab, -0x89, 0x98, 0xba, 0xdc, 0xfe, 0x10, 0x32, 0x54, 0x76, 0xc3, 0xd2, 0xe1, 0xf0, -0x00, 0x00, 0x00, 0x00, 0x11, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xc1, 0x05, 0x9e, -0xd8, 0x36, 0x7c, 0xd5, 0x07, 0x30, 0x70, 0xdd, 0x17, 0xf7, 0x0e, 0x59, 0x39, -0xff, 0xc0, 0x0b, 0x31, 0x68, 0x58, 0x15, 0x11, 0x64, 0xf9, 0x8f, 0xa7, 0xbe, -0xfa, 0x4f, 0xa4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x6a, 
0x09, 0xe6, 0x67, 0xbb, 0x67, 0xae, -0x85, 0x3c, 0x6e, 0xf3, 0x72, 0xa5, 0x4f, 0xf5, 0x3a, 0x51, 0x0e, 0x52, 0x7f, -0x9b, 0x05, 0x68, 0x8c, 0x1f, 0x83, 0xd9, 0xab, 0x5b, 0xe0, 0xcd, 0x19, 0x05, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0xcb, 0xbb, 0x9d, 0x5d, 0xc1, 0x05, 0x9e, 0xd8, 0x62, 0x9a, 0x29, -0x2a, 0x36, 0x7c, 0xd5, 0x07, 0x91, 0x59, 0x01, 0x5a, 0x30, 0x70, 0xdd, 0x17, -0x15, 0x2f, 0xec, 0xd8, 0xf7, 0x0e, 0x59, 0x39, 0x67, 0x33, 0x26, 0x67, 0xff, -0xc0, 0x0b, 0x31, 0x8e, 0xb4, 0x4a, 0x87, 0x68, 0x58, 0x15, 0x11, 0xdb, 0x0c, -0x2e, 0x0d, 0x64, 0xf9, 0x8f, 0xa7, 0x47, 0xb5, 0x48, 0x1d, 0xbe, 0xfa, 0x4f, -0xa4, 0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x6a, 0x09, 0xe6, 0x67, 0xf3, 0xbc, 0xc9, 0x08, 0xbb, -0x67, 0xae, 0x85, 0x84, 0xca, 0xa7, 0x3b, 0x3c, 0x6e, 0xf3, 0x72, 0xfe, 0x94, -0xf8, 0x2b, 0xa5, 0x4f, 0xf5, 0x3a, 0x5f, 0x1d, 0x36, 0xf1, 0x51, 0x0e, 0x52, -0x7f, 0xad, 0xe6, 0x82, 0xd1, 0x9b, 0x05, 0x68, 0x8c, 0x2b, 0x3e, 0x6c, 0x1f, -0x1f, 0x83, 0xd9, 0xab, 0xfb, 0x41, 0xbd, 0x6b, 0x5b, 0xe0, 0xcd, 0x19, 0x13, -0x7e, 0x21, 0x79, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}; - -/* -------------------------------------------------------------------------- */ -/* Microcode */ - -/* Clear GPR of AE */ -static const uint64_t ae_clear_gprs_inst[] = { - 0x0F0000C0000ull, /* .0 l0000!val = 0 ; immed[l0000!val, 0x0] */ - 0x0F000000380ull, /* .1 l0000!count = 128 ; immed[l0000!count, 0x80] */ - 0x0D805000011ull, /* .2 br!=ctx[0, ctx_init#] */ - 0x0FC082C0300ull, /* .3 local_csr_wr[nn_put, 0] */ - 0x0F0000C0300ull, /* .4 nop */ - 0x0F0000C0300ull, /* .5 nop */ - 
0x0F0000C0300ull, /* .6 nop */ - 0x0F0000C0300ull, /* .7 nop */ - 0x0A0643C0000ull, /* .8 init_nn#:alu[*n$index++, --, b, l0000!val] */ - 0x0BAC0000301ull, /* .9 alu[l0000!count, l0000!count, -, 1] */ - 0x0D802000101ull, /* .10 bne[init_nn#] */ - 0x0F0000C0001ull, /* .11 l0000!indx = 0 ; immed[l0000!indx, 0x0] */ - 0x0FC066C0001ull, /* .12 local_csr_wr[active_lm_addr_0, l0000!indx]; - * put indx to lm_addr */ - 0x0F0000C0300ull, /* .13 nop */ - 0x0F0000C0300ull, /* .14 nop */ - 0x0F0000C0300ull, /* .15 nop */ - 0x0F000400300ull, /* .16 l0000!count = 1024 ; immed[l0000!count, 0x400] */ - 0x0A0610C0000ull, /* .17 init_lm#:alu[*l$index0++, --, b, l0000!val] */ - 0x0BAC0000301ull, /* .18 alu[l0000!count, l0000!count, -, 1] */ - 0x0D804400101ull, /* .19 bne[init_lm#] */ - 0x0A0580C0000ull, /* .20 ctx_init#:alu[$l0000!xfers[0], --, b, l0000!val] */ - 0x0A0581C0000ull, /* .21 alu[$l0000!xfers[1], --, b, l0000!val] */ - 0x0A0582C0000ull, /* .22 alu[$l0000!xfers[2], --, b, l0000!val] */ - 0x0A0583C0000ull, /* .23 alu[$l0000!xfers[3], --, b, l0000!val] */ - 0x0A0584C0000ull, /* .24 alu[$l0000!xfers[4], --, b, l0000!val] */ - 0x0A0585C0000ull, /* .25 alu[$l0000!xfers[5], --, b, l0000!val] */ - 0x0A0586C0000ull, /* .26 alu[$l0000!xfers[6], --, b, l0000!val] */ - 0x0A0587C0000ull, /* .27 alu[$l0000!xfers[7], --, b, l0000!val] */ - 0x0A0588C0000ull, /* .28 alu[$l0000!xfers[8], --, b, l0000!val] */ - 0x0A0589C0000ull, /* .29 alu[$l0000!xfers[9], --, b, l0000!val] */ - 0x0A058AC0000ull, /* .30 alu[$l0000!xfers[10], --, b, l0000!val] */ - 0x0A058BC0000ull, /* .31 alu[$l0000!xfers[11], --, b, l0000!val] */ - 0x0A058CC0000ull, /* .32 alu[$l0000!xfers[12], --, b, l0000!val] */ - 0x0A058DC0000ull, /* .33 alu[$l0000!xfers[13], --, b, l0000!val] */ - 0x0A058EC0000ull, /* .34 alu[$l0000!xfers[14], --, b, l0000!val] */ - 0x0A058FC0000ull, /* .35 alu[$l0000!xfers[15], --, b, l0000!val] */ - 0x0A05C0C0000ull, /* .36 alu[$l0000!xfers[16], --, b, l0000!val] */ - 0x0A05C1C0000ull, /* .37 
alu[$l0000!xfers[17], --, b, l0000!val] */ - 0x0A05C2C0000ull, /* .38 alu[$l0000!xfers[18], --, b, l0000!val] */ - 0x0A05C3C0000ull, /* .39 alu[$l0000!xfers[19], --, b, l0000!val] */ - 0x0A05C4C0000ull, /* .40 alu[$l0000!xfers[20], --, b, l0000!val] */ - 0x0A05C5C0000ull, /* .41 alu[$l0000!xfers[21], --, b, l0000!val] */ - 0x0A05C6C0000ull, /* .42 alu[$l0000!xfers[22], --, b, l0000!val] */ - 0x0A05C7C0000ull, /* .43 alu[$l0000!xfers[23], --, b, l0000!val] */ - 0x0A05C8C0000ull, /* .44 alu[$l0000!xfers[24], --, b, l0000!val] */ - 0x0A05C9C0000ull, /* .45 alu[$l0000!xfers[25], --, b, l0000!val] */ - 0x0A05CAC0000ull, /* .46 alu[$l0000!xfers[26], --, b, l0000!val] */ - 0x0A05CBC0000ull, /* .47 alu[$l0000!xfers[27], --, b, l0000!val] */ - 0x0A05CCC0000ull, /* .48 alu[$l0000!xfers[28], --, b, l0000!val] */ - 0x0A05CDC0000ull, /* .49 alu[$l0000!xfers[29], --, b, l0000!val] */ - 0x0A05CEC0000ull, /* .50 alu[$l0000!xfers[30], --, b, l0000!val] */ - 0x0A05CFC0000ull, /* .51 alu[$l0000!xfers[31], --, b, l0000!val] */ - 0x0A0400C0000ull, /* .52 alu[l0000!gprega[0], --, b, l0000!val] */ - 0x0B0400C0000ull, /* .53 alu[l0000!gpregb[0], --, b, l0000!val] */ - 0x0A0401C0000ull, /* .54 alu[l0000!gprega[1], --, b, l0000!val] */ - 0x0B0401C0000ull, /* .55 alu[l0000!gpregb[1], --, b, l0000!val] */ - 0x0A0402C0000ull, /* .56 alu[l0000!gprega[2], --, b, l0000!val] */ - 0x0B0402C0000ull, /* .57 alu[l0000!gpregb[2], --, b, l0000!val] */ - 0x0A0403C0000ull, /* .58 alu[l0000!gprega[3], --, b, l0000!val] */ - 0x0B0403C0000ull, /* .59 alu[l0000!gpregb[3], --, b, l0000!val] */ - 0x0A0404C0000ull, /* .60 alu[l0000!gprega[4], --, b, l0000!val] */ - 0x0B0404C0000ull, /* .61 alu[l0000!gpregb[4], --, b, l0000!val] */ - 0x0A0405C0000ull, /* .62 alu[l0000!gprega[5], --, b, l0000!val] */ - 0x0B0405C0000ull, /* .63 alu[l0000!gpregb[5], --, b, l0000!val] */ - 0x0A0406C0000ull, /* .64 alu[l0000!gprega[6], --, b, l0000!val] */ - 0x0B0406C0000ull, /* .65 alu[l0000!gpregb[6], --, b, l0000!val] */ - 
0x0A0407C0000ull, /* .66 alu[l0000!gprega[7], --, b, l0000!val] */ - 0x0B0407C0000ull, /* .67 alu[l0000!gpregb[7], --, b, l0000!val] */ - 0x0A0408C0000ull, /* .68 alu[l0000!gprega[8], --, b, l0000!val] */ - 0x0B0408C0000ull, /* .69 alu[l0000!gpregb[8], --, b, l0000!val] */ - 0x0A0409C0000ull, /* .70 alu[l0000!gprega[9], --, b, l0000!val] */ - 0x0B0409C0000ull, /* .71 alu[l0000!gpregb[9], --, b, l0000!val] */ - 0x0A040AC0000ull, /* .72 alu[l0000!gprega[10], --, b, l0000!val] */ - 0x0B040AC0000ull, /* .73 alu[l0000!gpregb[10], --, b, l0000!val] */ - 0x0A040BC0000ull, /* .74 alu[l0000!gprega[11], --, b, l0000!val] */ - 0x0B040BC0000ull, /* .75 alu[l0000!gpregb[11], --, b, l0000!val] */ - 0x0A040CC0000ull, /* .76 alu[l0000!gprega[12], --, b, l0000!val] */ - 0x0B040CC0000ull, /* .77 alu[l0000!gpregb[12], --, b, l0000!val] */ - 0x0A040DC0000ull, /* .78 alu[l0000!gprega[13], --, b, l0000!val] */ - 0x0B040DC0000ull, /* .79 alu[l0000!gpregb[13], --, b, l0000!val] */ - 0x0A040EC0000ull, /* .80 alu[l0000!gprega[14], --, b, l0000!val] */ - 0x0B040EC0000ull, /* .81 alu[l0000!gpregb[14], --, b, l0000!val] */ - 0x0A040FC0000ull, /* .82 alu[l0000!gprega[15], --, b, l0000!val] */ - 0x0B040FC0000ull, /* .83 alu[l0000!gpregb[15], --, b, l0000!val] */ - 0x0D81581C010ull, /* .84 br=ctx[7, exit#] */ - 0x0E000010000ull, /* .85 ctx_arb[kill], any */ - 0x0E000010000ull, /* .86 exit#:ctx_arb[kill], any */ -}; - -static const uint64_t ae_inst_4b[] = { - 0x0F0400C0000ull, /* .0 immed_w0[l0000!indx, 0] */ - 0x0F4400C0000ull, /* .1 immed_w1[l0000!indx, 0] */ - 0x0F040000300ull, /* .2 immed_w0[l0000!myvalue, 0x0] */ - 0x0F440000300ull, /* .3 immed_w1[l0000!myvalue, 0x0] */ - 0x0FC066C0000ull, /* .4 local_csr_wr[active_lm_addr_0, - l0000!indx]; put indx to lm_addr */ - 0x0F0000C0300ull, /* .5 nop */ - 0x0F0000C0300ull, /* .6 nop */ - 0x0F0000C0300ull, /* .7 nop */ - 0x0A021000000ull, /* .8 alu[*l$index0++, --, b, l0000!myvalue] */ -}; - -static const uint64_t ae_inst_1b[] = { - 0x0F0400C0000ull, 
/* .0 immed_w0[l0000!indx, 0] */ - 0x0F4400C0000ull, /* .1 immed_w1[l0000!indx, 0] */ - 0x0F040000300ull, /* .2 immed_w0[l0000!myvalue, 0x0] */ - 0x0F440000300ull, /* .3 immed_w1[l0000!myvalue, 0x0] */ - 0x0FC066C0000ull, /* .4 local_csr_wr[active_lm_addr_0, - l0000!indx]; put indx to lm_addr */ - 0x0F0000C0300ull, /* .5 nop */ - 0x0F0000C0300ull, /* .6 nop */ - 0x0F0000C0300ull, /* .7 nop */ - 0x0A000180000ull, /* .8 alu[l0000!val, --, b, *l$index0] */ - 0x09080000200ull, /* .9 alu_shf[l0000!myvalue, --, b, - l0000!myvalue, <<24 ] */ - 0x08180280201ull, /* .10 alu_shf[l0000!val1, --, b, l0000!val, <<8 ] */ - 0x08080280102ull, /* .11 alu_shf[l0000!val1, --, b, l0000!val1 , >>8 ] */ - 0x0BA00100002ull, /* .12 alu[l0000!val2, l0000!val1, or, l0000!myvalue] */ - -}; - -static const uint64_t ae_inst_2b[] = { - 0x0F0400C0000ull, /* .0 immed_w0[l0000!indx, 0] */ - 0x0F4400C0000ull, /* .1 immed_w1[l0000!indx, 0] */ - 0x0F040000300ull, /* .2 immed_w0[l0000!myvalue, 0x0] */ - 0x0F440000300ull, /* .3 immed_w1[l0000!myvalue, 0x0] */ - 0x0FC066C0000ull, /* .4 local_csr_wr[active_lm_addr_0, - l0000!indx]; put indx to lm_addr */ - 0x0F0000C0300ull, /* .5 nop */ - 0x0F0000C0300ull, /* .6 nop */ - 0x0F0000C0300ull, /* .7 nop */ - 0x0A000180000ull, /* .8 alu[l0000!val, --, b, *l$index0] */ - 0x09100000200ull, /* .9 alu_shf[l0000!myvalue, --, b, - l0000!myvalue, <<16 ] */ - 0x08100280201ull, /* .10 alu_shf[l0000!val1, --, b, l0000!val, <<16 ] */ - 0x08100280102ull, /* .11 alu_shf[l0000!val1, --, b, l0000!val1 , >>16 ] */ - 0x0BA00100002ull, /* .12 alu[l0000!val2, l0000!val1, or, l0000!myvalue] */ -}; - -static const uint64_t ae_inst_3b[] = { - 0x0F0400C0000ull, /* .0 immed_w0[l0000!indx, 0] */ - 0x0F4400C0000ull, /* .1 immed_w1[l0000!indx, 0] */ - 0x0F040000300ull, /* .2 immed_w0[l0000!myvalue, 0x0] */ - 0x0F440000300ull, /* .3 immed_w1[l0000!myvalue, 0x0] */ - 0x0FC066C0000ull, /* .4 local_csr_wr[active_lm_addr_0, - l0000!indx]; put indx to lm_addr */ - 0x0F0000C0300ull, /* .5 nop 
*/ - 0x0F0000C0300ull, /* .6 nop */ - 0x0F0000C0300ull, /* .7 nop */ - 0x0A000180000ull, /* .8 alu[l0000!val, --, b, *l$index0] */ - 0x09180000200ull, /* .9 alu_shf[l0000!myvalue, --, - b, l0000!myvalue, <<8 ] */ - 0x08080280201ull, /* .10 alu_shf[l0000!val1, --, b, l0000!val, <<24 ] */ - 0x08180280102ull, /* .11 alu_shf[l0000!val1, --, b, l0000!val1 , >>24 ] */ - 0x0BA00100002ull, /* .12 alu[l0000!val2, l0000!val1, or, l0000!myvalue] */ -}; - -/* micro-instr fixup */ -#define INSERT_IMMED_GPRA_CONST(inst, const_val) \ - inst = (inst & 0xFFFF00C03FFull) | \ - ((((const_val) << 12) & 0x0FF00000ull) | \ - (((const_val) << 10) & 0x0003FC00ull)) -#define INSERT_IMMED_GPRB_CONST(inst, const_val) \ - inst = (inst & 0xFFFF00FFF00ull) | \ - ((((const_val) << 12) & 0x0FF00000ull) | \ - (((const_val) << 0) & 0x000000FFull)) - -enum aereg_type { - AEREG_NO_DEST, /* no destination */ - AEREG_GPA_REL, /* general-purpose A register under relative mode */ - AEREG_GPA_ABS, /* general-purpose A register under absolute mode */ - AEREG_GPB_REL, /* general-purpose B register under relative mode */ - AEREG_GPB_ABS, /* general-purpose B register under absolute mode */ - AEREG_SR_REL, /* sram register under relative mode */ - AEREG_SR_RD_REL, /* sram read register under relative mode */ - AEREG_SR_WR_REL, /* sram write register under relative mode */ - AEREG_SR_ABS, /* sram register under absolute mode */ - AEREG_SR_RD_ABS, /* sram read register under absolute mode */ - AEREG_SR_WR_ABS, /* sram write register under absolute mode */ - AEREG_SR0_SPILL, /* sram0 spill register */ - AEREG_SR1_SPILL, /* sram1 spill register */ - AEREG_SR2_SPILL, /* sram2 spill register */ - AEREG_SR3_SPILL, /* sram3 spill register */ - AEREG_SR0_MEM_ADDR, /* sram0 memory address register */ - AEREG_SR1_MEM_ADDR, /* sram1 memory address register */ - AEREG_SR2_MEM_ADDR, /* sram2 memory address register */ - AEREG_SR3_MEM_ADDR, /* sram3 memory address register */ - AEREG_DR_REL, /* dram register under relative 
mode */ - AEREG_DR_RD_REL, /* dram read register under relative mode */ - AEREG_DR_WR_REL, /* dram write register under relative mode */ - AEREG_DR_ABS, /* dram register under absolute mode */ - AEREG_DR_RD_ABS, /* dram read register under absolute mode */ - AEREG_DR_WR_ABS, /* dram write register under absolute mode */ - AEREG_DR_MEM_ADDR, /* dram memory address register */ - AEREG_LMEM, /* local memory */ - AEREG_LMEM0, /* local memory bank0 */ - AEREG_LMEM1, /* local memory bank1 */ - AEREG_LMEM_SPILL, /* local memory spill */ - AEREG_LMEM_ADDR, /* local memory address */ - AEREG_NEIGH_REL, /* next neighbour register under relative mode */ - AEREG_NEIGH_INDX, /* next neighbour register under index mode */ - AEREG_SIG_REL, /* signal register under relative mode */ - AEREG_SIG_INDX, /* signal register under index mode */ - AEREG_SIG_DOUBLE, /* signal register */ - AEREG_SIG_SINGLE, /* signal register */ - AEREG_SCRATCH_MEM_ADDR, /* scratch memory address */ - AEREG_UMEM0, /* ustore memory bank0 */ - AEREG_UMEM1, /* ustore memory bank1 */ - AEREG_UMEM_SPILL, /* ustore memory spill */ - AEREG_UMEM_ADDR, /* ustore memory address */ - AEREG_DR1_MEM_ADDR, /* dram segment1 address */ - AEREG_SR0_IMPORTED, /* sram segment0 imported data */ - AEREG_SR1_IMPORTED, /* sram segment1 imported data */ - AEREG_SR2_IMPORTED, /* sram segment2 imported data */ - AEREG_SR3_IMPORTED, /* sram segment3 imported data */ - AEREG_DR_IMPORTED, /* dram segment0 imported data */ - AEREG_DR1_IMPORTED, /* dram segment1 imported data */ - AEREG_SCRATCH_IMPORTED, /* scratch imported data */ - AEREG_XFER_RD_ABS, /* transfer read register under absolute mode */ - AEREG_XFER_WR_ABS, /* transfer write register under absolute mode */ - AEREG_CONST_VALUE, /* const alue */ - AEREG_ADDR_TAKEN, /* address taken */ - AEREG_OPTIMIZED_AWAY, /* optimized away */ - AEREG_SHRAM_ADDR, /* shared ram0 address */ - AEREG_SHRAM1_ADDR, /* shared ram1 address */ - AEREG_SHRAM2_ADDR, /* shared ram2 address */ - 
AEREG_SHRAM3_ADDR, /* shared ram3 address */ - AEREG_SHRAM4_ADDR, /* shared ram4 address */ - AEREG_SHRAM5_ADDR, /* shared ram5 address */ - AEREG_ANY = 0xffff /* any register */ -}; -#define AEREG_SR_INDX AEREG_SR_ABS - /* sram transfer register under index mode */ -#define AEREG_DR_INDX AEREG_DR_ABS - /* dram transfer register under index mode */ -#define AEREG_NEIGH_ABS AEREG_NEIGH_INDX - /* next neighbor register under absolute mode */ - - -#define QAT_2K 0x0800 -#define QAT_4K 0x1000 -#define QAT_6K 0x1800 -#define QAT_8K 0x2000 -#define QAT_16K 0x4000 - -#define MOF_OBJ_ID_LEN 8 -#define MOF_FID 0x00666f6d -#define MOF_MIN_VER 0x1 -#define MOF_MAJ_VER 0x0 -#define SYM_OBJS "SYM_OBJS" /* symbol object string */ -#define UOF_OBJS "UOF_OBJS" /* uof object string */ -#define SUOF_OBJS "SUF_OBJS" /* suof object string */ -#define SUOF_IMAG "SUF_IMAG" /* suof chunk ID string */ - -#define UOF_STRT "UOF_STRT" /* string table section ID */ -#define UOF_GTID "UOF_GTID" /* GTID section ID */ -#define UOF_IMAG "UOF_IMAG" /* image section ID */ -#define UOF_IMEM "UOF_IMEM" /* import section ID */ -#define UOF_MSEG "UOF_MSEG" /* memory section ID */ - -#define CRC_POLY 0x1021 -#define CRC_WIDTH 16 -#define CRC_BITMASK(x) (1L << (x)) -#define CRC_WIDTHMASK(width) ((((1L<<(width-1))-1L)<<1)|1L) - -struct mof_file_hdr { - u_int mfh_fid; - u_int mfh_csum; - char mfh_min_ver; - char mfh_maj_ver; - u_short mfh_reserved; - u_short mfh_max_chunks; - u_short mfh_num_chunks; -}; - -struct mof_file_chunk_hdr { - char mfch_id[MOF_OBJ_ID_LEN]; - uint64_t mfch_offset; - uint64_t mfch_size; -}; - -struct mof_uof_hdr { - u_short muh_max_chunks; - u_short muh_num_chunks; - u_int muh_reserved; -}; - -struct mof_uof_chunk_hdr { - char much_id[MOF_OBJ_ID_LEN]; /* should be UOF_IMAG */ - uint64_t much_offset; /* uof image */ - uint64_t much_size; /* uof image size */ - u_int much_name; /* uof name string-table offset */ - u_int much_reserved; -}; - -#define UOF_MAX_NUM_OF_AE 16 /* maximum 
number of AE */ - -#define UOF_OBJ_ID_LEN 8 /* length of object ID */ -#define UOF_FIELD_POS_SIZE 12 /* field postion size */ -#define MIN_UOF_SIZE 24 /* minimum .uof file size */ -#define UOF_FID 0xc6c2 /* uof magic number */ -#define UOF_MIN_VER 0x11 -#define UOF_MAJ_VER 0x4 - -struct uof_file_hdr { - u_short ufh_id; /* file id and endian indicator */ - u_short ufh_reserved1; /* reserved for future use */ - char ufh_min_ver; /* file format minor version */ - char ufh_maj_ver; /* file format major version */ - u_short ufh_reserved2; /* reserved for future use */ - u_short ufh_max_chunks; /* max chunks in file */ - u_short ufh_num_chunks; /* num of actual chunks */ -}; - -struct uof_file_chunk_hdr { - char ufch_id[UOF_OBJ_ID_LEN]; /* chunk identifier */ - u_int ufch_csum; /* chunk checksum */ - u_int ufch_offset; /* offset of the chunk in the file */ - u_int ufch_size; /* size of the chunk */ -}; - -struct uof_obj_hdr { - u_int uoh_cpu_type; /* CPU type */ - u_short uoh_min_cpu_ver; /* starting CPU version */ - u_short uoh_max_cpu_ver; /* ending CPU version */ - short uoh_max_chunks; /* max chunks in chunk obj */ - short uoh_num_chunks; /* num of actual chunks */ - u_int uoh_reserved1; - u_int uoh_reserved2; -}; - -struct uof_chunk_hdr { - char uch_id[UOF_OBJ_ID_LEN]; - u_int uch_offset; - u_int uch_size; -}; - -struct uof_str_tab { - u_int ust_table_len; /* length of table */ - u_int ust_reserved; /* reserved for future use */ - uint64_t ust_strings; /* pointer to string table. 
- * NULL terminated strings */ -}; - -#define AE_MODE_RELOAD_CTX_SHARED __BIT(12) -#define AE_MODE_SHARED_USTORE __BIT(11) -#define AE_MODE_LMEM1 __BIT(9) -#define AE_MODE_LMEM0 __BIT(8) -#define AE_MODE_NN_MODE __BITS(7, 4) -#define AE_MODE_CTX_MODE __BITS(3, 0) - -#define AE_MODE_NN_MODE_NEIGH 0 -#define AE_MODE_NN_MODE_SELF 1 -#define AE_MODE_NN_MODE_DONTCARE 0xff - -struct uof_image { - u_int ui_name; /* image name */ - u_int ui_ae_assigned; /* AccelEngines assigned */ - u_int ui_ctx_assigned; /* AccelEngine contexts assigned */ - u_int ui_cpu_type; /* cpu type */ - u_int ui_entry_address; /* entry uaddress */ - u_int ui_fill_pattern[2]; /* uword fill value */ - u_int ui_reloadable_size; /* size of reloadable ustore section */ - - u_char ui_sensitivity; /* - * case sensitivity: 0 = insensitive, - * 1 = sensitive - */ - u_char ui_reserved; /* reserved for future use */ - u_short ui_ae_mode; /* - * unused<15:14>, legacyMode<13>, - * reloadCtxShared<12>, sharedUstore<11>, - * ecc<10>, locMem1<9>, locMem0<8>, - * nnMode<7:4>, ctx<3:0> - */ - - u_short ui_max_ver; /* max cpu ver on which the image can run */ - u_short ui_min_ver; /* min cpu ver on which the image can run */ - - u_short ui_image_attrib; /* image attributes */ - u_short ui_reserved2; /* reserved for future use */ - - u_short ui_num_page_regions; /* number of page regions */ - u_short ui_num_pages; /* number of pages */ - - u_int ui_reg_tab; /* offset to register table */ - u_int ui_init_reg_sym_tab; /* reg/sym init table */ - u_int ui_sbreak_tab; /* offset to sbreak table */ - - u_int ui_app_metadata; /* application meta-data */ - /* ui_npages of code page follows this header */ -}; - -struct uof_obj_table { - u_int uot_nentries; /* number of table entries */ - /* uot_nentries of object follows */ -}; - -struct uof_ae_reg { - u_int uar_name; /* reg name string-table offset */ - u_int uar_vis_name; /* reg visible name string-table offset */ - u_short uar_type; /* reg type */ - u_short uar_addr; /* reg 
address */ - u_short uar_access_mode; /* uof_RegAccessMode_T: read/write/both/undef */ - u_char uar_visible; /* register visibility */ - u_char uar_reserved1; /* reserved for future use */ - u_short uar_ref_count; /* number of contiguous registers allocated */ - u_short uar_reserved2; /* reserved for future use */ - u_int uar_xoid; /* xfer order ID */ -}; - -enum uof_value_kind { - UNDEF_VAL, /* undefined value */ - CHAR_VAL, /* character value */ - SHORT_VAL, /* short value */ - INT_VAL, /* integer value */ - STR_VAL, /* string value */ - STRTAB_VAL, /* string table value */ - NUM_VAL, /* number value */ - EXPR_VAL /* expression value */ -}; - -enum uof_init_type { - INIT_EXPR, - INIT_REG, - INIT_REG_CTX, - INIT_EXPR_ENDIAN_SWAP -}; - -struct uof_init_reg_sym { - u_int uirs_name; /* symbol name */ - char uirs_init_type; /* 0=expr, 1=register, 2=ctxReg, - * 3=expr_endian_swap */ - char uirs_value_type; /* EXPR_VAL, STRTAB_VAL */ - char uirs_reg_type; /* register type: ae_reg_type */ - u_char uirs_ctx; /* AE context when initType=2 */ - u_int uirs_addr_offset; /* reg address, or sym-value offset */ - u_int uirs_value; /* integer value, or expression */ -}; - -struct uof_sbreak { - u_int us_page_num; /* page number */ - u_int us_virt_uaddr; /* virt uaddress */ - u_char us_sbreak_type; /* sbreak type */ - u_char us_reg_type; /* register type: ae_reg_type */ - u_short us_reserved1; /* reserved for future use */ - u_int us_addr_offset; /* branch target address or offset - * to be used with the reg value to - * calculate the target address */ - u_int us_reg_rddr; /* register address */ -}; -struct uof_code_page { - u_int ucp_page_region; /* page associated region */ - u_int ucp_page_num; /* code-page number */ - u_char ucp_def_page; /* default page indicator */ - u_char ucp_reserved2; /* reserved for future use */ - u_short ucp_reserved1; /* reserved for future use */ - u_int ucp_beg_vaddr; /* starting virtual uaddr */ - u_int ucp_beg_paddr; /* starting physical uaddr */ 
- u_int ucp_neigh_reg_tab; /* offset to neighbour-reg table */ - u_int ucp_uc_var_tab; /* offset to uC var table */ - u_int ucp_imp_var_tab; /* offset to import var table */ - u_int ucp_imp_expr_tab; /* offset to import expression table */ - u_int ucp_code_area; /* offset to code area */ -}; - -struct uof_code_area { - u_int uca_num_micro_words; /* number of micro words */ - u_int uca_uword_block_tab; /* offset to ublock table */ -}; - -struct uof_uword_block { - u_int uub_start_addr; /* start address */ - u_int uub_num_words; /* number of microwords */ - u_int uub_uword_offset; /* offset to the uwords */ - u_int uub_reserved; /* reserved for future use */ -}; - -struct uof_uword_fixup { - u_int uuf_name; /* offset to string table */ - u_int uuf_uword_address; /* micro word address */ - u_int uuf_expr_value; /* string table offset of expr string, or value */ - u_char uuf_val_type; /* VALUE_UNDEF, VALUE_NUM, VALUE_EXPR */ - u_char uuf_value_attrs; /* bit<0> (Scope: 0=global, 1=local), - * bit<1> (init: 0=no, 1=yes) */ - u_short uuf_reserved1; /* reserved for future use */ - char uuf_field_attrs[UOF_FIELD_POS_SIZE]; - /* field pos, size, and right shift value */ -}; - -struct uof_import_var { - u_int uiv_name; /* import var name string-table offset */ - u_char uiv_value_attrs; /* bit<0> (Scope: 0=global), - * bit<1> (init: 0=no, 1=yes) */ - u_char uiv_reserved1; /* reserved for future use */ - u_short uiv_reserved2; /* reserved for future use */ - uint64_t uiv_value; /* 64-bit imported value */ -}; - -struct uof_mem_val_attr { - u_int umva_byte_offset; /* byte-offset from the allocated memory */ - u_int umva_value; /* memory value */ -}; - -enum uof_mem_region { - SRAM_REGION, /* SRAM region */ - DRAM_REGION, /* DRAM0 region */ - DRAM1_REGION, /* DRAM1 region */ - LMEM_REGION, /* local memory region */ - SCRATCH_REGION, /* SCRATCH region */ - UMEM_REGION, /* micro-store region */ - RAM_REGION, /* RAM region */ - SHRAM_REGION, /* shared memory-0 region */ - 
SHRAM1_REGION, /* shared memory-1 region */ - SHRAM2_REGION, /* shared memory-2 region */ - SHRAM3_REGION, /* shared memory-3 region */ - SHRAM4_REGION, /* shared memory-4 region */ - SHRAM5_REGION /* shared memory-5 region */ -}; - -#define UOF_SCOPE_GLOBAL 0 -#define UOF_SCOPE_LOCAL 1 - -struct uof_init_mem { - u_int uim_sym_name; /* symbol name */ - char uim_region; /* memory region -- uof_mem_region */ - char uim_scope; /* visibility scope */ - u_short uim_reserved1; /* reserved for future use */ - u_int uim_addr; /* memory address */ - u_int uim_num_bytes; /* number of bytes */ - u_int uim_num_val_attr; /* number of values attributes */ - - /* uim_num_val_attr of uof_mem_val_attr follows this header */ -}; - -struct uof_var_mem_seg { - u_int uvms_sram_base; /* SRAM memory segment base addr */ - u_int uvms_sram_size; /* SRAM segment size bytes */ - u_int uvms_sram_alignment; /* SRAM segment alignment bytes */ - u_int uvms_sdram_base; /* DRAM0 memory segment base addr */ - u_int uvms_sdram_size; /* DRAM0 segment size bytes */ - u_int uvms_sdram_alignment; /* DRAM0 segment alignment bytes */ - u_int uvms_sdram1_base; /* DRAM1 memory segment base addr */ - u_int uvms_sdram1_size; /* DRAM1 segment size bytes */ - u_int uvms_sdram1_alignment; /* DRAM1 segment alignment bytes */ - u_int uvms_scratch_base; /* SCRATCH memory segment base addr */ - u_int uvms_scratch_size; /* SCRATCH segment size bytes */ - u_int uvms_scratch_alignment; /* SCRATCH segment alignment bytes */ -}; - -#define SUOF_OBJ_ID_LEN 8 -#define SUOF_FID 0x53554f46 -#define SUOF_MAJ_VER 0x0 -#define SUOF_MIN_VER 0x1 -#define SIMG_AE_INIT_SEQ_LEN (50 * sizeof(unsigned long long)) -#define SIMG_AE_INSTS_LEN (0x4000 * sizeof(unsigned long long)) -#define CSS_FWSK_MODULUS_LEN 256 -#define CSS_FWSK_EXPONENT_LEN 4 -#define CSS_FWSK_PAD_LEN 252 -#define CSS_FWSK_PUB_LEN (CSS_FWSK_MODULUS_LEN + \ - CSS_FWSK_EXPONENT_LEN + \ - CSS_FWSK_PAD_LEN) -#define CSS_SIGNATURE_LEN 256 -#define CSS_AE_IMG_LEN 
(sizeof(struct simg_ae_mode) + \ - SIMG_AE_INIT_SEQ_LEN + \ - SIMG_AE_INSTS_LEN) -#define CSS_AE_SIMG_LEN (sizeof(struct css_hdr) + \ - CSS_FWSK_PUB_LEN + \ - CSS_SIGNATURE_LEN + \ - CSS_AE_IMG_LEN) -#define AE_IMG_OFFSET (sizeof(struct css_hdr) + \ - CSS_FWSK_MODULUS_LEN + \ - CSS_FWSK_EXPONENT_LEN + \ - CSS_SIGNATURE_LEN) -#define CSS_MAX_IMAGE_LEN 0x40000 - -struct fw_auth_desc { - u_int fad_img_len; - u_int fad_reserved; - u_int fad_css_hdr_high; - u_int fad_css_hdr_low; - u_int fad_img_high; - u_int fad_img_low; - u_int fad_signature_high; - u_int fad_signature_low; - u_int fad_fwsk_pub_high; - u_int fad_fwsk_pub_low; - u_int fad_img_ae_mode_data_high; - u_int fad_img_ae_mode_data_low; - u_int fad_img_ae_init_data_high; - u_int fad_img_ae_init_data_low; - u_int fad_img_ae_insts_high; - u_int fad_img_ae_insts_low; -}; - -struct auth_chunk { - struct fw_auth_desc ac_fw_auth_desc; - uint64_t ac_chunk_size; - uint64_t ac_chunk_bus_addr; -}; - -enum css_fwtype { - CSS_AE_FIRMWARE = 0, - CSS_MMP_FIRMWARE = 1 -}; - -struct css_hdr { - u_int css_module_type; - u_int css_header_len; - u_int css_header_ver; - u_int css_module_id; - u_int css_module_vendor; - u_int css_date; - u_int css_size; - u_int css_key_size; - u_int css_module_size; - u_int css_exponent_size; - u_int css_fw_type; - u_int css_reserved[21]; -}; - -struct simg_ae_mode { - u_int sam_file_id; - u_short sam_maj_ver; - u_short sam_min_ver; - u_int sam_dev_type; - u_short sam_devmax_ver; - u_short sam_devmin_ver; - u_int sam_ae_mask; - u_int sam_ctx_enables; - char sam_fw_type; - char sam_ctx_mode; - char sam_nn_mode; - char sam_lm0_mode; - char sam_lm1_mode; - char sam_scs_mode; - char sam_lm2_mode; - char sam_lm3_mode; - char sam_tindex_mode; - u_char sam_reserved[7]; - char sam_simg_name[256]; - char sam_appmeta_data[256]; -}; - -struct suof_file_hdr { - u_int sfh_file_id; - u_int sfh_check_sum; - char sfh_min_ver; - char sfh_maj_ver; - char sfh_fw_type; - char sfh_reserved; - u_short sfh_max_chunks; - 
u_short sfh_num_chunks; -}; - -struct suof_chunk_hdr { - char sch_chunk_id[SUOF_OBJ_ID_LEN]; - uint64_t sch_offset; - uint64_t sch_size; -}; - -struct suof_str_tab { - u_int sst_tab_length; - u_int sst_strings; -}; - -struct suof_obj_hdr { - u_int soh_img_length; - u_int soh_reserved; -}; - -/* -------------------------------------------------------------------------- */ -/* accel */ - -enum fw_slice { - FW_SLICE_NULL = 0, /* NULL slice type */ - FW_SLICE_CIPHER = 1, /* CIPHER slice type */ - FW_SLICE_AUTH = 2, /* AUTH slice type */ - FW_SLICE_DRAM_RD = 3, /* DRAM_RD Logical slice type */ - FW_SLICE_DRAM_WR = 4, /* DRAM_WR Logical slice type */ - FW_SLICE_COMP = 5, /* Compression slice type */ - FW_SLICE_XLAT = 6, /* Translator slice type */ - FW_SLICE_DELIMITER /* End delimiter */ -}; -#define MAX_FW_SLICE FW_SLICE_DELIMITER - -#define QAT_OPTIMAL_ALIGN_SHIFT 6 -#define QAT_OPTIMAL_ALIGN (1 << QAT_OPTIMAL_ALIGN_SHIFT) - -enum hw_auth_algo { - HW_AUTH_ALGO_NULL = 0, /* Null hashing */ - HW_AUTH_ALGO_SHA1 = 1, /* SHA1 hashing */ - HW_AUTH_ALGO_MD5 = 2, /* MD5 hashing */ - HW_AUTH_ALGO_SHA224 = 3, /* SHA-224 hashing */ - HW_AUTH_ALGO_SHA256 = 4, /* SHA-256 hashing */ - HW_AUTH_ALGO_SHA384 = 5, /* SHA-384 hashing */ - HW_AUTH_ALGO_SHA512 = 6, /* SHA-512 hashing */ - HW_AUTH_ALGO_AES_XCBC_MAC = 7, /* AES-XCBC-MAC hashing */ - HW_AUTH_ALGO_AES_CBC_MAC = 8, /* AES-CBC-MAC hashing */ - HW_AUTH_ALGO_AES_F9 = 9, /* AES F9 hashing */ - HW_AUTH_ALGO_GALOIS_128 = 10, /* Galois 128 bit hashing */ - HW_AUTH_ALGO_GALOIS_64 = 11, /* Galois 64 hashing */ - HW_AUTH_ALGO_KASUMI_F9 = 12, /* Kasumi F9 hashing */ - HW_AUTH_ALGO_SNOW_3G_UIA2 = 13, /* UIA2/SNOW_3H F9 hashing */ - HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14, - HW_AUTH_RESERVED_1 = 15, - HW_AUTH_RESERVED_2 = 16, - HW_AUTH_ALGO_SHA3_256 = 17, - HW_AUTH_RESERVED_3 = 18, - HW_AUTH_ALGO_SHA3_512 = 19, - HW_AUTH_ALGO_DELIMITER = 20 -}; - -enum hw_auth_mode { - HW_AUTH_MODE0, - HW_AUTH_MODE1, - HW_AUTH_MODE2, - HW_AUTH_MODE_DELIMITER -}; - 
-struct hw_auth_config { - uint32_t config; - /* Configuration used for setting up the slice */ - uint32_t reserved; - /* Reserved */ -}; - -#define HW_AUTH_CONFIG_SHA3_ALGO __BITS(22, 23) -#define HW_AUTH_CONFIG_SHA3_PADDING __BIT(16) -#define HW_AUTH_CONFIG_CMPLEN __BITS(14, 8) - /* The length of the digest if the QAT is to the check*/ -#define HW_AUTH_CONFIG_MODE __BITS(7, 4) -#define HW_AUTH_CONFIG_ALGO __BITS(3, 0) - -#define HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \ - __SHIFTIN(mode, HW_AUTH_CONFIG_MODE) | \ - __SHIFTIN(algo, HW_AUTH_CONFIG_ALGO) | \ - __SHIFTIN(cmp_len, HW_AUTH_CONFIG_CMPLEN) - -struct hw_auth_counter { - uint32_t counter; /* Counter value */ - uint32_t reserved; /* Reserved */ -}; - -struct hw_auth_setup { - struct hw_auth_config auth_config; - /* Configuration word for the auth slice */ - struct hw_auth_counter auth_counter; - /* Auth counter value for this request */ -}; - -#define HW_NULL_STATE1_SZ 32 -#define HW_MD5_STATE1_SZ 16 -#define HW_SHA1_STATE1_SZ 20 -#define HW_SHA224_STATE1_SZ 32 -#define HW_SHA256_STATE1_SZ 32 -#define HW_SHA3_256_STATE1_SZ 32 -#define HW_SHA384_STATE1_SZ 64 -#define HW_SHA512_STATE1_SZ 64 -#define HW_SHA3_512_STATE1_SZ 64 -#define HW_SHA3_224_STATE1_SZ 28 -#define HW_SHA3_384_STATE1_SZ 48 -#define HW_AES_XCBC_MAC_STATE1_SZ 16 -#define HW_AES_CBC_MAC_STATE1_SZ 16 -#define HW_AES_F9_STATE1_SZ 32 -#define HW_KASUMI_F9_STATE1_SZ 16 -#define HW_GALOIS_128_STATE1_SZ 16 -#define HW_SNOW_3G_UIA2_STATE1_SZ 8 -#define HW_ZUC_3G_EIA3_STATE1_SZ 8 -#define HW_NULL_STATE2_SZ 32 -#define HW_MD5_STATE2_SZ 16 -#define HW_SHA1_STATE2_SZ 20 -#define HW_SHA224_STATE2_SZ 32 -#define HW_SHA256_STATE2_SZ 32 -#define HW_SHA3_256_STATE2_SZ 0 -#define HW_SHA384_STATE2_SZ 64 -#define HW_SHA512_STATE2_SZ 64 -#define HW_SHA3_512_STATE2_SZ 0 -#define HW_SHA3_224_STATE2_SZ 0 -#define HW_SHA3_384_STATE2_SZ 0 -#define HW_AES_XCBC_MAC_KEY_SZ 16 -#define HW_AES_CBC_MAC_KEY_SZ 16 -#define HW_AES_CCM_CBC_E_CTR0_SZ 16 -#define HW_F9_IK_SZ 16 
-#define HW_F9_FK_SZ			16
-#define HW_KASUMI_F9_STATE2_SZ		(HW_F9_IK_SZ + HW_F9_FK_SZ)
-#define HW_AES_F9_STATE2_SZ		HW_KASUMI_F9_STATE2_SZ
-#define HW_SNOW_3G_UIA2_STATE2_SZ	24
-#define HW_ZUC_3G_EIA3_STATE2_SZ	32
-#define HW_GALOIS_H_SZ			16
-#define HW_GALOIS_LEN_A_SZ		8
-#define HW_GALOIS_E_CTR0_SZ		16
-
-struct hw_auth_sha512 {
-	struct hw_auth_setup inner_setup;
-	/* Inner loop configuration word for the slice */
-	uint8_t state1[HW_SHA512_STATE1_SZ];
-	/* Slice state1 variable */
-	struct hw_auth_setup outer_setup;
-	/* Outer configuration word for the slice */
-	uint8_t state2[HW_SHA512_STATE2_SZ];
-	/* Slice state2 variable */
-};
-
-union hw_auth_algo_blk {
-	struct hw_auth_sha512 max;
-	/* This is the largest possible auth setup block size */
-};
-
-enum hw_cipher_algo {
-	HW_CIPHER_ALGO_NULL = 0,		/* Null ciphering */
-	HW_CIPHER_ALGO_DES = 1,			/* DES ciphering */
-	HW_CIPHER_ALGO_3DES = 2,		/* 3DES ciphering */
-	HW_CIPHER_ALGO_AES128 = 3,		/* AES-128 ciphering */
-	HW_CIPHER_ALGO_AES192 = 4,		/* AES-192 ciphering */
-	HW_CIPHER_ALGO_AES256 = 5,		/* AES-256 ciphering */
-	HW_CIPHER_ALGO_ARC4 = 6,		/* ARC4 ciphering */
-	HW_CIPHER_ALGO_KASUMI = 7,		/* Kasumi */
-	HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8,	/* Snow_3G */
-	HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
-	HW_CIPHER_DELIMITER = 10		/* Delimiter type */
-};
-
-enum hw_cipher_mode {
-	HW_CIPHER_ECB_MODE = 0,		/* ECB mode */
-	HW_CIPHER_CBC_MODE = 1,		/* CBC mode */
-	HW_CIPHER_CTR_MODE = 2,		/* CTR mode */
-	HW_CIPHER_F8_MODE = 3,		/* F8 mode */
-	HW_CIPHER_XTS_MODE = 6,
-	HW_CIPHER_MODE_DELIMITER = 7	/* Delimiter type */
-};
-
-struct hw_cipher_config {
-	uint32_t val;		/* Cipher slice configuration */
-	uint32_t reserved;	/* Reserved */
-};
-
-#define CIPHER_CONFIG_CONVERT	__BIT(9)
-#define CIPHER_CONFIG_DIR	__BIT(8)
-#define CIPHER_CONFIG_MODE	__BITS(7, 4)
-#define CIPHER_CONFIG_ALGO	__BITS(3, 0)
-#define HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir)	\
-	__SHIFTIN(mode, CIPHER_CONFIG_MODE) |			\
-	__SHIFTIN(algo, CIPHER_CONFIG_ALGO) |			\
-	__SHIFTIN(convert, CIPHER_CONFIG_CONVERT) |		\
-	__SHIFTIN(dir, CIPHER_CONFIG_DIR)
-
-enum hw_cipher_dir {
-	HW_CIPHER_ENCRYPT = 0,	/* encryption is required */
-	HW_CIPHER_DECRYPT = 1,	/* decryption is required */
-};
-
-enum hw_cipher_convert {
-	HW_CIPHER_NO_CONVERT = 0,	/* no key convert is required*/
-	HW_CIPHER_KEY_CONVERT = 1,	/* key conversion is required*/
-};
-
-#define CIPHER_MODE_F8_KEY_SZ_MULT	2
-#define CIPHER_MODE_XTS_KEY_SZ_MULT	2
-
-#define HW_DES_BLK_SZ		8
-#define HW_3DES_BLK_SZ		8
-#define HW_NULL_BLK_SZ		8
-#define HW_AES_BLK_SZ		16
-#define HW_KASUMI_BLK_SZ	8
-#define HW_SNOW_3G_BLK_SZ	8
-#define HW_ZUC_3G_BLK_SZ	8
-#define HW_NULL_KEY_SZ		256
-#define HW_DES_KEY_SZ		8
-#define HW_3DES_KEY_SZ		24
-#define HW_AES_128_KEY_SZ	16
-#define HW_AES_192_KEY_SZ	24
-#define HW_AES_256_KEY_SZ	32
-#define HW_AES_128_F8_KEY_SZ	(HW_AES_128_KEY_SZ *	\
-		CIPHER_MODE_F8_KEY_SZ_MULT)
-#define HW_AES_192_F8_KEY_SZ	(HW_AES_192_KEY_SZ *	\
-		CIPHER_MODE_F8_KEY_SZ_MULT)
-#define HW_AES_256_F8_KEY_SZ	(HW_AES_256_KEY_SZ *	\
-		CIPHER_MODE_F8_KEY_SZ_MULT)
-#define HW_AES_128_XTS_KEY_SZ	(HW_AES_128_KEY_SZ *	\
-		CIPHER_MODE_XTS_KEY_SZ_MULT)
-#define HW_AES_256_XTS_KEY_SZ	(HW_AES_256_KEY_SZ *	\
-		CIPHER_MODE_XTS_KEY_SZ_MULT)
-#define HW_KASUMI_KEY_SZ	16
-#define HW_KASUMI_F8_KEY_SZ	(HW_KASUMI_KEY_SZ *	\
-		CIPHER_MODE_F8_KEY_SZ_MULT)
-#define HW_AES_128_XTS_KEY_SZ	(HW_AES_128_KEY_SZ *	\
-		CIPHER_MODE_XTS_KEY_SZ_MULT)
-#define HW_AES_256_XTS_KEY_SZ	(HW_AES_256_KEY_SZ *	\
-		CIPHER_MODE_XTS_KEY_SZ_MULT)
-#define HW_ARC4_KEY_SZ		256
-#define HW_SNOW_3G_UEA2_KEY_SZ	16
-#define HW_SNOW_3G_UEA2_IV_SZ	16
-#define HW_ZUC_3G_EEA3_KEY_SZ	16
-#define HW_ZUC_3G_EEA3_IV_SZ	16
-#define HW_MODE_F8_NUM_REG_TO_CLEAR	2
-
-struct hw_cipher_aes256_f8 {
-	struct hw_cipher_config cipher_config;
-	/* Cipher configuration word for the slice set to
-	 * AES-256 and the F8 mode */
-	uint8_t key[HW_AES_256_F8_KEY_SZ];
-	/* Cipher key */
-};
-
-union hw_cipher_algo_blk {
-	struct hw_cipher_aes256_f8 max;	/* AES-256 F8 Cipher */
-	/* This is the largest possible cipher setup block size */
-};
-
-struct flat_buffer_desc {
-	uint32_t data_len_in_bytes;
-	uint32_t reserved;
-	uint64_t phy_buffer;
-};
-
-#define HW_MAXSEG	32
-
-struct buffer_list_desc {
-	uint64_t resrvd;
-	uint32_t num_buffers;
-	uint32_t reserved;
-	struct flat_buffer_desc flat_bufs[HW_MAXSEG];
-};
-
-/* -------------------------------------------------------------------------- */
-/* look aside */
-
-enum fw_la_cmd_id {
-	FW_LA_CMD_CIPHER,		/* Cipher Request */
-	FW_LA_CMD_AUTH,			/* Auth Request */
-	FW_LA_CMD_CIPHER_HASH,		/* Cipher-Hash Request */
-	FW_LA_CMD_HASH_CIPHER,		/* Hash-Cipher Request */
-	FW_LA_CMD_TRNG_GET_RANDOM,	/* TRNG Get Random Request */
-	FW_LA_CMD_TRNG_TEST,		/* TRNG Test Request */
-	FW_LA_CMD_SSL3_KEY_DERIVE,	/* SSL3 Key Derivation Request */
-	FW_LA_CMD_TLS_V1_1_KEY_DERIVE,	/* TLS Key Derivation Request */
-	FW_LA_CMD_TLS_V1_2_KEY_DERIVE,	/* TLS Key Derivation Request */
-	FW_LA_CMD_MGF1,			/* MGF1 Request */
-	FW_LA_CMD_AUTH_PRE_COMP,	/* Auth Pre-Compute Request */
-#if 0 /* incompatible between qat 1.5 and 1.7 */
-	FW_LA_CMD_CIPHER_CIPHER,	/* Cipher-Cipher Request */
-	FW_LA_CMD_HASH_HASH,		/* Hash-Hash Request */
-	FW_LA_CMD_CIPHER_PRE_COMP,	/* Auth Pre-Compute Request */
-#endif
-	FW_LA_CMD_DELIMITER,	/* Delimiter type */
-};
-
-#endif
Index: sys/dev/qat/qatvar.h
===================================================================
--- sys/dev/qat/qatvar.h
+++ /dev/null
@@ -1,1071 +0,0 @@
-/* SPDX-License-Identifier: BSD-2-Clause-NetBSD AND BSD-3-Clause */
-/*	$NetBSD: qatvar.h,v 1.2 2020/03/14 18:08:39 ad Exp $	*/
-
-/*
- * Copyright (c) 2019 Internet Initiative Japan, Inc.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
- * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
- * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
- * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- */
-
-/*
- * Copyright(c) 2007-2019 Intel Corporation. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *   * Redistributions of source code must retain the above copyright
- *     notice, this list of conditions and the following disclaimer.
- *   * Redistributions in binary form must reproduce the above copyright
- *     notice, this list of conditions and the following disclaimer in
- *     the documentation and/or other materials provided with the
- *     distribution.
- *   * Neither the name of Intel Corporation nor the names of its
- *     contributors may be used to endorse or promote products derived
- *     from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-/* $FreeBSD$ */
-
-#ifndef _DEV_PCI_QATVAR_H_
-#define _DEV_PCI_QATVAR_H_
-
-#include
-#include
-
-#include
-
-#define QAT_NSYMREQ	256
-#define QAT_NSYMCOOKIE	((QAT_NSYMREQ * 2 + 1) * 2)
-
-#define QAT_EV_NAME_SIZE	32
-#define QAT_RING_NAME_SIZE	32
-
-#define QAT_MAXSEG		HW_MAXSEG	/* max segments for sg dma */
-#define QAT_MAXLEN		65535		/* IP_MAXPACKET */
-
-#define QAT_HB_INTERVAL		500	/* heartbeat msec */
-#define QAT_SSM_WDT		100
-
-enum qat_chip_type {
-	QAT_CHIP_C2XXX = 0,	/* NanoQAT: Atom C2000 */
-	QAT_CHIP_C2XXX_IOV,
-	QAT_CHIP_C3XXX,		/* Atom C3000 */
-	QAT_CHIP_C3XXX_IOV,
-	QAT_CHIP_C62X,
-	QAT_CHIP_C62X_IOV,
-	QAT_CHIP_D15XX,
-	QAT_CHIP_D15XX_IOV,
-	QAT_CHIP_DH895XCC,
-	QAT_CHIP_DH895XCC_IOV,
-};
-
-enum qat_sku {
-	QAT_SKU_UNKNOWN = 0,
-	QAT_SKU_1,
-	QAT_SKU_2,
-	QAT_SKU_3,
-	QAT_SKU_4,
-	QAT_SKU_VF,
-};
-
-enum qat_ae_status {
-	QAT_AE_ENABLED = 1,
-	QAT_AE_ACTIVE,
-	QAT_AE_DISABLED
-};
-
-#define TIMEOUT_AE_RESET	100
-#define TIMEOUT_AE_CHECK	10000
-#define TIMEOUT_AE_CSR		500
-#define AE_EXEC_CYCLE		20
-
-#define QAT_UOF_MAX_PAGE		1
-#define QAT_UOF_MAX_PAGE_REGION		1
-
-struct qat_dmamem {
-	bus_dma_tag_t qdm_dma_tag;
-	bus_dmamap_t qdm_dma_map;
-	bus_size_t qdm_dma_size;
-	bus_dma_segment_t qdm_dma_seg;
-	void *qdm_dma_vaddr;
-};
-
-/* Valid internal ring size values */
-#define QAT_RING_SIZE_128	0x01
-#define QAT_RING_SIZE_256	0x02
-#define QAT_RING_SIZE_512	0x03
-#define QAT_RING_SIZE_4K	0x06
-#define QAT_RING_SIZE_16K	0x08
-#define QAT_RING_SIZE_4M	0x10
-#define QAT_MIN_RING_SIZE	QAT_RING_SIZE_128
-#define QAT_MAX_RING_SIZE	QAT_RING_SIZE_4M
-#define QAT_DEFAULT_RING_SIZE	QAT_RING_SIZE_16K
-
-/* Valid internal msg size values */
-#define QAT_MSG_SIZE_32		0x01
-#define QAT_MSG_SIZE_64		0x02
-#define QAT_MSG_SIZE_128	0x04
-#define QAT_MIN_MSG_SIZE	QAT_MSG_SIZE_32
-#define QAT_MAX_MSG_SIZE	QAT_MSG_SIZE_128
-
-/* Size to bytes conversion macros for ring and msg size values */
-#define QAT_MSG_SIZE_TO_BYTES(SIZE)		(SIZE << 5)
-#define QAT_BYTES_TO_MSG_SIZE(SIZE)		(SIZE >> 5)
-#define QAT_SIZE_TO_RING_SIZE_IN_BYTES(SIZE)	((1 << (SIZE - 1)) << 7)
-#define QAT_RING_SIZE_IN_BYTES_TO_SIZE(SIZE)	((1 << (SIZE - 1)) >> 7)
-
-/* Minimum ring bufer size for memory allocation */
-#define QAT_RING_SIZE_BYTES_MIN(SIZE)	\
-	((SIZE < QAT_SIZE_TO_RING_SIZE_IN_BYTES(QAT_RING_SIZE_4K)) ?	\
-	QAT_SIZE_TO_RING_SIZE_IN_BYTES(QAT_RING_SIZE_4K) : SIZE)
-#define QAT_RING_SIZE_MODULO(SIZE)	(SIZE + 0x6)
-#define QAT_SIZE_TO_POW(SIZE)	((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | \
-				SIZE) & ~0x4)
-/* Max outstanding requests */
-#define QAT_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE)	\
-	((((1 << (RING_SIZE - 1)) << 3) >> QAT_SIZE_TO_POW(MSG_SIZE)) - 1)
-
-#define QAT_RING_PATTERN	0x7f
-
-struct qat_softc;
-
-typedef int (*qat_cb_t)(struct qat_softc *, void *, void *);
-
-struct qat_ring {
-	struct mtx qr_ring_mtx;		/* Lock per ring */
-	bool qr_need_wakeup;
-	void *qr_ring_vaddr;
-	uint32_t * volatile qr_inflight;	/* tx/rx shared */
-	uint32_t qr_head;
-	uint32_t qr_tail;
-	uint8_t qr_msg_size;
-	uint8_t qr_ring_size;
-	uint32_t qr_ring;	/* ring number in bank */
-	uint32_t qr_bank;	/* bank number in device */
-	uint32_t qr_ring_id;
-	uint32_t qr_ring_mask;
-	qat_cb_t qr_cb;
-	void *qr_cb_arg;
-	struct qat_dmamem qr_dma;
-	bus_addr_t qr_ring_paddr;
-
-	const char *qr_name;
-};
-
-struct qat_bank {
-	struct qat_softc *qb_sc;	/* back pointer to softc */
-	uint32_t qb_intr_mask;		/* current interrupt mask */
-	uint32_t qb_allocated_rings;	/* current allocated ring bitfiled */
-	uint32_t qb_coalescing_time;	/* timer in nano sec, 0: disabled */
-#define COALESCING_TIME_INTERVAL_DEFAULT	10000
-#define COALESCING_TIME_INTERVAL_MIN		500
-#define COALESCING_TIME_INTERVAL_MAX		0xfffff
-	uint32_t qb_bank;		/* bank index */
-	struct mtx qb_bank_mtx;
-	struct resource *qb_ih;
-	void *qb_ih_cookie;
-
-	struct qat_ring qb_et_rings[MAX_RING_PER_BANK];
-
-};
-
-struct qat_ap_bank {
-	uint32_t qab_nf_mask;
-	uint32_t qab_nf_dest;
-	uint32_t qab_ne_mask;
-	uint32_t qab_ne_dest;
-};
-
-struct qat_ae_page {
-	struct qat_ae_page *qap_next;
-	struct qat_uof_page *qap_page;
-	struct qat_ae_region *qap_region;
-	u_int qap_flags;
-};
-
-#define QAT_AE_PAGA_FLAG_WAITING	(1 << 0)
-
-struct qat_ae_region {
-	struct qat_ae_page *qar_loaded_page;
-	STAILQ_HEAD(, qat_ae_page) qar_waiting_pages;
-};
-
-struct qat_ae_slice {
-	u_int qas_assigned_ctx_mask;
-	struct qat_ae_region qas_regions[QAT_UOF_MAX_PAGE_REGION];
-	struct qat_ae_page qas_pages[QAT_UOF_MAX_PAGE];
-	struct qat_ae_page *qas_cur_pages[MAX_AE_CTX];
-	struct qat_uof_image *qas_image;
-};
-
-#define QAT_AE(sc, ae)		\
-	((sc)->sc_ae[ae])
-
-struct qat_ae {
-	u_int qae_state;		/* AE state */
-	u_int qae_ustore_size;		/* free micro-store address */
-	u_int qae_free_addr;		/* free micro-store address */
-	u_int qae_free_size;		/* free micro-store size */
-	u_int qae_live_ctx_mask;	/* live context mask */
-	u_int qae_ustore_dram_addr;	/* mirco-store DRAM address */
-	u_int qae_reload_size;		/* reloadable code size */
-
-	/* aefw */
-	u_int qae_num_slices;
-	struct qat_ae_slice qae_slices[MAX_AE_CTX];
-	u_int qae_reloc_ustore_dram;	/* reloadable ustore-dram address */
-	u_int qae_effect_ustore_size;	/* effective AE ustore size */
-	u_int qae_shareable_ustore;
-};
-
-struct qat_mof {
-	void *qmf_sym;			/* SYM_OBJS in sc_fw_mof */
-	size_t qmf_sym_size;
-	void *qmf_uof_objs;		/* UOF_OBJS in sc_fw_mof */
-	size_t qmf_uof_objs_size;
-	void *qmf_suof_objs;		/* SUOF_OBJS in sc_fw_mof */
-	size_t qmf_suof_objs_size;
-};
-
-struct qat_ae_batch_init {
-	u_int qabi_ae;
-	u_int qabi_addr;
-	u_int *qabi_value;
-	u_int qabi_size;
-	STAILQ_ENTRY(qat_ae_batch_init) qabi_next;
-};
-
-STAILQ_HEAD(qat_ae_batch_init_list, qat_ae_batch_init);
-
-/* overwritten struct uof_uword_block */
-struct qat_uof_uword_block {
-	u_int quub_start_addr;		/* start address */
-	u_int quub_num_words;		/* number of microwords */
-	uint64_t quub_micro_words;	/* pointer to the uwords */
-};
-
-struct qat_uof_page {
-	u_int qup_page_num;		/* page number */
-	u_int qup_def_page;		/* default page */
-	u_int qup_page_region;		/* region of page */
-	u_int qup_beg_vaddr;		/* begin virtual address */
-	u_int qup_beg_paddr;		/* begin physical address */
-
-	u_int qup_num_uc_var;		/* num of uC var in array */
-	struct uof_uword_fixup *qup_uc_var;
-					/* array of import variables */
-	u_int qup_num_imp_var;		/* num of import var in array */
-	struct uof_import_var *qup_imp_var;
-					/* array of import variables */
-	u_int qup_num_imp_expr;		/* num of import expr in array */
-	struct uof_uword_fixup *qup_imp_expr;
-					/* array of import expressions */
-	u_int qup_num_neigh_reg;	/* num of neigh-reg in array */
-	struct uof_uword_fixup *qup_neigh_reg;
-					/* array of neigh-reg assignments */
-	u_int qup_num_micro_words;	/* number of microwords in the seg */
-
-	u_int qup_num_uw_blocks;	/* number of uword blocks */
-	struct qat_uof_uword_block *qup_uw_blocks;
-					/* array of uword blocks */
-};
-
-struct qat_uof_image {
-	struct uof_image *qui_image;	/* image pointer */
-	struct qat_uof_page qui_pages[QAT_UOF_MAX_PAGE];
-					/* array of pages */
-
-	u_int qui_num_ae_reg;		/* num of registers */
-	struct uof_ae_reg *qui_ae_reg;	/* array of registers */
-
-	u_int qui_num_init_reg_sym;	/* num of reg/sym init values */
-	struct uof_init_reg_sym *qui_init_reg_sym;
-					/* array of reg/sym init values */
-
-	u_int qui_num_sbreak;		/* num of sbreak values */
-	struct qui_sbreak *qui_sbreak;	/* array of sbreak values */
-
-	u_int qui_num_uwords_used;
-				/* highest uword addressreferenced + 1 */
-};
-
-struct qat_aefw_uof {
-	size_t qafu_size;		/* uof size */
-	struct uof_obj_hdr *qafu_obj_hdr;	/* UOF_OBJS */
-
-	void *qafu_str_tab;
-	size_t qafu_str_tab_size;
-
-	u_int qafu_num_init_mem;
-	struct uof_init_mem *qafu_init_mem;
-	size_t qafu_init_mem_size;
-
-	struct uof_var_mem_seg *qafu_var_mem_seg;
-
-	struct qat_ae_batch_init_list qafu_lm_init[MAX_AE];
-	size_t qafu_num_lm_init[MAX_AE];
-	size_t qafu_num_lm_init_inst[MAX_AE];
-
-	u_int qafu_num_imgs;		/* number of uof image */
-	struct qat_uof_image qafu_imgs[MAX_NUM_AE * MAX_AE_CTX];
-					/* uof images */
-};
-
-#define QAT_SERVICE_CRYPTO_A	(1 << 0)
-#define QAT_SERVICE_CRYPTO_B	(1 << 1)
-
-struct qat_admin_rings {
-	uint32_t qadr_active_aes_per_accel;
-	uint8_t qadr_srv_mask[MAX_AE_PER_ACCEL];
-
-	struct qat_dmamem qadr_dma;
-	struct fw_init_ring_table *qadr_master_ring_tbl;
-	struct fw_init_ring_table *qadr_cya_ring_tbl;
-	struct fw_init_ring_table *qadr_cyb_ring_tbl;
-
-	struct qat_ring *qadr_admin_tx;
-	struct qat_ring *qadr_admin_rx;
-};
-
-struct qat_accel_init_cb {
-	int qaic_status;
-};
-
-struct qat_admin_comms {
-	struct qat_dmamem qadc_dma;
-	struct qat_dmamem qadc_const_tbl_dma;
-	struct qat_dmamem qadc_hb_dma;
-};
-
-#define QAT_PID_MINOR_REV	0xf
-#define QAT_PID_MAJOR_REV	(0xf << 4)
-
-struct qat_suof_image {
-	char *qsi_simg_buf;
-	u_long qsi_simg_len;
-	char *qsi_css_header;
-	char *qsi_css_key;
-	char *qsi_css_signature;
-	char *qsi_css_simg;
-	u_long qsi_simg_size;
-	u_int qsi_ae_num;
-	u_int qsi_ae_mask;
-	u_int qsi_fw_type;
-	u_long qsi_simg_name;
-	u_long qsi_appmeta_data;
-	struct qat_dmamem qsi_dma;
-};
-
-struct qat_aefw_suof {
-	u_int qafs_file_id;
-	u_int qafs_check_sum;
-	char qafs_min_ver;
-	char qafs_maj_ver;
-	char qafs_fw_type;
-	char *qafs_suof_buf;
-	u_int qafs_suof_size;
-	char *qafs_sym_str;
-	u_int qafs_sym_size;
-	u_int qafs_num_simgs;
-	struct qat_suof_image *qafs_simg;
-};
-
-enum qat_sym_hash_algorithm {
-	QAT_SYM_HASH_NONE = 0,
-	QAT_SYM_HASH_MD5 = 1,
-	QAT_SYM_HASH_SHA1 = 2,
-	QAT_SYM_HASH_SHA224 = 3,
-	QAT_SYM_HASH_SHA256 = 4,
-	QAT_SYM_HASH_SHA384 = 5,
-	QAT_SYM_HASH_SHA512 = 6,
-	QAT_SYM_HASH_AES_XCBC = 7,
-	QAT_SYM_HASH_AES_CCM = 8,
-	QAT_SYM_HASH_AES_GCM = 9,
-	QAT_SYM_HASH_KASUMI_F9 = 10,
-	QAT_SYM_HASH_SNOW3G_UIA2 = 11,
-	QAT_SYM_HASH_AES_CMAC = 12,
-	QAT_SYM_HASH_AES_GMAC = 13,
-	QAT_SYM_HASH_AES_CBC_MAC = 14,
-};
-
-#define QAT_HASH_MD5_BLOCK_SIZE			64
-#define QAT_HASH_MD5_DIGEST_SIZE		16
-#define QAT_HASH_MD5_STATE_SIZE			16
-#define QAT_HASH_SHA1_BLOCK_SIZE		64
-#define QAT_HASH_SHA1_DIGEST_SIZE		20
-#define QAT_HASH_SHA1_STATE_SIZE		20
-#define QAT_HASH_SHA224_BLOCK_SIZE		64
-#define QAT_HASH_SHA224_DIGEST_SIZE		28
-#define QAT_HASH_SHA224_STATE_SIZE		32
-#define QAT_HASH_SHA256_BLOCK_SIZE		64
-#define QAT_HASH_SHA256_DIGEST_SIZE		32
-#define QAT_HASH_SHA256_STATE_SIZE		32
-#define QAT_HASH_SHA384_BLOCK_SIZE		128
-#define QAT_HASH_SHA384_DIGEST_SIZE		48
-#define QAT_HASH_SHA384_STATE_SIZE		64
-#define QAT_HASH_SHA512_BLOCK_SIZE		128
-#define QAT_HASH_SHA512_DIGEST_SIZE		64
-#define QAT_HASH_SHA512_STATE_SIZE		64
-#define QAT_HASH_XCBC_PRECOMP_KEY_NUM		3
-#define QAT_HASH_XCBC_MAC_BLOCK_SIZE		16
-#define QAT_HASH_XCBC_MAC_128_DIGEST_SIZE	16
-#define QAT_HASH_CMAC_BLOCK_SIZE		16
-#define QAT_HASH_CMAC_128_DIGEST_SIZE		16
-#define QAT_HASH_AES_CCM_BLOCK_SIZE		16
-#define QAT_HASH_AES_CCM_DIGEST_SIZE		16
-#define QAT_HASH_AES_GCM_BLOCK_SIZE		16
-#define QAT_HASH_AES_GCM_DIGEST_SIZE		16
-#define QAT_HASH_AES_GCM_STATE_SIZE		16
-#define QAT_HASH_KASUMI_F9_BLOCK_SIZE		8
-#define QAT_HASH_KASUMI_F9_DIGEST_SIZE		4
-#define QAT_HASH_SNOW3G_UIA2_BLOCK_SIZE		8
-#define QAT_HASH_SNOW3G_UIA2_DIGEST_SIZE	4
-#define QAT_HASH_AES_CBC_MAC_BLOCK_SIZE		16
-#define QAT_HASH_AES_CBC_MAC_DIGEST_SIZE	16
-#define QAT_HASH_AES_GCM_ICV_SIZE_8		8
-#define QAT_HASH_AES_GCM_ICV_SIZE_12		12
-#define QAT_HASH_AES_GCM_ICV_SIZE_16		16
-#define QAT_HASH_AES_CCM_ICV_SIZE_MIN		4
-#define QAT_HASH_AES_CCM_ICV_SIZE_MAX		16
-#define QAT_HASH_IPAD_BYTE			0x36
-#define QAT_HASH_OPAD_BYTE			0x5c
-#define QAT_HASH_IPAD_4_BYTES			0x36363636
-#define QAT_HASH_OPAD_4_BYTES			0x5c5c5c5c
-#define QAT_HASH_KASUMI_F9_KEY_MODIFIER_4_BYTES	0xAAAAAAAA
-
-#define QAT_SYM_XCBC_STATE_SIZE		((QAT_HASH_XCBC_MAC_BLOCK_SIZE) * 3)
-#define QAT_SYM_CMAC_STATE_SIZE		((QAT_HASH_CMAC_BLOCK_SIZE) * 3)
-
-struct qat_sym_hash_alg_info {
-	uint32_t qshai_digest_len;	/* Digest length in bytes */
-	uint32_t qshai_block_len;	/* Block length in bytes */
-	uint32_t qshai_state_size;	/* size of above state in bytes */
-	const uint8_t *qshai_init_state;	/* Initial state */
-
-	const struct auth_hash *qshai_sah;	/* software auth hash */
-	uint32_t qshai_state_offset;	/* offset to state in *_CTX */
-	uint32_t qshai_state_word;
-};
-
-struct qat_sym_hash_qat_info {
-	uint32_t qshqi_algo_enc;	/* QAT Algorithm encoding */
-	uint32_t qshqi_auth_counter;	/* Counter value for Auth */
-	uint32_t qshqi_state1_len;	/* QAT state1 length in bytes */
-	uint32_t qshqi_state2_len;	/* QAT state2 length in bytes */
-};
-
-struct qat_sym_hash_def {
-	const struct qat_sym_hash_alg_info *qshd_alg;
-	const struct qat_sym_hash_qat_info *qshd_qat;
-};
-
-#define QAT_SYM_REQ_PARAMS_SIZE_MAX	(24 + 32)
-/* Reserve enough space for cipher and authentication request params */
-/* Basis of values are guaranteed in qat_hw*var.h with CTASSERT */
-
-#define QAT_SYM_REQ_PARAMS_SIZE_PADDED	\
-	roundup(QAT_SYM_REQ_PARAMS_SIZE_MAX, QAT_OPTIMAL_ALIGN)
-/* Pad out to 64-byte multiple to ensure optimal alignment of next field */
-
-#define QAT_SYM_KEY_TLS_PREFIX_SIZE	(128)
-/* Hash Prefix size in bytes for TLS (128 = MAX = SHA2 (384, 512)*/
-
-#define QAT_SYM_KEY_MAX_HASH_STATE_BUFFER	\
-	(QAT_SYM_KEY_TLS_PREFIX_SIZE * 2)
-/* hash state prefix buffer structure that holds the maximum sized secret */
-
-#define QAT_SYM_HASH_BUFFER_LEN		QAT_HASH_SHA512_STATE_SIZE
-/* Buffer length to hold 16 byte MD5 key and 20 byte SHA1 key */
-
-#define QAT_GCM_AAD_SIZE_MAX		240
-/* Maximum AAD size */
-
-#define QAT_AES_GCM_AAD_ALIGN		16
-
-struct qat_sym_bulk_cookie {
-	uint8_t qsbc_req_params_buf[QAT_SYM_REQ_PARAMS_SIZE_PADDED];
-	/* memory block reserved for request params, QAT 1.5 only
-	 * NOTE: Field must be correctly aligned in memory for access by QAT
-	 * engine */
-	struct qat_crypto *qsbc_crypto;
-	struct qat_session *qsbc_session;
-	/* Session context */
-	void *qsbc_cb_tag;
-	/* correlator supplied by the client */
-	uint8_t qsbc_msg[QAT_MSG_SIZE_TO_BYTES(QAT_MAX_MSG_SIZE)];
-	/* QAT request message */
-} __aligned(QAT_OPTIMAL_ALIGN);
-
-/* Basis of values are guaranteed in qat_hw*var.h with CTASSERT */
-#define HASH_CONTENT_DESC_SIZE		176
-#define CIPHER_CONTENT_DESC_SIZE	64
-
-#define CONTENT_DESC_MAX_SIZE	roundup(				\
-		HASH_CONTENT_DESC_SIZE + CIPHER_CONTENT_DESC_SIZE,	\
-		QAT_OPTIMAL_ALIGN)
-
-enum qat_sym_dma {
-	QAT_SYM_DMA_AADBUF = 0,
-	QAT_SYM_DMA_BUF,
-	QAT_SYM_DMA_OBUF,
-	QAT_SYM_DMA_COUNT,
-};
-
-struct qat_sym_dmamap {
-	bus_dmamap_t qsd_dmamap;
-	bus_dma_tag_t qsd_dma_tag;
-};
-
-struct qat_sym_cookie {
-	struct qat_sym_bulk_cookie qsc_bulk_cookie;
-
-	/* should be 64-byte aligned */
-	struct buffer_list_desc qsc_buf_list;
-	struct buffer_list_desc qsc_obuf_list;
-
-	bus_dmamap_t qsc_self_dmamap;
-	bus_dma_tag_t qsc_self_dma_tag;
-
-	uint8_t qsc_iv_buf[EALG_MAX_BLOCK_LEN];
-	uint8_t qsc_auth_res[QAT_SYM_HASH_BUFFER_LEN];
-	uint8_t qsc_gcm_aad[QAT_GCM_AAD_SIZE_MAX];
-	uint8_t qsc_content_desc[CONTENT_DESC_MAX_SIZE];
-
-	struct qat_sym_dmamap qsc_dma[QAT_SYM_DMA_COUNT];
-
-	bus_addr_t qsc_bulk_req_params_buf_paddr;
-	bus_addr_t qsc_buffer_list_desc_paddr;
-	bus_addr_t qsc_obuffer_list_desc_paddr;
-	bus_addr_t qsc_iv_buf_paddr;
-	bus_addr_t qsc_auth_res_paddr;
-	bus_addr_t qsc_gcm_aad_paddr;
-	bus_addr_t qsc_content_desc_paddr;
-};
-
-CTASSERT(offsetof(struct qat_sym_cookie,
-    qsc_bulk_cookie.qsbc_req_params_buf) % QAT_OPTIMAL_ALIGN == 0);
-CTASSERT(offsetof(struct qat_sym_cookie, qsc_buf_list) % QAT_OPTIMAL_ALIGN == 0);
-
-#define MAX_CIPHER_SETUP_BLK_SZ			\
-	(sizeof(struct hw_cipher_config) +	\
-	2 * HW_KASUMI_KEY_SZ + 2 * HW_KASUMI_BLK_SZ)
-#define MAX_HASH_SETUP_BLK_SZ	sizeof(union hw_auth_algo_blk)
-
-struct qat_crypto_desc {
-	uint8_t qcd_content_desc[CONTENT_DESC_MAX_SIZE]; /* must be first */
-	/* using only for qat 1.5 */
-	uint8_t qcd_hash_state_prefix_buf[QAT_GCM_AAD_SIZE_MAX];
-
-	bus_addr_t qcd_desc_paddr;
-	bus_addr_t qcd_hash_state_paddr;
-
-	enum fw_slice qcd_slices[MAX_FW_SLICE + 1];
-	enum fw_la_cmd_id qcd_cmd_id;
-	enum hw_cipher_dir qcd_cipher_dir;
-
-	/* content desc info */
-	uint8_t qcd_hdr_sz;		/* in quad words */
-	uint8_t qcd_hw_blk_sz;		/* in quad words */
-	uint32_t qcd_cipher_offset;
-	uint32_t qcd_auth_offset;
-	/* hash info */
-	uint8_t qcd_state_storage_sz;	/* in quad words */
-	uint32_t qcd_gcm_aad_sz_offset1;
-	uint32_t qcd_gcm_aad_sz_offset2;
-	/* cipher info */
-	uint16_t qcd_cipher_blk_sz;	/* in bytes */
-	uint16_t qcd_auth_sz;		/* in bytes */
-
-	uint8_t qcd_req_cache[QAT_MSG_SIZE_TO_BYTES(QAT_MAX_MSG_SIZE)];
-} __aligned(QAT_OPTIMAL_ALIGN);
-
-struct qat_session {
-	struct qat_crypto_desc *qs_dec_desc;	/* should be at top of struct*/
-	/* decrypt or auth then decrypt or auth */
-
-	struct qat_crypto_desc *qs_enc_desc;
-	/* encrypt or encrypt then auth */
-
-	struct qat_dmamem qs_desc_mem;
-
-	enum hw_cipher_algo qs_cipher_algo;
-	enum hw_cipher_mode qs_cipher_mode;
-	enum hw_auth_algo qs_auth_algo;
-	enum hw_auth_mode qs_auth_mode;
-
-	const uint8_t *qs_cipher_key;
-	int qs_cipher_klen;
-	const uint8_t *qs_auth_key;
-	int qs_auth_klen;
-	int qs_auth_mlen;
-
-	uint32_t qs_status;
-#define QAT_SESSION_STATUS_ACTIVE	(1 << 0)
-#define QAT_SESSION_STATUS_FREEING	(1 << 1)
-	uint32_t qs_inflight;
-	int qs_aad_length;
-	bool qs_need_wakeup;
-
-	struct mtx qs_session_mtx;
-};
-
-struct qat_crypto_bank {
-	uint16_t qcb_bank;
-
-	struct qat_ring *qcb_sym_tx;
-	struct qat_ring *qcb_sym_rx;
-
-	struct qat_dmamem qcb_symck_dmamems[QAT_NSYMCOOKIE];
-	struct qat_sym_cookie *qcb_symck_free[QAT_NSYMCOOKIE];
-	uint32_t qcb_symck_free_count;
-
-	struct mtx qcb_bank_mtx;
-
-	char qcb_ring_names[2][QAT_RING_NAME_SIZE];	/* sym tx,rx */
-};
-
-struct qat_crypto {
-	struct qat_softc *qcy_sc;
-	uint32_t qcy_bank_mask;
-	uint16_t qcy_num_banks;
-
-	int32_t qcy_cid;	/* OpenCrypto driver ID */
-
-	struct qat_crypto_bank *qcy_banks;	/* array of qat_crypto_bank */
-
-	uint32_t qcy_session_free_count;
-
-	struct mtx qcy_crypto_mtx;
-};
-
-struct qat_hw {
-	int8_t qhw_sram_bar_id;
-	int8_t qhw_misc_bar_id;
-	int8_t qhw_etr_bar_id;
-
-	bus_size_t qhw_cap_global_offset;
-	bus_size_t qhw_ae_offset;
-	bus_size_t qhw_ae_local_offset;
-	bus_size_t qhw_etr_bundle_size;
-
-	/* crypto processing callbacks */
-	size_t qhw_crypto_opaque_offset;
-	void (*qhw_crypto_setup_req_params)(struct qat_crypto_bank *,
-	    struct qat_session *, struct qat_crypto_desc const *,
-	    struct qat_sym_cookie *, struct cryptop *);
-	void (*qhw_crypto_setup_desc)(struct qat_crypto *, struct qat_session *,
-	    struct qat_crypto_desc *);
-
-	uint8_t qhw_num_banks;		/* max number of banks */
-	uint8_t qhw_num_ap_banks;	/* max number of AutoPush banks */
-	uint8_t qhw_num_rings_per_bank;	/* rings per bank */
-	uint8_t qhw_num_accel;		/* max number of accelerators */
-	uint8_t qhw_num_engines;	/* max number of accelerator engines */
-	uint8_t qhw_tx_rx_gap;
-	uint32_t qhw_tx_rings_mask;
-	uint32_t qhw_clock_per_sec;
-	bool qhw_fw_auth;
-	uint32_t qhw_fw_req_size;
-	uint32_t qhw_fw_resp_size;
-
-	uint8_t qhw_ring_sym_tx;
-	uint8_t qhw_ring_sym_rx;
-	uint8_t qhw_ring_asym_tx;
-	uint8_t qhw_ring_asym_rx;
-
-	/* MSIx */
-	uint32_t qhw_msix_ae_vec_gap;	/* gap to ae vec from bank */
-
-	const char *qhw_mof_fwname;
-	const char *qhw_mmp_fwname;
-
-	uint32_t qhw_prod_type;		/* cpu type */
-
-	/* setup callbacks */
-	uint32_t (*qhw_get_accel_mask)(struct qat_softc *);
-	uint32_t (*qhw_get_ae_mask)(struct qat_softc *);
-	enum qat_sku (*qhw_get_sku)(struct qat_softc *);
-	uint32_t (*qhw_get_accel_cap)(struct qat_softc *);
-	const char *(*qhw_get_fw_uof_name)(struct qat_softc *);
-	void (*qhw_enable_intr)(struct qat_softc *);
-	void (*qhw_init_etr_intr)(struct qat_softc *, int);
-	int (*qhw_init_admin_comms)(struct qat_softc *);
-	int (*qhw_send_admin_init)(struct qat_softc *);
-	int (*qhw_init_arb)(struct qat_softc *);
-	void (*qhw_get_arb_mapping)(struct qat_softc *, const uint32_t **);
-	void (*qhw_enable_error_correction)(struct qat_softc *);
-	int (*qhw_check_uncorrectable_error)(struct qat_softc *);
-	void (*qhw_print_err_registers)(struct qat_softc *);
-	void (*qhw_disable_error_interrupts)(struct qat_softc *);
-	int (*qhw_check_slice_hang)(struct qat_softc *);
-	int (*qhw_set_ssm_wdtimer)(struct qat_softc *);
-};
-
-
-/* sc_flags */
-#define QAT_FLAG_ESRAM_ENABLE_AUTO_INIT	(1 << 0)
-#define QAT_FLAG_SHRAM_WAIT_READY	(1 << 1)
-
-/* sc_accel_cap */
-#define QAT_ACCEL_CAP_CRYPTO_SYMMETRIC	(1 << 0)
-#define QAT_ACCEL_CAP_CRYPTO_ASYMMETRIC	(1 << 1)
-#define QAT_ACCEL_CAP_CIPHER		(1 << 2)
-#define QAT_ACCEL_CAP_AUTHENTICATION	(1 << 3)
-#define QAT_ACCEL_CAP_REGEX		(1 << 4)
-#define QAT_ACCEL_CAP_COMPRESSION	(1 << 5)
-#define QAT_ACCEL_CAP_LZS_COMPRESSION	(1 << 6)
-#define QAT_ACCEL_CAP_RANDOM_NUMBER	(1 << 7)
-#define QAT_ACCEL_CAP_ZUC		(1 << 8)
-#define QAT_ACCEL_CAP_SHA3		(1 << 9)
-#define QAT_ACCEL_CAP_KPT		(1 << 10)
-
-#define QAT_ACCEL_CAP_BITS		\
-	"\177\020"			\
-	"b\x0a" "KPT\0"			\
-	"b\x09" "SHA3\0"		\
-	"b\x08" "ZUC\0"			\
-	"b\x07" "RANDOM_NUMBER\0"	\
-	"b\x06" "LZS_COMPRESSION\0"	\
-	"b\x05" "COMPRESSION\0"		\
-	"b\x04" "REGEX\0"		\
-	"b\x03" "AUTHENTICATION\0"	\
-	"b\x02" "CIPHER\0"		\
-	"b\x01" "CRYPTO_ASYMMETRIC\0"	\
-	"b\x00" "CRYPTO_SYMMETRIC\0"
-
-#define QAT_HI_PRIO_RING_WEIGHT		0xfc
-#define QAT_LO_PRIO_RING_WEIGHT		0xfe
-#define QAT_DEFAULT_RING_WEIGHT		0xff
-#define QAT_DEFAULT_PVL			0
-
-struct firmware;
-struct resource;
-
-struct qat_softc {
-	device_t sc_dev;
-
-	struct resource *sc_res[MAX_BARS];
-	int sc_rid[MAX_BARS];
-	bus_space_tag_t sc_csrt[MAX_BARS];
-	bus_space_handle_t sc_csrh[MAX_BARS];
-
-	uint32_t sc_ae_num;
-	uint32_t sc_ae_mask;
-
-	struct qat_crypto sc_crypto;	/* crypto services */
-
-	struct qat_hw sc_hw;
-
-	uint8_t sc_rev;
-	enum qat_sku sc_sku;
-	uint32_t sc_flags;
-
-	uint32_t sc_accel_num;
-	uint32_t sc_accel_mask;
-	uint32_t sc_accel_cap;
-
-	struct qat_admin_rings sc_admin_rings;	/* use only for qat 1.5 */
-	struct qat_admin_comms sc_admin_comms;	/* use only for qat 1.7 */
-
-	/* ETR */
-	struct qat_bank *sc_etr_banks;		/* array of etr banks */
-	struct qat_ap_bank *sc_etr_ap_banks;	/* array of etr auto push banks */
-
-	/* AE */
-	struct qat_ae sc_ae[MAX_NUM_AE];
-
-	/* Interrupt */
-	struct resource *sc_ih;		/* ae cluster ih */
-	void *sc_ih_cookie;		/* ae cluster ih cookie */
-
-	/* Counters */
-	counter_u64_t sc_gcm_aad_restarts;
-	counter_u64_t sc_gcm_aad_updates;
-	counter_u64_t sc_ring_full_restarts;
-	counter_u64_t sc_sym_alloc_failures;
-
-	/* Firmware */
-	void *sc_fw_mof;		/* mof data */
-	size_t sc_fw_mof_size;		/* mof size */
-	struct qat_mof sc_mof;		/* mof sections */
-
-	const char *sc_fw_uof_name;	/* uof/suof name in mof */
-
-	void *sc_fw_uof;		/* uof head */
-	size_t sc_fw_uof_size;		/* uof size */
-	struct qat_aefw_uof sc_aefw_uof;	/* UOF_OBJS in uof */
-
-	void *sc_fw_suof;		/* suof head */
-	size_t sc_fw_suof_size;		/* suof size */
-	struct qat_aefw_suof sc_aefw_suof;	/* suof context */
-
-	void *sc_fw_mmp;		/* mmp data */
-	size_t sc_fw_mmp_size;		/* mmp size */
-};
-
-static inline void
-qat_bar_write_4(struct qat_softc *sc, int baroff, bus_size_t offset,
-    uint32_t value)
-{
-
-	MPASS(baroff >= 0 && baroff < MAX_BARS);
-
-	bus_space_write_4(sc->sc_csrt[baroff],
-	    sc->sc_csrh[baroff], offset, value);
-}
-
-static inline uint32_t
-qat_bar_read_4(struct qat_softc *sc, int baroff, bus_size_t offset)
-{
-
-	MPASS(baroff >= 0 && baroff < MAX_BARS);
-
-	return bus_space_read_4(sc->sc_csrt[baroff],
-	    sc->sc_csrh[baroff], offset);
-}
-
-static inline void
-qat_misc_write_4(struct qat_softc *sc, bus_size_t offset, uint32_t value)
-{
-
-	qat_bar_write_4(sc, sc->sc_hw.qhw_misc_bar_id, offset, value);
-}
-
-static inline uint32_t
-qat_misc_read_4(struct qat_softc *sc, bus_size_t offset)
-{
-
-	return qat_bar_read_4(sc, sc->sc_hw.qhw_misc_bar_id, offset);
-}
-
-static inline void
-qat_misc_read_write_or_4(struct qat_softc *sc, bus_size_t offset,
-    uint32_t value)
-{
-	uint32_t reg;
-
-	reg = qat_misc_read_4(sc, offset);
-	reg |= value;
-	qat_misc_write_4(sc, offset, reg);
-}
-
-static inline void
-qat_misc_read_write_and_4(struct qat_softc *sc, bus_size_t offset,
-    uint32_t mask)
-{
-	uint32_t reg;
-
-	reg = qat_misc_read_4(sc, offset);
-	reg &= mask;
-	qat_misc_write_4(sc, offset, reg);
-}
-
-static inline void
-qat_etr_write_4(struct qat_softc *sc, bus_size_t offset, uint32_t value)
-{
-
-	qat_bar_write_4(sc, sc->sc_hw.qhw_etr_bar_id, offset, value);
-}
-
-static inline uint32_t
-qat_etr_read_4(struct qat_softc *sc, bus_size_t offset)
-{
-
-	return qat_bar_read_4(sc, sc->sc_hw.qhw_etr_bar_id, offset);
-}
-
-static inline void
-qat_ae_local_write_4(struct qat_softc *sc, u_char ae, bus_size_t offset,
-    uint32_t value)
-{
-
-	offset = __SHIFTIN(ae & sc->sc_ae_mask, AE_LOCAL_AE_MASK) |
-	    (offset & AE_LOCAL_CSR_MASK);
-
-	qat_misc_write_4(sc, sc->sc_hw.qhw_ae_local_offset + offset,
-	    value);
-}
-
-static inline uint32_t
-qat_ae_local_read_4(struct qat_softc *sc, u_char ae, bus_size_t offset)
-{
-
-	offset = __SHIFTIN(ae & sc->sc_ae_mask, AE_LOCAL_AE_MASK) |
-	    (offset & AE_LOCAL_CSR_MASK);
-
-	return qat_misc_read_4(sc, sc->sc_hw.qhw_ae_local_offset + offset);
-}
-
-static inline void
-qat_ae_xfer_write_4(struct qat_softc *sc, u_char ae, bus_size_t offset,
-    uint32_t value)
-{
-	offset = __SHIFTIN(ae & sc->sc_ae_mask, AE_XFER_AE_MASK) |
-	    __SHIFTIN(offset, AE_XFER_CSR_MASK);
-
-	qat_misc_write_4(sc, sc->sc_hw.qhw_ae_offset + offset, value);
-}
-
-static inline void
-qat_cap_global_write_4(struct qat_softc *sc, bus_size_t offset, uint32_t value)
-{
-
-	qat_misc_write_4(sc, sc->sc_hw.qhw_cap_global_offset + offset, value);
-}
-
-static inline uint32_t
-qat_cap_global_read_4(struct qat_softc *sc, bus_size_t offset)
-{
-
-	return qat_misc_read_4(sc, sc->sc_hw.qhw_cap_global_offset + offset);
-}
-
-
-static inline void
-qat_etr_bank_write_4(struct qat_softc *sc, int bank,
-    bus_size_t offset, uint32_t value)
-{
-
-	qat_etr_write_4(sc, sc->sc_hw.qhw_etr_bundle_size * bank + offset,
-	    value);
-}
-
-static inline uint32_t
-qat_etr_bank_read_4(struct qat_softc *sc, int bank,
-    bus_size_t offset)
-{
-
-	return qat_etr_read_4(sc,
-	    sc->sc_hw.qhw_etr_bundle_size * bank + offset);
-}
-
-static inline void
-qat_etr_ap_bank_write_4(struct qat_softc *sc, int ap_bank,
-    bus_size_t offset, uint32_t value)
-{
-
-	qat_etr_write_4(sc, ETR_AP_BANK_OFFSET * ap_bank + offset, value);
-}
-
-static inline uint32_t
-qat_etr_ap_bank_read_4(struct qat_softc *sc, int ap_bank,
-    bus_size_t offset)
-{
-
-	return qat_etr_read_4(sc, ETR_AP_BANK_OFFSET * ap_bank + offset);
-}
-
-
-static inline void
-qat_etr_bank_ring_write_4(struct qat_softc *sc, int bank, int ring,
-    bus_size_t offset, uint32_t value)
-{
-
-	qat_etr_bank_write_4(sc, bank, (ring << 2) + offset, value);
-}
-
-static inline uint32_t
-qat_etr_bank_ring_read_4(struct qat_softc *sc, int bank, int ring,
-    bus_size_t offset)
-{
-
-	return qat_etr_bank_read_4(sc, bank, (ring << 2) * offset);
-}
-
-static inline void
-qat_etr_bank_ring_base_write_8(struct qat_softc *sc, int bank, int ring,
-    uint64_t value)
-{
-	uint32_t lo, hi;
-
-	lo = (uint32_t)(value & 0xffffffff);
-	hi = (uint32_t)((value & 0xffffffff00000000ULL) >> 32);
-	qat_etr_bank_ring_write_4(sc, bank, ring, ETR_RING_LBASE, lo);
-	qat_etr_bank_ring_write_4(sc, bank, ring, ETR_RING_UBASE, hi);
-}
-
-static inline void
-qat_arb_ringsrvarben_write_4(struct qat_softc *sc, int index, uint32_t value)
-{
-
-	qat_etr_write_4(sc, ARB_RINGSRVARBEN_OFFSET +
-	    (ARB_REG_SLOT * index), value);
-}
-
-static inline void
-qat_arb_sarconfig_write_4(struct qat_softc *sc, int index, uint32_t value)
-{
-
-	qat_etr_write_4(sc, ARB_OFFSET +
-	    (ARB_REG_SIZE * index), value);
-}
-
-static inline void
-qat_arb_wrk_2_ser_map_write_4(struct qat_softc *sc, int index, uint32_t value)
-{
-
-	qat_etr_write_4(sc, ARB_OFFSET + ARB_WRK_2_SER_MAP_OFFSET +
-	    (ARB_REG_SIZE * index), value);
-}
-
-void *		qat_alloc_mem(size_t);
-void		qat_free_mem(void *);
-void		qat_free_dmamem(struct qat_softc *, struct qat_dmamem *);
-int		qat_alloc_dmamem(struct qat_softc *, struct qat_dmamem *, int,
-		    bus_size_t, bus_size_t);
-
-int		qat_etr_setup_ring(struct qat_softc *, int, uint32_t, uint32_t,
-		    uint32_t, qat_cb_t, void *, const char *,
-		    struct qat_ring **);
-int		qat_etr_put_msg(struct qat_softc *, struct qat_ring *,
-		    uint32_t *);
-
-void		qat_memcpy_htobe64(void *, const void *, size_t);
-void		qat_memcpy_htobe32(void *, const void *, size_t);
-void		qat_memcpy_htobe(void *, const void *, size_t, uint32_t);
-void		qat_crypto_gmac_precompute(const struct qat_crypto_desc *,
-		    const uint8_t *key, int klen,
-		    const struct qat_sym_hash_def *, uint8_t *);
-void		qat_crypto_hmac_precompute(const struct qat_crypto_desc *,
-		    const uint8_t *, int, const struct qat_sym_hash_def *,
-		    uint8_t *, uint8_t *);
-uint16_t	qat_crypto_load_cipher_session(const struct qat_crypto_desc *,
-		    const struct qat_session *);
-uint16_t	qat_crypto_load_auth_session(const struct qat_crypto_desc *,
-		    const struct qat_session *,
-		    struct qat_sym_hash_def const **);
-
-#endif
Index: sys/modules/qat/Makefile
===================================================================
--- sys/modules/qat/Makefile
+++ sys/modules/qat/Makefile
@@ -1,19 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2007-2022 Intel Corporation
 # $FreeBSD$
+SUBDIR=	qat_common \
+	qat_api \
+	qat_hw \
+	qat
-
-.PATH:	${SRCTOP}/sys/dev/qat
-
-KMOD=	qat
-
-SRCS=	qat.c \
-	qat_ae.c \
-	qat_c2xxx.c \
-	qat_c3xxx.c \
-	qat_c62x.c \
-	qat_d15xx.c \
-	qat_dh895xcc.c \
-	qat_hw15.c \
-	qat_hw17.c
-
-SRCS+=	bus_if.h cryptodev_if.h device_if.h pci_if.h
-
-.include
+.include
Index: sys/modules/qat/qat/Makefile
===================================================================
--- /dev/null
+++ sys/modules/qat/qat/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2007-2022 Intel Corporation
+# $FreeBSD$
+.PATH: ${SRCTOP}/sys/dev/qat/qat
+
+KMOD= qat
+SRCS+= qat_ocf.c qat_ocf_mem_pool.c qat_ocf_utils.c
+SRCS+= device_if.h bus_if.h vnode_if.h pci_if.h cryptodev_if.h
+
+CFLAGS+= -I${SYSDIR}/compat/linuxkpi/common/include
+CFLAGS+= -I${SRCTOP}/sys/dev/qat/include
+CFLAGS+= -I${SRCTOP}/sys/dev/qat/include/common
+CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/include
+CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/common/include
+CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/include/lac +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/qat_utils/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/qat_direct/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/firmware/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/common/crypto/sym/include + +CWARNFLAGS.qat_ocf.c += -Wno-incompatible-pointer-types-discards-qualifiers +CWARNFLAGS.qat_ocf_utils.c += -Wno-incompatible-pointer-types-discards-qualifiers + +.include Index: sys/modules/qat/qat_api/Makefile =================================================================== --- /dev/null +++ sys/modules/qat/qat_api/Makefile @@ -0,0 +1,75 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2007-2022 Intel Corporation +# $FreeBSD$ +.PATH: ${SRCTOP}/sys/dev/qat/qat_api + +KMOD= qat_api + +SRCS+= freebsd_module.c +SRCS+= common/compression/dc_datapath.c +SRCS+= common/compression/dc_header_footer.c +SRCS+= common/compression/dc_session.c +SRCS+= common/compression/dc_stats.c +SRCS+= common/compression/dc_buffers.c +SRCS+= common/compression/dc_dp.c +SRCS+= common/compression/icp_sal_dc_err.c +SRCS+= common/utils/lac_buffer_desc.c +SRCS+= common/utils/lac_mem.c +SRCS+= common/utils/lac_mem_pools.c +SRCS+= common/utils/lac_sync.c +SRCS+= common/utils/sal_service_state.c +SRCS+= common/utils/sal_statistics.c +SRCS+= common/utils/sal_string_parse.c +SRCS+= common/utils/sal_versions.c +SRCS+= common/utils/sal_user_process.c +SRCS+= common/ctrl/sal_list.c +SRCS+= common/ctrl/sal_compression.c +SRCS+= common/ctrl/sal_ctrl_services.c +SRCS+= common/ctrl/sal_create_services.c +SRCS+= common/ctrl/sal_crypto.c +SRCS+= common/qat_comms/sal_qat_cmn_msg.c +SRCS+= common/crypto/sym/lac_sym_api.c +SRCS+= common/crypto/sym/lac_sym_cb.c +SRCS+= common/crypto/sym/lac_sym_queue.c +SRCS+= common/crypto/sym/lac_sym_cipher.c +SRCS+= common/crypto/sym/lac_sym_alg_chain.c +SRCS+= common/crypto/sym/lac_sym_auth_enc.c +SRCS+= common/crypto/sym/lac_sym_hash.c +SRCS+= 
common/crypto/sym/lac_sym_hash_sw_precomputes.c +SRCS+= common/crypto/sym/lac_sym_stats.c +SRCS+= common/crypto/sym/lac_sym_compile_check.c +SRCS+= common/crypto/sym/lac_sym_partial.c +SRCS+= common/crypto/sym/lac_sym_dp.c +SRCS+= common/crypto/sym/qat/lac_sym_qat.c +SRCS+= common/crypto/sym/qat/lac_sym_qat_hash.c +SRCS+= common/crypto/sym/qat/lac_sym_qat_hash_defs_lookup.c +SRCS+= common/crypto/sym/qat/lac_sym_qat_cipher.c +SRCS+= common/crypto/sym/qat/lac_sym_qat_key.c +SRCS+= common/crypto/sym/key/lac_sym_key.c +SRCS+= common/stubs/lac_stubs.c +SRCS+= device/dev_info.c +SRCS+= qat_kernel/src/lac_adf_interface_freebsd.c +SRCS+= qat_kernel/src/qat_transport.c +SRCS+= qat_kernel/src/lac_symbols.c +SRCS+= qat_utils/src/QatUtilsServices.c +SRCS+= qat_utils/src/QatUtilsSemaphore.c +SRCS+= qat_utils/src/QatUtilsSpinLock.c +SRCS+= qat_utils/src/QatUtilsAtomic.c +SRCS+= qat_utils/src/QatUtilsCrypto.c +SRCS+= bus_if.h cryptodev_if.h device_if.h pci_if.h vnode_if.h + +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/include/lac +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/include/dc +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/qat_direct/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/qat_kernel/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/qat_utils/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/common/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/common/compression/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/common/crypto/sym/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/firmware/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/include/common +CFLAGS+= -I${SYSDIR}/compat/linuxkpi/common/include + +CWARNFLAGS += -Wno-pointer-sign +.include Index: sys/modules/qat/qat_common/Makefile =================================================================== --- /dev/null +++ sys/modules/qat/qat_common/Makefile @@ -0,0 +1,29 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2007-2022 Intel Corporation +# $FreeBSD$ 
+.PATH: ${SRCTOP}/sys/dev/qat/qat_common + +KMOD= qat_common + +SRCS+= adf_accel_engine.c adf_freebsd_admin.c adf_aer.c adf_cfg.c qat_common_module.c +SRCS+= adf_heartbeat.c adf_freebsd_heartbeat_dbg.c +SRCS+= adf_dev_mgr.c adf_hw_arbiter.c +SRCS+= adf_init.c adf_transport.c adf_isr.c adf_fw_counters.c adf_dev_err.c +SRCS+= qat_freebsd.c +SRCS+= adf_freebsd_cfg_dev_dbg.c adf_freebsd_ver_dbg.c +SRCS+= adf_cfg_device.c adf_cfg_section.c adf_cfg_instance.c adf_cfg_bundle.c +SRCS+= qat_hal.c qat_uclo.c +SRCS+= adf_vf_isr.c adf_pf2vf_msg.c +SRCS+= adf_vf2pf_msg.c +SRCS+= adf_pf2vf_capabilities.c +SRCS+= adf_pf2vf_ring_to_svc_map.c +SRCS+= adf_freebsd_transport_debug.c adf_clock.c +SRCS+= adf_freebsd_cnvnr_ctrs_dbg.c +SRCS+= adf_freebsd_pfvf_ctrs_dbg.c +SRCS+= bus_if.h device_if.h pci_if.h vnode_if.h + +CFLAGS+= -I${SRCTOP}/sys/dev/qat/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/include/common +CFLAGS+= -I${SYSDIR}/compat/linuxkpi/common/include + +.include Index: sys/modules/qat/qat_hw/Makefile =================================================================== --- /dev/null +++ sys/modules/qat/qat_hw/Makefile @@ -0,0 +1,27 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2007-2022 Intel Corporation +# $FreeBSD$ +.PATH: ${SRCTOP}/sys/dev/qat/qat_hw + +KMOD= qat_hw +SRCS+= qat_c62x/adf_c62x_hw_data.c qat_c62x/adf_drv.c +SRCS+= qat_200xx/adf_200xx_hw_data.c qat_200xx/adf_drv.c +SRCS+= qat_c3xxx/adf_c3xxx_hw_data.c qat_c3xxx/adf_drv.c +SRCS+= qat_dh895xcc/adf_dh895xcc_hw_data.c qat_dh895xcc/adf_drv.c +SRCS+= qat_c4xxx/adf_c4xxx_hw_data.c qat_c4xxx/adf_drv.c qat_c4xxx/adf_c4xxx_ae_config.c qat_c4xxx/adf_c4xxx_misc_error_stats.c +SRCS+= qat_c4xxx/adf_c4xxx_pke_replay_stats.c qat_c4xxx/adf_c4xxx_ras.c qat_c4xxx/adf_c4xxx_res_part.c +SRCS+= qat_c4xxx/adf_c4xxx_reset.c +SRCS+= device_if.h bus_if.h vnode_if.h pci_if.h cryptodev_if.h + +CFLAGS+= -I${SYSDIR}/compat/linuxkpi/common/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/include +CFLAGS+= 
-I${SRCTOP}/sys/dev/qat/include/common +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/common/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/include/lac +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/qat_utils/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/qat_direct/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/firmware/include +CFLAGS+= -I${SRCTOP}/sys/dev/qat/qat_api/common/crypto/sym/include + +.include Index: sys/modules/qatfw/Makefile =================================================================== --- sys/modules/qatfw/Makefile +++ sys/modules/qatfw/Makefile @@ -1,9 +1,10 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2007-2022 Intel Corporation # $FreeBSD$ - -SUBDIR= qat_c2xxx \ +SUBDIR= qat_c62x \ + qat_200xx \ qat_c3xxx \ - qat_c62x \ - qat_d15xx \ + qat_c4xxx \ qat_dh895xcc .include Index: sys/modules/qatfw/qat_200xx/Makefile =================================================================== --- /dev/null +++ sys/modules/qatfw/qat_200xx/Makefile @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2007-2022 Intel Corporation +# $FreeBSD$ +.PATH: ${SRCTOP}/sys/contrib/dev/qat + +KMOD= qat_200xx_fw + +FIRMWS= ${SRCTOP}/sys/contrib/dev/qat/qat_200xx.bin:qat_200xx_fw:111 ${SRCTOP}/sys/contrib/dev/qat/qat_200xx_mmp.bin:qat_200xx_mmp_fw:111 + +.include Index: sys/modules/qatfw/qat_c2xxx/Makefile =================================================================== --- sys/modules/qatfw/qat_c2xxx/Makefile +++ /dev/null @@ -1,11 +0,0 @@ -# $FreeBSD$ - -.PATH: ${SRCTOP}/sys/contrib/dev/qat - -KMOD= qat_c2xxxfw -IMG1= mof_firmware_c2xxx -IMG2= mmp_firmware_c2xxx - -FIRMWS= ${IMG1}.bin:${KMOD}:111 ${IMG2}.bin:${IMG2}:111 - -.include Index: sys/modules/qatfw/qat_c3xxx/Makefile =================================================================== --- sys/modules/qatfw/qat_c3xxx/Makefile +++ sys/modules/qatfw/qat_c3xxx/Makefile @@ -1,11 +1,10 @@ +# SPDX-License-Identifier: BSD-3-Clause 
+# Copyright(c) 2007-2022 Intel Corporation # $FreeBSD$ - .PATH: ${SRCTOP}/sys/contrib/dev/qat -KMOD= qat_c3xxxfw -IMG1= qat_c3xxx -IMG2= qat_c3xxx_mmp +KMOD= qat_c3xxx_fw -FIRMWS= ${IMG1}.bin:${KMOD}:111 ${IMG2}.bin:${IMG2}:111 +FIRMWS= ${SRCTOP}/sys/contrib/dev/qat/qat_c3xxx.bin:qat_c3xxx_fw:111 ${SRCTOP}/sys/contrib/dev/qat/qat_c3xxx_mmp.bin:qat_c3xxx_mmp_fw:111 .include Index: sys/modules/qatfw/qat_c4xxx/Makefile =================================================================== --- /dev/null +++ sys/modules/qatfw/qat_c4xxx/Makefile @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2007-2022 Intel Corporation +# $FreeBSD$ +.PATH: ${SRCTOP}/sys/contrib/dev/qat + +KMOD= qat_c4xxx_fw + +FIRMWS= ${SRCTOP}/sys/contrib/dev/qat/qat_c4xxx.bin:qat_c4xxx_fw:111 ${SRCTOP}/sys/contrib/dev/qat/qat_c4xxx_mmp.bin:qat_c4xxx_mmp_fw:111 + +.include Index: sys/modules/qatfw/qat_c62x/Makefile =================================================================== --- sys/modules/qatfw/qat_c62x/Makefile +++ sys/modules/qatfw/qat_c62x/Makefile @@ -1,11 +1,10 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2007-2022 Intel Corporation # $FreeBSD$ - .PATH: ${SRCTOP}/sys/contrib/dev/qat -KMOD= qat_c62xfw -IMG1= qat_c62x -IMG2= qat_c62x_mmp +KMOD= qat_c62x_fw -FIRMWS= ${IMG1}.bin:${KMOD}:111 ${IMG2}.bin:${IMG2}:111 +FIRMWS= ${SRCTOP}/sys/contrib/dev/qat/qat_c62x.bin:qat_c62x_fw:111 ${SRCTOP}/sys/contrib/dev/qat/qat_c62x_mmp.bin:qat_c62x_mmp_fw:111 .include Index: sys/modules/qatfw/qat_d15xx/Makefile =================================================================== --- sys/modules/qatfw/qat_d15xx/Makefile +++ /dev/null @@ -1,11 +0,0 @@ -# $FreeBSD$ - -.PATH: ${SRCTOP}/sys/contrib/dev/qat - -KMOD= qat_d15xxfw -IMG1= qat_d15xx -IMG2= qat_d15xx_mmp - -FIRMWS= ${IMG1}.bin:${KMOD}:111 ${IMG2}.bin:${IMG2}:111 - -.include Index: sys/modules/qatfw/qat_dh895xcc/Makefile =================================================================== --- 
sys/modules/qatfw/qat_dh895xcc/Makefile +++ sys/modules/qatfw/qat_dh895xcc/Makefile @@ -1,11 +1,10 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2007-2022 Intel Corporation # $FreeBSD$ - .PATH: ${SRCTOP}/sys/contrib/dev/qat -KMOD= qat_dh895xccfw -IMG1= qat_895xcc -IMG2= qat_895xcc_mmp +KMOD= qat_dh895xcc_fw -FIRMWS= ${IMG1}.bin:${KMOD}:111 ${IMG2}.bin:${IMG2}:111 +FIRMWS= ${SRCTOP}/sys/contrib/dev/qat/qat_895xcc.bin:qat_dh895xcc_fw:111 ${SRCTOP}/sys/contrib/dev/qat/qat_895xcc_mmp.bin:qat_dh895xcc_mmp_fw:111 .include