Index: releng/12.1/sys/dev/mpr/mpi/mpi2.h
===================================================================
--- releng/12.1/sys/dev/mpr/mpi/mpi2.h	(revision 352760)
+++ releng/12.1/sys/dev/mpr/mpi/mpi2.h	(revision 352761)
@@ -1,1374 +1,1377 @@
 /*-
- * Copyright (c) 2012-2015 LSI Corp.
- * Copyright (c) 2013-2016 Avago Technologies
- * All rights reserved.
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the author nor the names of any co-contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
- * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD
+ * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD
 *
 * $FreeBSD$
 */

/*
- * Copyright (c) 2000-2015 LSI Corporation.
- * Copyright (c) 2013-2016 Avago Technologies
- * All rights reserved.
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved.
 *
 *
 * Name: mpi2.h
 * Title: MPI Message independent structures and definitions
 *        including System Interface Register Set and
 *        scatter/gather formats.
 * Creation Date: June 21, 2006
 *
- * mpi2.h Version: 02.00.48
+ * mpi2.h Version: 02.00.52
 *
 * NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25
 *       prefix are for use only on MPI v2.5 products, and must not be used
 *       with MPI v2.0 products. Unless otherwise noted, names beginning with
 *       MPI2 or Mpi2 are for use with both MPI v2.0 and MPI v2.5 products.
 *
 * Version History
 * ---------------
 *
 * Date      Version   Description
 * --------  --------  ------------------------------------------------------
 * 04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
 * 06-04-07  02.00.01  Bumped MPI2_HEADER_VERSION_UNIT.
 * 06-26-07  02.00.02  Bumped MPI2_HEADER_VERSION_UNIT.
 * 08-31-07  02.00.03  Bumped MPI2_HEADER_VERSION_UNIT.
 *                     Moved ReplyPostHostIndex register to offset 0x6C of the
 *                     MPI2_SYSTEM_INTERFACE_REGS and modified the define for
 *                     MPI2_REPLY_POST_HOST_INDEX_OFFSET.
 *                     Added union of request descriptors.
 *                     Added union of reply descriptors.
 * 10-31-07  02.00.04  Bumped MPI2_HEADER_VERSION_UNIT.
 *                     Added define for MPI2_VERSION_02_00.
 *                     Fixed the size of the FunctionDependent5 field in the
 *                     MPI2_DEFAULT_REPLY structure.
 * 12-18-07  02.00.05  Bumped MPI2_HEADER_VERSION_UNIT.
 *                     Removed the MPI-defined Fault Codes and extended the
 *                     product specific codes up to 0xEFFF.
 *                     Added a sixth key value for the WriteSequence register
 *                     and changed the flush value to 0x0.
 *                     Added message function codes for Diagnostic Buffer Post
 *                     and Diagnostic Release.
 *                     New IOCStatus define: MPI2_IOCSTATUS_DIAGNOSTIC_RELEASED
 *                     Moved MPI2_VERSION_UNION from mpi2_ioc.h.
* 02-29-08 02.00.06 Bumped MPI2_HEADER_VERSION_UNIT. * 03-03-08 02.00.07 Bumped MPI2_HEADER_VERSION_UNIT. * 05-21-08 02.00.08 Bumped MPI2_HEADER_VERSION_UNIT. * Added #defines for marking a reply descriptor as unused. * 06-27-08 02.00.09 Bumped MPI2_HEADER_VERSION_UNIT. * 10-02-08 02.00.10 Bumped MPI2_HEADER_VERSION_UNIT. * Moved LUN field defines from mpi2_init.h. * 01-19-09 02.00.11 Bumped MPI2_HEADER_VERSION_UNIT. * 05-06-09 02.00.12 Bumped MPI2_HEADER_VERSION_UNIT. * In all request and reply descriptors, replaced VF_ID * field with MSIxIndex field. * Removed DevHandle field from * MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR and made those * bytes reserved. * Added RAID Accelerator functionality. * 07-30-09 02.00.13 Bumped MPI2_HEADER_VERSION_UNIT. * 10-28-09 02.00.14 Bumped MPI2_HEADER_VERSION_UNIT. * Added MSI-x index mask and shift for Reply Post Host * Index register. * Added function code for Host Based Discovery Action. * 02-10-10 02.00.15 Bumped MPI2_HEADER_VERSION_UNIT. * Added define for MPI2_FUNCTION_PWR_MGMT_CONTROL. * Added defines for product-specific range of message * function codes, 0xF0 to 0xFF. * 05-12-10 02.00.16 Bumped MPI2_HEADER_VERSION_UNIT. * Added alternative defines for the SGE Direction bit. * 08-11-10 02.00.17 Bumped MPI2_HEADER_VERSION_UNIT. * 11-10-10 02.00.18 Bumped MPI2_HEADER_VERSION_UNIT. * Added MPI2_IEEE_SGE_FLAGS_SYSTEMPLBCPI_ADDR define. * 02-23-11 02.00.19 Bumped MPI2_HEADER_VERSION_UNIT. * Added MPI2_FUNCTION_SEND_HOST_MESSAGE. * 03-09-11 02.00.20 Bumped MPI2_HEADER_VERSION_UNIT. * 05-25-11 02.00.21 Bumped MPI2_HEADER_VERSION_UNIT. * 08-24-11 02.00.22 Bumped MPI2_HEADER_VERSION_UNIT. * 11-18-11 02.00.23 Bumped MPI2_HEADER_VERSION_UNIT. * Incorporating additions for MPI v2.5. * 02-06-12 02.00.24 Bumped MPI2_HEADER_VERSION_UNIT. * 03-29-12 02.00.25 Bumped MPI2_HEADER_VERSION_UNIT. * Added Hard Reset delay timings. * 07-10-12 02.00.26 Bumped MPI2_HEADER_VERSION_UNIT. * 07-26-12 02.00.27 Bumped MPI2_HEADER_VERSION_UNIT. 
* 11-27-12 02.00.28 Bumped MPI2_HEADER_VERSION_UNIT. * 12-20-12 02.00.29 Bumped MPI2_HEADER_VERSION_UNIT. * Added MPI25_SUP_REPLY_POST_HOST_INDEX_OFFSET. * 04-09-13 02.00.30 Bumped MPI2_HEADER_VERSION_UNIT. * 04-17-13 02.00.31 Bumped MPI2_HEADER_VERSION_UNIT. * 08-19-13 02.00.32 Bumped MPI2_HEADER_VERSION_UNIT. * 12-05-13 02.00.33 Bumped MPI2_HEADER_VERSION_UNIT. * 01-08-14 02.00.34 Bumped MPI2_HEADER_VERSION_UNIT. * 06-13-14 02.00.35 Bumped MPI2_HEADER_VERSION_UNIT. * 11-18-14 02.00.36 Updated copyright information. * Bumped MPI2_HEADER_VERSION_UNIT. * 03-16-15 02.00.37 Updated for MPI v2.6. * Bumped MPI2_HEADER_VERSION_UNIT. * Added Scratchpad registers and * AtomicRequestDescriptorPost register to * MPI2_SYSTEM_INTERFACE_REGS. * Added MPI2_DIAG_SBR_RELOAD. * Added MPI2_IOCSTATUS_INSUFFICIENT_POWER. * 03-19-15 02.00.38 Bumped MPI2_HEADER_VERSION_UNIT. * 05-25-15 02.00.39 Bumped MPI2_HEADER_VERSION_UNIT * 08-25-15 02.00.40 Bumped MPI2_HEADER_VERSION_UNIT. * Added V7 HostDiagnostic register defines * 12-15-15 02.00.41 Bumped MPI_HEADER_VERSION_UNIT * 01-01-16 02.00.42 Bumped MPI_HEADER_VERSION_UNIT * 04-05-16 02.00.43 Modified MPI26_DIAG_BOOT_DEVICE_SELECT defines * to be unique within first 32 characters. * Removed AHCI support. * Removed SOP support. * Bumped MPI2_HEADER_VERSION_UNIT. * 04-10-16 02.00.44 Bumped MPI2_HEADER_VERSION_UNIT. * 07-06-16 02.00.45 Bumped MPI2_HEADER_VERSION_UNIT. * 09-02-16 02.00.46 Bumped MPI2_HEADER_VERSION_UNIT. * 11-23-16 02.00.47 Bumped MPI2_HEADER_VERSION_UNIT. * 02-03-17 02.00.48 Bumped MPI2_HEADER_VERSION_UNIT. + * 06-13-17 02.00.49 Bumped MPI2_HEADER_VERSION_UNIT. + * 09-29-17 02.00.50 Bumped MPI2_HEADER_VERSION_UNIT. + * 07-22-18 02.00.51 Added SECURE_BOOT define. + * Bumped MPI2_HEADER_VERSION_UNIT + * 08-15-18 02.00.52 Bumped MPI2_HEADER_VERSION_UNIT. 
* -------------------------------------------------------------------------- */ #ifndef MPI2_H #define MPI2_H /***************************************************************************** * * MPI Version Definitions * *****************************************************************************/ #define MPI2_VERSION_MAJOR_MASK (0xFF00) #define MPI2_VERSION_MAJOR_SHIFT (8) #define MPI2_VERSION_MINOR_MASK (0x00FF) #define MPI2_VERSION_MINOR_SHIFT (0) /* major version for all MPI v2.x */ #define MPI2_VERSION_MAJOR (0x02) /* minor version for MPI v2.0 compatible products */ #define MPI2_VERSION_MINOR (0x00) #define MPI2_VERSION ((MPI2_VERSION_MAJOR << MPI2_VERSION_MAJOR_SHIFT) | \ MPI2_VERSION_MINOR) #define MPI2_VERSION_02_00 (0x0200) /* minor version for MPI v2.5 compatible products */ #define MPI25_VERSION_MINOR (0x05) #define MPI25_VERSION ((MPI2_VERSION_MAJOR << MPI2_VERSION_MAJOR_SHIFT) | \ MPI25_VERSION_MINOR) #define MPI2_VERSION_02_05 (0x0205) /* minor version for MPI v2.6 compatible products */ #define MPI26_VERSION_MINOR (0x06) #define MPI26_VERSION ((MPI2_VERSION_MAJOR << MPI2_VERSION_MAJOR_SHIFT) | \ MPI26_VERSION_MINOR) #define MPI2_VERSION_02_06 (0x0206) /* Unit and Dev versioning for this MPI header set */ -#define MPI2_HEADER_VERSION_UNIT (0x30) +#define MPI2_HEADER_VERSION_UNIT (0x34) #define MPI2_HEADER_VERSION_DEV (0x00) #define MPI2_HEADER_VERSION_UNIT_MASK (0xFF00) #define MPI2_HEADER_VERSION_UNIT_SHIFT (8) #define MPI2_HEADER_VERSION_DEV_MASK (0x00FF) #define MPI2_HEADER_VERSION_DEV_SHIFT (0) #define MPI2_HEADER_VERSION ((MPI2_HEADER_VERSION_UNIT << 8) | MPI2_HEADER_VERSION_DEV) /***************************************************************************** * * IOC State Definitions * *****************************************************************************/ #define MPI2_IOC_STATE_RESET (0x00000000) #define MPI2_IOC_STATE_READY (0x10000000) #define MPI2_IOC_STATE_OPERATIONAL (0x20000000) #define MPI2_IOC_STATE_FAULT (0x40000000) #define 
MPI2_IOC_STATE_MASK                 (0xF0000000)
#define MPI2_IOC_STATE_SHIFT                (28)

/* Fault state range for product specific codes */
#define MPI2_FAULT_PRODUCT_SPECIFIC_MIN                 (0x0000)
#define MPI2_FAULT_PRODUCT_SPECIFIC_MAX                 (0xEFFF)


/*****************************************************************************
*
*        System Interface Register Definitions
*
*****************************************************************************/

typedef volatile struct _MPI2_SYSTEM_INTERFACE_REGS
{
    U32 Doorbell;                   /* 0x00 */
    U32 WriteSequence;              /* 0x04 */
    U32 HostDiagnostic;             /* 0x08 */
    U32 Reserved1;                  /* 0x0C */
    U32 DiagRWData;                 /* 0x10 */
    U32 DiagRWAddressLow;           /* 0x14 */
    U32 DiagRWAddressHigh;          /* 0x18 */
    U32 Reserved2[5];               /* 0x1C */
    U32 HostInterruptStatus;        /* 0x30 */
    U32 HostInterruptMask;          /* 0x34 */
    U32 DCRData;                    /* 0x38 */
    U32 DCRAddress;                 /* 0x3C */
    U32 Reserved3[2];               /* 0x40 */
    U32 ReplyFreeHostIndex;         /* 0x48 */
    U32 Reserved4[8];               /* 0x4C */
    U32 ReplyPostHostIndex;         /* 0x6C */
    U32 Reserved5;                  /* 0x70 */
    U32 HCBSize;                    /* 0x74 */
    U32 HCBAddressLow;              /* 0x78 */
    U32 HCBAddressHigh;             /* 0x7C */
    U32 Reserved6[12];              /* 0x80 */
    U32 Scratchpad[4];              /* 0xB0 */
    U32 RequestDescriptorPostLow;   /* 0xC0 */
    U32 RequestDescriptorPostHigh;  /* 0xC4 */
    U32 AtomicRequestDescriptorPost;/* 0xC8 */ /* MPI v2.6 and later; reserved in earlier versions */
    U32 Reserved7[13];              /* 0xCC */
} MPI2_SYSTEM_INTERFACE_REGS, MPI2_POINTER PTR_MPI2_SYSTEM_INTERFACE_REGS,
  Mpi2SystemInterfaceRegs_t, MPI2_POINTER pMpi2SystemInterfaceRegs_t;

/*
 * Defines for working with the Doorbell register.
*/ #define MPI2_DOORBELL_OFFSET (0x00000000) /* IOC --> System values */ #define MPI2_DOORBELL_USED (0x08000000) #define MPI2_DOORBELL_WHO_INIT_MASK (0x07000000) #define MPI2_DOORBELL_WHO_INIT_SHIFT (24) #define MPI2_DOORBELL_FAULT_CODE_MASK (0x0000FFFF) #define MPI2_DOORBELL_DATA_MASK (0x0000FFFF) /* System --> IOC values */ #define MPI2_DOORBELL_FUNCTION_MASK (0xFF000000) #define MPI2_DOORBELL_FUNCTION_SHIFT (24) #define MPI2_DOORBELL_ADD_DWORDS_MASK (0x00FF0000) #define MPI2_DOORBELL_ADD_DWORDS_SHIFT (16) /* * Defines for the WriteSequence register */ #define MPI2_WRITE_SEQUENCE_OFFSET (0x00000004) #define MPI2_WRSEQ_KEY_VALUE_MASK (0x0000000F) #define MPI2_WRSEQ_FLUSH_KEY_VALUE (0x0) #define MPI2_WRSEQ_1ST_KEY_VALUE (0xF) #define MPI2_WRSEQ_2ND_KEY_VALUE (0x4) #define MPI2_WRSEQ_3RD_KEY_VALUE (0xB) #define MPI2_WRSEQ_4TH_KEY_VALUE (0x2) #define MPI2_WRSEQ_5TH_KEY_VALUE (0x7) #define MPI2_WRSEQ_6TH_KEY_VALUE (0xD) /* * Defines for the HostDiagnostic register */ #define MPI2_HOST_DIAGNOSTIC_OFFSET (0x00000008) + +#define MPI26_DIAG_SECURE_BOOT (0x80000000) #define MPI2_DIAG_SBR_RELOAD (0x00002000) #define MPI2_DIAG_BOOT_DEVICE_SELECT_MASK (0x00001800) #define MPI2_DIAG_BOOT_DEVICE_SELECT_DEFAULT (0x00000000) #define MPI2_DIAG_BOOT_DEVICE_SELECT_HCDW (0x00000800) /* Defines for V7A/V7R HostDiagnostic Register */ #define MPI26_DIAG_BOOT_DEVICE_SEL_64FLASH (0x00000000) #define MPI26_DIAG_BOOT_DEVICE_SEL_64HCDW (0x00000800) #define MPI26_DIAG_BOOT_DEVICE_SEL_32FLASH (0x00001000) #define MPI26_DIAG_BOOT_DEVICE_SEL_32HCDW (0x00001800) #define MPI2_DIAG_CLEAR_FLASH_BAD_SIG (0x00000400) #define MPI2_DIAG_FORCE_HCB_ON_RESET (0x00000200) #define MPI2_DIAG_HCB_MODE (0x00000100) #define MPI2_DIAG_DIAG_WRITE_ENABLE (0x00000080) #define MPI2_DIAG_FLASH_BAD_SIG (0x00000040) #define MPI2_DIAG_RESET_HISTORY (0x00000020) #define MPI2_DIAG_DIAG_RW_ENABLE (0x00000010) #define MPI2_DIAG_RESET_ADAPTER (0x00000004) #define MPI2_DIAG_HOLD_IOC_RESET (0x00000002) /* * Offsets for 
DiagRWData and address */ #define MPI2_DIAG_RW_DATA_OFFSET (0x00000010) #define MPI2_DIAG_RW_ADDRESS_LOW_OFFSET (0x00000014) #define MPI2_DIAG_RW_ADDRESS_HIGH_OFFSET (0x00000018) /* * Defines for the HostInterruptStatus register */ #define MPI2_HOST_INTERRUPT_STATUS_OFFSET (0x00000030) #define MPI2_HIS_SYS2IOC_DB_STATUS (0x80000000) #define MPI2_HIS_IOP_DOORBELL_STATUS MPI2_HIS_SYS2IOC_DB_STATUS #define MPI2_HIS_RESET_IRQ_STATUS (0x40000000) #define MPI2_HIS_REPLY_DESCRIPTOR_INTERRUPT (0x00000008) #define MPI2_HIS_IOC2SYS_DB_STATUS (0x00000001) #define MPI2_HIS_DOORBELL_INTERRUPT MPI2_HIS_IOC2SYS_DB_STATUS /* * Defines for the HostInterruptMask register */ #define MPI2_HOST_INTERRUPT_MASK_OFFSET (0x00000034) #define MPI2_HIM_RESET_IRQ_MASK (0x40000000) #define MPI2_HIM_REPLY_INT_MASK (0x00000008) #define MPI2_HIM_RIM MPI2_HIM_REPLY_INT_MASK #define MPI2_HIM_IOC2SYS_DB_MASK (0x00000001) #define MPI2_HIM_DIM MPI2_HIM_IOC2SYS_DB_MASK /* * Offsets for DCRData and address */ #define MPI2_DCR_DATA_OFFSET (0x00000038) #define MPI2_DCR_ADDRESS_OFFSET (0x0000003C) /* * Offset for the Reply Free Queue */ #define MPI2_REPLY_FREE_HOST_INDEX_OFFSET (0x00000048) /* * Defines for the Reply Descriptor Post Queue */ #define MPI2_REPLY_POST_HOST_INDEX_OFFSET (0x0000006C) #define MPI2_REPLY_POST_HOST_INDEX_MASK (0x00FFFFFF) #define MPI2_RPHI_MSIX_INDEX_MASK (0xFF000000) #define MPI2_RPHI_MSIX_INDEX_SHIFT (24) #define MPI25_SUP_REPLY_POST_HOST_INDEX_OFFSET (0x0000030C) /* MPI v2.5 only */ /* * Defines for the HCBSize and address */ #define MPI2_HCB_SIZE_OFFSET (0x00000074) #define MPI2_HCB_SIZE_SIZE_MASK (0xFFFFF000) #define MPI2_HCB_SIZE_HCB_ENABLE (0x00000001) #define MPI2_HCB_ADDRESS_LOW_OFFSET (0x00000078) #define MPI2_HCB_ADDRESS_HIGH_OFFSET (0x0000007C) /* * Offsets for the Scratchpad registers */ #define MPI26_SCRATCHPAD0_OFFSET (0x000000B0) #define MPI26_SCRATCHPAD1_OFFSET (0x000000B4) #define MPI26_SCRATCHPAD2_OFFSET (0x000000B8) #define MPI26_SCRATCHPAD3_OFFSET (0x000000BC) 
/* * Offsets for the Request Descriptor Post Queue */ #define MPI2_REQUEST_DESCRIPTOR_POST_LOW_OFFSET (0x000000C0) #define MPI2_REQUEST_DESCRIPTOR_POST_HIGH_OFFSET (0x000000C4) #define MPI26_ATOMIC_REQUEST_DESCRIPTOR_POST_OFFSET (0x000000C8) /* Hard Reset delay timings */ #define MPI2_HARD_RESET_PCIE_FIRST_READ_DELAY_MICRO_SEC (50000) #define MPI2_HARD_RESET_PCIE_RESET_READ_WINDOW_MICRO_SEC (255000) #define MPI2_HARD_RESET_PCIE_SECOND_READ_DELAY_MICRO_SEC (256000) /***************************************************************************** * * Message Descriptors * *****************************************************************************/ /* Request Descriptors */ /* Default Request Descriptor */ typedef struct _MPI2_DEFAULT_REQUEST_DESCRIPTOR { U8 RequestFlags; /* 0x00 */ U8 MSIxIndex; /* 0x01 */ U16 SMID; /* 0x02 */ U16 LMID; /* 0x04 */ U16 DescriptorTypeDependent; /* 0x06 */ } MPI2_DEFAULT_REQUEST_DESCRIPTOR, MPI2_POINTER PTR_MPI2_DEFAULT_REQUEST_DESCRIPTOR, Mpi2DefaultRequestDescriptor_t, MPI2_POINTER pMpi2DefaultRequestDescriptor_t; /* defines for the RequestFlags field */ #define MPI2_REQ_DESCRIPT_FLAGS_TYPE_MASK (0x1E) #define MPI2_REQ_DESCRIPT_FLAGS_TYPE_RSHIFT (1) /* use carefully; values below are pre-shifted left */ #define MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO (0x00) #define MPI2_REQ_DESCRIPT_FLAGS_SCSI_TARGET (0x02) #define MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY (0x06) #define MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE (0x08) #define MPI2_REQ_DESCRIPT_FLAGS_RAID_ACCELERATOR (0x0A) #define MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO (0x0C) #define MPI26_REQ_DESCRIPT_FLAGS_PCIE_ENCAPSULATED (0x10) #define MPI2_REQ_DESCRIPT_FLAGS_IOC_FIFO_MARKER (0x01) /* High Priority Request Descriptor */ typedef struct _MPI2_HIGH_PRIORITY_REQUEST_DESCRIPTOR { U8 RequestFlags; /* 0x00 */ U8 MSIxIndex; /* 0x01 */ U16 SMID; /* 0x02 */ U16 LMID; /* 0x04 */ U16 Reserved1; /* 0x06 */ } MPI2_HIGH_PRIORITY_REQUEST_DESCRIPTOR, MPI2_POINTER PTR_MPI2_HIGH_PRIORITY_REQUEST_DESCRIPTOR, 
Mpi2HighPriorityRequestDescriptor_t, MPI2_POINTER pMpi2HighPriorityRequestDescriptor_t; /* SCSI IO Request Descriptor */ typedef struct _MPI2_SCSI_IO_REQUEST_DESCRIPTOR { U8 RequestFlags; /* 0x00 */ U8 MSIxIndex; /* 0x01 */ U16 SMID; /* 0x02 */ U16 LMID; /* 0x04 */ U16 DevHandle; /* 0x06 */ } MPI2_SCSI_IO_REQUEST_DESCRIPTOR, MPI2_POINTER PTR_MPI2_SCSI_IO_REQUEST_DESCRIPTOR, Mpi2SCSIIORequestDescriptor_t, MPI2_POINTER pMpi2SCSIIORequestDescriptor_t; /* SCSI Target Request Descriptor */ typedef struct _MPI2_SCSI_TARGET_REQUEST_DESCRIPTOR { U8 RequestFlags; /* 0x00 */ U8 MSIxIndex; /* 0x01 */ U16 SMID; /* 0x02 */ U16 LMID; /* 0x04 */ U16 IoIndex; /* 0x06 */ } MPI2_SCSI_TARGET_REQUEST_DESCRIPTOR, MPI2_POINTER PTR_MPI2_SCSI_TARGET_REQUEST_DESCRIPTOR, Mpi2SCSITargetRequestDescriptor_t, MPI2_POINTER pMpi2SCSITargetRequestDescriptor_t; /* RAID Accelerator Request Descriptor */ typedef struct _MPI2_RAID_ACCEL_REQUEST_DESCRIPTOR { U8 RequestFlags; /* 0x00 */ U8 MSIxIndex; /* 0x01 */ U16 SMID; /* 0x02 */ U16 LMID; /* 0x04 */ U16 Reserved; /* 0x06 */ } MPI2_RAID_ACCEL_REQUEST_DESCRIPTOR, MPI2_POINTER PTR_MPI2_RAID_ACCEL_REQUEST_DESCRIPTOR, Mpi2RAIDAcceleratorRequestDescriptor_t, MPI2_POINTER pMpi2RAIDAcceleratorRequestDescriptor_t; /* Fast Path SCSI IO Request Descriptor */ typedef MPI2_SCSI_IO_REQUEST_DESCRIPTOR MPI25_FP_SCSI_IO_REQUEST_DESCRIPTOR, MPI2_POINTER PTR_MPI25_FP_SCSI_IO_REQUEST_DESCRIPTOR, Mpi25FastPathSCSIIORequestDescriptor_t, MPI2_POINTER pMpi25FastPathSCSIIORequestDescriptor_t; /* PCIe Encapsulated Request Descriptor */ typedef MPI2_SCSI_IO_REQUEST_DESCRIPTOR MPI26_PCIE_ENCAPSULATED_REQUEST_DESCRIPTOR, MPI2_POINTER PTR_MPI26_PCIE_ENCAPSULATED_REQUEST_DESCRIPTOR, Mpi26PCIeEncapsulatedRequestDescriptor_t, MPI2_POINTER pMpi26PCIeEncapsulatedRequestDescriptor_t; /* union of Request Descriptors */ typedef union _MPI2_REQUEST_DESCRIPTOR_UNION { MPI2_DEFAULT_REQUEST_DESCRIPTOR Default; MPI2_HIGH_PRIORITY_REQUEST_DESCRIPTOR HighPriority; 
MPI2_SCSI_IO_REQUEST_DESCRIPTOR SCSIIO; MPI2_SCSI_TARGET_REQUEST_DESCRIPTOR SCSITarget; MPI2_RAID_ACCEL_REQUEST_DESCRIPTOR RAIDAccelerator; MPI25_FP_SCSI_IO_REQUEST_DESCRIPTOR FastPathSCSIIO; MPI26_PCIE_ENCAPSULATED_REQUEST_DESCRIPTOR PCIeEncapsulated; U64 Words; } MPI2_REQUEST_DESCRIPTOR_UNION, MPI2_POINTER PTR_MPI2_REQUEST_DESCRIPTOR_UNION, Mpi2RequestDescriptorUnion_t, MPI2_POINTER pMpi2RequestDescriptorUnion_t; /* Atomic Request Descriptors */ /* * All Atomic Request Descriptors have the same format, so the following * structure is used for all Atomic Request Descriptors: * Atomic Default Request Descriptor * Atomic High Priority Request Descriptor * Atomic SCSI IO Request Descriptor * Atomic SCSI Target Request Descriptor * Atomic RAID Accelerator Request Descriptor * Atomic Fast Path SCSI IO Request Descriptor * Atomic PCIe Encapsulated Request Descriptor */ /* Atomic Request Descriptor */ typedef struct _MPI26_ATOMIC_REQUEST_DESCRIPTOR { U8 RequestFlags; /* 0x00 */ U8 MSIxIndex; /* 0x01 */ U16 SMID; /* 0x02 */ } MPI26_ATOMIC_REQUEST_DESCRIPTOR, MPI2_POINTER PTR_MPI26_ATOMIC_REQUEST_DESCRIPTOR, Mpi26AtomicRequestDescriptor_t, MPI2_POINTER pMpi26AtomicRequestDescriptor_t; /* for the RequestFlags field, use the same defines as MPI2_DEFAULT_REQUEST_DESCRIPTOR */ /* Reply Descriptors */ /* Default Reply Descriptor */ typedef struct _MPI2_DEFAULT_REPLY_DESCRIPTOR { U8 ReplyFlags; /* 0x00 */ U8 MSIxIndex; /* 0x01 */ U16 DescriptorTypeDependent1; /* 0x02 */ U32 DescriptorTypeDependent2; /* 0x04 */ } MPI2_DEFAULT_REPLY_DESCRIPTOR, MPI2_POINTER PTR_MPI2_DEFAULT_REPLY_DESCRIPTOR, Mpi2DefaultReplyDescriptor_t, MPI2_POINTER pMpi2DefaultReplyDescriptor_t; /* defines for the ReplyFlags field */ #define MPI2_RPY_DESCRIPT_FLAGS_TYPE_MASK (0x0F) #define MPI2_RPY_DESCRIPT_FLAGS_SCSI_IO_SUCCESS (0x00) #define MPI2_RPY_DESCRIPT_FLAGS_ADDRESS_REPLY (0x01) #define MPI2_RPY_DESCRIPT_FLAGS_TARGETASSIST_SUCCESS (0x02) #define MPI2_RPY_DESCRIPT_FLAGS_TARGET_COMMAND_BUFFER (0x03) 
#define MPI2_RPY_DESCRIPT_FLAGS_RAID_ACCELERATOR_SUCCESS (0x05) #define MPI25_RPY_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO_SUCCESS (0x06) #define MPI26_RPY_DESCRIPT_FLAGS_PCIE_ENCAPSULATED_SUCCESS (0x08) #define MPI2_RPY_DESCRIPT_FLAGS_UNUSED (0x0F) /* values for marking a reply descriptor as unused */ #define MPI2_RPY_DESCRIPT_UNUSED_WORD0_MARK (0xFFFFFFFF) #define MPI2_RPY_DESCRIPT_UNUSED_WORD1_MARK (0xFFFFFFFF) /* Address Reply Descriptor */ typedef struct _MPI2_ADDRESS_REPLY_DESCRIPTOR { U8 ReplyFlags; /* 0x00 */ U8 MSIxIndex; /* 0x01 */ U16 SMID; /* 0x02 */ U32 ReplyFrameAddress; /* 0x04 */ } MPI2_ADDRESS_REPLY_DESCRIPTOR, MPI2_POINTER PTR_MPI2_ADDRESS_REPLY_DESCRIPTOR, Mpi2AddressReplyDescriptor_t, MPI2_POINTER pMpi2AddressReplyDescriptor_t; #define MPI2_ADDRESS_REPLY_SMID_INVALID (0x00) /* SCSI IO Success Reply Descriptor */ typedef struct _MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR { U8 ReplyFlags; /* 0x00 */ U8 MSIxIndex; /* 0x01 */ U16 SMID; /* 0x02 */ U16 TaskTag; /* 0x04 */ U16 Reserved1; /* 0x06 */ } MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR, MPI2_POINTER PTR_MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR, Mpi2SCSIIOSuccessReplyDescriptor_t, MPI2_POINTER pMpi2SCSIIOSuccessReplyDescriptor_t; /* TargetAssist Success Reply Descriptor */ typedef struct _MPI2_TARGETASSIST_SUCCESS_REPLY_DESCRIPTOR { U8 ReplyFlags; /* 0x00 */ U8 MSIxIndex; /* 0x01 */ U16 SMID; /* 0x02 */ U8 SequenceNumber; /* 0x04 */ U8 Reserved1; /* 0x05 */ U16 IoIndex; /* 0x06 */ } MPI2_TARGETASSIST_SUCCESS_REPLY_DESCRIPTOR, MPI2_POINTER PTR_MPI2_TARGETASSIST_SUCCESS_REPLY_DESCRIPTOR, Mpi2TargetAssistSuccessReplyDescriptor_t, MPI2_POINTER pMpi2TargetAssistSuccessReplyDescriptor_t; /* Target Command Buffer Reply Descriptor */ typedef struct _MPI2_TARGET_COMMAND_BUFFER_REPLY_DESCRIPTOR { U8 ReplyFlags; /* 0x00 */ U8 MSIxIndex; /* 0x01 */ U8 VP_ID; /* 0x02 */ U8 Flags; /* 0x03 */ U16 InitiatorDevHandle; /* 0x04 */ U16 IoIndex; /* 0x06 */ } MPI2_TARGET_COMMAND_BUFFER_REPLY_DESCRIPTOR, MPI2_POINTER 
PTR_MPI2_TARGET_COMMAND_BUFFER_REPLY_DESCRIPTOR, Mpi2TargetCommandBufferReplyDescriptor_t, MPI2_POINTER pMpi2TargetCommandBufferReplyDescriptor_t; /* defines for Flags field */ #define MPI2_RPY_DESCRIPT_TCB_FLAGS_PHYNUM_MASK (0x3F) /* RAID Accelerator Success Reply Descriptor */ typedef struct _MPI2_RAID_ACCELERATOR_SUCCESS_REPLY_DESCRIPTOR { U8 ReplyFlags; /* 0x00 */ U8 MSIxIndex; /* 0x01 */ U16 SMID; /* 0x02 */ U32 Reserved; /* 0x04 */ } MPI2_RAID_ACCELERATOR_SUCCESS_REPLY_DESCRIPTOR, MPI2_POINTER PTR_MPI2_RAID_ACCELERATOR_SUCCESS_REPLY_DESCRIPTOR, Mpi2RAIDAcceleratorSuccessReplyDescriptor_t, MPI2_POINTER pMpi2RAIDAcceleratorSuccessReplyDescriptor_t; /* Fast Path SCSI IO Success Reply Descriptor */ typedef MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR MPI25_FP_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR, MPI2_POINTER PTR_MPI25_FP_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR, Mpi25FastPathSCSIIOSuccessReplyDescriptor_t, MPI2_POINTER pMpi25FastPathSCSIIOSuccessReplyDescriptor_t; /* PCIe Encapsulated Success Reply Descriptor */ typedef MPI2_RAID_ACCELERATOR_SUCCESS_REPLY_DESCRIPTOR MPI26_PCIE_ENCAPSULATED_SUCCESS_REPLY_DESCRIPTOR, MPI2_POINTER PTR_MPI26_PCIE_ENCAPSULATED_SUCCESS_REPLY_DESCRIPTOR, Mpi26PCIeEncapsulatedSuccessReplyDescriptor_t, MPI2_POINTER pMpi26PCIeEncapsulatedSuccessReplyDescriptor_t; /* union of Reply Descriptors */ typedef union _MPI2_REPLY_DESCRIPTORS_UNION { MPI2_DEFAULT_REPLY_DESCRIPTOR Default; MPI2_ADDRESS_REPLY_DESCRIPTOR AddressReply; MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR SCSIIOSuccess; MPI2_TARGETASSIST_SUCCESS_REPLY_DESCRIPTOR TargetAssistSuccess; MPI2_TARGET_COMMAND_BUFFER_REPLY_DESCRIPTOR TargetCommandBuffer; MPI2_RAID_ACCELERATOR_SUCCESS_REPLY_DESCRIPTOR RAIDAcceleratorSuccess; MPI25_FP_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR FastPathSCSIIOSuccess; MPI26_PCIE_ENCAPSULATED_SUCCESS_REPLY_DESCRIPTOR PCIeEncapsulatedSuccess; U64 Words; } MPI2_REPLY_DESCRIPTORS_UNION, MPI2_POINTER PTR_MPI2_REPLY_DESCRIPTORS_UNION, Mpi2ReplyDescriptorsUnion_t, MPI2_POINTER 
pMpi2ReplyDescriptorsUnion_t; /***************************************************************************** * * Message Functions * *****************************************************************************/ #define MPI2_FUNCTION_SCSI_IO_REQUEST (0x00) /* SCSI IO */ #define MPI2_FUNCTION_SCSI_TASK_MGMT (0x01) /* SCSI Task Management */ #define MPI2_FUNCTION_IOC_INIT (0x02) /* IOC Init */ #define MPI2_FUNCTION_IOC_FACTS (0x03) /* IOC Facts */ #define MPI2_FUNCTION_CONFIG (0x04) /* Configuration */ #define MPI2_FUNCTION_PORT_FACTS (0x05) /* Port Facts */ #define MPI2_FUNCTION_PORT_ENABLE (0x06) /* Port Enable */ #define MPI2_FUNCTION_EVENT_NOTIFICATION (0x07) /* Event Notification */ #define MPI2_FUNCTION_EVENT_ACK (0x08) /* Event Acknowledge */ #define MPI2_FUNCTION_FW_DOWNLOAD (0x09) /* FW Download */ #define MPI2_FUNCTION_TARGET_ASSIST (0x0B) /* Target Assist */ #define MPI2_FUNCTION_TARGET_STATUS_SEND (0x0C) /* Target Status Send */ #define MPI2_FUNCTION_TARGET_MODE_ABORT (0x0D) /* Target Mode Abort */ #define MPI2_FUNCTION_FW_UPLOAD (0x12) /* FW Upload */ #define MPI2_FUNCTION_RAID_ACTION (0x15) /* RAID Action */ #define MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH (0x16) /* SCSI IO RAID Passthrough */ #define MPI2_FUNCTION_TOOLBOX (0x17) /* Toolbox */ #define MPI2_FUNCTION_SCSI_ENCLOSURE_PROCESSOR (0x18) /* SCSI Enclosure Processor */ #define MPI2_FUNCTION_SMP_PASSTHROUGH (0x1A) /* SMP Passthrough */ #define MPI2_FUNCTION_SAS_IO_UNIT_CONTROL (0x1B) /* SAS IO Unit Control */ /* for MPI v2.5 and earlier */ #define MPI2_FUNCTION_IO_UNIT_CONTROL (0x1B) /* IO Unit Control */ /* for MPI v2.6 and later */ #define MPI2_FUNCTION_SATA_PASSTHROUGH (0x1C) /* SATA Passthrough */ #define MPI2_FUNCTION_DIAG_BUFFER_POST (0x1D) /* Diagnostic Buffer Post */ #define MPI2_FUNCTION_DIAG_RELEASE (0x1E) /* Diagnostic Release */ #define MPI2_FUNCTION_TARGET_CMD_BUF_BASE_POST (0x24) /* Target Command Buffer Post Base */ #define MPI2_FUNCTION_TARGET_CMD_BUF_LIST_POST (0x25) /* Target 
Command Buffer Post List */ #define MPI2_FUNCTION_RAID_ACCELERATOR (0x2C) /* RAID Accelerator */ #define MPI2_FUNCTION_HOST_BASED_DISCOVERY_ACTION (0x2F) /* Host Based Discovery Action */ #define MPI2_FUNCTION_PWR_MGMT_CONTROL (0x30) /* Power Management Control */ #define MPI2_FUNCTION_SEND_HOST_MESSAGE (0x31) /* Send Host Message */ #define MPI2_FUNCTION_NVME_ENCAPSULATED (0x33) /* NVMe Encapsulated (MPI v2.6) */ #define MPI2_FUNCTION_MIN_PRODUCT_SPECIFIC (0xF0) /* beginning of product-specific range */ #define MPI2_FUNCTION_MAX_PRODUCT_SPECIFIC (0xFF) /* end of product-specific range */ /* Doorbell functions */ #define MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET (0x40) #define MPI2_FUNCTION_HANDSHAKE (0x42) /***************************************************************************** * * IOC Status Values * *****************************************************************************/ /* mask for IOCStatus status value */ #define MPI2_IOCSTATUS_MASK (0x7FFF) /**************************************************************************** * Common IOCStatus values for all replies ****************************************************************************/ #define MPI2_IOCSTATUS_SUCCESS (0x0000) #define MPI2_IOCSTATUS_INVALID_FUNCTION (0x0001) #define MPI2_IOCSTATUS_BUSY (0x0002) #define MPI2_IOCSTATUS_INVALID_SGL (0x0003) #define MPI2_IOCSTATUS_INTERNAL_ERROR (0x0004) #define MPI2_IOCSTATUS_INVALID_VPID (0x0005) #define MPI2_IOCSTATUS_INSUFFICIENT_RESOURCES (0x0006) #define MPI2_IOCSTATUS_INVALID_FIELD (0x0007) #define MPI2_IOCSTATUS_INVALID_STATE (0x0008) #define MPI2_IOCSTATUS_OP_STATE_NOT_SUPPORTED (0x0009) #define MPI2_IOCSTATUS_INSUFFICIENT_POWER (0x000A) /* MPI v2.6 and later */ /**************************************************************************** * Config IOCStatus values ****************************************************************************/ #define MPI2_IOCSTATUS_CONFIG_INVALID_ACTION (0x0020) #define MPI2_IOCSTATUS_CONFIG_INVALID_TYPE (0x0021) 
#define MPI2_IOCSTATUS_CONFIG_INVALID_PAGE          (0x0022)
#define MPI2_IOCSTATUS_CONFIG_INVALID_DATA          (0x0023)
#define MPI2_IOCSTATUS_CONFIG_NO_DEFAULTS           (0x0024)
#define MPI2_IOCSTATUS_CONFIG_CANT_COMMIT           (0x0025)

/****************************************************************************
*  SCSI IO Reply
****************************************************************************/

#define MPI2_IOCSTATUS_SCSI_RECOVERED_ERROR         (0x0040)
#define MPI2_IOCSTATUS_SCSI_INVALID_DEVHANDLE       (0x0042)
#define MPI2_IOCSTATUS_SCSI_DEVICE_NOT_THERE        (0x0043)
#define MPI2_IOCSTATUS_SCSI_DATA_OVERRUN            (0x0044)
#define MPI2_IOCSTATUS_SCSI_DATA_UNDERRUN           (0x0045)
#define MPI2_IOCSTATUS_SCSI_IO_DATA_ERROR           (0x0046)
#define MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR          (0x0047)
#define MPI2_IOCSTATUS_SCSI_TASK_TERMINATED         (0x0048)
#define MPI2_IOCSTATUS_SCSI_RESIDUAL_MISMATCH       (0x0049)
#define MPI2_IOCSTATUS_SCSI_TASK_MGMT_FAILED        (0x004A)
#define MPI2_IOCSTATUS_SCSI_IOC_TERMINATED          (0x004B)
#define MPI2_IOCSTATUS_SCSI_EXT_TERMINATED          (0x004C)

/****************************************************************************
*  For use by SCSI Initiator and SCSI Target end-to-end data protection
****************************************************************************/

#define MPI2_IOCSTATUS_EEDP_GUARD_ERROR             (0x004D)
#define MPI2_IOCSTATUS_EEDP_REF_TAG_ERROR           (0x004E)
#define MPI2_IOCSTATUS_EEDP_APP_TAG_ERROR           (0x004F)

/****************************************************************************
*  SCSI Target values
****************************************************************************/

#define MPI2_IOCSTATUS_TARGET_INVALID_IO_INDEX      (0x0062)
#define MPI2_IOCSTATUS_TARGET_ABORTED               (0x0063)
#define MPI2_IOCSTATUS_TARGET_NO_CONN_RETRYABLE     (0x0064)
#define MPI2_IOCSTATUS_TARGET_NO_CONNECTION         (0x0065)
#define MPI2_IOCSTATUS_TARGET_XFER_COUNT_MISMATCH   (0x006A)
#define MPI2_IOCSTATUS_TARGET_DATA_OFFSET_ERROR     (0x006D)
#define MPI2_IOCSTATUS_TARGET_TOO_MUCH_WRITE_DATA   (0x006E)
#define MPI2_IOCSTATUS_TARGET_IU_TOO_SHORT          (0x006F)
#define MPI2_IOCSTATUS_TARGET_ACK_NAK_TIMEOUT       (0x0070)
#define MPI2_IOCSTATUS_TARGET_NAK_RECEIVED          (0x0071)

/****************************************************************************
*  Serial Attached SCSI values
****************************************************************************/

#define MPI2_IOCSTATUS_SAS_SMP_REQUEST_FAILED       (0x0090)
#define MPI2_IOCSTATUS_SAS_SMP_DATA_OVERRUN         (0x0091)

/****************************************************************************
*  Diagnostic Buffer Post / Diagnostic Release values
****************************************************************************/

#define MPI2_IOCSTATUS_DIAGNOSTIC_RELEASED          (0x00A0)

/****************************************************************************
*  RAID Accelerator values
****************************************************************************/

#define MPI2_IOCSTATUS_RAID_ACCEL_ERROR             (0x00B0)

/****************************************************************************
*  IOCStatus flag to indicate that log info is available
****************************************************************************/

#define MPI2_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE      (0x8000)

/****************************************************************************
*  IOCLogInfo Types
****************************************************************************/

#define MPI2_IOCLOGINFO_TYPE_MASK                   (0xF0000000)
#define MPI2_IOCLOGINFO_TYPE_SHIFT                  (28)
#define MPI2_IOCLOGINFO_TYPE_NONE                   (0x0)
#define MPI2_IOCLOGINFO_TYPE_SCSI                   (0x1)
#define MPI2_IOCLOGINFO_TYPE_FC                     (0x2)
#define MPI2_IOCLOGINFO_TYPE_SAS                    (0x3)
#define MPI2_IOCLOGINFO_TYPE_ISCSI                  (0x4)
#define MPI2_IOCLOGINFO_LOG_DATA_MASK               (0x0FFFFFFF)

/*****************************************************************************
*
*               Standard Message Structures
*
*****************************************************************************/

/****************************************************************************
* Request Message Header for all request messages
****************************************************************************/

typedef struct _MPI2_REQUEST_HEADER
{
    U16             FunctionDependent1;     /* 0x00 */
    U8              ChainOffset;            /* 0x02 */
    U8              Function;               /* 0x03 */
    U16             FunctionDependent2;     /* 0x04 */
    U8              FunctionDependent3;     /* 0x06 */
    U8              MsgFlags;               /* 0x07 */
    U8              VP_ID;                  /* 0x08 */
    U8              VF_ID;                  /* 0x09 */
    U16             Reserved1;              /* 0x0A */
} MPI2_REQUEST_HEADER, MPI2_POINTER PTR_MPI2_REQUEST_HEADER,
  MPI2RequestHeader_t, MPI2_POINTER pMPI2RequestHeader_t;

/****************************************************************************
* Default Reply
****************************************************************************/

typedef struct _MPI2_DEFAULT_REPLY
{
    U16             FunctionDependent1;     /* 0x00 */
    U8              MsgLength;              /* 0x02 */
    U8              Function;               /* 0x03 */
    U16             FunctionDependent2;     /* 0x04 */
    U8              FunctionDependent3;     /* 0x06 */
    U8              MsgFlags;               /* 0x07 */
    U8              VP_ID;                  /* 0x08 */
    U8              VF_ID;                  /* 0x09 */
    U16             Reserved1;              /* 0x0A */
    U16             FunctionDependent5;     /* 0x0C */
    U16             IOCStatus;              /* 0x0E */
    U32             IOCLogInfo;             /* 0x10 */
} MPI2_DEFAULT_REPLY, MPI2_POINTER PTR_MPI2_DEFAULT_REPLY,
  MPI2DefaultReply_t, MPI2_POINTER pMPI2DefaultReply_t;

/* common version structure/union used in messages and configuration pages */

typedef struct _MPI2_VERSION_STRUCT
{
    U8              Dev;                    /* 0x00 */
    U8              Unit;                   /* 0x01 */
    U8              Minor;                  /* 0x02 */
    U8              Major;                  /* 0x03 */
} MPI2_VERSION_STRUCT;

typedef union _MPI2_VERSION_UNION
{
    MPI2_VERSION_STRUCT     Struct;
    U32                     Word;
} MPI2_VERSION_UNION;

/* LUN field defines, common to many structures */
#define MPI2_LUN_FIRST_LEVEL_ADDRESSING     (0x0000FFFF)
#define MPI2_LUN_SECOND_LEVEL_ADDRESSING    (0xFFFF0000)
#define MPI2_LUN_THIRD_LEVEL_ADDRESSING     (0x0000FFFF)
#define MPI2_LUN_FOURTH_LEVEL_ADDRESSING    (0xFFFF0000)
#define MPI2_LUN_LEVEL_1_WORD               (0xFF00)
#define MPI2_LUN_LEVEL_1_DWORD              (0x0000FF00)

/*****************************************************************************
*
*               Fusion-MPT MPI Scatter Gather Elements
*
*****************************************************************************/

/****************************************************************************
*  MPI Simple Element structures
****************************************************************************/

typedef struct _MPI2_SGE_SIMPLE32
{
    U32                     FlagsLength;
    U32                     Address;
} MPI2_SGE_SIMPLE32, MPI2_POINTER PTR_MPI2_SGE_SIMPLE32,
  Mpi2SGESimple32_t, MPI2_POINTER pMpi2SGESimple32_t;

typedef struct _MPI2_SGE_SIMPLE64
{
    U32                     FlagsLength;
    U64                     Address;
} MPI2_SGE_SIMPLE64, MPI2_POINTER PTR_MPI2_SGE_SIMPLE64,
  Mpi2SGESimple64_t, MPI2_POINTER pMpi2SGESimple64_t;

typedef struct _MPI2_SGE_SIMPLE_UNION
{
    U32                     FlagsLength;
    union
    {
        U32                 Address32;
        U64                 Address64;
    } u;
} MPI2_SGE_SIMPLE_UNION, MPI2_POINTER PTR_MPI2_SGE_SIMPLE_UNION,
  Mpi2SGESimpleUnion_t, MPI2_POINTER pMpi2SGESimpleUnion_t;

/****************************************************************************
*  MPI Chain Element structures - for MPI v2.0 products only
****************************************************************************/

typedef struct _MPI2_SGE_CHAIN32
{
    U16                     Length;
    U8                      NextChainOffset;
    U8                      Flags;
    U32                     Address;
} MPI2_SGE_CHAIN32, MPI2_POINTER PTR_MPI2_SGE_CHAIN32,
  Mpi2SGEChain32_t, MPI2_POINTER pMpi2SGEChain32_t;

typedef struct _MPI2_SGE_CHAIN64
{
    U16                     Length;
    U8                      NextChainOffset;
    U8                      Flags;
    U64                     Address;
} MPI2_SGE_CHAIN64, MPI2_POINTER PTR_MPI2_SGE_CHAIN64,
  Mpi2SGEChain64_t, MPI2_POINTER pMpi2SGEChain64_t;

typedef struct _MPI2_SGE_CHAIN_UNION
{
    U16                     Length;
    U8                      NextChainOffset;
    U8                      Flags;
    union
    {
        U32                 Address32;
        U64                 Address64;
    } u;
} MPI2_SGE_CHAIN_UNION, MPI2_POINTER PTR_MPI2_SGE_CHAIN_UNION,
  Mpi2SGEChainUnion_t, MPI2_POINTER pMpi2SGEChainUnion_t;

/****************************************************************************
*  MPI Transaction Context Element structures - for MPI v2.0 products only
****************************************************************************/

typedef struct _MPI2_SGE_TRANSACTION32
{
    U8                      Reserved;
    U8                      ContextSize;
    U8                      DetailsLength;
    U8                      Flags;
    U32                     TransactionContext[1];
    U32                     TransactionDetails[1];
} MPI2_SGE_TRANSACTION32, MPI2_POINTER PTR_MPI2_SGE_TRANSACTION32,
  Mpi2SGETransaction32_t, MPI2_POINTER pMpi2SGETransaction32_t;

typedef struct _MPI2_SGE_TRANSACTION64
{
    U8                      Reserved;
    U8                      ContextSize;
    U8                      DetailsLength;
    U8                      Flags;
    U32                     TransactionContext[2];
    U32                     TransactionDetails[1];
} MPI2_SGE_TRANSACTION64, MPI2_POINTER PTR_MPI2_SGE_TRANSACTION64,
  Mpi2SGETransaction64_t, MPI2_POINTER pMpi2SGETransaction64_t;

typedef struct _MPI2_SGE_TRANSACTION96
{
    U8                      Reserved;
    U8                      ContextSize;
    U8                      DetailsLength;
    U8                      Flags;
    U32                     TransactionContext[3];
    U32                     TransactionDetails[1];
} MPI2_SGE_TRANSACTION96, MPI2_POINTER PTR_MPI2_SGE_TRANSACTION96,
  Mpi2SGETransaction96_t, MPI2_POINTER pMpi2SGETransaction96_t;

typedef struct _MPI2_SGE_TRANSACTION128
{
    U8                      Reserved;
    U8                      ContextSize;
    U8                      DetailsLength;
    U8                      Flags;
    U32                     TransactionContext[4];
    U32                     TransactionDetails[1];
} MPI2_SGE_TRANSACTION128, MPI2_POINTER PTR_MPI2_SGE_TRANSACTION128,
  Mpi2SGETransaction_t128, MPI2_POINTER pMpi2SGETransaction_t128;

typedef struct _MPI2_SGE_TRANSACTION_UNION
{
    U8                      Reserved;
    U8                      ContextSize;
    U8                      DetailsLength;
    U8                      Flags;
    union
    {
        U32                 TransactionContext32[1];
        U32                 TransactionContext64[2];
        U32                 TransactionContext96[3];
        U32                 TransactionContext128[4];
    } u;
    U32                     TransactionDetails[1];
} MPI2_SGE_TRANSACTION_UNION, MPI2_POINTER PTR_MPI2_SGE_TRANSACTION_UNION,
  Mpi2SGETransactionUnion_t, MPI2_POINTER pMpi2SGETransactionUnion_t;

/****************************************************************************
*  MPI SGE union for IO SGL's - for MPI v2.0 products only
****************************************************************************/

typedef struct _MPI2_MPI_SGE_IO_UNION
{
    union
    {
        MPI2_SGE_SIMPLE_UNION       Simple;
        MPI2_SGE_CHAIN_UNION        Chain;
    } u;
} MPI2_MPI_SGE_IO_UNION, MPI2_POINTER PTR_MPI2_MPI_SGE_IO_UNION,
  Mpi2MpiSGEIOUnion_t, MPI2_POINTER pMpi2MpiSGEIOUnion_t;

/****************************************************************************
*  MPI SGE union for SGL's with Simple and Transaction elements - for MPI
*  v2.0 products only
****************************************************************************/

typedef struct _MPI2_SGE_TRANS_SIMPLE_UNION
{
    union
    {
        MPI2_SGE_SIMPLE_UNION       Simple;
        MPI2_SGE_TRANSACTION_UNION  Transaction;
    } u;
} MPI2_SGE_TRANS_SIMPLE_UNION, MPI2_POINTER PTR_MPI2_SGE_TRANS_SIMPLE_UNION,
  Mpi2SGETransSimpleUnion_t, MPI2_POINTER pMpi2SGETransSimpleUnion_t;

/****************************************************************************
*  All MPI SGE types union
****************************************************************************/

typedef struct _MPI2_MPI_SGE_UNION
{
    union
    {
        MPI2_SGE_SIMPLE_UNION       Simple;
        MPI2_SGE_CHAIN_UNION        Chain;
        MPI2_SGE_TRANSACTION_UNION  Transaction;
    } u;
} MPI2_MPI_SGE_UNION, MPI2_POINTER PTR_MPI2_MPI_SGE_UNION,
  Mpi2MpiSgeUnion_t, MPI2_POINTER pMpi2MpiSgeUnion_t;

/****************************************************************************
*  MPI SGE field definition and masks
****************************************************************************/

/* Flags field bit definitions */
#define MPI2_SGE_FLAGS_LAST_ELEMENT             (0x80)
#define MPI2_SGE_FLAGS_END_OF_BUFFER            (0x40)
#define MPI2_SGE_FLAGS_ELEMENT_TYPE_MASK        (0x30)
#define MPI2_SGE_FLAGS_LOCAL_ADDRESS            (0x08)
#define MPI2_SGE_FLAGS_DIRECTION                (0x04)
#define MPI2_SGE_FLAGS_ADDRESS_SIZE             (0x02)
#define MPI2_SGE_FLAGS_END_OF_LIST              (0x01)

#define MPI2_SGE_FLAGS_SHIFT                    (24)

#define MPI2_SGE_LENGTH_MASK                    (0x00FFFFFF)
#define MPI2_SGE_CHAIN_LENGTH_MASK              (0x0000FFFF)

/* Element Type */
#define MPI2_SGE_FLAGS_TRANSACTION_ELEMENT      (0x00)  /* for MPI v2.0 products only */
#define MPI2_SGE_FLAGS_SIMPLE_ELEMENT           (0x10)
#define MPI2_SGE_FLAGS_CHAIN_ELEMENT            (0x30)  /* for MPI v2.0 products only */
#define MPI2_SGE_FLAGS_ELEMENT_MASK             (0x30)

/* Address location */
#define MPI2_SGE_FLAGS_SYSTEM_ADDRESS           (0x00)

/* Direction */
#define MPI2_SGE_FLAGS_IOC_TO_HOST              (0x00)
#define MPI2_SGE_FLAGS_HOST_TO_IOC              (0x04)

#define MPI2_SGE_FLAGS_DEST                     (MPI2_SGE_FLAGS_IOC_TO_HOST)
#define MPI2_SGE_FLAGS_SOURCE                   (MPI2_SGE_FLAGS_HOST_TO_IOC)

/* Address Size */
#define MPI2_SGE_FLAGS_32_BIT_ADDRESSING        (0x00)
#define MPI2_SGE_FLAGS_64_BIT_ADDRESSING        (0x02)

/* Context Size */
#define MPI2_SGE_FLAGS_32_BIT_CONTEXT           (0x00)
#define MPI2_SGE_FLAGS_64_BIT_CONTEXT           (0x02)
#define MPI2_SGE_FLAGS_96_BIT_CONTEXT           (0x04)
#define MPI2_SGE_FLAGS_128_BIT_CONTEXT          (0x06)

#define MPI2_SGE_CHAIN_OFFSET_MASK              (0x00FF0000)
#define MPI2_SGE_CHAIN_OFFSET_SHIFT             (16)

/****************************************************************************
*  MPI SGE operation Macros
****************************************************************************/

/* SIMPLE FlagsLength manipulations... */
#define MPI2_SGE_SET_FLAGS(f)           ((U32)(f) << MPI2_SGE_FLAGS_SHIFT)
#define MPI2_SGE_GET_FLAGS(f)           (((f) & ~MPI2_SGE_LENGTH_MASK) >> MPI2_SGE_FLAGS_SHIFT)
#define MPI2_SGE_LENGTH(f)              ((f) & MPI2_SGE_LENGTH_MASK)
#define MPI2_SGE_CHAIN_LENGTH(f)        ((f) & MPI2_SGE_CHAIN_LENGTH_MASK)

#define MPI2_SGE_SET_FLAGS_LENGTH(f,l)  (MPI2_SGE_SET_FLAGS(f) | MPI2_SGE_LENGTH(l))

#define MPI2_pSGE_GET_FLAGS(psg)        MPI2_SGE_GET_FLAGS((psg)->FlagsLength)
#define MPI2_pSGE_GET_LENGTH(psg)       MPI2_SGE_LENGTH((psg)->FlagsLength)
#define MPI2_pSGE_SET_FLAGS_LENGTH(psg,f,l)  (psg)->FlagsLength = MPI2_SGE_SET_FLAGS_LENGTH(f,l)

/* CAUTION - The following are READ-MODIFY-WRITE!
 */
#define MPI2_pSGE_SET_FLAGS(psg,f)      (psg)->FlagsLength |= MPI2_SGE_SET_FLAGS(f)
#define MPI2_pSGE_SET_LENGTH(psg,l)     (psg)->FlagsLength |= MPI2_SGE_LENGTH(l)

#define MPI2_GET_CHAIN_OFFSET(x)  ((x & MPI2_SGE_CHAIN_OFFSET_MASK) >> MPI2_SGE_CHAIN_OFFSET_SHIFT)

/*****************************************************************************
*
*               Fusion-MPT IEEE Scatter Gather Elements
*
*****************************************************************************/

/****************************************************************************
*  IEEE Simple Element structures
****************************************************************************/

/* MPI2_IEEE_SGE_SIMPLE32 is for MPI v2.0 products only */
typedef struct _MPI2_IEEE_SGE_SIMPLE32
{
    U32                     Address;
    U32                     FlagsLength;
} MPI2_IEEE_SGE_SIMPLE32, MPI2_POINTER PTR_MPI2_IEEE_SGE_SIMPLE32,
  Mpi2IeeeSgeSimple32_t, MPI2_POINTER pMpi2IeeeSgeSimple32_t;

typedef struct _MPI2_IEEE_SGE_SIMPLE64
{
    U64                     Address;
    U32                     Length;
    U16                     Reserved1;
    U8                      Reserved2;
    U8                      Flags;
} MPI2_IEEE_SGE_SIMPLE64, MPI2_POINTER PTR_MPI2_IEEE_SGE_SIMPLE64,
  Mpi2IeeeSgeSimple64_t, MPI2_POINTER pMpi2IeeeSgeSimple64_t;

typedef union _MPI2_IEEE_SGE_SIMPLE_UNION
{
    MPI2_IEEE_SGE_SIMPLE32  Simple32;
    MPI2_IEEE_SGE_SIMPLE64  Simple64;
} MPI2_IEEE_SGE_SIMPLE_UNION, MPI2_POINTER PTR_MPI2_IEEE_SGE_SIMPLE_UNION,
  Mpi2IeeeSgeSimpleUnion_t, MPI2_POINTER pMpi2IeeeSgeSimpleUnion_t;

/****************************************************************************
*  IEEE Chain Element structures
****************************************************************************/

/* MPI2_IEEE_SGE_CHAIN32 is for MPI v2.0 products only */
typedef MPI2_IEEE_SGE_SIMPLE32  MPI2_IEEE_SGE_CHAIN32;

/* MPI2_IEEE_SGE_CHAIN64 is for MPI v2.0 products only */
typedef MPI2_IEEE_SGE_SIMPLE64  MPI2_IEEE_SGE_CHAIN64;

typedef union _MPI2_IEEE_SGE_CHAIN_UNION
{
    MPI2_IEEE_SGE_CHAIN32   Chain32;
    MPI2_IEEE_SGE_CHAIN64   Chain64;
} MPI2_IEEE_SGE_CHAIN_UNION, MPI2_POINTER PTR_MPI2_IEEE_SGE_CHAIN_UNION,
  Mpi2IeeeSgeChainUnion_t, MPI2_POINTER pMpi2IeeeSgeChainUnion_t;

/* MPI25_IEEE_SGE_CHAIN64 is for MPI v2.5 and later */
typedef struct _MPI25_IEEE_SGE_CHAIN64
{
    U64                     Address;
    U32                     Length;
    U16                     Reserved1;
    U8                      NextChainOffset;
    U8                      Flags;
} MPI25_IEEE_SGE_CHAIN64, MPI2_POINTER PTR_MPI25_IEEE_SGE_CHAIN64,
  Mpi25IeeeSgeChain64_t, MPI2_POINTER pMpi25IeeeSgeChain64_t;

/****************************************************************************
*  All IEEE SGE types union
****************************************************************************/

/* MPI2_IEEE_SGE_UNION is for MPI v2.0 products only */
typedef struct _MPI2_IEEE_SGE_UNION
{
    union
    {
        MPI2_IEEE_SGE_SIMPLE_UNION  Simple;
        MPI2_IEEE_SGE_CHAIN_UNION   Chain;
    } u;
} MPI2_IEEE_SGE_UNION, MPI2_POINTER PTR_MPI2_IEEE_SGE_UNION,
  Mpi2IeeeSgeUnion_t, MPI2_POINTER pMpi2IeeeSgeUnion_t;

/****************************************************************************
*  IEEE SGE union for IO SGL's
****************************************************************************/

typedef union _MPI25_SGE_IO_UNION
{
    MPI2_IEEE_SGE_SIMPLE64  IeeeSimple;
    MPI25_IEEE_SGE_CHAIN64  IeeeChain;
} MPI25_SGE_IO_UNION, MPI2_POINTER PTR_MPI25_SGE_IO_UNION,
  Mpi25SGEIOUnion_t, MPI2_POINTER pMpi25SGEIOUnion_t;

/****************************************************************************
*  IEEE SGE field definitions and masks
****************************************************************************/

/* Flags field bit definitions */
#define MPI2_IEEE_SGE_FLAGS_ELEMENT_TYPE_MASK   (0x80)
#define MPI25_IEEE_SGE_FLAGS_END_OF_LIST        (0x40)

#define MPI2_IEEE32_SGE_FLAGS_SHIFT             (24)

#define MPI2_IEEE32_SGE_LENGTH_MASK             (0x00FFFFFF)

/* Element Type */
#define MPI2_IEEE_SGE_FLAGS_SIMPLE_ELEMENT      (0x00)
#define MPI2_IEEE_SGE_FLAGS_CHAIN_ELEMENT       (0x80)

/* Next Segment Format */
#define MPI26_IEEE_SGE_FLAGS_NSF_MASK           (0x1C)
#define MPI26_IEEE_SGE_FLAGS_NSF_MPI_IEEE       (0x00)
#define MPI26_IEEE_SGE_FLAGS_NSF_NVME_PRP       (0x08)
#define MPI26_IEEE_SGE_FLAGS_NSF_NVME_SGL       (0x10)

/* Data Location Address Space */
#define MPI2_IEEE_SGE_FLAGS_ADDR_MASK           (0x03)
#define MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR         (0x00) /* for MPI v2.0, use in IEEE Simple Element only; for MPI v2.5 and later, use in IEEE Simple or Chain element */
#define MPI2_IEEE_SGE_FLAGS_IOCDDR_ADDR         (0x01) /* use in IEEE Simple Element only */
#define MPI2_IEEE_SGE_FLAGS_IOCPLB_ADDR         (0x02)
#define MPI2_IEEE_SGE_FLAGS_IOCPLBNTA_ADDR      (0x03) /* for MPI v2.0, use in IEEE Simple Element only; for MPI v2.5, use in IEEE Simple or Chain element */
#define MPI2_IEEE_SGE_FLAGS_SYSTEMPLBPCI_ADDR   (0x03) /* use in MPI v2.0 IEEE Chain Element only */
#define MPI2_IEEE_SGE_FLAGS_SYSTEMPLBCPI_ADDR   (MPI2_IEEE_SGE_FLAGS_SYSTEMPLBPCI_ADDR) /* typo in name */

#define MPI26_IEEE_SGE_FLAGS_IOCCTL_ADDR        (0x02) /* for MPI v2.6 only */

/****************************************************************************
*  IEEE SGE operation Macros
****************************************************************************/

/* SIMPLE FlagsLength manipulations... */
#define MPI2_IEEE32_SGE_SET_FLAGS(f)    ((U32)(f) << MPI2_IEEE32_SGE_FLAGS_SHIFT)
#define MPI2_IEEE32_SGE_GET_FLAGS(f)    (((f) & ~MPI2_IEEE32_SGE_LENGTH_MASK) >> MPI2_IEEE32_SGE_FLAGS_SHIFT)
#define MPI2_IEEE32_SGE_LENGTH(f)       ((f) & MPI2_IEEE32_SGE_LENGTH_MASK)

#define MPI2_IEEE32_SGE_SET_FLAGS_LENGTH(f, l)  (MPI2_IEEE32_SGE_SET_FLAGS(f) | MPI2_IEEE32_SGE_LENGTH(l))

#define MPI2_IEEE32_pSGE_GET_FLAGS(psg)   MPI2_IEEE32_SGE_GET_FLAGS((psg)->FlagsLength)
#define MPI2_IEEE32_pSGE_GET_LENGTH(psg)  MPI2_IEEE32_SGE_LENGTH((psg)->FlagsLength)
#define MPI2_IEEE32_pSGE_SET_FLAGS_LENGTH(psg,f,l)  (psg)->FlagsLength = MPI2_IEEE32_SGE_SET_FLAGS_LENGTH(f,l)

/* CAUTION - The following are READ-MODIFY-WRITE!
 */
#define MPI2_IEEE32_pSGE_SET_FLAGS(psg,f)   (psg)->FlagsLength |= MPI2_IEEE32_SGE_SET_FLAGS(f)
#define MPI2_IEEE32_pSGE_SET_LENGTH(psg,l)  (psg)->FlagsLength |= MPI2_IEEE32_SGE_LENGTH(l)

/*****************************************************************************
*
*               Fusion-MPT MPI/IEEE Scatter Gather Unions
*
*****************************************************************************/

typedef union _MPI2_SIMPLE_SGE_UNION
{
    MPI2_SGE_SIMPLE_UNION       MpiSimple;
    MPI2_IEEE_SGE_SIMPLE_UNION  IeeeSimple;
} MPI2_SIMPLE_SGE_UNION, MPI2_POINTER PTR_MPI2_SIMPLE_SGE_UNION,
  Mpi2SimpleSgeUntion_t, MPI2_POINTER pMpi2SimpleSgeUntion_t;

typedef union _MPI2_SGE_IO_UNION
{
    MPI2_SGE_SIMPLE_UNION       MpiSimple;
    MPI2_SGE_CHAIN_UNION        MpiChain;
    MPI2_IEEE_SGE_SIMPLE_UNION  IeeeSimple;
    MPI2_IEEE_SGE_CHAIN_UNION   IeeeChain;
} MPI2_SGE_IO_UNION, MPI2_POINTER PTR_MPI2_SGE_IO_UNION,
  Mpi2SGEIOUnion_t, MPI2_POINTER pMpi2SGEIOUnion_t;

/****************************************************************************
*
*  Values for SGLFlags field, used in many request messages with an SGL
*
****************************************************************************/

/* values for MPI SGL Data Location Address Space subfield */
#define MPI2_SGLFLAGS_ADDRESS_SPACE_MASK        (0x0C)
#define MPI2_SGLFLAGS_SYSTEM_ADDRESS_SPACE      (0x00)
#define MPI2_SGLFLAGS_IOCDDR_ADDRESS_SPACE      (0x04)
#define MPI2_SGLFLAGS_IOCPLB_ADDRESS_SPACE      (0x08)  /* only for MPI v2.5 and earlier */
#define MPI26_SGLFLAGS_IOCPLB_ADDRESS_SPACE     (0x08)  /* only for MPI v2.6 */
#define MPI2_SGLFLAGS_IOCPLBNTA_ADDRESS_SPACE   (0x0C)  /* only for MPI v2.5 and earlier */
/* values for SGL Type subfield */
#define MPI2_SGLFLAGS_SGL_TYPE_MASK             (0x03)
#define MPI2_SGLFLAGS_SGL_TYPE_MPI              (0x00)
#define MPI2_SGLFLAGS_SGL_TYPE_IEEE32           (0x01)  /* MPI v2.0 products only */
#define MPI2_SGLFLAGS_SGL_TYPE_IEEE64           (0x02)

#endif

Index: releng/12.1/sys/dev/mpr/mpi/mpi2_cnfg.h
===================================================================
--- releng/12.1/sys/dev/mpr/mpi/mpi2_cnfg.h	(revision 352760)
+++ releng/12.1/sys/dev/mpr/mpi/mpi2_cnfg.h	(revision 352761)
@@ -1,3838 +1,3898 @@
 /*-
- * Copyright (c) 2012-2015 LSI Corp.
- * Copyright (c) 2013-2016 Avago Technologies
- * All rights reserved.
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
  * 1. Redistributions of source code must retain the above copyright
  *    notice, this list of conditions and the following disclaimer.
  * 2. Redistributions in binary form must reproduce the above copyright
  *    notice, this list of conditions and the following disclaimer in the
  *    documentation and/or other materials provided with the distribution.
  * 3. Neither the name of the author nor the names of any co-contributors
  *    may be used to endorse or promote products derived from this software
  *    without specific prior written permission.
  *
  * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
  * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
  * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
  * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
  * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
  * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
  * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
  * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
  * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
  * SUCH DAMAGE.
  *
- * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD
+ * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD
  *
  * $FreeBSD$
  */
 
 /*
- * Copyright (c) 2000-2015 LSI Corporation.
- * Copyright (c) 2013-2016 Avago Technologies
- * All rights reserved.
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved.
  *
  *
  *          Name:  mpi2_cnfg.h
  *         Title:  MPI Configuration messages and pages
  * Creation Date:  November 10, 2006
  *
- *   mpi2_cnfg.h Version:  02.00.40
+ *   mpi2_cnfg.h Version:  02.00.45
  *
  * NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25
  *       prefix are for use only on MPI v2.5 products, and must not be used
  *       with MPI v2.0 products. Unless otherwise noted, names beginning with
  *       MPI2 or Mpi2 are for use with both MPI v2.0 and MPI v2.5 products.
  *
  * Version History
  * ---------------
  *
  * Date      Version   Description
  * --------  --------  ------------------------------------------------------
  * 04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
  * 06-04-07  02.00.01  Added defines for SAS IO Unit Page 2 PhyFlags.
  *                     Added Manufacturing Page 11.
  *                     Added MPI2_SAS_EXPANDER0_FLAGS_CONNECTOR_END_DEVICE
  *                     define.
  * 06-26-07  02.00.02  Adding generic structure for product-specific
  *                     Manufacturing pages: MPI2_CONFIG_PAGE_MANUFACTURING_PS.
  *                     Rework of BIOS Page 2 configuration page.
  *                     Fixed MPI2_BIOSPAGE2_BOOT_DEVICE to be a union of the
  *                     forms.
  *                     Added configuration pages IOC Page 8 and Driver
  *                     Persistent Mapping Page 0.
  * 08-31-07  02.00.03  Modified configuration pages dealing with Integrated
  *                     RAID (Manufacturing Page 4, RAID Volume Pages 0 and 1,
  *                     RAID Physical Disk Pages 0 and 1, RAID Configuration
  *                     Page 0).
  *                     Added new value for AccessStatus field of SAS Device
  *                     Page 0 (_SATA_NEEDS_INITIALIZATION).
  * 10-31-07  02.00.04  Added missing SEPDevHandle field to
  *                     MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0.
  * 12-18-07  02.00.05  Modified IO Unit Page 0 to use 32-bit version fields for
  *                     NVDATA.
  *                     Modified IOC Page 7 to use masks and added field for
  *                     SASBroadcastPrimitiveMasks.
  *                     Added MPI2_CONFIG_PAGE_BIOS_4.
  *                     Added MPI2_CONFIG_PAGE_LOG_0.
  * 02-29-08  02.00.06  Modified various names to make them 32-character unique.
  *                     Added SAS Device IDs.
  *                     Updated Integrated RAID configuration pages including
  *                     Manufacturing Page 4, IOC Page 6, and RAID Configuration
  *                     Page 0.
  * 05-21-08  02.00.07  Added define MPI2_MANPAGE4_MIX_SSD_SAS_SATA.
  *                     Added define MPI2_MANPAGE4_PHYSDISK_128MB_COERCION.
  *                     Fixed define MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING.
  *                     Added missing MaxNumRoutedSasAddresses field to
  *                     MPI2_CONFIG_PAGE_EXPANDER_0.
  *                     Added SAS Port Page 0.
  *                     Modified structure layout for
  *                     MPI2_CONFIG_PAGE_DRIVER_MAPPING_0.
  * 06-27-08  02.00.08  Changed MPI2_CONFIG_PAGE_RD_PDISK_1 to use
  *                     MPI2_RAID_PHYS_DISK1_PATH_MAX to size the array.
  * 10-02-08  02.00.09  Changed MPI2_RAID_PGAD_CONFIGNUM_MASK from 0x0000FFFF
  *                     to 0x000000FF.
  *                     Added two new values for the Physical Disk Coercion Size
  *                     bits in the Flags field of Manufacturing Page 4.
  *                     Added product-specific Manufacturing pages 16 to 31.
  *                     Modified Flags bits for controlling write cache on SATA
  *                     drives in IO Unit Page 1.
  *                     Added new bit to AdditionalControlFlags of SAS IO Unit
  *                     Page 1 to control Invalid Topology Correction.
  *                     Added additional defines for RAID Volume Page 0
  *                     VolumeStatusFlags field.
  *                     Modified meaning of RAID Volume Page 0 VolumeSettings
  *                     define for auto-configure of hot-swap drives.
  *                     Added SupportedPhysDisks field to RAID Volume Page 1 and
  *                     added related defines.
  *                     Added PhysDiskAttributes field (and related defines) to
  *                     RAID Physical Disk Page 0.
  *                     Added MPI2_SAS_PHYINFO_PHY_VACANT define.
  *                     Added three new DiscoveryStatus bits for SAS IO Unit
  *                     Page 0 and SAS Expander Page 0.
  *                     Removed multiplexing information from SAS IO Unit pages.
  *                     Added BootDeviceWaitTime field to SAS IO Unit Page 4.
  *                     Removed Zone Address Resolved bit from PhyInfo and from
  *                     Expander Page 0 Flags field.
  *                     Added two new AccessStatus values to SAS Device Page 0
  *                     for indicating routing problems. Added 3 reserved words
  *                     to this page.
  * 01-19-09  02.00.10  Fixed defines for GPIOVal field of IO Unit Page 3.
  *                     Inserted missing reserved field into structure for IOC
  *                     Page 6.
  *                     Added more pending task bits to RAID Volume Page 0
  *                     VolumeStatusFlags defines.
  *                     Added MPI2_PHYSDISK0_STATUS_FLAG_NOT_CERTIFIED define.
  *                     Added a new DiscoveryStatus bit for SAS IO Unit Page 0
  *                     and SAS Expander Page 0 to flag a downstream initiator
  *                     when in simplified routing mode.
  *                     Removed SATA Init Failure defines for DiscoveryStatus
  *                     fields of SAS IO Unit Page 0 and SAS Expander Page 0.
  *                     Added MPI2_SAS_DEVICE0_ASTATUS_DEVICE_BLOCKED define.
  *                     Added PortGroups, DmaGroup, and ControlGroup fields to
  *                     SAS Device Page 0.
  * 05-06-09  02.00.11  Added structures and defines for IO Unit Page 5 and IO
  *                     Unit Page 6.
  *                     Added expander reduced functionality data to SAS
  *                     Expander Page 0.
  *                     Added SAS PHY Page 2 and SAS PHY Page 3.
  * 07-30-09  02.00.12  Added IO Unit Page 7.
  *                     Added new device ids.
  *                     Added SAS IO Unit Page 5.
  *                     Added partial and slumber power management capable flags
  *                     to SAS Device Page 0 Flags field.
  *                     Added PhyInfo defines for power condition.
  *                     Added Ethernet configuration pages.
  * 10-28-09  02.00.13  Added MPI2_IOUNITPAGE1_ENABLE_HOST_BASED_DISCOVERY.
  *                     Added SAS PHY Page 4 structure and defines.
  * 02-10-10  02.00.14  Modified the comments for the configuration page
  *                     structures that contain an array of data. The host
  *                     should use the "count" field in the page data (e.g. the
  *                     NumPhys field) to determine the number of valid elements
  *                     in the array.
  *                     Added/modified some MPI2_MFGPAGE_DEVID_SAS defines.
  *                     Added PowerManagementCapabilities to IO Unit Page 7.
  *                     Added PortWidthModGroup field to
  *                     MPI2_SAS_IO_UNIT5_PHY_PM_SETTINGS.
  *                     Added MPI2_CONFIG_PAGE_SASIOUNIT_6 and related defines.
  *                     Added MPI2_CONFIG_PAGE_SASIOUNIT_7 and related defines.
  *                     Added MPI2_CONFIG_PAGE_SASIOUNIT_8 and related defines.
  * 05-12-10  02.00.15  Added MPI2_RAIDVOL0_STATUS_FLAG_VOL_NOT_CONSISTENT
  *                     define.
  *                     Added MPI2_PHYSDISK0_INCOMPATIBLE_MEDIA_TYPE define.
  *                     Added MPI2_SAS_NEG_LINK_RATE_UNSUPPORTED_PHY define.
  * 08-11-10  02.00.16  Removed IO Unit Page 1 device path (multi-pathing)
  *                     defines.
  * 11-10-10  02.00.17  Added ReceptacleID field (replacing Reserved1) to
  *                     MPI2_MANPAGE7_CONNECTOR_INFO and reworked defines for
  *                     the Pinout field.
  *                     Added BoardTemperature and BoardTemperatureUnits fields
  *                     to MPI2_CONFIG_PAGE_IO_UNIT_7.
  *                     Added MPI2_CONFIG_EXTPAGETYPE_EXT_MANUFACTURING define
  *                     and MPI2_CONFIG_PAGE_EXT_MAN_PS structure.
  * 02-23-11  02.00.18  Added ProxyVF_ID field to MPI2_CONFIG_REQUEST.
  *                     Added IO Unit Page 8, IO Unit Page 9,
  *                     and IO Unit Page 10.
  *                     Added SASNotifyPrimitiveMasks field to
  *                     MPI2_CONFIG_PAGE_IOC_7.
  * 03-09-11  02.00.19  Fixed IO Unit Page 10 (to match the spec).
  * 05-25-11  02.00.20  Cleaned up a few comments.
  * 08-24-11  02.00.21  Marked the IO Unit Page 7 PowerManagementCapabilities
  *                     for PCIe link as obsolete.
  *                     Added SpinupFlags field containing a Disable Spin-up bit
  *                     to the MPI2_SAS_IOUNIT4_SPINUP_GROUP fields of SAS IO
  *                     Unit Page 4.
  * 11-18-11  02.00.22  Added define MPI2_IOCPAGE6_CAP_FLAGS_4K_SECTORS_SUPPORT.
  *                     Added UEFIVersion field to BIOS Page 1 and defined new
  *                     BiosOptions bits.
  *                     Incorporating additions for MPI v2.5.
  * 11-27-12  02.00.23  Added MPI2_MANPAGE7_FLAG_EVENTREPLAY_SLOT_ORDER.
  *                     Added MPI2_BIOSPAGE1_OPTIONS_MASK_OEM_ID.
  * 12-20-12  02.00.24  Marked MPI2_SASIOUNIT1_CONTROL_CLEAR_AFFILIATION as
  *                     obsolete for MPI v2.5 and later.
  *                     Added some defines for 12G SAS speeds.
  * 04-09-13  02.00.25  Added MPI2_IOUNITPAGE1_ATA_SECURITY_FREEZE_LOCK.
  *                     Fixed MPI2_IOUNITPAGE5_DMA_CAP_MASK_MAX_REQUESTS to
  *                     match the specification.
  * 08-19-13  02.00.26  Added reserved words to MPI2_CONFIG_PAGE_IO_UNIT_7 for
  *                     future use.
  * 12-05-13  02.00.27  Added MPI2_MANPAGE7_FLAG_BASE_ENCLOSURE_LEVEL for
  *                     MPI2_CONFIG_PAGE_MAN_7.
  *                     Added EnclosureLevel and ConnectorName fields to
  *                     MPI2_CONFIG_PAGE_SAS_DEV_0.
  *                     Added MPI2_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID for
  *                     MPI2_CONFIG_PAGE_SAS_DEV_0.
  *                     Added EnclosureLevel field to
  *                     MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0.
  *                     Added MPI2_SAS_ENCLS0_FLAGS_ENCL_LEVEL_VALID for
  *                     MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0.
  * 01-08-14  02.00.28  Added more defines for the BiosOptions field of
  *                     MPI2_CONFIG_PAGE_BIOS_1.
  * 06-13-14  02.00.29  Added SSUTimeout field to MPI2_CONFIG_PAGE_BIOS_1, and
  *                     more defines for the BiosOptions field.
  * 11-18-14  02.00.30  Updated copyright information.
  *                     Added MPI2_BIOSPAGE1_OPTIONS_ADVANCED_CONFIG.
  *                     Added AdapterOrderAux fields to BIOS Page 3.
  * 03-16-15  02.00.31  Updated for MPI v2.6.
  *                     Added BoardPowerRequirement, PCISlotPowerAllocation, and
  *                     Flags field to IO Unit Page 7.
  *                     Added IO Unit Page 11.
  *                     Added new SAS Phy Event codes.
  *                     Added PCIe configuration pages.
  * 03-19-15  02.00.32  Fixed PCIe Link Config page structure names to be
  *                     unique in first 32 characters.
  * 05-25-15  02.00.33  Added more defines for the BiosOptions field of
  *                     MPI2_CONFIG_PAGE_BIOS_1.
  * 08-25-15  02.00.34  Added PCIe Device Page 2 SGL format capability.
  * 12-18-15  02.00.35  Added SATADeviceWaitTime to SAS IO Unit Page 4.
  * 01-21-16  02.00.36  Added/modified MPI2_MFGPAGE_DEVID_SAS defines.
  *                     Added Link field to PCIe Link Pages.
  *                     Added EnclosureLevel and ConnectorName to PCIe
  *                     Device Page 0.
  *                     Added define for PCIE IoUnit page 1 max rate shift.
  *                     Added comment for reserved ExtPageTypes.
  *                     Added SAS 4 22.5 Gb/s speed support.
  *                     Added PCIe 4 16.0 GT/sec speed support.
  *                     Removed AHCI support.
  *                     Removed SOP support.
  *                     Added NegotiatedLinkRate and NegotiatedPortWidth to
  *                     PCIe device page 0.
  * 04-10-16  02.00.37  Fixed MPI2_MFGPAGE_DEVID_SAS3616/3708 defines.
  * 07-01-16  02.00.38  Added Manufacturing page 7 Connector types.
  *                     Changed declaration of ConnectorName in PCIe DevicePage0
  *                     to match SAS DevicePage 0.
  *                     Added SATADeviceWaitTime to IO Unit Page 11.
  *                     Added MPI26_MFGPAGE_DEVID_SAS4008.
  *                     Added x16 PCIe width to IO Unit Page 7.
  *                     Added LINKFLAGS to control SRIS in PCIe IO Unit page 1
  *                     phy data.
  *                     Added InitStatus to PCIe IO Unit Page 1 header.
  * 09-01-16  02.00.39  Added MPI26_CONFIG_PAGE_ENCLOSURE_0 and related defines.
  *                     Added MPI26_ENCLOS_PGAD_FORM_GET_NEXT_HANDLE and
  *                     MPI26_ENCLOS_PGAD_FORM_HANDLE page address formats.
  * 02-02-17  02.00.40  Added MPI2_MANPAGE7_SLOT_UNKNOWN.
  *                     Added ChassisSlot field to SAS Enclosure Page 0.
  *                     Added ChassisSlot Valid bit (bit 5) to the Flags field
  *                     in SAS Enclosure Page 0.
+ * 06-13-17  02.00.41  Added MPI26_MFGPAGE_DEVID_SAS3816 and
+ *                     MPI26_MFGPAGE_DEVID_SAS3916 defines.
+ *                     Removed MPI26_MFGPAGE_DEVID_SAS4008 define.
+ *                     Added MPI26_PCIEIOUNIT1_LINKFLAGS_SRNS_EN define.
+ *                     Renamed PI26_PCIEIOUNIT1_LINKFLAGS_EN_SRIS to
+ *                     PI26_PCIEIOUNIT1_LINKFLAGS_SRIS_EN.
+ *                     Renamed MPI26_PCIEIOUNIT1_LINKFLAGS_DIS_SRIS to
+ *                     MPI26_PCIEIOUNIT1_LINKFLAGS_DIS_SEPARATE_REFCLK.
+ * 09-29-17  02.00.42  Added ControllerResetTO field to PCIe Device Page 2.
+ *                     Added NOIOB field to PCIe Device Page 2.
+ *                     Added MPI26_PCIEDEV2_CAP_DATA_BLK_ALIGN_AND_GRAN to
+ *                     the Capabilities field of PCIe Device Page 2.
+ * 07-22-18  02.00.43  Added defines for SAS3916 and SAS3816.
+ *                     Added WriteCache defines to IO Unit Page 1.
+ *                     Added MaxEnclosureLevel to BIOS Page 1.
+ *                     Added OEMRD to SAS Enclosure Page 1.
+ *                     Added DMDReportPCIe to PCIe IO Unit Page 1.
+ *                     Added Flags field and flags for Retimers to
+ *                     PCIe Switch Page 1.
+ * 08-02-18  02.00.44  Added Slotx2, Slotx4 to ManPage 7.
+ * 08-15-18  02.00.45  Added ProductSpecific field at end of IOC Page 1.
  * --------------------------------------------------------------------------
  */
 
 #ifndef MPI2_CNFG_H
 #define MPI2_CNFG_H

/*****************************************************************************
*   Configuration Page Header and defines
*****************************************************************************/

/* Config Page Header */
typedef struct _MPI2_CONFIG_PAGE_HEADER
{
    U8                 PageVersion;         /* 0x00 */
    U8                 PageLength;          /* 0x01 */
    U8                 PageNumber;          /* 0x02 */
    U8                 PageType;            /* 0x03 */
} MPI2_CONFIG_PAGE_HEADER, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_HEADER,
  Mpi2ConfigPageHeader_t, MPI2_POINTER pMpi2ConfigPageHeader_t;

typedef union _MPI2_CONFIG_PAGE_HEADER_UNION
{
    MPI2_CONFIG_PAGE_HEADER  Struct;
    U8                       Bytes[4];
    U16                      Word16[2];
    U32                      Word32;
} MPI2_CONFIG_PAGE_HEADER_UNION, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_HEADER_UNION,
  Mpi2ConfigPageHeaderUnion, MPI2_POINTER pMpi2ConfigPageHeaderUnion;

/* Extended Config Page Header */
typedef struct _MPI2_CONFIG_EXTENDED_PAGE_HEADER
{
    U8                  PageVersion;        /* 0x00 */
    U8                  Reserved1;          /* 0x01 */
    U8                  PageNumber;         /* 0x02 */
    U8                  PageType;           /* 0x03 */
    U16                 ExtPageLength;      /* 0x04 */
    U8                  ExtPageType;        /* 0x06 */
    U8                  Reserved2;          /* 0x07 */
} MPI2_CONFIG_EXTENDED_PAGE_HEADER,
  MPI2_POINTER PTR_MPI2_CONFIG_EXTENDED_PAGE_HEADER,
  Mpi2ConfigExtendedPageHeader_t, MPI2_POINTER pMpi2ConfigExtendedPageHeader_t;

typedef union _MPI2_CONFIG_EXT_PAGE_HEADER_UNION
{
    MPI2_CONFIG_PAGE_HEADER           Struct;
    MPI2_CONFIG_EXTENDED_PAGE_HEADER  Ext;
    U8                                Bytes[8];
    U16                               Word16[4];
    U32                               Word32[2];
} MPI2_CONFIG_EXT_PAGE_HEADER_UNION,
  MPI2_POINTER PTR_MPI2_CONFIG_EXT_PAGE_HEADER_UNION,
  Mpi2ConfigPageExtendedHeaderUnion, MPI2_POINTER pMpi2ConfigPageExtendedHeaderUnion;

/* PageType field values */
#define MPI2_CONFIG_PAGEATTR_READ_ONLY              (0x00)
#define MPI2_CONFIG_PAGEATTR_CHANGEABLE             (0x10)
#define MPI2_CONFIG_PAGEATTR_PERSISTENT             (0x20)
#define MPI2_CONFIG_PAGEATTR_MASK                   (0xF0)

#define MPI2_CONFIG_PAGETYPE_IO_UNIT                (0x00)
#define MPI2_CONFIG_PAGETYPE_IOC                    (0x01)
#define MPI2_CONFIG_PAGETYPE_BIOS                   (0x02)
#define MPI2_CONFIG_PAGETYPE_RAID_VOLUME            (0x08)
#define MPI2_CONFIG_PAGETYPE_MANUFACTURING          (0x09)
#define MPI2_CONFIG_PAGETYPE_RAID_PHYSDISK          (0x0A)
#define MPI2_CONFIG_PAGETYPE_EXTENDED               (0x0F)
#define MPI2_CONFIG_PAGETYPE_MASK                   (0x0F)

#define MPI2_CONFIG_TYPENUM_MASK                    (0x0FFF)

/* ExtPageType field values */
#define MPI2_CONFIG_EXTPAGETYPE_SAS_IO_UNIT         (0x10)
#define MPI2_CONFIG_EXTPAGETYPE_SAS_EXPANDER        (0x11)
#define MPI2_CONFIG_EXTPAGETYPE_SAS_DEVICE          (0x12)
#define MPI2_CONFIG_EXTPAGETYPE_SAS_PHY             (0x13)
#define MPI2_CONFIG_EXTPAGETYPE_LOG                 (0x14)
#define MPI2_CONFIG_EXTPAGETYPE_ENCLOSURE           (0x15)
#define MPI2_CONFIG_EXTPAGETYPE_RAID_CONFIG         (0x16)
#define MPI2_CONFIG_EXTPAGETYPE_DRIVER_MAPPING      (0x17)
#define MPI2_CONFIG_EXTPAGETYPE_SAS_PORT            (0x18)
#define MPI2_CONFIG_EXTPAGETYPE_ETHERNET            (0x19)
#define MPI2_CONFIG_EXTPAGETYPE_EXT_MANUFACTURING   (0x1A)
#define MPI2_CONFIG_EXTPAGETYPE_PCIE_IO_UNIT        (0x1B)  /* MPI v2.6 and later */
#define MPI2_CONFIG_EXTPAGETYPE_PCIE_SWITCH         (0x1C)  /* MPI v2.6 and later */
#define MPI2_CONFIG_EXTPAGETYPE_PCIE_DEVICE         (0x1D)  /* MPI v2.6 and later */
#define MPI2_CONFIG_EXTPAGETYPE_PCIE_LINK           (0x1E)  /* MPI v2.6 and later */
/* Product specific reserved values 0xE0 - 0xEF */
/* Vendor specific reserved values 0xF0 - 0xFF */

/*****************************************************************************
*   PageAddress defines
*****************************************************************************/

/* RAID Volume PageAddress format */
#define MPI2_RAID_VOLUME_PGAD_FORM_MASK             (0xF0000000)
#define MPI2_RAID_VOLUME_PGAD_FORM_GET_NEXT_HANDLE  (0x00000000)
#define MPI2_RAID_VOLUME_PGAD_FORM_HANDLE           (0x10000000)

#define MPI2_RAID_VOLUME_PGAD_HANDLE_MASK           (0x0000FFFF)

/* RAID Physical Disk PageAddress format */
#define MPI2_PHYSDISK_PGAD_FORM_MASK                    (0xF0000000)
#define MPI2_PHYSDISK_PGAD_FORM_GET_NEXT_PHYSDISKNUM    (0x00000000)
#define MPI2_PHYSDISK_PGAD_FORM_PHYSDISKNUM             (0x10000000)
#define MPI2_PHYSDISK_PGAD_FORM_DEVHANDLE (0x20000000) #define MPI2_PHYSDISK_PGAD_PHYSDISKNUM_MASK (0x000000FF) #define MPI2_PHYSDISK_PGAD_DEVHANDLE_MASK (0x0000FFFF) /* SAS Expander PageAddress format */ #define MPI2_SAS_EXPAND_PGAD_FORM_MASK (0xF0000000) #define MPI2_SAS_EXPAND_PGAD_FORM_GET_NEXT_HNDL (0x00000000) #define MPI2_SAS_EXPAND_PGAD_FORM_HNDL_PHY_NUM (0x10000000) #define MPI2_SAS_EXPAND_PGAD_FORM_HNDL (0x20000000) #define MPI2_SAS_EXPAND_PGAD_HANDLE_MASK (0x0000FFFF) #define MPI2_SAS_EXPAND_PGAD_PHYNUM_MASK (0x00FF0000) #define MPI2_SAS_EXPAND_PGAD_PHYNUM_SHIFT (16) /* SAS Device PageAddress format */ #define MPI2_SAS_DEVICE_PGAD_FORM_MASK (0xF0000000) #define MPI2_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE (0x00000000) #define MPI2_SAS_DEVICE_PGAD_FORM_HANDLE (0x20000000) #define MPI2_SAS_DEVICE_PGAD_HANDLE_MASK (0x0000FFFF) /* SAS PHY PageAddress format */ #define MPI2_SAS_PHY_PGAD_FORM_MASK (0xF0000000) #define MPI2_SAS_PHY_PGAD_FORM_PHY_NUMBER (0x00000000) #define MPI2_SAS_PHY_PGAD_FORM_PHY_TBL_INDEX (0x10000000) #define MPI2_SAS_PHY_PGAD_PHY_NUMBER_MASK (0x000000FF) #define MPI2_SAS_PHY_PGAD_PHY_TBL_INDEX_MASK (0x0000FFFF) /* SAS Port PageAddress format */ #define MPI2_SASPORT_PGAD_FORM_MASK (0xF0000000) #define MPI2_SASPORT_PGAD_FORM_GET_NEXT_PORT (0x00000000) #define MPI2_SASPORT_PGAD_FORM_PORT_NUM (0x10000000) #define MPI2_SASPORT_PGAD_PORTNUMBER_MASK (0x00000FFF) /* SAS Enclosure PageAddress format */ #define MPI2_SAS_ENCLOS_PGAD_FORM_MASK (0xF0000000) #define MPI2_SAS_ENCLOS_PGAD_FORM_GET_NEXT_HANDLE (0x00000000) #define MPI2_SAS_ENCLOS_PGAD_FORM_HANDLE (0x10000000) #define MPI2_SAS_ENCLOS_PGAD_HANDLE_MASK (0x0000FFFF) /* Enclosure PageAddress format */ #define MPI26_ENCLOS_PGAD_FORM_MASK (0xF0000000) #define MPI26_ENCLOS_PGAD_FORM_GET_NEXT_HANDLE (0x00000000) #define MPI26_ENCLOS_PGAD_FORM_HANDLE (0x10000000) #define MPI26_ENCLOS_PGAD_HANDLE_MASK (0x0000FFFF) /* RAID Configuration PageAddress format */ #define MPI2_RAID_PGAD_FORM_MASK (0xF0000000) 
#define MPI2_RAID_PGAD_FORM_GET_NEXT_CONFIGNUM (0x00000000) #define MPI2_RAID_PGAD_FORM_CONFIGNUM (0x10000000) #define MPI2_RAID_PGAD_FORM_ACTIVE_CONFIG (0x20000000) #define MPI2_RAID_PGAD_CONFIGNUM_MASK (0x000000FF) /* Driver Persistent Mapping PageAddress format */ #define MPI2_DPM_PGAD_FORM_MASK (0xF0000000) #define MPI2_DPM_PGAD_FORM_ENTRY_RANGE (0x00000000) #define MPI2_DPM_PGAD_ENTRY_COUNT_MASK (0x0FFF0000) #define MPI2_DPM_PGAD_ENTRY_COUNT_SHIFT (16) #define MPI2_DPM_PGAD_START_ENTRY_MASK (0x0000FFFF) /* Ethernet PageAddress format */ #define MPI2_ETHERNET_PGAD_FORM_MASK (0xF0000000) #define MPI2_ETHERNET_PGAD_FORM_IF_NUM (0x00000000) #define MPI2_ETHERNET_PGAD_IF_NUMBER_MASK (0x000000FF) /* PCIe Switch PageAddress format */ #define MPI26_PCIE_SWITCH_PGAD_FORM_MASK (0xF0000000) #define MPI26_PCIE_SWITCH_PGAD_FORM_GET_NEXT_HNDL (0x00000000) #define MPI26_PCIE_SWITCH_PGAD_FORM_HNDL_PORTNUM (0x10000000) #define MPI26_PCIE_SWITCH_EXPAND_PGAD_FORM_HNDL (0x20000000) #define MPI26_PCIE_SWITCH_PGAD_HANDLE_MASK (0x0000FFFF) #define MPI26_PCIE_SWITCH_PGAD_PORTNUM_MASK (0x00FF0000) #define MPI26_PCIE_SWITCH_PGAD_PORTNUM_SHIFT (16) /* PCIe Device PageAddress format */ #define MPI26_PCIE_DEVICE_PGAD_FORM_MASK (0xF0000000) #define MPI26_PCIE_DEVICE_PGAD_FORM_GET_NEXT_HANDLE (0x00000000) #define MPI26_PCIE_DEVICE_PGAD_FORM_HANDLE (0x20000000) #define MPI26_PCIE_DEVICE_PGAD_HANDLE_MASK (0x0000FFFF) /* PCIe Link PageAddress format */ #define MPI26_PCIE_LINK_PGAD_FORM_MASK (0xF0000000) #define MPI26_PCIE_LINK_PGAD_FORM_GET_NEXT_LINK (0x00000000) #define MPI26_PCIE_LINK_PGAD_FORM_LINK_NUM (0x10000000) #define MPI26_PCIE_DEVICE_PGAD_LINKNUM_MASK (0x000000FF) /**************************************************************************** * Configuration messages ****************************************************************************/ /* Configuration Request Message */ typedef struct _MPI2_CONFIG_REQUEST { U8 Action; /* 0x00 */ U8 SGLFlags; /* 0x01 */ U8 ChainOffset; /* 0x02 
*/ U8 Function; /* 0x03 */ U16 ExtPageLength; /* 0x04 */ U8 ExtPageType; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved1; /* 0x0A */ U8 Reserved2; /* 0x0C */ U8 ProxyVF_ID; /* 0x0D */ U16 Reserved4; /* 0x0E */ U32 Reserved3; /* 0x10 */ MPI2_CONFIG_PAGE_HEADER Header; /* 0x14 */ U32 PageAddress; /* 0x18 */ MPI2_SGE_IO_UNION PageBufferSGE; /* 0x1C */ } MPI2_CONFIG_REQUEST, MPI2_POINTER PTR_MPI2_CONFIG_REQUEST, Mpi2ConfigRequest_t, MPI2_POINTER pMpi2ConfigRequest_t; /* values for the Action field */ #define MPI2_CONFIG_ACTION_PAGE_HEADER (0x00) #define MPI2_CONFIG_ACTION_PAGE_READ_CURRENT (0x01) #define MPI2_CONFIG_ACTION_PAGE_WRITE_CURRENT (0x02) #define MPI2_CONFIG_ACTION_PAGE_DEFAULT (0x03) #define MPI2_CONFIG_ACTION_PAGE_WRITE_NVRAM (0x04) #define MPI2_CONFIG_ACTION_PAGE_READ_DEFAULT (0x05) #define MPI2_CONFIG_ACTION_PAGE_READ_NVRAM (0x06) #define MPI2_CONFIG_ACTION_PAGE_GET_CHANGEABLE (0x07) /* use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */ /* Config Reply Message */ typedef struct _MPI2_CONFIG_REPLY { U8 Action; /* 0x00 */ U8 SGLFlags; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 ExtPageLength; /* 0x04 */ U8 ExtPageType; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved1; /* 0x0A */ U16 Reserved2; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ MPI2_CONFIG_PAGE_HEADER Header; /* 0x14 */ } MPI2_CONFIG_REPLY, MPI2_POINTER PTR_MPI2_CONFIG_REPLY, Mpi2ConfigReply_t, MPI2_POINTER pMpi2ConfigReply_t; /***************************************************************************** * * C o n f i g u r a t i o n P a g e s * *****************************************************************************/ /**************************************************************************** * Manufacturing Config pages ****************************************************************************/ #define MPI2_MFGPAGE_VENDORID_LSI (0x1000) /* MPI v2.0 SAS 
products */ #define MPI2_MFGPAGE_DEVID_SAS2004 (0x0070) #define MPI2_MFGPAGE_DEVID_SAS2008 (0x0072) #define MPI2_MFGPAGE_DEVID_SAS2108_1 (0x0074) #define MPI2_MFGPAGE_DEVID_SAS2108_2 (0x0076) #define MPI2_MFGPAGE_DEVID_SAS2108_3 (0x0077) #define MPI2_MFGPAGE_DEVID_SAS2116_1 (0x0064) #define MPI2_MFGPAGE_DEVID_SAS2116_2 (0x0065) #define MPI2_MFGPAGE_DEVID_SSS6200 (0x007E) #define MPI2_MFGPAGE_DEVID_SAS2208_1 (0x0080) #define MPI2_MFGPAGE_DEVID_SAS2208_2 (0x0081) #define MPI2_MFGPAGE_DEVID_SAS2208_3 (0x0082) #define MPI2_MFGPAGE_DEVID_SAS2208_4 (0x0083) #define MPI2_MFGPAGE_DEVID_SAS2208_5 (0x0084) #define MPI2_MFGPAGE_DEVID_SAS2208_6 (0x0085) #define MPI2_MFGPAGE_DEVID_SAS2308_1 (0x0086) #define MPI2_MFGPAGE_DEVID_SAS2308_2 (0x0087) #define MPI2_MFGPAGE_DEVID_SAS2308_3 (0x006E) /* MPI v2.5 SAS products */ #define MPI25_MFGPAGE_DEVID_SAS3004 (0x0096) #define MPI25_MFGPAGE_DEVID_SAS3008 (0x0097) #define MPI25_MFGPAGE_DEVID_SAS3108_1 (0x0090) #define MPI25_MFGPAGE_DEVID_SAS3108_2 (0x0091) #define MPI25_MFGPAGE_DEVID_SAS3108_5 (0x0094) #define MPI25_MFGPAGE_DEVID_SAS3108_6 (0x0095) /* MPI v2.6 SAS Products */ #define MPI26_MFGPAGE_DEVID_SAS3216 (0x00C9) #define MPI26_MFGPAGE_DEVID_SAS3224 (0x00C4) #define MPI26_MFGPAGE_DEVID_SAS3316_1 (0x00C5) #define MPI26_MFGPAGE_DEVID_SAS3316_2 (0x00C6) #define MPI26_MFGPAGE_DEVID_SAS3316_3 (0x00C7) #define MPI26_MFGPAGE_DEVID_SAS3316_4 (0x00C8) #define MPI26_MFGPAGE_DEVID_SAS3324_1 (0x00C0) #define MPI26_MFGPAGE_DEVID_SAS3324_2 (0x00C1) #define MPI26_MFGPAGE_DEVID_SAS3324_3 (0x00C2) #define MPI26_MFGPAGE_DEVID_SAS3324_4 (0x00C3) #define MPI26_MFGPAGE_DEVID_SAS3516 (0x00AA) #define MPI26_MFGPAGE_DEVID_SAS3516_1 (0x00AB) #define MPI26_MFGPAGE_DEVID_SAS3416 (0x00AC) #define MPI26_MFGPAGE_DEVID_SAS3508 (0x00AD) #define MPI26_MFGPAGE_DEVID_SAS3508_1 (0x00AE) #define MPI26_MFGPAGE_DEVID_SAS3408 (0x00AF) #define MPI26_MFGPAGE_DEVID_SAS3716 (0x00D0) #define MPI26_MFGPAGE_DEVID_SAS3616 (0x00D1) #define MPI26_MFGPAGE_DEVID_SAS3708 (0x00D2) 
-#define MPI26_MFGPAGE_DEVID_SAS4008 (0x00A1) +#define MPI26_MFGPAGE_DEVID_SEC_MASK_SAS3916 (0x0003) +#define MPI26_MFGPAGE_DEVID_INVALID0_SAS3916 (0x00E0) +#define MPI26_MFGPAGE_DEVID_CFG_SEC_SAS3916 (0x00E1) +#define MPI26_MFGPAGE_DEVID_HARD_SEC_SAS3916 (0x00E2) +#define MPI26_MFGPAGE_DEVID_INVALID1_SAS3916 (0x00E3) +#define MPI26_MFGPAGE_DEVID_SEC_MASK_SAS3816 (0x0003) +#define MPI26_MFGPAGE_DEVID_INVALID0_SAS3816 (0x00E4) +#define MPI26_MFGPAGE_DEVID_CFG_SEC_SAS3816 (0x00E5) +#define MPI26_MFGPAGE_DEVID_HARD_SEC_SAS3816 (0x00E6) +#define MPI26_MFGPAGE_DEVID_INVALID1_SAS3816 (0x00E7) + /* Manufacturing Page 0 */ typedef struct _MPI2_CONFIG_PAGE_MAN_0 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U8 ChipName[16]; /* 0x04 */ U8 ChipRevision[8]; /* 0x14 */ U8 BoardName[16]; /* 0x1C */ U8 BoardAssembly[16]; /* 0x2C */ U8 BoardTracerNumber[16]; /* 0x3C */ } MPI2_CONFIG_PAGE_MAN_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_0, Mpi2ManufacturingPage0_t, MPI2_POINTER pMpi2ManufacturingPage0_t; #define MPI2_MANUFACTURING0_PAGEVERSION (0x00) /* Manufacturing Page 1 */ typedef struct _MPI2_CONFIG_PAGE_MAN_1 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U8 VPD[256]; /* 0x04 */ } MPI2_CONFIG_PAGE_MAN_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_1, Mpi2ManufacturingPage1_t, MPI2_POINTER pMpi2ManufacturingPage1_t; #define MPI2_MANUFACTURING1_PAGEVERSION (0x00) typedef struct _MPI2_CHIP_REVISION_ID { U16 DeviceID; /* 0x00 */ U8 PCIRevisionID; /* 0x02 */ U8 Reserved; /* 0x03 */ } MPI2_CHIP_REVISION_ID, MPI2_POINTER PTR_MPI2_CHIP_REVISION_ID, Mpi2ChipRevisionId_t, MPI2_POINTER pMpi2ChipRevisionId_t; /* Manufacturing Page 2 */ /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check Header.PageLength at runtime. 
*/ #ifndef MPI2_MAN_PAGE_2_HW_SETTINGS_WORDS #define MPI2_MAN_PAGE_2_HW_SETTINGS_WORDS (1) #endif typedef struct _MPI2_CONFIG_PAGE_MAN_2 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ MPI2_CHIP_REVISION_ID ChipId; /* 0x04 */ U32 HwSettings[MPI2_MAN_PAGE_2_HW_SETTINGS_WORDS];/* 0x08 */ } MPI2_CONFIG_PAGE_MAN_2, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_2, Mpi2ManufacturingPage2_t, MPI2_POINTER pMpi2ManufacturingPage2_t; #define MPI2_MANUFACTURING2_PAGEVERSION (0x00) /* Manufacturing Page 3 */ /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check Header.PageLength at runtime. */ #ifndef MPI2_MAN_PAGE_3_INFO_WORDS #define MPI2_MAN_PAGE_3_INFO_WORDS (1) #endif typedef struct _MPI2_CONFIG_PAGE_MAN_3 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ MPI2_CHIP_REVISION_ID ChipId; /* 0x04 */ U32 Info[MPI2_MAN_PAGE_3_INFO_WORDS];/* 0x08 */ } MPI2_CONFIG_PAGE_MAN_3, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_3, Mpi2ManufacturingPage3_t, MPI2_POINTER pMpi2ManufacturingPage3_t; #define MPI2_MANUFACTURING3_PAGEVERSION (0x00) /* Manufacturing Page 4 */ typedef struct _MPI2_MANPAGE4_PWR_SAVE_SETTINGS { U8 PowerSaveFlags; /* 0x00 */ U8 InternalOperationsSleepTime; /* 0x01 */ U8 InternalOperationsRunTime; /* 0x02 */ U8 HostIdleTime; /* 0x03 */ } MPI2_MANPAGE4_PWR_SAVE_SETTINGS, MPI2_POINTER PTR_MPI2_MANPAGE4_PWR_SAVE_SETTINGS, Mpi2ManPage4PwrSaveSettings_t, MPI2_POINTER pMpi2ManPage4PwrSaveSettings_t; /* defines for the PowerSaveFlags field */ #define MPI2_MANPAGE4_MASK_POWERSAVE_MODE (0x03) #define MPI2_MANPAGE4_POWERSAVE_MODE_DISABLED (0x00) #define MPI2_MANPAGE4_CUSTOM_POWERSAVE_MODE (0x01) #define MPI2_MANPAGE4_FULL_POWERSAVE_MODE (0x02) typedef struct _MPI2_CONFIG_PAGE_MAN_4 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x04 */ U32 Flags; /* 0x08 */ U8 InquirySize; /* 0x0C */ U8 Reserved2; /* 0x0D */ U16 Reserved3; /* 0x0E */ U8 InquiryData[56]; /* 0x10 */ U32 RAID0VolumeSettings; /* 0x48 */ U32 RAID1EVolumeSettings; /* 0x4C */ U32 
RAID1VolumeSettings; /* 0x50 */ U32 RAID10VolumeSettings; /* 0x54 */ U32 Reserved4; /* 0x58 */ U32 Reserved5; /* 0x5C */ MPI2_MANPAGE4_PWR_SAVE_SETTINGS PowerSaveSettings; /* 0x60 */ U8 MaxOCEDisks; /* 0x64 */ U8 ResyncRate; /* 0x65 */ U16 DataScrubDuration; /* 0x66 */ U8 MaxHotSpares; /* 0x68 */ U8 MaxPhysDisksPerVol; /* 0x69 */ U8 MaxPhysDisks; /* 0x6A */ U8 MaxVolumes; /* 0x6B */ } MPI2_CONFIG_PAGE_MAN_4, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_4, Mpi2ManufacturingPage4_t, MPI2_POINTER pMpi2ManufacturingPage4_t; #define MPI2_MANUFACTURING4_PAGEVERSION (0x0A) /* Manufacturing Page 4 Flags field */ #define MPI2_MANPAGE4_METADATA_SIZE_MASK (0x00030000) #define MPI2_MANPAGE4_METADATA_512MB (0x00000000) #define MPI2_MANPAGE4_MIX_SSD_SAS_SATA (0x00008000) #define MPI2_MANPAGE4_MIX_SSD_AND_NON_SSD (0x00004000) #define MPI2_MANPAGE4_HIDE_PHYSDISK_NON_IR (0x00002000) #define MPI2_MANPAGE4_MASK_PHYSDISK_COERCION (0x00001C00) #define MPI2_MANPAGE4_PHYSDISK_COERCION_1GB (0x00000000) #define MPI2_MANPAGE4_PHYSDISK_128MB_COERCION (0x00000400) #define MPI2_MANPAGE4_PHYSDISK_ADAPTIVE_COERCION (0x00000800) #define MPI2_MANPAGE4_PHYSDISK_ZERO_COERCION (0x00000C00) #define MPI2_MANPAGE4_MASK_BAD_BLOCK_MARKING (0x00000300) #define MPI2_MANPAGE4_DEFAULT_BAD_BLOCK_MARKING (0x00000000) #define MPI2_MANPAGE4_TABLE_BAD_BLOCK_MARKING (0x00000100) #define MPI2_MANPAGE4_WRITE_LONG_BAD_BLOCK_MARKING (0x00000200) #define MPI2_MANPAGE4_FORCE_OFFLINE_FAILOVER (0x00000080) #define MPI2_MANPAGE4_RAID10_DISABLE (0x00000040) #define MPI2_MANPAGE4_RAID1E_DISABLE (0x00000020) #define MPI2_MANPAGE4_RAID1_DISABLE (0x00000010) #define MPI2_MANPAGE4_RAID0_DISABLE (0x00000008) #define MPI2_MANPAGE4_IR_MODEPAGE8_DISABLE (0x00000004) #define MPI2_MANPAGE4_IM_RESYNC_CACHE_ENABLE (0x00000002) #define MPI2_MANPAGE4_IR_NO_MIX_SAS_SATA (0x00000001) /* Manufacturing Page 5 */ /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumPhys at runtime. 
*/
#ifndef MPI2_MAN_PAGE_5_PHY_ENTRIES
#define MPI2_MAN_PAGE_5_PHY_ENTRIES         (1)
#endif

typedef struct _MPI2_MANUFACTURING5_ENTRY
{
    U64                         WWID;           /* 0x00 */
    U64                         DeviceName;     /* 0x08 */
} MPI2_MANUFACTURING5_ENTRY, MPI2_POINTER PTR_MPI2_MANUFACTURING5_ENTRY,
  Mpi2Manufacturing5Entry_t, MPI2_POINTER pMpi2Manufacturing5Entry_t;

typedef struct _MPI2_CONFIG_PAGE_MAN_5
{
    MPI2_CONFIG_PAGE_HEADER     Header;         /* 0x00 */
    U8                          NumPhys;        /* 0x04 */
    U8                          Reserved1;      /* 0x05 */
    U16                         Reserved2;      /* 0x06 */
    U32                         Reserved3;      /* 0x08 */
    U32                         Reserved4;      /* 0x0C */
    MPI2_MANUFACTURING5_ENTRY   Phy[MPI2_MAN_PAGE_5_PHY_ENTRIES]; /* 0x10 */
} MPI2_CONFIG_PAGE_MAN_5, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_5,
  Mpi2ManufacturingPage5_t, MPI2_POINTER pMpi2ManufacturingPage5_t;

#define MPI2_MANUFACTURING5_PAGEVERSION     (0x03)

/* Manufacturing Page 6 */

typedef struct _MPI2_CONFIG_PAGE_MAN_6
{
    MPI2_CONFIG_PAGE_HEADER     Header;             /* 0x00 */
    U32                         ProductSpecificInfo;/* 0x04 */
} MPI2_CONFIG_PAGE_MAN_6, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_6,
  Mpi2ManufacturingPage6_t, MPI2_POINTER pMpi2ManufacturingPage6_t;

#define MPI2_MANUFACTURING6_PAGEVERSION     (0x00)

/* Manufacturing Page 7 */

typedef struct _MPI2_MANPAGE7_CONNECTOR_INFO
{
    U32                         Pinout;         /* 0x00 */
    U8                          Connector[16];  /* 0x04 */
    U8                          Location;       /* 0x14 */
    U8                          ReceptacleID;   /* 0x15 */
    U16                         Slot;           /* 0x16 */
-    U32                         Reserved2;      /* 0x18 */
+    U16                         Slotx4;         /* 0x18 */
+    U16                         Slotx2;         /* 0x1A */
} MPI2_MANPAGE7_CONNECTOR_INFO, MPI2_POINTER PTR_MPI2_MANPAGE7_CONNECTOR_INFO,
  Mpi2ManPage7ConnectorInfo_t, MPI2_POINTER pMpi2ManPage7ConnectorInfo_t;

/* defines for the Pinout field */
#define MPI2_MANPAGE7_PINOUT_LANE_MASK      (0x0000FF00)
#define MPI2_MANPAGE7_PINOUT_LANE_SHIFT     (8)

#define MPI2_MANPAGE7_PINOUT_TYPE_MASK      (0x000000FF)
#define MPI2_MANPAGE7_PINOUT_TYPE_UNKNOWN   (0x00)
#define MPI2_MANPAGE7_PINOUT_SATA_SINGLE    (0x01)
#define MPI2_MANPAGE7_PINOUT_SFF_8482       (0x02)
#define MPI2_MANPAGE7_PINOUT_SFF_8486       (0x03)
#define MPI2_MANPAGE7_PINOUT_SFF_8484       (0x04)
#define MPI2_MANPAGE7_PINOUT_SFF_8087       (0x05)
#define MPI2_MANPAGE7_PINOUT_SFF_8643_4I (0x06) #define MPI2_MANPAGE7_PINOUT_SFF_8643_8I (0x07) #define MPI2_MANPAGE7_PINOUT_SFF_8470 (0x08) #define MPI2_MANPAGE7_PINOUT_SFF_8088 (0x09) #define MPI2_MANPAGE7_PINOUT_SFF_8644_4X (0x0A) #define MPI2_MANPAGE7_PINOUT_SFF_8644_8X (0x0B) #define MPI2_MANPAGE7_PINOUT_SFF_8644_16X (0x0C) #define MPI2_MANPAGE7_PINOUT_SFF_8436 (0x0D) #define MPI2_MANPAGE7_PINOUT_SFF_8088_A (0x0E) #define MPI2_MANPAGE7_PINOUT_SFF_8643_16i (0x0F) #define MPI2_MANPAGE7_PINOUT_SFF_8654_4i (0x10) #define MPI2_MANPAGE7_PINOUT_SFF_8654_8i (0x11) #define MPI2_MANPAGE7_PINOUT_SFF_8611_4i (0x12) #define MPI2_MANPAGE7_PINOUT_SFF_8611_8i (0x13) /* defines for the Location field */ #define MPI2_MANPAGE7_LOCATION_UNKNOWN (0x01) #define MPI2_MANPAGE7_LOCATION_INTERNAL (0x02) #define MPI2_MANPAGE7_LOCATION_EXTERNAL (0x04) #define MPI2_MANPAGE7_LOCATION_SWITCHABLE (0x08) #define MPI2_MANPAGE7_LOCATION_AUTO (0x10) #define MPI2_MANPAGE7_LOCATION_NOT_PRESENT (0x20) #define MPI2_MANPAGE7_LOCATION_NOT_CONNECTED (0x80) /* defines for the Slot field */ #define MPI2_MANPAGE7_SLOT_UNKNOWN (0xFFFF) /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumPhys at runtime. 
*/ #ifndef MPI2_MANPAGE7_CONNECTOR_INFO_MAX #define MPI2_MANPAGE7_CONNECTOR_INFO_MAX (1) #endif typedef struct _MPI2_CONFIG_PAGE_MAN_7 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x04 */ U32 Reserved2; /* 0x08 */ U32 Flags; /* 0x0C */ U8 EnclosureName[16]; /* 0x10 */ U8 NumPhys; /* 0x20 */ U8 Reserved3; /* 0x21 */ U16 Reserved4; /* 0x22 */ MPI2_MANPAGE7_CONNECTOR_INFO ConnectorInfo[MPI2_MANPAGE7_CONNECTOR_INFO_MAX]; /* 0x24 */ } MPI2_CONFIG_PAGE_MAN_7, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_7, Mpi2ManufacturingPage7_t, MPI2_POINTER pMpi2ManufacturingPage7_t; #define MPI2_MANUFACTURING7_PAGEVERSION (0x01) /* defines for the Flags field */ #define MPI2_MANPAGE7_FLAG_BASE_ENCLOSURE_LEVEL (0x00000008) #define MPI2_MANPAGE7_FLAG_EVENTREPLAY_SLOT_ORDER (0x00000002) #define MPI2_MANPAGE7_FLAG_USE_SLOT_INFO (0x00000001) /* * Generic structure to use for product-specific manufacturing pages * (currently Manufacturing Page 8 through Manufacturing Page 31). */ typedef struct _MPI2_CONFIG_PAGE_MAN_PS { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U32 ProductSpecificInfo;/* 0x04 */ } MPI2_CONFIG_PAGE_MAN_PS, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_PS, Mpi2ManufacturingPagePS_t, MPI2_POINTER pMpi2ManufacturingPagePS_t; #define MPI2_MANUFACTURING8_PAGEVERSION (0x00) #define MPI2_MANUFACTURING9_PAGEVERSION (0x00) #define MPI2_MANUFACTURING10_PAGEVERSION (0x00) #define MPI2_MANUFACTURING11_PAGEVERSION (0x00) #define MPI2_MANUFACTURING12_PAGEVERSION (0x00) #define MPI2_MANUFACTURING13_PAGEVERSION (0x00) #define MPI2_MANUFACTURING14_PAGEVERSION (0x00) #define MPI2_MANUFACTURING15_PAGEVERSION (0x00) #define MPI2_MANUFACTURING16_PAGEVERSION (0x00) #define MPI2_MANUFACTURING17_PAGEVERSION (0x00) #define MPI2_MANUFACTURING18_PAGEVERSION (0x00) #define MPI2_MANUFACTURING19_PAGEVERSION (0x00) #define MPI2_MANUFACTURING20_PAGEVERSION (0x00) #define MPI2_MANUFACTURING21_PAGEVERSION (0x00) #define MPI2_MANUFACTURING22_PAGEVERSION (0x00) #define MPI2_MANUFACTURING23_PAGEVERSION 
(0x00) #define MPI2_MANUFACTURING24_PAGEVERSION (0x00) #define MPI2_MANUFACTURING25_PAGEVERSION (0x00) #define MPI2_MANUFACTURING26_PAGEVERSION (0x00) #define MPI2_MANUFACTURING27_PAGEVERSION (0x00) #define MPI2_MANUFACTURING28_PAGEVERSION (0x00) #define MPI2_MANUFACTURING29_PAGEVERSION (0x00) #define MPI2_MANUFACTURING30_PAGEVERSION (0x00) #define MPI2_MANUFACTURING31_PAGEVERSION (0x00) /**************************************************************************** * IO Unit Config Pages ****************************************************************************/ /* IO Unit Page 0 */ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_0 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U64 UniqueValue; /* 0x04 */ MPI2_VERSION_UNION NvdataVersionDefault; /* 0x08 */ MPI2_VERSION_UNION NvdataVersionPersistent; /* 0x0A */ } MPI2_CONFIG_PAGE_IO_UNIT_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_0, Mpi2IOUnitPage0_t, MPI2_POINTER pMpi2IOUnitPage0_t; #define MPI2_IOUNITPAGE0_PAGEVERSION (0x02) /* IO Unit Page 1 */ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_1 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U32 Flags; /* 0x04 */ } MPI2_CONFIG_PAGE_IO_UNIT_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_1, Mpi2IOUnitPage1_t, MPI2_POINTER pMpi2IOUnitPage1_t; #define MPI2_IOUNITPAGE1_PAGEVERSION (0x04) /* IO Unit Page 1 Flags defines */ +#define MPI26_IOUNITPAGE1_NVME_WRITE_CACHE_MASK (0x00030000) +#define MPI26_IOUNITPAGE1_NVME_WRITE_CACHE_ENABLE (0x00000000) +#define MPI26_IOUNITPAGE1_NVME_WRITE_CACHE_DISABLE (0x00010000) +#define MPI26_IOUNITPAGE1_NVME_WRITE_CACHE_NO_CHANGE (0x00020000) #define MPI2_IOUNITPAGE1_ATA_SECURITY_FREEZE_LOCK (0x00004000) #define MPI25_IOUNITPAGE1_NEW_DEVICE_FAST_PATH_DISABLE (0x00002000) #define MPI25_IOUNITPAGE1_DISABLE_FAST_PATH (0x00001000) #define MPI2_IOUNITPAGE1_ENABLE_HOST_BASED_DISCOVERY (0x00000800) #define MPI2_IOUNITPAGE1_MASK_SATA_WRITE_CACHE (0x00000600) #define MPI2_IOUNITPAGE1_SATA_WRITE_CACHE_SHIFT (9) #define MPI2_IOUNITPAGE1_ENABLE_SATA_WRITE_CACHE 
(0x00000000) #define MPI2_IOUNITPAGE1_DISABLE_SATA_WRITE_CACHE (0x00000200) #define MPI2_IOUNITPAGE1_UNCHANGED_SATA_WRITE_CACHE (0x00000400) #define MPI2_IOUNITPAGE1_NATIVE_COMMAND_Q_DISABLE (0x00000100) #define MPI2_IOUNITPAGE1_DISABLE_IR (0x00000040) #define MPI2_IOUNITPAGE1_DISABLE_TASK_SET_FULL_HANDLING (0x00000020) #define MPI2_IOUNITPAGE1_IR_USE_STATIC_VOLUME_ID (0x00000004) /* IO Unit Page 3 */ /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for GPIOCount at runtime. */ #ifndef MPI2_IO_UNIT_PAGE_3_GPIO_VAL_MAX #define MPI2_IO_UNIT_PAGE_3_GPIO_VAL_MAX (1) #endif typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_3 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U8 GPIOCount; /* 0x04 */ U8 Reserved1; /* 0x05 */ U16 Reserved2; /* 0x06 */ U16 GPIOVal[MPI2_IO_UNIT_PAGE_3_GPIO_VAL_MAX];/* 0x08 */ } MPI2_CONFIG_PAGE_IO_UNIT_3, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_3, Mpi2IOUnitPage3_t, MPI2_POINTER pMpi2IOUnitPage3_t; #define MPI2_IOUNITPAGE3_PAGEVERSION (0x01) /* defines for IO Unit Page 3 GPIOVal field */ #define MPI2_IOUNITPAGE3_GPIO_FUNCTION_MASK (0xFFFC) #define MPI2_IOUNITPAGE3_GPIO_FUNCTION_SHIFT (2) #define MPI2_IOUNITPAGE3_GPIO_SETTING_OFF (0x0000) #define MPI2_IOUNITPAGE3_GPIO_SETTING_ON (0x0001) /* IO Unit Page 5 */ /* * Upper layer code (drivers, utilities, etc.) should leave this define set to * one and check the value returned for NumDmaEngines at runtime. 
*/ #ifndef MPI2_IOUNITPAGE5_DMAENGINE_ENTRIES #define MPI2_IOUNITPAGE5_DMAENGINE_ENTRIES (1) #endif typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_5 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U64 RaidAcceleratorBufferBaseAddress; /* 0x04 */ U64 RaidAcceleratorBufferSize; /* 0x0C */ U64 RaidAcceleratorControlBaseAddress; /* 0x14 */ U8 RAControlSize; /* 0x1C */ U8 NumDmaEngines; /* 0x1D */ U8 RAMinControlSize; /* 0x1E */ U8 RAMaxControlSize; /* 0x1F */ U32 Reserved1; /* 0x20 */ U32 Reserved2; /* 0x24 */ U32 Reserved3; /* 0x28 */ U32 DmaEngineCapabilities[MPI2_IOUNITPAGE5_DMAENGINE_ENTRIES]; /* 0x2C */ } MPI2_CONFIG_PAGE_IO_UNIT_5, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_5, Mpi2IOUnitPage5_t, MPI2_POINTER pMpi2IOUnitPage5_t; #define MPI2_IOUNITPAGE5_PAGEVERSION (0x00) /* defines for IO Unit Page 5 DmaEngineCapabilities field */ #define MPI2_IOUNITPAGE5_DMA_CAP_MASK_MAX_REQUESTS (0xFFFF0000) #define MPI2_IOUNITPAGE5_DMA_CAP_SHIFT_MAX_REQUESTS (16) #define MPI2_IOUNITPAGE5_DMA_CAP_EEDP (0x0008) #define MPI2_IOUNITPAGE5_DMA_CAP_PARITY_GENERATION (0x0004) #define MPI2_IOUNITPAGE5_DMA_CAP_HASHING (0x0002) #define MPI2_IOUNITPAGE5_DMA_CAP_ENCRYPTION (0x0001) /* IO Unit Page 6 */ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_6 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U16 Flags; /* 0x04 */ U8 RAHostControlSize; /* 0x06 */ U8 Reserved0; /* 0x07 */ U64 RaidAcceleratorHostControlBaseAddress; /* 0x08 */ U32 Reserved1; /* 0x10 */ U32 Reserved2; /* 0x14 */ U32 Reserved3; /* 0x18 */ } MPI2_CONFIG_PAGE_IO_UNIT_6, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_6, Mpi2IOUnitPage6_t, MPI2_POINTER pMpi2IOUnitPage6_t; #define MPI2_IOUNITPAGE6_PAGEVERSION (0x00) /* defines for IO Unit Page 6 Flags field */ #define MPI2_IOUNITPAGE6_FLAGS_ENABLE_RAID_ACCELERATOR (0x0001) /* IO Unit Page 7 */ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_7 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U8 CurrentPowerMode; /* 0x04 */ /* reserved in MPI 2.0 */ U8 PreviousPowerMode; /* 0x05 */ /* reserved in MPI 2.0 */ U8 
PCIeWidth; /* 0x06 */ U8 PCIeSpeed; /* 0x07 */ U32 ProcessorState; /* 0x08 */ U32 PowerManagementCapabilities; /* 0x0C */ U16 IOCTemperature; /* 0x10 */ U8 IOCTemperatureUnits; /* 0x12 */ U8 IOCSpeed; /* 0x13 */ U16 BoardTemperature; /* 0x14 */ U8 BoardTemperatureUnits; /* 0x16 */ U8 Reserved3; /* 0x17 */ U32 BoardPowerRequirement; /* 0x18 */ /* reserved prior to MPI v2.6 */ U32 PCISlotPowerAllocation; /* 0x1C */ /* reserved prior to MPI v2.6 */ U8 Flags; /* 0x20 */ /* reserved prior to MPI v2.6 */ U8 Reserved6; /* 0x21 */ U16 Reserved7; /* 0x22 */ U32 Reserved8; /* 0x24 */ } MPI2_CONFIG_PAGE_IO_UNIT_7, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_7, Mpi2IOUnitPage7_t, MPI2_POINTER pMpi2IOUnitPage7_t; #define MPI2_IOUNITPAGE7_PAGEVERSION (0x05) /* defines for IO Unit Page 7 CurrentPowerMode and PreviousPowerMode fields */ #define MPI25_IOUNITPAGE7_PM_INIT_MASK (0xC0) #define MPI25_IOUNITPAGE7_PM_INIT_UNAVAILABLE (0x00) #define MPI25_IOUNITPAGE7_PM_INIT_HOST (0x40) #define MPI25_IOUNITPAGE7_PM_INIT_IO_UNIT (0x80) #define MPI25_IOUNITPAGE7_PM_INIT_PCIE_DPA (0xC0) #define MPI25_IOUNITPAGE7_PM_MODE_MASK (0x07) #define MPI25_IOUNITPAGE7_PM_MODE_UNAVAILABLE (0x00) #define MPI25_IOUNITPAGE7_PM_MODE_UNKNOWN (0x01) #define MPI25_IOUNITPAGE7_PM_MODE_FULL_POWER (0x04) #define MPI25_IOUNITPAGE7_PM_MODE_REDUCED_POWER (0x05) #define MPI25_IOUNITPAGE7_PM_MODE_STANDBY (0x06) /* defines for IO Unit Page 7 PCIeWidth field */ #define MPI2_IOUNITPAGE7_PCIE_WIDTH_X1 (0x01) #define MPI2_IOUNITPAGE7_PCIE_WIDTH_X2 (0x02) #define MPI2_IOUNITPAGE7_PCIE_WIDTH_X4 (0x04) #define MPI2_IOUNITPAGE7_PCIE_WIDTH_X8 (0x08) #define MPI2_IOUNITPAGE7_PCIE_WIDTH_X16 (0x10) /* defines for IO Unit Page 7 PCIeSpeed field */ #define MPI2_IOUNITPAGE7_PCIE_SPEED_2_5_GBPS (0x00) #define MPI2_IOUNITPAGE7_PCIE_SPEED_5_0_GBPS (0x01) #define MPI2_IOUNITPAGE7_PCIE_SPEED_8_0_GBPS (0x02) #define MPI2_IOUNITPAGE7_PCIE_SPEED_16_0_GBPS (0x03) /* defines for IO Unit Page 7 ProcessorState field */ #define 
MPI2_IOUNITPAGE7_PSTATE_MASK_SECOND (0x0000000F) #define MPI2_IOUNITPAGE7_PSTATE_SHIFT_SECOND (0) #define MPI2_IOUNITPAGE7_PSTATE_NOT_PRESENT (0x00) #define MPI2_IOUNITPAGE7_PSTATE_DISABLED (0x01) #define MPI2_IOUNITPAGE7_PSTATE_ENABLED (0x02) /* defines for IO Unit Page 7 PowerManagementCapabilities field */ #define MPI25_IOUNITPAGE7_PMCAP_DPA_FULL_PWR_MODE (0x00400000) #define MPI25_IOUNITPAGE7_PMCAP_DPA_REDUCED_PWR_MODE (0x00200000) #define MPI25_IOUNITPAGE7_PMCAP_DPA_STANDBY_MODE (0x00100000) #define MPI25_IOUNITPAGE7_PMCAP_HOST_FULL_PWR_MODE (0x00040000) #define MPI25_IOUNITPAGE7_PMCAP_HOST_REDUCED_PWR_MODE (0x00020000) #define MPI25_IOUNITPAGE7_PMCAP_HOST_STANDBY_MODE (0x00010000) #define MPI25_IOUNITPAGE7_PMCAP_IO_FULL_PWR_MODE (0x00004000) #define MPI25_IOUNITPAGE7_PMCAP_IO_REDUCED_PWR_MODE (0x00002000) #define MPI25_IOUNITPAGE7_PMCAP_IO_STANDBY_MODE (0x00001000) #define MPI2_IOUNITPAGE7_PMCAP_HOST_12_5_PCT_IOCSPEED (0x00000400) #define MPI2_IOUNITPAGE7_PMCAP_HOST_25_0_PCT_IOCSPEED (0x00000200) #define MPI2_IOUNITPAGE7_PMCAP_HOST_50_0_PCT_IOCSPEED (0x00000100) #define MPI25_IOUNITPAGE7_PMCAP_IO_12_5_PCT_IOCSPEED (0x00000040) #define MPI25_IOUNITPAGE7_PMCAP_IO_25_0_PCT_IOCSPEED (0x00000020) #define MPI25_IOUNITPAGE7_PMCAP_IO_50_0_PCT_IOCSPEED (0x00000010) #define MPI2_IOUNITPAGE7_PMCAP_HOST_WIDTH_CHANGE_PCIE (0x00000008) /* obsolete */ #define MPI2_IOUNITPAGE7_PMCAP_HOST_SPEED_CHANGE_PCIE (0x00000004) /* obsolete */ #define MPI25_IOUNITPAGE7_PMCAP_IO_WIDTH_CHANGE_PCIE (0x00000002) /* obsolete */ #define MPI25_IOUNITPAGE7_PMCAP_IO_SPEED_CHANGE_PCIE (0x00000001) /* obsolete */ /* obsolete names for the PowerManagementCapabilities bits (above) */ #define MPI2_IOUNITPAGE7_PMCAP_12_5_PCT_IOCSPEED (0x00000400) #define MPI2_IOUNITPAGE7_PMCAP_25_0_PCT_IOCSPEED (0x00000200) #define MPI2_IOUNITPAGE7_PMCAP_50_0_PCT_IOCSPEED (0x00000100) #define MPI2_IOUNITPAGE7_PMCAP_PCIE_WIDTH_CHANGE (0x00000008) /* obsolete */ #define MPI2_IOUNITPAGE7_PMCAP_PCIE_SPEED_CHANGE 
(0x00000004) /* obsolete */ /* defines for IO Unit Page 7 IOCTemperatureUnits field */ #define MPI2_IOUNITPAGE7_IOC_TEMP_NOT_PRESENT (0x00) #define MPI2_IOUNITPAGE7_IOC_TEMP_FAHRENHEIT (0x01) #define MPI2_IOUNITPAGE7_IOC_TEMP_CELSIUS (0x02) /* defines for IO Unit Page 7 IOCSpeed field */ #define MPI2_IOUNITPAGE7_IOC_SPEED_FULL (0x01) #define MPI2_IOUNITPAGE7_IOC_SPEED_HALF (0x02) #define MPI2_IOUNITPAGE7_IOC_SPEED_QUARTER (0x04) #define MPI2_IOUNITPAGE7_IOC_SPEED_EIGHTH (0x08) /* defines for IO Unit Page 7 BoardTemperatureUnits field */ #define MPI2_IOUNITPAGE7_BOARD_TEMP_NOT_PRESENT (0x00) #define MPI2_IOUNITPAGE7_BOARD_TEMP_FAHRENHEIT (0x01) #define MPI2_IOUNITPAGE7_BOARD_TEMP_CELSIUS (0x02) /* defines for IO Unit Page 7 Flags field */ #define MPI2_IOUNITPAGE7_FLAG_CABLE_POWER_EXC (0x01) /* IO Unit Page 8 */ #define MPI2_IOUNIT8_NUM_THRESHOLDS (4) typedef struct _MPI2_IOUNIT8_SENSOR { U16 Flags; /* 0x00 */ U16 Reserved1; /* 0x02 */ U16 Threshold[MPI2_IOUNIT8_NUM_THRESHOLDS]; /* 0x04 */ U32 Reserved2; /* 0x0C */ U32 Reserved3; /* 0x10 */ U32 Reserved4; /* 0x14 */ } MPI2_IOUNIT8_SENSOR, MPI2_POINTER PTR_MPI2_IOUNIT8_SENSOR, Mpi2IOUnit8Sensor_t, MPI2_POINTER pMpi2IOUnit8Sensor_t; /* defines for IO Unit Page 8 Sensor Flags field */ #define MPI2_IOUNIT8_SENSOR_FLAGS_T3_ENABLE (0x0008) #define MPI2_IOUNIT8_SENSOR_FLAGS_T2_ENABLE (0x0004) #define MPI2_IOUNIT8_SENSOR_FLAGS_T1_ENABLE (0x0002) #define MPI2_IOUNIT8_SENSOR_FLAGS_T0_ENABLE (0x0001) /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumSensors at runtime. 
 */
#ifndef MPI2_IOUNITPAGE8_SENSOR_ENTRIES
#define MPI2_IOUNITPAGE8_SENSOR_ENTRIES     (1)
#endif

typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_8
{
    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
    U32                     Reserved1;                  /* 0x04 */
    U32                     Reserved2;                  /* 0x08 */
    U8                      NumSensors;                 /* 0x0C */
    U8                      PollingInterval;            /* 0x0D */
    U16                     Reserved3;                  /* 0x0E */
    MPI2_IOUNIT8_SENSOR     Sensor[MPI2_IOUNITPAGE8_SENSOR_ENTRIES]; /* 0x10 */
} MPI2_CONFIG_PAGE_IO_UNIT_8, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_8,
  Mpi2IOUnitPage8_t, MPI2_POINTER pMpi2IOUnitPage8_t;

#define MPI2_IOUNITPAGE8_PAGEVERSION                    (0x00)

/* IO Unit Page 9 */

typedef struct _MPI2_IOUNIT9_SENSOR
{
    U16                     CurrentTemperature;         /* 0x00 */
    U16                     Reserved1;                  /* 0x02 */
    U8                      Flags;                      /* 0x04 */
    U8                      Reserved2;                  /* 0x05 */
    U16                     Reserved3;                  /* 0x06 */
    U32                     Reserved4;                  /* 0x08 */
    U32                     Reserved5;                  /* 0x0C */
} MPI2_IOUNIT9_SENSOR, MPI2_POINTER PTR_MPI2_IOUNIT9_SENSOR,
  Mpi2IOUnit9Sensor_t, MPI2_POINTER pMpi2IOUnit9Sensor_t;

/* defines for IO Unit Page 9 Sensor Flags field */
#define MPI2_IOUNIT9_SENSOR_FLAGS_TEMP_VALID            (0x01)

/*
 * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
 * one and check the value returned for NumSensors at runtime.
*/ #ifndef MPI2_IOUNITPAGE9_SENSOR_ENTRIES #define MPI2_IOUNITPAGE9_SENSOR_ENTRIES (1) #endif typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_9 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x04 */ U32 Reserved2; /* 0x08 */ U8 NumSensors; /* 0x0C */ U8 Reserved4; /* 0x0D */ U16 Reserved3; /* 0x0E */ MPI2_IOUNIT9_SENSOR Sensor[MPI2_IOUNITPAGE9_SENSOR_ENTRIES];/* 0x10 */ } MPI2_CONFIG_PAGE_IO_UNIT_9, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_9, Mpi2IOUnitPage9_t, MPI2_POINTER pMpi2IOUnitPage9_t; #define MPI2_IOUNITPAGE9_PAGEVERSION (0x00) /* IO Unit Page 10 */ typedef struct _MPI2_IOUNIT10_FUNCTION { U8 CreditPercent; /* 0x00 */ U8 Reserved1; /* 0x01 */ U16 Reserved2; /* 0x02 */ } MPI2_IOUNIT10_FUNCTION, MPI2_POINTER PTR_MPI2_IOUNIT10_FUNCTION, Mpi2IOUnit10Function_t, MPI2_POINTER pMpi2IOUnit10Function_t; /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumFunctions at runtime. */ #ifndef MPI2_IOUNITPAGE10_FUNCTION_ENTRIES #define MPI2_IOUNITPAGE10_FUNCTION_ENTRIES (1) #endif typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_10 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U8 NumFunctions; /* 0x04 */ U8 Reserved1; /* 0x05 */ U16 Reserved2; /* 0x06 */ U32 Reserved3; /* 0x08 */ U32 Reserved4; /* 0x0C */ MPI2_IOUNIT10_FUNCTION Function[MPI2_IOUNITPAGE10_FUNCTION_ENTRIES]; /* 0x10 */ } MPI2_CONFIG_PAGE_IO_UNIT_10, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_10, Mpi2IOUnitPage10_t, MPI2_POINTER pMpi2IOUnitPage10_t; #define MPI2_IOUNITPAGE10_PAGEVERSION (0x01) /* IO Unit Page 11 (for MPI v2.6 and later) */ typedef struct _MPI26_IOUNIT11_SPINUP_GROUP { U8 MaxTargetSpinup; /* 0x00 */ U8 SpinupDelay; /* 0x01 */ U8 SpinupFlags; /* 0x02 */ U8 Reserved1; /* 0x03 */ } MPI26_IOUNIT11_SPINUP_GROUP, MPI2_POINTER PTR_MPI26_IOUNIT11_SPINUP_GROUP, Mpi26IOUnit11SpinupGroup_t, MPI2_POINTER pMpi26IOUnit11SpinupGroup_t; /* defines for IO Unit Page 11 SpinupFlags */ #define MPI26_IOUNITPAGE11_SPINUP_DISABLE_FLAG (0x01) /* * 
Host code (drivers, BIOS, utilities, etc.) should leave this define set to * four and check the value returned for NumPhys at runtime. */ #ifndef MPI26_IOUNITPAGE11_PHY_MAX #define MPI26_IOUNITPAGE11_PHY_MAX (4) #endif typedef struct _MPI26_CONFIG_PAGE_IO_UNIT_11 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x04 */ MPI26_IOUNIT11_SPINUP_GROUP SpinupGroupParameters[4]; /* 0x08 */ U32 Reserved2; /* 0x18 */ U32 Reserved3; /* 0x1C */ U32 Reserved4; /* 0x20 */ U8 BootDeviceWaitTime; /* 0x24 */ U8 SATADeviceWaitTime; /* 0x25 */ U16 Reserved6; /* 0x26 */ U8 NumPhys; /* 0x28 */ U8 PEInitialSpinupDelay; /* 0x29 */ U8 PEReplyDelay; /* 0x2A */ U8 Flags; /* 0x2B */ U8 PHY[MPI26_IOUNITPAGE11_PHY_MAX];/* 0x2C */ } MPI26_CONFIG_PAGE_IO_UNIT_11, MPI2_POINTER PTR_MPI26_CONFIG_PAGE_IO_UNIT_11, Mpi26IOUnitPage11_t, MPI2_POINTER pMpi26IOUnitPage11_t; #define MPI26_IOUNITPAGE11_PAGEVERSION (0x00) /* defines for Flags field */ #define MPI26_IOUNITPAGE11_FLAGS_AUTO_PORTENABLE (0x01) /* defines for PHY field */ #define MPI26_IOUNITPAGE11_PHY_SPINUP_GROUP_MASK (0x03) /**************************************************************************** * IOC Config Pages ****************************************************************************/ /* IOC Page 0 */ typedef struct _MPI2_CONFIG_PAGE_IOC_0 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x04 */ U32 Reserved2; /* 0x08 */ U16 VendorID; /* 0x0C */ U16 DeviceID; /* 0x0E */ U8 RevisionID; /* 0x10 */ U8 Reserved3; /* 0x11 */ U16 Reserved4; /* 0x12 */ U32 ClassCode; /* 0x14 */ U16 SubsystemVendorID; /* 0x18 */ U16 SubsystemID; /* 0x1A */ } MPI2_CONFIG_PAGE_IOC_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IOC_0, Mpi2IOCPage0_t, MPI2_POINTER pMpi2IOCPage0_t; #define MPI2_IOCPAGE0_PAGEVERSION (0x02) /* IOC Page 1 */ typedef struct _MPI2_CONFIG_PAGE_IOC_1 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U32 Flags; /* 0x04 */ U32 CoalescingTimeout; /* 0x08 */ U8 CoalescingDepth; /* 0x0C */ U8 PCISlotNum; /* 0x0D */ U8 
PCIBusNum;                  /* 0x0E */
    U8                      PCIDomainSegment;           /* 0x0F */
    U32                     Reserved1;                  /* 0x10 */
-   U32                     Reserved2;                  /* 0x14 */
+   U32                     ProductSpecific;            /* 0x14 */
} MPI2_CONFIG_PAGE_IOC_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IOC_1,
  Mpi2IOCPage1_t, MPI2_POINTER pMpi2IOCPage1_t;

#define MPI2_IOCPAGE1_PAGEVERSION                       (0x05)

/* defines for IOC Page 1 Flags field */
#define MPI2_IOCPAGE1_REPLY_COALESCING                  (0x00000001)

#define MPI2_IOCPAGE1_PCISLOTNUM_UNKNOWN                (0xFF)
#define MPI2_IOCPAGE1_PCIBUSNUM_UNKNOWN                 (0xFF)
#define MPI2_IOCPAGE1_PCIDOMAIN_UNKNOWN                 (0xFF)

/* IOC Page 6 */

typedef struct _MPI2_CONFIG_PAGE_IOC_6
{
    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
    U32                     CapabilitiesFlags;          /* 0x04 */
    U8                      MaxDrivesRAID0;             /* 0x08 */
    U8                      MaxDrivesRAID1;             /* 0x09 */
    U8                      MaxDrivesRAID1E;            /* 0x0A */
    U8                      MaxDrivesRAID10;            /* 0x0B */
    U8                      MinDrivesRAID0;             /* 0x0C */
    U8                      MinDrivesRAID1;             /* 0x0D */
    U8                      MinDrivesRAID1E;            /* 0x0E */
    U8                      MinDrivesRAID10;            /* 0x0F */
    U32                     Reserved1;                  /* 0x10 */
    U8                      MaxGlobalHotSpares;         /* 0x14 */
    U8                      MaxPhysDisks;               /* 0x15 */
    U8                      MaxVolumes;                 /* 0x16 */
    U8                      MaxConfigs;                 /* 0x17 */
    U8                      MaxOCEDisks;                /* 0x18 */
    U8                      Reserved2;                  /* 0x19 */
    U16                     Reserved3;                  /* 0x1A */
    U32                     SupportedStripeSizeMapRAID0;  /* 0x1C */
    U32                     SupportedStripeSizeMapRAID1E; /* 0x20 */
    U32                     SupportedStripeSizeMapRAID10; /* 0x24 */
    U32                     Reserved4;                  /* 0x28 */
    U32                     Reserved5;                  /* 0x2C */
    U16                     DefaultMetadataSize;        /* 0x30 */
    U16                     Reserved6;                  /* 0x32 */
    U16                     MaxBadBlockTableEntries;    /* 0x34 */
    U16                     Reserved7;                  /* 0x36 */
    U32                     IRNvsramVersion;            /* 0x38 */
} MPI2_CONFIG_PAGE_IOC_6, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IOC_6,
  Mpi2IOCPage6_t, MPI2_POINTER pMpi2IOCPage6_t;

#define MPI2_IOCPAGE6_PAGEVERSION                       (0x05)

/* defines for IOC Page 6 CapabilitiesFlags */
#define MPI2_IOCPAGE6_CAP_FLAGS_4K_SECTORS_SUPPORT      (0x00000020)
#define MPI2_IOCPAGE6_CAP_FLAGS_RAID10_SUPPORT          (0x00000010)
#define MPI2_IOCPAGE6_CAP_FLAGS_RAID1_SUPPORT           (0x00000008)
#define MPI2_IOCPAGE6_CAP_FLAGS_RAID1E_SUPPORT          (0x00000004)
#define MPI2_IOCPAGE6_CAP_FLAGS_RAID0_SUPPORT           (0x00000002)
#define
MPI2_IOCPAGE6_CAP_FLAGS_GLOBAL_HOT_SPARE (0x00000001) /* IOC Page 7 */ #define MPI2_IOCPAGE7_EVENTMASK_WORDS (4) typedef struct _MPI2_CONFIG_PAGE_IOC_7 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x04 */ U32 EventMasks[MPI2_IOCPAGE7_EVENTMASK_WORDS];/* 0x08 */ U16 SASBroadcastPrimitiveMasks; /* 0x18 */ U16 SASNotifyPrimitiveMasks; /* 0x1A */ U32 Reserved3; /* 0x1C */ } MPI2_CONFIG_PAGE_IOC_7, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IOC_7, Mpi2IOCPage7_t, MPI2_POINTER pMpi2IOCPage7_t; #define MPI2_IOCPAGE7_PAGEVERSION (0x02) /* IOC Page 8 */ typedef struct _MPI2_CONFIG_PAGE_IOC_8 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U8 NumDevsPerEnclosure; /* 0x04 */ U8 Reserved1; /* 0x05 */ U16 Reserved2; /* 0x06 */ U16 MaxPersistentEntries; /* 0x08 */ U16 MaxNumPhysicalMappedIDs; /* 0x0A */ U16 Flags; /* 0x0C */ U16 Reserved3; /* 0x0E */ U16 IRVolumeMappingFlags; /* 0x10 */ U16 Reserved4; /* 0x12 */ U32 Reserved5; /* 0x14 */ } MPI2_CONFIG_PAGE_IOC_8, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IOC_8, Mpi2IOCPage8_t, MPI2_POINTER pMpi2IOCPage8_t; #define MPI2_IOCPAGE8_PAGEVERSION (0x00) /* defines for IOC Page 8 Flags field */ #define MPI2_IOCPAGE8_FLAGS_DA_START_SLOT_1 (0x00000020) #define MPI2_IOCPAGE8_FLAGS_RESERVED_TARGETID_0 (0x00000010) #define MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE (0x0000000E) #define MPI2_IOCPAGE8_FLAGS_DEVICE_PERSISTENCE_MAPPING (0x00000000) #define MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING (0x00000002) #define MPI2_IOCPAGE8_FLAGS_DISABLE_PERSISTENT_MAPPING (0x00000001) #define MPI2_IOCPAGE8_FLAGS_ENABLE_PERSISTENT_MAPPING (0x00000000) /* defines for IOC Page 8 IRVolumeMappingFlags */ #define MPI2_IOCPAGE8_IRFLAGS_MASK_VOLUME_MAPPING_MODE (0x00000003) #define MPI2_IOCPAGE8_IRFLAGS_LOW_VOLUME_MAPPING (0x00000000) #define MPI2_IOCPAGE8_IRFLAGS_HIGH_VOLUME_MAPPING (0x00000001) /**************************************************************************** * BIOS Config Pages 
****************************************************************************/

/* BIOS Page 1 */

typedef struct _MPI2_CONFIG_PAGE_BIOS_1
{
    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
    U32                     BiosOptions;                /* 0x04 */
    U32                     IOCSettings;                /* 0x08 */
    U8                      SSUTimeout;                 /* 0x0C */
-   U8                      Reserved1;                  /* 0x0D */
+   U8                      MaxEnclosureLevel;          /* 0x0D */
    U16                     Reserved2;                  /* 0x0E */
    U32                     DeviceSettings;             /* 0x10 */
    U16                     NumberOfDevices;            /* 0x14 */
    U16                     UEFIVersion;                /* 0x16 */
    U16                     IOTimeoutBlockDevicesNonRM; /* 0x18 */
    U16                     IOTimeoutSequential;        /* 0x1A */
    U16                     IOTimeoutOther;             /* 0x1C */
    U16                     IOTimeoutBlockDevicesRM;    /* 0x1E */
} MPI2_CONFIG_PAGE_BIOS_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_BIOS_1,
  Mpi2BiosPage1_t, MPI2_POINTER pMpi2BiosPage1_t;

#define MPI2_BIOSPAGE1_PAGEVERSION                      (0x07)

/* values for BIOS Page 1 BiosOptions field */
#define MPI2_BIOSPAGE1_OPTIONS_BOOT_LIST_ADD_ALT_BOOT_DEVICE    (0x00008000)
#define MPI2_BIOSPAGE1_OPTIONS_ADVANCED_CONFIG                  (0x00004000)

#define MPI2_BIOSPAGE1_OPTIONS_PNS_MASK                         (0x00003800)
#define MPI2_BIOSPAGE1_OPTIONS_PNS_PBDHL                        (0x00000000)
#define MPI2_BIOSPAGE1_OPTIONS_PNS_ENCSLOSURE                   (0x00000800)
#define MPI2_BIOSPAGE1_OPTIONS_PNS_LWWID                        (0x00001000)
#define MPI2_BIOSPAGE1_OPTIONS_PNS_PSENS                        (0x00001800)
#define MPI2_BIOSPAGE1_OPTIONS_PNS_ESPHY                        (0x00002000)

#define MPI2_BIOSPAGE1_OPTIONS_X86_DISABLE_BIOS                 (0x00000400)

#define MPI2_BIOSPAGE1_OPTIONS_MASK_REGISTRATION_UEFI_BSD       (0x00000300)
#define MPI2_BIOSPAGE1_OPTIONS_USE_BIT0_REGISTRATION_UEFI_BSD   (0x00000000)
#define MPI2_BIOSPAGE1_OPTIONS_FULL_REGISTRATION_UEFI_BSD       (0x00000100)
#define MPI2_BIOSPAGE1_OPTIONS_ADAPTER_REGISTRATION_UEFI_BSD    (0x00000200)
#define MPI2_BIOSPAGE1_OPTIONS_DISABLE_REGISTRATION_UEFI_BSD    (0x00000300)

#define MPI2_BIOSPAGE1_OPTIONS_MASK_OEM_ID                      (0x000000F0)
#define MPI2_BIOSPAGE1_OPTIONS_LSI_OEM_ID                       (0x00000000)

#define MPI2_BIOSPAGE1_OPTIONS_MASK_UEFI_HII_REGISTRATION       (0x00000006)
#define MPI2_BIOSPAGE1_OPTIONS_ENABLE_UEFI_HII                  (0x00000000)
#define MPI2_BIOSPAGE1_OPTIONS_DISABLE_UEFI_HII                 (0x00000002)
#define
MPI2_BIOSPAGE1_OPTIONS_VERSION_CHECK_UEFI_HII (0x00000004) #define MPI2_BIOSPAGE1_OPTIONS_DISABLE_BIOS (0x00000001) /* values for BIOS Page 1 IOCSettings field */ #define MPI2_BIOSPAGE1_IOCSET_MASK_BOOT_PREFERENCE (0x00030000) #define MPI2_BIOSPAGE1_IOCSET_ENCLOSURE_SLOT_BOOT (0x00000000) #define MPI2_BIOSPAGE1_IOCSET_SAS_ADDRESS_BOOT (0x00010000) #define MPI2_BIOSPAGE1_IOCSET_MASK_RM_SETTING (0x000000C0) #define MPI2_BIOSPAGE1_IOCSET_NONE_RM_SETTING (0x00000000) #define MPI2_BIOSPAGE1_IOCSET_BOOT_RM_SETTING (0x00000040) #define MPI2_BIOSPAGE1_IOCSET_MEDIA_RM_SETTING (0x00000080) #define MPI2_BIOSPAGE1_IOCSET_MASK_ADAPTER_SUPPORT (0x00000030) #define MPI2_BIOSPAGE1_IOCSET_NO_SUPPORT (0x00000000) #define MPI2_BIOSPAGE1_IOCSET_BIOS_SUPPORT (0x00000010) #define MPI2_BIOSPAGE1_IOCSET_OS_SUPPORT (0x00000020) #define MPI2_BIOSPAGE1_IOCSET_ALL_SUPPORT (0x00000030) #define MPI2_BIOSPAGE1_IOCSET_ALTERNATE_CHS (0x00000008) /* values for BIOS Page 1 DeviceSettings field */ #define MPI2_BIOSPAGE1_DEVSET_DISABLE_SMART_POLLING (0x00000010) #define MPI2_BIOSPAGE1_DEVSET_DISABLE_SEQ_LUN (0x00000008) #define MPI2_BIOSPAGE1_DEVSET_DISABLE_RM_LUN (0x00000004) #define MPI2_BIOSPAGE1_DEVSET_DISABLE_NON_RM_LUN (0x00000002) #define MPI2_BIOSPAGE1_DEVSET_DISABLE_OTHER_LUN (0x00000001) /* defines for BIOS Page 1 UEFIVersion field */ #define MPI2_BIOSPAGE1_UEFI_VER_MAJOR_MASK (0xFF00) #define MPI2_BIOSPAGE1_UEFI_VER_MAJOR_SHIFT (8) #define MPI2_BIOSPAGE1_UEFI_VER_MINOR_MASK (0x00FF) #define MPI2_BIOSPAGE1_UEFI_VER_MINOR_SHIFT (0) /* BIOS Page 2 */ typedef struct _MPI2_BOOT_DEVICE_ADAPTER_ORDER { U32 Reserved1; /* 0x00 */ U32 Reserved2; /* 0x04 */ U32 Reserved3; /* 0x08 */ U32 Reserved4; /* 0x0C */ U32 Reserved5; /* 0x10 */ U32 Reserved6; /* 0x14 */ } MPI2_BOOT_DEVICE_ADAPTER_ORDER, MPI2_POINTER PTR_MPI2_BOOT_DEVICE_ADAPTER_ORDER, Mpi2BootDeviceAdapterOrder_t, MPI2_POINTER pMpi2BootDeviceAdapterOrder_t; typedef struct _MPI2_BOOT_DEVICE_SAS_WWID { U64 SASAddress; /* 0x00 */ U8 LUN[8]; /* 0x08 
*/ U32 Reserved1; /* 0x10 */ U32 Reserved2; /* 0x14 */ } MPI2_BOOT_DEVICE_SAS_WWID, MPI2_POINTER PTR_MPI2_BOOT_DEVICE_SAS_WWID, Mpi2BootDeviceSasWwid_t, MPI2_POINTER pMpi2BootDeviceSasWwid_t; typedef struct _MPI2_BOOT_DEVICE_ENCLOSURE_SLOT { U64 EnclosureLogicalID; /* 0x00 */ U32 Reserved1; /* 0x08 */ U32 Reserved2; /* 0x0C */ U16 SlotNumber; /* 0x10 */ U16 Reserved3; /* 0x12 */ U32 Reserved4; /* 0x14 */ } MPI2_BOOT_DEVICE_ENCLOSURE_SLOT, MPI2_POINTER PTR_MPI2_BOOT_DEVICE_ENCLOSURE_SLOT, Mpi2BootDeviceEnclosureSlot_t, MPI2_POINTER pMpi2BootDeviceEnclosureSlot_t; typedef struct _MPI2_BOOT_DEVICE_DEVICE_NAME { U64 DeviceName; /* 0x00 */ U8 LUN[8]; /* 0x08 */ U32 Reserved1; /* 0x10 */ U32 Reserved2; /* 0x14 */ } MPI2_BOOT_DEVICE_DEVICE_NAME, MPI2_POINTER PTR_MPI2_BOOT_DEVICE_DEVICE_NAME, Mpi2BootDeviceDeviceName_t, MPI2_POINTER pMpi2BootDeviceDeviceName_t; typedef union _MPI2_MPI2_BIOSPAGE2_BOOT_DEVICE { MPI2_BOOT_DEVICE_ADAPTER_ORDER AdapterOrder; MPI2_BOOT_DEVICE_SAS_WWID SasWwid; MPI2_BOOT_DEVICE_ENCLOSURE_SLOT EnclosureSlot; MPI2_BOOT_DEVICE_DEVICE_NAME DeviceName; } MPI2_BIOSPAGE2_BOOT_DEVICE, MPI2_POINTER PTR_MPI2_BIOSPAGE2_BOOT_DEVICE, Mpi2BiosPage2BootDevice_t, MPI2_POINTER pMpi2BiosPage2BootDevice_t; typedef struct _MPI2_CONFIG_PAGE_BIOS_2 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x04 */ U32 Reserved2; /* 0x08 */ U32 Reserved3; /* 0x0C */ U32 Reserved4; /* 0x10 */ U32 Reserved5; /* 0x14 */ U32 Reserved6; /* 0x18 */ U8 ReqBootDeviceForm; /* 0x1C */ U8 Reserved7; /* 0x1D */ U16 Reserved8; /* 0x1E */ MPI2_BIOSPAGE2_BOOT_DEVICE RequestedBootDevice; /* 0x20 */ U8 ReqAltBootDeviceForm; /* 0x38 */ U8 Reserved9; /* 0x39 */ U16 Reserved10; /* 0x3A */ MPI2_BIOSPAGE2_BOOT_DEVICE RequestedAltBootDevice; /* 0x3C */ U8 CurrentBootDeviceForm; /* 0x58 */ U8 Reserved11; /* 0x59 */ U16 Reserved12; /* 0x5A */ MPI2_BIOSPAGE2_BOOT_DEVICE CurrentBootDevice; /* 0x58 */ } MPI2_CONFIG_PAGE_BIOS_2, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_BIOS_2, Mpi2BiosPage2_t, 
MPI2_POINTER pMpi2BiosPage2_t; #define MPI2_BIOSPAGE2_PAGEVERSION (0x04) /* values for BIOS Page 2 BootDeviceForm fields */ #define MPI2_BIOSPAGE2_FORM_MASK (0x0F) #define MPI2_BIOSPAGE2_FORM_NO_DEVICE_SPECIFIED (0x00) #define MPI2_BIOSPAGE2_FORM_SAS_WWID (0x05) #define MPI2_BIOSPAGE2_FORM_ENCLOSURE_SLOT (0x06) #define MPI2_BIOSPAGE2_FORM_DEVICE_NAME (0x07) /* BIOS Page 3 */ #define MPI2_BIOSPAGE3_NUM_ADAPTER (4) typedef struct _MPI2_ADAPTER_INFO { U8 PciBusNumber; /* 0x00 */ U8 PciDeviceAndFunctionNumber; /* 0x01 */ U16 AdapterFlags; /* 0x02 */ } MPI2_ADAPTER_INFO, MPI2_POINTER PTR_MPI2_ADAPTER_INFO, Mpi2AdapterInfo_t, MPI2_POINTER pMpi2AdapterInfo_t; #define MPI2_ADAPTER_INFO_FLAGS_EMBEDDED (0x0001) #define MPI2_ADAPTER_INFO_FLAGS_INIT_STATUS (0x0002) typedef struct _MPI2_ADAPTER_ORDER_AUX { U64 WWID; /* 0x00 */ U32 Reserved1; /* 0x08 */ U32 Reserved2; /* 0x0C */ } MPI2_ADAPTER_ORDER_AUX, MPI2_POINTER PTR_MPI2_ADAPTER_ORDER_AUX, Mpi2AdapterOrderAux_t, MPI2_POINTER pMpi2AdapterOrderAux_t; typedef struct _MPI2_CONFIG_PAGE_BIOS_3 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U32 GlobalFlags; /* 0x04 */ U32 BiosVersion; /* 0x08 */ MPI2_ADAPTER_INFO AdapterOrder[MPI2_BIOSPAGE3_NUM_ADAPTER]; /* 0x0C */ U32 Reserved1; /* 0x1C */ MPI2_ADAPTER_ORDER_AUX AdapterOrderAux[MPI2_BIOSPAGE3_NUM_ADAPTER]; /* 0x20 */ /* MPI v2.5 and newer */ } MPI2_CONFIG_PAGE_BIOS_3, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_BIOS_3, Mpi2BiosPage3_t, MPI2_POINTER pMpi2BiosPage3_t; #define MPI2_BIOSPAGE3_PAGEVERSION (0x01) /* values for BIOS Page 3 GlobalFlags */ #define MPI2_BIOSPAGE3_FLAGS_PAUSE_ON_ERROR (0x00000002) #define MPI2_BIOSPAGE3_FLAGS_VERBOSE_ENABLE (0x00000004) #define MPI2_BIOSPAGE3_FLAGS_HOOK_INT_40_DISABLE (0x00000010) #define MPI2_BIOSPAGE3_FLAGS_DEV_LIST_DISPLAY_MASK (0x000000E0) #define MPI2_BIOSPAGE3_FLAGS_INSTALLED_DEV_DISPLAY (0x00000000) #define MPI2_BIOSPAGE3_FLAGS_ADAPTER_DISPLAY (0x00000020) #define MPI2_BIOSPAGE3_FLAGS_ADAPTER_DEV_DISPLAY (0x00000040) /* BIOS Page 4 */ /* * Host 
code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumPhys at runtime. */ #ifndef MPI2_BIOS_PAGE_4_PHY_ENTRIES #define MPI2_BIOS_PAGE_4_PHY_ENTRIES (1) #endif typedef struct _MPI2_BIOS4_ENTRY { U64 ReassignmentWWID; /* 0x00 */ U64 ReassignmentDeviceName; /* 0x08 */ } MPI2_BIOS4_ENTRY, MPI2_POINTER PTR_MPI2_BIOS4_ENTRY, Mpi2MBios4Entry_t, MPI2_POINTER pMpi2Bios4Entry_t; typedef struct _MPI2_CONFIG_PAGE_BIOS_4 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U8 NumPhys; /* 0x04 */ U8 Reserved1; /* 0x05 */ U16 Reserved2; /* 0x06 */ MPI2_BIOS4_ENTRY Phy[MPI2_BIOS_PAGE_4_PHY_ENTRIES]; /* 0x08 */ } MPI2_CONFIG_PAGE_BIOS_4, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_BIOS_4, Mpi2BiosPage4_t, MPI2_POINTER pMpi2BiosPage4_t; #define MPI2_BIOSPAGE4_PAGEVERSION (0x01) /**************************************************************************** * RAID Volume Config Pages ****************************************************************************/ /* RAID Volume Page 0 */ typedef struct _MPI2_RAIDVOL0_PHYS_DISK { U8 RAIDSetNum; /* 0x00 */ U8 PhysDiskMap; /* 0x01 */ U8 PhysDiskNum; /* 0x02 */ U8 Reserved; /* 0x03 */ } MPI2_RAIDVOL0_PHYS_DISK, MPI2_POINTER PTR_MPI2_RAIDVOL0_PHYS_DISK, Mpi2RaidVol0PhysDisk_t, MPI2_POINTER pMpi2RaidVol0PhysDisk_t; /* defines for the PhysDiskMap field */ #define MPI2_RAIDVOL0_PHYSDISK_PRIMARY (0x01) #define MPI2_RAIDVOL0_PHYSDISK_SECONDARY (0x02) typedef struct _MPI2_RAIDVOL0_SETTINGS { U16 Settings; /* 0x00 */ U8 HotSparePool; /* 0x01 */ U8 Reserved; /* 0x02 */ } MPI2_RAIDVOL0_SETTINGS, MPI2_POINTER PTR_MPI2_RAIDVOL0_SETTINGS, Mpi2RaidVol0Settings_t, MPI2_POINTER pMpi2RaidVol0Settings_t; /* RAID Volume Page 0 HotSparePool defines, also used in RAID Physical Disk */ #define MPI2_RAID_HOT_SPARE_POOL_0 (0x01) #define MPI2_RAID_HOT_SPARE_POOL_1 (0x02) #define MPI2_RAID_HOT_SPARE_POOL_2 (0x04) #define MPI2_RAID_HOT_SPARE_POOL_3 (0x08) #define MPI2_RAID_HOT_SPARE_POOL_4 (0x10) #define 
MPI2_RAID_HOT_SPARE_POOL_5 (0x20) #define MPI2_RAID_HOT_SPARE_POOL_6 (0x40) #define MPI2_RAID_HOT_SPARE_POOL_7 (0x80) /* RAID Volume Page 0 VolumeSettings defines */ #define MPI2_RAIDVOL0_SETTING_USE_PRODUCT_ID_SUFFIX (0x0008) #define MPI2_RAIDVOL0_SETTING_AUTO_CONFIG_HSWAP_DISABLE (0x0004) #define MPI2_RAIDVOL0_SETTING_MASK_WRITE_CACHING (0x0003) #define MPI2_RAIDVOL0_SETTING_UNCHANGED (0x0000) #define MPI2_RAIDVOL0_SETTING_DISABLE_WRITE_CACHING (0x0001) #define MPI2_RAIDVOL0_SETTING_ENABLE_WRITE_CACHING (0x0002) /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumPhysDisks at runtime. */ #ifndef MPI2_RAID_VOL_PAGE_0_PHYSDISK_MAX #define MPI2_RAID_VOL_PAGE_0_PHYSDISK_MAX (1) #endif typedef struct _MPI2_CONFIG_PAGE_RAID_VOL_0 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U16 DevHandle; /* 0x04 */ U8 VolumeState; /* 0x06 */ U8 VolumeType; /* 0x07 */ U32 VolumeStatusFlags; /* 0x08 */ MPI2_RAIDVOL0_SETTINGS VolumeSettings; /* 0x0C */ U64 MaxLBA; /* 0x10 */ U32 StripeSize; /* 0x18 */ U16 BlockSize; /* 0x1C */ U16 Reserved1; /* 0x1E */ U8 SupportedPhysDisks; /* 0x20 */ U8 ResyncRate; /* 0x21 */ U16 DataScrubDuration; /* 0x22 */ U8 NumPhysDisks; /* 0x24 */ U8 Reserved2; /* 0x25 */ U8 Reserved3; /* 0x26 */ U8 InactiveStatus; /* 0x27 */ MPI2_RAIDVOL0_PHYS_DISK PhysDisk[MPI2_RAID_VOL_PAGE_0_PHYSDISK_MAX]; /* 0x28 */ } MPI2_CONFIG_PAGE_RAID_VOL_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_RAID_VOL_0, Mpi2RaidVolPage0_t, MPI2_POINTER pMpi2RaidVolPage0_t; #define MPI2_RAIDVOLPAGE0_PAGEVERSION (0x0A) /* values for RAID VolumeState */ #define MPI2_RAID_VOL_STATE_MISSING (0x00) #define MPI2_RAID_VOL_STATE_FAILED (0x01) #define MPI2_RAID_VOL_STATE_INITIALIZING (0x02) #define MPI2_RAID_VOL_STATE_ONLINE (0x03) #define MPI2_RAID_VOL_STATE_DEGRADED (0x04) #define MPI2_RAID_VOL_STATE_OPTIMAL (0x05) /* values for RAID VolumeType */ #define MPI2_RAID_VOL_TYPE_RAID0 (0x00) #define MPI2_RAID_VOL_TYPE_RAID1E (0x01) #define 
MPI2_RAID_VOL_TYPE_RAID1 (0x02) #define MPI2_RAID_VOL_TYPE_RAID10 (0x05) #define MPI2_RAID_VOL_TYPE_UNKNOWN (0xFF) /* values for RAID Volume Page 0 VolumeStatusFlags field */ #define MPI2_RAIDVOL0_STATUS_FLAG_PENDING_RESYNC (0x02000000) #define MPI2_RAIDVOL0_STATUS_FLAG_BACKG_INIT_PENDING (0x01000000) #define MPI2_RAIDVOL0_STATUS_FLAG_MDC_PENDING (0x00800000) #define MPI2_RAIDVOL0_STATUS_FLAG_USER_CONSIST_PENDING (0x00400000) #define MPI2_RAIDVOL0_STATUS_FLAG_MAKE_DATA_CONSISTENT (0x00200000) #define MPI2_RAIDVOL0_STATUS_FLAG_DATA_SCRUB (0x00100000) #define MPI2_RAIDVOL0_STATUS_FLAG_CONSISTENCY_CHECK (0x00080000) #define MPI2_RAIDVOL0_STATUS_FLAG_CAPACITY_EXPANSION (0x00040000) #define MPI2_RAIDVOL0_STATUS_FLAG_BACKGROUND_INIT (0x00020000) #define MPI2_RAIDVOL0_STATUS_FLAG_RESYNC_IN_PROGRESS (0x00010000) #define MPI2_RAIDVOL0_STATUS_FLAG_VOL_NOT_CONSISTENT (0x00000080) #define MPI2_RAIDVOL0_STATUS_FLAG_OCE_ALLOWED (0x00000040) #define MPI2_RAIDVOL0_STATUS_FLAG_BGI_COMPLETE (0x00000020) #define MPI2_RAIDVOL0_STATUS_FLAG_1E_OFFSET_MIRROR (0x00000000) #define MPI2_RAIDVOL0_STATUS_FLAG_1E_ADJACENT_MIRROR (0x00000010) #define MPI2_RAIDVOL0_STATUS_FLAG_BAD_BLOCK_TABLE_FULL (0x00000008) #define MPI2_RAIDVOL0_STATUS_FLAG_VOLUME_INACTIVE (0x00000004) #define MPI2_RAIDVOL0_STATUS_FLAG_QUIESCED (0x00000002) #define MPI2_RAIDVOL0_STATUS_FLAG_ENABLED (0x00000001) /* values for RAID Volume Page 0 SupportedPhysDisks field */ #define MPI2_RAIDVOL0_SUPPORT_SOLID_STATE_DISKS (0x08) #define MPI2_RAIDVOL0_SUPPORT_HARD_DISKS (0x04) #define MPI2_RAIDVOL0_SUPPORT_SAS_PROTOCOL (0x02) #define MPI2_RAIDVOL0_SUPPORT_SATA_PROTOCOL (0x01) /* values for RAID Volume Page 0 InactiveStatus field */ #define MPI2_RAIDVOLPAGE0_UNKNOWN_INACTIVE (0x00) #define MPI2_RAIDVOLPAGE0_STALE_METADATA_INACTIVE (0x01) #define MPI2_RAIDVOLPAGE0_FOREIGN_VOLUME_INACTIVE (0x02) #define MPI2_RAIDVOLPAGE0_INSUFFICIENT_RESOURCE_INACTIVE (0x03) #define MPI2_RAIDVOLPAGE0_CLONE_VOLUME_INACTIVE (0x04) #define 
MPI2_RAIDVOLPAGE0_INSUFFICIENT_METADATA_INACTIVE (0x05) #define MPI2_RAIDVOLPAGE0_PREVIOUSLY_DELETED (0x06) /* RAID Volume Page 1 */ typedef struct _MPI2_CONFIG_PAGE_RAID_VOL_1 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U16 DevHandle; /* 0x04 */ U16 Reserved0; /* 0x06 */ U8 GUID[24]; /* 0x08 */ U8 Name[16]; /* 0x20 */ U64 WWID; /* 0x30 */ U32 Reserved1; /* 0x38 */ U32 Reserved2; /* 0x3C */ } MPI2_CONFIG_PAGE_RAID_VOL_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_RAID_VOL_1, Mpi2RaidVolPage1_t, MPI2_POINTER pMpi2RaidVolPage1_t; #define MPI2_RAIDVOLPAGE1_PAGEVERSION (0x03) /**************************************************************************** * RAID Physical Disk Config Pages ****************************************************************************/ /* RAID Physical Disk Page 0 */ typedef struct _MPI2_RAIDPHYSDISK0_SETTINGS { U16 Reserved1; /* 0x00 */ U8 HotSparePool; /* 0x02 */ U8 Reserved2; /* 0x03 */ } MPI2_RAIDPHYSDISK0_SETTINGS, MPI2_POINTER PTR_MPI2_RAIDPHYSDISK0_SETTINGS, Mpi2RaidPhysDisk0Settings_t, MPI2_POINTER pMpi2RaidPhysDisk0Settings_t; /* use MPI2_RAID_HOT_SPARE_POOL_ defines for the HotSparePool field */ typedef struct _MPI2_RAIDPHYSDISK0_INQUIRY_DATA { U8 VendorID[8]; /* 0x00 */ U8 ProductID[16]; /* 0x08 */ U8 ProductRevLevel[4]; /* 0x18 */ U8 SerialNum[32]; /* 0x1C */ } MPI2_RAIDPHYSDISK0_INQUIRY_DATA, MPI2_POINTER PTR_MPI2_RAIDPHYSDISK0_INQUIRY_DATA, Mpi2RaidPhysDisk0InquiryData_t, MPI2_POINTER pMpi2RaidPhysDisk0InquiryData_t; typedef struct _MPI2_CONFIG_PAGE_RD_PDISK_0 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U16 DevHandle; /* 0x04 */ U8 Reserved1; /* 0x06 */ U8 PhysDiskNum; /* 0x07 */ MPI2_RAIDPHYSDISK0_SETTINGS PhysDiskSettings; /* 0x08 */ U32 Reserved2; /* 0x0C */ MPI2_RAIDPHYSDISK0_INQUIRY_DATA InquiryData; /* 0x10 */ U32 Reserved3; /* 0x4C */ U8 PhysDiskState; /* 0x50 */ U8 OfflineReason; /* 0x51 */ U8 IncompatibleReason; /* 0x52 */ U8 PhysDiskAttributes; /* 0x53 */ U32 PhysDiskStatusFlags; /* 0x54 */ U64 DeviceMaxLBA; /* 0x58 */ U64 
HostMaxLBA; /* 0x60 */ U64 CoercedMaxLBA; /* 0x68 */ U16 BlockSize; /* 0x70 */ U16 Reserved5; /* 0x72 */ U32 Reserved6; /* 0x74 */ } MPI2_CONFIG_PAGE_RD_PDISK_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_RD_PDISK_0, Mpi2RaidPhysDiskPage0_t, MPI2_POINTER pMpi2RaidPhysDiskPage0_t; #define MPI2_RAIDPHYSDISKPAGE0_PAGEVERSION (0x05) /* PhysDiskState defines */ #define MPI2_RAID_PD_STATE_NOT_CONFIGURED (0x00) #define MPI2_RAID_PD_STATE_NOT_COMPATIBLE (0x01) #define MPI2_RAID_PD_STATE_OFFLINE (0x02) #define MPI2_RAID_PD_STATE_ONLINE (0x03) #define MPI2_RAID_PD_STATE_HOT_SPARE (0x04) #define MPI2_RAID_PD_STATE_DEGRADED (0x05) #define MPI2_RAID_PD_STATE_REBUILDING (0x06) #define MPI2_RAID_PD_STATE_OPTIMAL (0x07) /* OfflineReason defines */ #define MPI2_PHYSDISK0_ONLINE (0x00) #define MPI2_PHYSDISK0_OFFLINE_MISSING (0x01) #define MPI2_PHYSDISK0_OFFLINE_FAILED (0x03) #define MPI2_PHYSDISK0_OFFLINE_INITIALIZING (0x04) #define MPI2_PHYSDISK0_OFFLINE_REQUESTED (0x05) #define MPI2_PHYSDISK0_OFFLINE_FAILED_REQUESTED (0x06) #define MPI2_PHYSDISK0_OFFLINE_OTHER (0xFF) /* IncompatibleReason defines */ #define MPI2_PHYSDISK0_COMPATIBLE (0x00) #define MPI2_PHYSDISK0_INCOMPATIBLE_PROTOCOL (0x01) #define MPI2_PHYSDISK0_INCOMPATIBLE_BLOCKSIZE (0x02) #define MPI2_PHYSDISK0_INCOMPATIBLE_MAX_LBA (0x03) #define MPI2_PHYSDISK0_INCOMPATIBLE_SATA_EXTENDED_CMD (0x04) #define MPI2_PHYSDISK0_INCOMPATIBLE_REMOVEABLE_MEDIA (0x05) #define MPI2_PHYSDISK0_INCOMPATIBLE_MEDIA_TYPE (0x06) #define MPI2_PHYSDISK0_INCOMPATIBLE_UNKNOWN (0xFF) /* PhysDiskAttributes defines */ #define MPI2_PHYSDISK0_ATTRIB_MEDIA_MASK (0x0C) #define MPI2_PHYSDISK0_ATTRIB_SOLID_STATE_DRIVE (0x08) #define MPI2_PHYSDISK0_ATTRIB_HARD_DISK_DRIVE (0x04) #define MPI2_PHYSDISK0_ATTRIB_PROTOCOL_MASK (0x03) #define MPI2_PHYSDISK0_ATTRIB_SAS_PROTOCOL (0x02) #define MPI2_PHYSDISK0_ATTRIB_SATA_PROTOCOL (0x01) /* PhysDiskStatusFlags defines */ #define MPI2_PHYSDISK0_STATUS_FLAG_NOT_CERTIFIED (0x00000040) #define MPI2_PHYSDISK0_STATUS_FLAG_OCE_TARGET 
(0x00000020) #define MPI2_PHYSDISK0_STATUS_FLAG_WRITE_CACHE_ENABLED (0x00000010) #define MPI2_PHYSDISK0_STATUS_FLAG_OPTIMAL_PREVIOUS (0x00000000) #define MPI2_PHYSDISK0_STATUS_FLAG_NOT_OPTIMAL_PREVIOUS (0x00000008) #define MPI2_PHYSDISK0_STATUS_FLAG_INACTIVE_VOLUME (0x00000004) #define MPI2_PHYSDISK0_STATUS_FLAG_QUIESCED (0x00000002) #define MPI2_PHYSDISK0_STATUS_FLAG_OUT_OF_SYNC (0x00000001) /* RAID Physical Disk Page 1 */ /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumPhysDiskPaths at runtime. */ #ifndef MPI2_RAID_PHYS_DISK1_PATH_MAX #define MPI2_RAID_PHYS_DISK1_PATH_MAX (1) #endif typedef struct _MPI2_RAIDPHYSDISK1_PATH { U16 DevHandle; /* 0x00 */ U16 Reserved1; /* 0x02 */ U64 WWID; /* 0x04 */ U64 OwnerWWID; /* 0x0C */ U8 OwnerIdentifier; /* 0x14 */ U8 Reserved2; /* 0x15 */ U16 Flags; /* 0x16 */ } MPI2_RAIDPHYSDISK1_PATH, MPI2_POINTER PTR_MPI2_RAIDPHYSDISK1_PATH, Mpi2RaidPhysDisk1Path_t, MPI2_POINTER pMpi2RaidPhysDisk1Path_t; /* RAID Physical Disk Page 1 Physical Disk Path Flags field defines */ #define MPI2_RAID_PHYSDISK1_FLAG_PRIMARY (0x0004) #define MPI2_RAID_PHYSDISK1_FLAG_BROKEN (0x0002) #define MPI2_RAID_PHYSDISK1_FLAG_INVALID (0x0001) typedef struct _MPI2_CONFIG_PAGE_RD_PDISK_1 { MPI2_CONFIG_PAGE_HEADER Header; /* 0x00 */ U8 NumPhysDiskPaths; /* 0x04 */ U8 PhysDiskNum; /* 0x05 */ U16 Reserved1; /* 0x06 */ U32 Reserved2; /* 0x08 */ MPI2_RAIDPHYSDISK1_PATH PhysicalDiskPath[MPI2_RAID_PHYS_DISK1_PATH_MAX];/* 0x0C */ } MPI2_CONFIG_PAGE_RD_PDISK_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_RD_PDISK_1, Mpi2RaidPhysDiskPage1_t, MPI2_POINTER pMpi2RaidPhysDiskPage1_t; #define MPI2_RAIDPHYSDISKPAGE1_PAGEVERSION (0x02) /**************************************************************************** * values for fields used by several types of SAS Config Pages ****************************************************************************/ /* values for NegotiatedLinkRates fields */ #define 
MPI2_SAS_NEG_LINK_RATE_MASK_LOGICAL (0xF0) #define MPI2_SAS_NEG_LINK_RATE_SHIFT_LOGICAL (4) #define MPI2_SAS_NEG_LINK_RATE_MASK_PHYSICAL (0x0F) /* link rates used for Negotiated Physical and Logical Link Rate */ #define MPI2_SAS_NEG_LINK_RATE_UNKNOWN_LINK_RATE (0x00) #define MPI2_SAS_NEG_LINK_RATE_PHY_DISABLED (0x01) #define MPI2_SAS_NEG_LINK_RATE_NEGOTIATION_FAILED (0x02) #define MPI2_SAS_NEG_LINK_RATE_SATA_OOB_COMPLETE (0x03) #define MPI2_SAS_NEG_LINK_RATE_PORT_SELECTOR (0x04) #define MPI2_SAS_NEG_LINK_RATE_SMP_RESET_IN_PROGRESS (0x05) #define MPI2_SAS_NEG_LINK_RATE_UNSUPPORTED_PHY (0x06) #define MPI2_SAS_NEG_LINK_RATE_1_5 (0x08) #define MPI2_SAS_NEG_LINK_RATE_3_0 (0x09) #define MPI2_SAS_NEG_LINK_RATE_6_0 (0x0A) #define MPI25_SAS_NEG_LINK_RATE_12_0 (0x0B) #define MPI26_SAS_NEG_LINK_RATE_22_5 (0x0C) /* values for AttachedPhyInfo fields */ #define MPI2_SAS_APHYINFO_INSIDE_ZPSDS_PERSISTENT (0x00000040) #define MPI2_SAS_APHYINFO_REQUESTED_INSIDE_ZPSDS (0x00000020) #define MPI2_SAS_APHYINFO_BREAK_REPLY_CAPABLE (0x00000010) #define MPI2_SAS_APHYINFO_REASON_MASK (0x0000000F) #define MPI2_SAS_APHYINFO_REASON_UNKNOWN (0x00000000) #define MPI2_SAS_APHYINFO_REASON_POWER_ON (0x00000001) #define MPI2_SAS_APHYINFO_REASON_HARD_RESET (0x00000002) #define MPI2_SAS_APHYINFO_REASON_SMP_PHY_CONTROL (0x00000003) #define MPI2_SAS_APHYINFO_REASON_LOSS_OF_SYNC (0x00000004) #define MPI2_SAS_APHYINFO_REASON_MULTIPLEXING_SEQ (0x00000005) #define MPI2_SAS_APHYINFO_REASON_IT_NEXUS_LOSS_TIMER (0x00000006) #define MPI2_SAS_APHYINFO_REASON_BREAK_TIMEOUT (0x00000007) #define MPI2_SAS_APHYINFO_REASON_PHY_TEST_STOPPED (0x00000008) /* values for PhyInfo fields */ #define MPI2_SAS_PHYINFO_PHY_VACANT (0x80000000) #define MPI2_SAS_PHYINFO_PHY_POWER_CONDITION_MASK (0x18000000) #define MPI2_SAS_PHYINFO_SHIFT_PHY_POWER_CONDITION (27) #define MPI2_SAS_PHYINFO_PHY_POWER_ACTIVE (0x00000000) #define MPI2_SAS_PHYINFO_PHY_POWER_PARTIAL (0x08000000) #define MPI2_SAS_PHYINFO_PHY_POWER_SLUMBER (0x10000000) 
#define MPI2_SAS_PHYINFO_CHANGED_REQ_INSIDE_ZPSDS (0x04000000) #define MPI2_SAS_PHYINFO_INSIDE_ZPSDS_PERSISTENT (0x02000000) #define MPI2_SAS_PHYINFO_REQ_INSIDE_ZPSDS (0x01000000) #define MPI2_SAS_PHYINFO_ZONE_GROUP_PERSISTENT (0x00400000) #define MPI2_SAS_PHYINFO_INSIDE_ZPSDS (0x00200000) #define MPI2_SAS_PHYINFO_ZONING_ENABLED (0x00100000) #define MPI2_SAS_PHYINFO_REASON_MASK (0x000F0000) #define MPI2_SAS_PHYINFO_REASON_UNKNOWN (0x00000000) #define MPI2_SAS_PHYINFO_REASON_POWER_ON (0x00010000) #define MPI2_SAS_PHYINFO_REASON_HARD_RESET (0x00020000) #define MPI2_SAS_PHYINFO_REASON_SMP_PHY_CONTROL (0x00030000) #define MPI2_SAS_PHYINFO_REASON_LOSS_OF_SYNC (0x00040000) #define MPI2_SAS_PHYINFO_REASON_MULTIPLEXING_SEQ (0x00050000) #define MPI2_SAS_PHYINFO_REASON_IT_NEXUS_LOSS_TIMER (0x00060000) #define MPI2_SAS_PHYINFO_REASON_BREAK_TIMEOUT (0x00070000) #define MPI2_SAS_PHYINFO_REASON_PHY_TEST_STOPPED (0x00080000) #define MPI2_SAS_PHYINFO_MULTIPLEXING_SUPPORTED (0x00008000) #define MPI2_SAS_PHYINFO_SATA_PORT_ACTIVE (0x00004000) #define MPI2_SAS_PHYINFO_SATA_PORT_SELECTOR_PRESENT (0x00002000) #define MPI2_SAS_PHYINFO_VIRTUAL_PHY (0x00001000) #define MPI2_SAS_PHYINFO_MASK_PARTIAL_PATHWAY_TIME (0x00000F00) #define MPI2_SAS_PHYINFO_SHIFT_PARTIAL_PATHWAY_TIME (8) #define MPI2_SAS_PHYINFO_MASK_ROUTING_ATTRIBUTE (0x000000F0) #define MPI2_SAS_PHYINFO_DIRECT_ROUTING (0x00000000) #define MPI2_SAS_PHYINFO_SUBTRACTIVE_ROUTING (0x00000010) #define MPI2_SAS_PHYINFO_TABLE_ROUTING (0x00000020) /* values for SAS ProgrammedLinkRate fields */ #define MPI2_SAS_PRATE_MAX_RATE_MASK (0xF0) #define MPI2_SAS_PRATE_MAX_RATE_NOT_PROGRAMMABLE (0x00) #define MPI2_SAS_PRATE_MAX_RATE_1_5 (0x80) #define MPI2_SAS_PRATE_MAX_RATE_3_0 (0x90) #define MPI2_SAS_PRATE_MAX_RATE_6_0 (0xA0) #define MPI25_SAS_PRATE_MAX_RATE_12_0 (0xB0) #define MPI26_SAS_PRATE_MAX_RATE_22_5 (0xC0) #define MPI2_SAS_PRATE_MIN_RATE_MASK (0x0F) #define MPI2_SAS_PRATE_MIN_RATE_NOT_PROGRAMMABLE (0x00) #define 
MPI2_SAS_PRATE_MIN_RATE_1_5 (0x08) #define MPI2_SAS_PRATE_MIN_RATE_3_0 (0x09) #define MPI2_SAS_PRATE_MIN_RATE_6_0 (0x0A) #define MPI25_SAS_PRATE_MIN_RATE_12_0 (0x0B) #define MPI26_SAS_PRATE_MIN_RATE_22_5 (0x0C) /* values for SAS HwLinkRate fields */ #define MPI2_SAS_HWRATE_MAX_RATE_MASK (0xF0) #define MPI2_SAS_HWRATE_MAX_RATE_1_5 (0x80) #define MPI2_SAS_HWRATE_MAX_RATE_3_0 (0x90) #define MPI2_SAS_HWRATE_MAX_RATE_6_0 (0xA0) #define MPI25_SAS_HWRATE_MAX_RATE_12_0 (0xB0) #define MPI26_SAS_HWRATE_MAX_RATE_22_5 (0xC0) #define MPI2_SAS_HWRATE_MIN_RATE_MASK (0x0F) #define MPI2_SAS_HWRATE_MIN_RATE_1_5 (0x08) #define MPI2_SAS_HWRATE_MIN_RATE_3_0 (0x09) #define MPI2_SAS_HWRATE_MIN_RATE_6_0 (0x0A) #define MPI25_SAS_HWRATE_MIN_RATE_12_0 (0x0B) #define MPI26_SAS_HWRATE_MIN_RATE_22_5 (0x0C) /**************************************************************************** * SAS IO Unit Config Pages ****************************************************************************/ /* SAS IO Unit Page 0 */ typedef struct _MPI2_SAS_IO_UNIT0_PHY_DATA { U8 Port; /* 0x00 */ U8 PortFlags; /* 0x01 */ U8 PhyFlags; /* 0x02 */ U8 NegotiatedLinkRate; /* 0x03 */ U32 ControllerPhyDeviceInfo;/* 0x04 */ U16 AttachedDevHandle; /* 0x08 */ U16 ControllerDevHandle; /* 0x0A */ U32 DiscoveryStatus; /* 0x0C */ U32 Reserved; /* 0x10 */ } MPI2_SAS_IO_UNIT0_PHY_DATA, MPI2_POINTER PTR_MPI2_SAS_IO_UNIT0_PHY_DATA, Mpi2SasIOUnit0PhyData_t, MPI2_POINTER pMpi2SasIOUnit0PhyData_t; /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumPhys at runtime. 
*/ #ifndef MPI2_SAS_IOUNIT0_PHY_MAX #define MPI2_SAS_IOUNIT0_PHY_MAX (1) #endif typedef struct _MPI2_CONFIG_PAGE_SASIOUNIT_0 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x08 */ U8 NumPhys; /* 0x0C */ U8 Reserved2; /* 0x0D */ U16 Reserved3; /* 0x0E */ MPI2_SAS_IO_UNIT0_PHY_DATA PhyData[MPI2_SAS_IOUNIT0_PHY_MAX]; /* 0x10 */ } MPI2_CONFIG_PAGE_SASIOUNIT_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SASIOUNIT_0, Mpi2SasIOUnitPage0_t, MPI2_POINTER pMpi2SasIOUnitPage0_t; #define MPI2_SASIOUNITPAGE0_PAGEVERSION (0x05) /* values for SAS IO Unit Page 0 PortFlags */ #define MPI2_SASIOUNIT0_PORTFLAGS_DISCOVERY_IN_PROGRESS (0x08) #define MPI2_SASIOUNIT0_PORTFLAGS_AUTO_PORT_CONFIG (0x01) /* values for SAS IO Unit Page 0 PhyFlags */ #define MPI2_SASIOUNIT0_PHYFLAGS_INIT_PERSIST_CONNECT (0x40) #define MPI2_SASIOUNIT0_PHYFLAGS_TARG_PERSIST_CONNECT (0x20) #define MPI2_SASIOUNIT0_PHYFLAGS_ZONING_ENABLED (0x10) #define MPI2_SASIOUNIT0_PHYFLAGS_PHY_DISABLED (0x08) /* use MPI2_SAS_NEG_LINK_RATE_ defines for the NegotiatedLinkRate field */ /* see mpi2_sas.h for values for SAS IO Unit Page 0 ControllerPhyDeviceInfo values */ /* values for SAS IO Unit Page 0 DiscoveryStatus */ #define MPI2_SASIOUNIT0_DS_MAX_ENCLOSURES_EXCEED (0x80000000) #define MPI2_SASIOUNIT0_DS_MAX_EXPANDERS_EXCEED (0x40000000) #define MPI2_SASIOUNIT0_DS_MAX_DEVICES_EXCEED (0x20000000) #define MPI2_SASIOUNIT0_DS_MAX_TOPO_PHYS_EXCEED (0x10000000) #define MPI2_SASIOUNIT0_DS_DOWNSTREAM_INITIATOR (0x08000000) #define MPI2_SASIOUNIT0_DS_MULTI_SUBTRACTIVE_SUBTRACTIVE (0x00008000) #define MPI2_SASIOUNIT0_DS_EXP_MULTI_SUBTRACTIVE (0x00004000) #define MPI2_SASIOUNIT0_DS_MULTI_PORT_DOMAIN (0x00002000) #define MPI2_SASIOUNIT0_DS_TABLE_TO_SUBTRACTIVE_LINK (0x00001000) #define MPI2_SASIOUNIT0_DS_UNSUPPORTED_DEVICE (0x00000800) #define MPI2_SASIOUNIT0_DS_TABLE_LINK (0x00000400) #define MPI2_SASIOUNIT0_DS_SUBTRACTIVE_LINK (0x00000200) #define MPI2_SASIOUNIT0_DS_SMP_CRC_ERROR (0x00000100) #define 
MPI2_SASIOUNIT0_DS_SMP_FUNCTION_FAILED (0x00000080) #define MPI2_SASIOUNIT0_DS_INDEX_NOT_EXIST (0x00000040) #define MPI2_SASIOUNIT0_DS_OUT_ROUTE_ENTRIES (0x00000020) #define MPI2_SASIOUNIT0_DS_SMP_TIMEOUT (0x00000010) #define MPI2_SASIOUNIT0_DS_MULTIPLE_PORTS (0x00000004) #define MPI2_SASIOUNIT0_DS_UNADDRESSABLE_DEVICE (0x00000002) #define MPI2_SASIOUNIT0_DS_LOOP_DETECTED (0x00000001) /* SAS IO Unit Page 1 */ typedef struct _MPI2_SAS_IO_UNIT1_PHY_DATA { U8 Port; /* 0x00 */ U8 PortFlags; /* 0x01 */ U8 PhyFlags; /* 0x02 */ U8 MaxMinLinkRate; /* 0x03 */ U32 ControllerPhyDeviceInfo; /* 0x04 */ U16 MaxTargetPortConnectTime; /* 0x08 */ U16 Reserved1; /* 0x0A */ } MPI2_SAS_IO_UNIT1_PHY_DATA, MPI2_POINTER PTR_MPI2_SAS_IO_UNIT1_PHY_DATA, Mpi2SasIOUnit1PhyData_t, MPI2_POINTER pMpi2SasIOUnit1PhyData_t; /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumPhys at runtime. */ #ifndef MPI2_SAS_IOUNIT1_PHY_MAX #define MPI2_SAS_IOUNIT1_PHY_MAX (1) #endif typedef struct _MPI2_CONFIG_PAGE_SASIOUNIT_1 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U16 ControlFlags; /* 0x08 */ U16 SASNarrowMaxQueueDepth; /* 0x0A */ U16 AdditionalControlFlags; /* 0x0C */ U16 SASWideMaxQueueDepth; /* 0x0E */ U8 NumPhys; /* 0x10 */ U8 SATAMaxQDepth; /* 0x11 */ U8 ReportDeviceMissingDelay; /* 0x12 */ U8 IODeviceMissingDelay; /* 0x13 */ MPI2_SAS_IO_UNIT1_PHY_DATA PhyData[MPI2_SAS_IOUNIT1_PHY_MAX]; /* 0x14 */ } MPI2_CONFIG_PAGE_SASIOUNIT_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SASIOUNIT_1, Mpi2SasIOUnitPage1_t, MPI2_POINTER pMpi2SasIOUnitPage1_t; #define MPI2_SASIOUNITPAGE1_PAGEVERSION (0x09) /* values for SAS IO Unit Page 1 ControlFlags */ #define MPI2_SASIOUNIT1_CONTROL_DEVICE_SELF_TEST (0x8000) #define MPI2_SASIOUNIT1_CONTROL_SATA_3_0_MAX (0x4000) #define MPI2_SASIOUNIT1_CONTROL_SATA_1_5_MAX (0x2000) /* MPI v2.0 only. Obsolete in MPI v2.5 and later. 
*/ #define MPI2_SASIOUNIT1_CONTROL_SATA_SW_PRESERVE (0x1000) #define MPI2_SASIOUNIT1_CONTROL_MASK_DEV_SUPPORT (0x0600) #define MPI2_SASIOUNIT1_CONTROL_SHIFT_DEV_SUPPORT (9) #define MPI2_SASIOUNIT1_CONTROL_DEV_SUPPORT_BOTH (0x0) #define MPI2_SASIOUNIT1_CONTROL_DEV_SAS_SUPPORT (0x1) #define MPI2_SASIOUNIT1_CONTROL_DEV_SATA_SUPPORT (0x2) #define MPI2_SASIOUNIT1_CONTROL_SATA_48BIT_LBA_REQUIRED (0x0080) #define MPI2_SASIOUNIT1_CONTROL_SATA_SMART_REQUIRED (0x0040) #define MPI2_SASIOUNIT1_CONTROL_SATA_NCQ_REQUIRED (0x0020) #define MPI2_SASIOUNIT1_CONTROL_SATA_FUA_REQUIRED (0x0010) #define MPI2_SASIOUNIT1_CONTROL_TABLE_SUBTRACTIVE_ILLEGAL (0x0008) #define MPI2_SASIOUNIT1_CONTROL_SUBTRACTIVE_ILLEGAL (0x0004) #define MPI2_SASIOUNIT1_CONTROL_FIRST_LVL_DISC_ONLY (0x0002) #define MPI2_SASIOUNIT1_CONTROL_CLEAR_AFFILIATION (0x0001) /* MPI v2.0 only. Obsolete in MPI v2.5 and later. */ /* values for SAS IO Unit Page 1 AdditionalControlFlags */ #define MPI2_SASIOUNIT1_ACONTROL_DA_PERSIST_CONNECT (0x0100) #define MPI2_SASIOUNIT1_ACONTROL_MULTI_PORT_DOMAIN_ILLEGAL (0x0080) #define MPI2_SASIOUNIT1_ACONTROL_SATA_ASYNCHROUNOUS_NOTIFICATION (0x0040) #define MPI2_SASIOUNIT1_ACONTROL_INVALID_TOPOLOGY_CORRECTION (0x0020) #define MPI2_SASIOUNIT1_ACONTROL_PORT_ENABLE_ONLY_SATA_LINK_RESET (0x0010) #define MPI2_SASIOUNIT1_ACONTROL_OTHER_AFFILIATION_SATA_LINK_RESET (0x0008) #define MPI2_SASIOUNIT1_ACONTROL_SELF_AFFILIATION_SATA_LINK_RESET (0x0004) #define MPI2_SASIOUNIT1_ACONTROL_NO_AFFILIATION_SATA_LINK_RESET (0x0002) #define MPI2_SASIOUNIT1_ACONTROL_ALLOW_TABLE_TO_TABLE (0x0001) /* defines for SAS IO Unit Page 1 ReportDeviceMissingDelay */ #define MPI2_SASIOUNIT1_REPORT_MISSING_TIMEOUT_MASK (0x7F) #define MPI2_SASIOUNIT1_REPORT_MISSING_UNIT_16 (0x80) /* values for SAS IO Unit Page 1 PortFlags */ #define MPI2_SASIOUNIT1_PORT_FLAGS_AUTO_PORT_CONFIG (0x01) /* values for SAS IO Unit Page 1 PhyFlags */ #define MPI2_SASIOUNIT1_PHYFLAGS_INIT_PERSIST_CONNECT (0x40) #define 
MPI2_SASIOUNIT1_PHYFLAGS_TARG_PERSIST_CONNECT (0x20) #define MPI2_SASIOUNIT1_PHYFLAGS_ZONING_ENABLE (0x10) #define MPI2_SASIOUNIT1_PHYFLAGS_PHY_DISABLE (0x08) /* values for SAS IO Unit Page 1 MaxMinLinkRate */ #define MPI2_SASIOUNIT1_MAX_RATE_MASK (0xF0) #define MPI2_SASIOUNIT1_MAX_RATE_1_5 (0x80) #define MPI2_SASIOUNIT1_MAX_RATE_3_0 (0x90) #define MPI2_SASIOUNIT1_MAX_RATE_6_0 (0xA0) #define MPI25_SASIOUNIT1_MAX_RATE_12_0 (0xB0) #define MPI26_SASIOUNIT1_MAX_RATE_22_5 (0xC0) #define MPI2_SASIOUNIT1_MIN_RATE_MASK (0x0F) #define MPI2_SASIOUNIT1_MIN_RATE_1_5 (0x08) #define MPI2_SASIOUNIT1_MIN_RATE_3_0 (0x09) #define MPI2_SASIOUNIT1_MIN_RATE_6_0 (0x0A) #define MPI25_SASIOUNIT1_MIN_RATE_12_0 (0x0B) #define MPI26_SASIOUNIT1_MIN_RATE_22_5 (0x0C) /* see mpi2_sas.h for values for SAS IO Unit Page 1 ControllerPhyDeviceInfo values */ /* SAS IO Unit Page 4 (for MPI v2.5 and earlier) */ typedef struct _MPI2_SAS_IOUNIT4_SPINUP_GROUP { U8 MaxTargetSpinup; /* 0x00 */ U8 SpinupDelay; /* 0x01 */ U8 SpinupFlags; /* 0x02 */ U8 Reserved1; /* 0x03 */ } MPI2_SAS_IOUNIT4_SPINUP_GROUP, MPI2_POINTER PTR_MPI2_SAS_IOUNIT4_SPINUP_GROUP, Mpi2SasIOUnit4SpinupGroup_t, MPI2_POINTER pMpi2SasIOUnit4SpinupGroup_t; /* defines for SAS IO Unit Page 4 SpinupFlags */ #define MPI2_SASIOUNIT4_SPINUP_DISABLE_FLAG (0x01) /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumPhys at runtime. 
*/ #ifndef MPI2_SAS_IOUNIT4_PHY_MAX #define MPI2_SAS_IOUNIT4_PHY_MAX (4) #endif typedef struct _MPI2_CONFIG_PAGE_SASIOUNIT_4 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ MPI2_SAS_IOUNIT4_SPINUP_GROUP SpinupGroupParameters[4]; /* 0x08 */ U32 Reserved1; /* 0x18 */ U32 Reserved2; /* 0x1C */ U32 Reserved3; /* 0x20 */ U8 BootDeviceWaitTime; /* 0x24 */ U8 SATADeviceWaitTime; /* 0x25 */ U16 Reserved5; /* 0x26 */ U8 NumPhys; /* 0x28 */ U8 PEInitialSpinupDelay; /* 0x29 */ U8 PEReplyDelay; /* 0x2A */ U8 Flags; /* 0x2B */ U8 PHY[MPI2_SAS_IOUNIT4_PHY_MAX]; /* 0x2C */ } MPI2_CONFIG_PAGE_SASIOUNIT_4, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SASIOUNIT_4, Mpi2SasIOUnitPage4_t, MPI2_POINTER pMpi2SasIOUnitPage4_t; #define MPI2_SASIOUNITPAGE4_PAGEVERSION (0x02) /* defines for Flags field */ #define MPI2_SASIOUNIT4_FLAGS_AUTO_PORTENABLE (0x01) /* defines for PHY field */ #define MPI2_SASIOUNIT4_PHY_SPINUP_GROUP_MASK (0x03) /* SAS IO Unit Page 5 */ typedef struct _MPI2_SAS_IO_UNIT5_PHY_PM_SETTINGS { U8 ControlFlags; /* 0x00 */ U8 PortWidthModGroup; /* 0x01 */ U16 InactivityTimerExponent; /* 0x02 */ U8 SATAPartialTimeout; /* 0x04 */ U8 Reserved2; /* 0x05 */ U8 SATASlumberTimeout; /* 0x06 */ U8 Reserved3; /* 0x07 */ U8 SASPartialTimeout; /* 0x08 */ U8 Reserved4; /* 0x09 */ U8 SASSlumberTimeout; /* 0x0A */ U8 Reserved5; /* 0x0B */ } MPI2_SAS_IO_UNIT5_PHY_PM_SETTINGS, MPI2_POINTER PTR_MPI2_SAS_IO_UNIT5_PHY_PM_SETTINGS, Mpi2SasIOUnit5PhyPmSettings_t, MPI2_POINTER pMpi2SasIOUnit5PhyPmSettings_t; /* defines for ControlFlags field */ #define MPI2_SASIOUNIT5_CONTROL_SAS_SLUMBER_ENABLE (0x08) #define MPI2_SASIOUNIT5_CONTROL_SAS_PARTIAL_ENABLE (0x04) #define MPI2_SASIOUNIT5_CONTROL_SATA_SLUMBER_ENABLE (0x02) #define MPI2_SASIOUNIT5_CONTROL_SATA_PARTIAL_ENABLE (0x01) /* defines for PortWidthModeGroup field */ #define MPI2_SASIOUNIT5_PWMG_DISABLE (0xFF) /* defines for InactivityTimerExponent field */ #define MPI2_SASIOUNIT5_ITE_MASK_SAS_SLUMBER (0x7000) #define 
MPI2_SASIOUNIT5_ITE_SHIFT_SAS_SLUMBER (12) #define MPI2_SASIOUNIT5_ITE_MASK_SAS_PARTIAL (0x0700) #define MPI2_SASIOUNIT5_ITE_SHIFT_SAS_PARTIAL (8) #define MPI2_SASIOUNIT5_ITE_MASK_SATA_SLUMBER (0x0070) #define MPI2_SASIOUNIT5_ITE_SHIFT_SATA_SLUMBER (4) #define MPI2_SASIOUNIT5_ITE_MASK_SATA_PARTIAL (0x0007) #define MPI2_SASIOUNIT5_ITE_SHIFT_SATA_PARTIAL (0) #define MPI2_SASIOUNIT5_ITE_TEN_SECONDS (7) #define MPI2_SASIOUNIT5_ITE_ONE_SECOND (6) #define MPI2_SASIOUNIT5_ITE_HUNDRED_MILLISECONDS (5) #define MPI2_SASIOUNIT5_ITE_TEN_MILLISECONDS (4) #define MPI2_SASIOUNIT5_ITE_ONE_MILLISECOND (3) #define MPI2_SASIOUNIT5_ITE_HUNDRED_MICROSECONDS (2) #define MPI2_SASIOUNIT5_ITE_TEN_MICROSECONDS (1) #define MPI2_SASIOUNIT5_ITE_ONE_MICROSECOND (0) /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumPhys at runtime. */ #ifndef MPI2_SAS_IOUNIT5_PHY_MAX #define MPI2_SAS_IOUNIT5_PHY_MAX (1) #endif typedef struct _MPI2_CONFIG_PAGE_SASIOUNIT_5 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U8 NumPhys; /* 0x08 */ U8 Reserved1; /* 0x09 */ U16 Reserved2; /* 0x0A */ U32 Reserved3; /* 0x0C */ MPI2_SAS_IO_UNIT5_PHY_PM_SETTINGS SASPhyPowerManagementSettings[MPI2_SAS_IOUNIT5_PHY_MAX]; /* 0x10 */ } MPI2_CONFIG_PAGE_SASIOUNIT_5, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SASIOUNIT_5, Mpi2SasIOUnitPage5_t, MPI2_POINTER pMpi2SasIOUnitPage5_t; #define MPI2_SASIOUNITPAGE5_PAGEVERSION (0x01) /* SAS IO Unit Page 6 */ typedef struct _MPI2_SAS_IO_UNIT6_PORT_WIDTH_MOD_GROUP_STATUS { U8 CurrentStatus; /* 0x00 */ U8 CurrentModulation; /* 0x01 */ U8 CurrentUtilization; /* 0x02 */ U8 Reserved1; /* 0x03 */ U32 Reserved2; /* 0x04 */ } MPI2_SAS_IO_UNIT6_PORT_WIDTH_MOD_GROUP_STATUS, MPI2_POINTER PTR_MPI2_SAS_IO_UNIT6_PORT_WIDTH_MOD_GROUP_STATUS, Mpi2SasIOUnit6PortWidthModGroupStatus_t, MPI2_POINTER pMpi2SasIOUnit6PortWidthModGroupStatus_t; /* defines for CurrentStatus field */ #define MPI2_SASIOUNIT6_STATUS_UNAVAILABLE (0x00) #define 
MPI2_SASIOUNIT6_STATUS_UNCONFIGURED (0x01) #define MPI2_SASIOUNIT6_STATUS_INVALID_CONFIG (0x02) #define MPI2_SASIOUNIT6_STATUS_LINK_DOWN (0x03) #define MPI2_SASIOUNIT6_STATUS_OBSERVATION_ONLY (0x04) #define MPI2_SASIOUNIT6_STATUS_INACTIVE (0x05) #define MPI2_SASIOUNIT6_STATUS_ACTIVE_IOUNIT (0x06) #define MPI2_SASIOUNIT6_STATUS_ACTIVE_HOST (0x07) /* defines for CurrentModulation field */ #define MPI2_SASIOUNIT6_MODULATION_25_PERCENT (0x00) #define MPI2_SASIOUNIT6_MODULATION_50_PERCENT (0x01) #define MPI2_SASIOUNIT6_MODULATION_75_PERCENT (0x02) #define MPI2_SASIOUNIT6_MODULATION_100_PERCENT (0x03) /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumGroups at runtime. */ #ifndef MPI2_SAS_IOUNIT6_GROUP_MAX #define MPI2_SAS_IOUNIT6_GROUP_MAX (1) #endif typedef struct _MPI2_CONFIG_PAGE_SASIOUNIT_6 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x08 */ U32 Reserved2; /* 0x0C */ U8 NumGroups; /* 0x10 */ U8 Reserved3; /* 0x11 */ U16 Reserved4; /* 0x12 */ MPI2_SAS_IO_UNIT6_PORT_WIDTH_MOD_GROUP_STATUS PortWidthModulationGroupStatus[MPI2_SAS_IOUNIT6_GROUP_MAX]; /* 0x14 */ } MPI2_CONFIG_PAGE_SASIOUNIT_6, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SASIOUNIT_6, Mpi2SasIOUnitPage6_t, MPI2_POINTER pMpi2SasIOUnitPage6_t; #define MPI2_SASIOUNITPAGE6_PAGEVERSION (0x00) /* SAS IO Unit Page 7 */ typedef struct _MPI2_SAS_IO_UNIT7_PORT_WIDTH_MOD_GROUP_SETTINGS { U8 Flags; /* 0x00 */ U8 Reserved1; /* 0x01 */ U16 Reserved2; /* 0x02 */ U8 Threshold75Pct; /* 0x04 */ U8 Threshold50Pct; /* 0x05 */ U8 Threshold25Pct; /* 0x06 */ U8 Reserved3; /* 0x07 */ } MPI2_SAS_IO_UNIT7_PORT_WIDTH_MOD_GROUP_SETTINGS, MPI2_POINTER PTR_MPI2_SAS_IO_UNIT7_PORT_WIDTH_MOD_GROUP_SETTINGS, Mpi2SasIOUnit7PortWidthModGroupSettings_t, MPI2_POINTER pMpi2SasIOUnit7PortWidthModGroupSettings_t; /* defines for Flags field */ #define MPI2_SASIOUNIT7_FLAGS_ENABLE_PORT_WIDTH_MODULATION (0x01) /* * Host code (drivers, BIOS, utilities, etc.) 
should leave this define set to * one and check the value returned for NumGroups at runtime. */ #ifndef MPI2_SAS_IOUNIT7_GROUP_MAX #define MPI2_SAS_IOUNIT7_GROUP_MAX (1) #endif typedef struct _MPI2_CONFIG_PAGE_SASIOUNIT_7 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U8 SamplingInterval; /* 0x08 */ U8 WindowLength; /* 0x09 */ U16 Reserved1; /* 0x0A */ U32 Reserved2; /* 0x0C */ U32 Reserved3; /* 0x10 */ U8 NumGroups; /* 0x14 */ U8 Reserved4; /* 0x15 */ U16 Reserved5; /* 0x16 */ MPI2_SAS_IO_UNIT7_PORT_WIDTH_MOD_GROUP_SETTINGS PortWidthModulationGroupSettings[MPI2_SAS_IOUNIT7_GROUP_MAX]; /* 0x18 */ } MPI2_CONFIG_PAGE_SASIOUNIT_7, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SASIOUNIT_7, Mpi2SasIOUnitPage7_t, MPI2_POINTER pMpi2SasIOUnitPage7_t; #define MPI2_SASIOUNITPAGE7_PAGEVERSION (0x00) /* SAS IO Unit Page 8 */ typedef struct _MPI2_CONFIG_PAGE_SASIOUNIT_8 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x08 */ U32 PowerManagementCapabilities; /* 0x0C */ U8 TxRxSleepStatus; /* 0x10 */ /* reserved in MPI 2.0 */ U8 Reserved2; /* 0x11 */ U16 Reserved3; /* 0x12 */ } MPI2_CONFIG_PAGE_SASIOUNIT_8, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SASIOUNIT_8, Mpi2SasIOUnitPage8_t, MPI2_POINTER pMpi2SasIOUnitPage8_t; #define MPI2_SASIOUNITPAGE8_PAGEVERSION (0x00) /* defines for PowerManagementCapabilities field */ #define MPI2_SASIOUNIT8_PM_HOST_PORT_WIDTH_MOD (0x00001000) #define MPI2_SASIOUNIT8_PM_HOST_SAS_SLUMBER_MODE (0x00000800) #define MPI2_SASIOUNIT8_PM_HOST_SAS_PARTIAL_MODE (0x00000400) #define MPI2_SASIOUNIT8_PM_HOST_SATA_SLUMBER_MODE (0x00000200) #define MPI2_SASIOUNIT8_PM_HOST_SATA_PARTIAL_MODE (0x00000100) #define MPI2_SASIOUNIT8_PM_IOUNIT_PORT_WIDTH_MOD (0x00000010) #define MPI2_SASIOUNIT8_PM_IOUNIT_SAS_SLUMBER_MODE (0x00000008) #define MPI2_SASIOUNIT8_PM_IOUNIT_SAS_PARTIAL_MODE (0x00000004) #define MPI2_SASIOUNIT8_PM_IOUNIT_SATA_SLUMBER_MODE (0x00000002) #define MPI2_SASIOUNIT8_PM_IOUNIT_SATA_PARTIAL_MODE (0x00000001) /* defines for TxRxSleepStatus field 
*/ #define MPI25_SASIOUNIT8_TXRXSLEEP_UNSUPPORTED (0x00) #define MPI25_SASIOUNIT8_TXRXSLEEP_DISENGAGED (0x01) #define MPI25_SASIOUNIT8_TXRXSLEEP_ACTIVE (0x02) #define MPI25_SASIOUNIT8_TXRXSLEEP_SHUTDOWN (0x03) /* SAS IO Unit Page 16 */ typedef struct _MPI2_CONFIG_PAGE_SASIOUNIT16 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U64 TimeStamp; /* 0x08 */ U32 Reserved1; /* 0x10 */ U32 Reserved2; /* 0x14 */ U32 FastPathPendedRequests; /* 0x18 */ U32 FastPathUnPendedRequests; /* 0x1C */ U32 FastPathHostRequestStarts; /* 0x20 */ U32 FastPathFirmwareRequestStarts; /* 0x24 */ U32 FastPathHostCompletions; /* 0x28 */ U32 FastPathFirmwareCompletions; /* 0x2C */ U32 NonFastPathRequestStarts; /* 0x30 */ U32 NonFastPathHostCompletions; /* 0x30 */ } MPI2_CONFIG_PAGE_SASIOUNIT16, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SASIOUNIT16, Mpi2SasIOUnitPage16_t, MPI2_POINTER pMpi2SasIOUnitPage16_t; #define MPI2_SASIOUNITPAGE16_PAGEVERSION (0x00) /**************************************************************************** * SAS Expander Config Pages ****************************************************************************/ /* SAS Expander Page 0 */ typedef struct _MPI2_CONFIG_PAGE_EXPANDER_0 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U8 PhysicalPort; /* 0x08 */ U8 ReportGenLength; /* 0x09 */ U16 EnclosureHandle; /* 0x0A */ U64 SASAddress; /* 0x0C */ U32 DiscoveryStatus; /* 0x14 */ U16 DevHandle; /* 0x18 */ U16 ParentDevHandle; /* 0x1A */ U16 ExpanderChangeCount; /* 0x1C */ U16 ExpanderRouteIndexes; /* 0x1E */ U8 NumPhys; /* 0x20 */ U8 SASLevel; /* 0x21 */ U16 Flags; /* 0x22 */ U16 STPBusInactivityTimeLimit; /* 0x24 */ U16 STPMaxConnectTimeLimit; /* 0x26 */ U16 STP_SMP_NexusLossTime; /* 0x28 */ U16 MaxNumRoutedSasAddresses; /* 0x2A */ U64 ActiveZoneManagerSASAddress;/* 0x2C */ U16 ZoneLockInactivityLimit; /* 0x34 */ U16 Reserved1; /* 0x36 */ U8 TimeToReducedFunc; /* 0x38 */ U8 InitialTimeToReducedFunc; /* 0x39 */ U8 MaxReducedFuncTime; /* 0x3A */ U8 Reserved2; /* 0x3B */ } 
MPI2_CONFIG_PAGE_EXPANDER_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_EXPANDER_0, Mpi2ExpanderPage0_t, MPI2_POINTER pMpi2ExpanderPage0_t; #define MPI2_SASEXPANDER0_PAGEVERSION (0x06) /* values for SAS Expander Page 0 DiscoveryStatus field */ #define MPI2_SAS_EXPANDER0_DS_MAX_ENCLOSURES_EXCEED (0x80000000) #define MPI2_SAS_EXPANDER0_DS_MAX_EXPANDERS_EXCEED (0x40000000) #define MPI2_SAS_EXPANDER0_DS_MAX_DEVICES_EXCEED (0x20000000) #define MPI2_SAS_EXPANDER0_DS_MAX_TOPO_PHYS_EXCEED (0x10000000) #define MPI2_SAS_EXPANDER0_DS_DOWNSTREAM_INITIATOR (0x08000000) #define MPI2_SAS_EXPANDER0_DS_MULTI_SUBTRACTIVE_SUBTRACTIVE (0x00008000) #define MPI2_SAS_EXPANDER0_DS_EXP_MULTI_SUBTRACTIVE (0x00004000) #define MPI2_SAS_EXPANDER0_DS_MULTI_PORT_DOMAIN (0x00002000) #define MPI2_SAS_EXPANDER0_DS_TABLE_TO_SUBTRACTIVE_LINK (0x00001000) #define MPI2_SAS_EXPANDER0_DS_UNSUPPORTED_DEVICE (0x00000800) #define MPI2_SAS_EXPANDER0_DS_TABLE_LINK (0x00000400) #define MPI2_SAS_EXPANDER0_DS_SUBTRACTIVE_LINK (0x00000200) #define MPI2_SAS_EXPANDER0_DS_SMP_CRC_ERROR (0x00000100) #define MPI2_SAS_EXPANDER0_DS_SMP_FUNCTION_FAILED (0x00000080) #define MPI2_SAS_EXPANDER0_DS_INDEX_NOT_EXIST (0x00000040) #define MPI2_SAS_EXPANDER0_DS_OUT_ROUTE_ENTRIES (0x00000020) #define MPI2_SAS_EXPANDER0_DS_SMP_TIMEOUT (0x00000010) #define MPI2_SAS_EXPANDER0_DS_MULTIPLE_PORTS (0x00000004) #define MPI2_SAS_EXPANDER0_DS_UNADDRESSABLE_DEVICE (0x00000002) #define MPI2_SAS_EXPANDER0_DS_LOOP_DETECTED (0x00000001) /* values for SAS Expander Page 0 Flags field */ #define MPI2_SAS_EXPANDER0_FLAGS_REDUCED_FUNCTIONALITY (0x2000) #define MPI2_SAS_EXPANDER0_FLAGS_ZONE_LOCKED (0x1000) #define MPI2_SAS_EXPANDER0_FLAGS_SUPPORTED_PHYSICAL_PRES (0x0800) #define MPI2_SAS_EXPANDER0_FLAGS_ASSERTED_PHYSICAL_PRES (0x0400) #define MPI2_SAS_EXPANDER0_FLAGS_ZONING_SUPPORT (0x0200) #define MPI2_SAS_EXPANDER0_FLAGS_ENABLED_ZONING (0x0100) #define MPI2_SAS_EXPANDER0_FLAGS_TABLE_TO_TABLE_SUPPORT (0x0080) #define 
MPI2_SAS_EXPANDER0_FLAGS_CONNECTOR_END_DEVICE (0x0010) #define MPI2_SAS_EXPANDER0_FLAGS_OTHERS_CONFIG (0x0004) #define MPI2_SAS_EXPANDER0_FLAGS_CONFIG_IN_PROGRESS (0x0002) #define MPI2_SAS_EXPANDER0_FLAGS_ROUTE_TABLE_CONFIG (0x0001) /* SAS Expander Page 1 */ typedef struct _MPI2_CONFIG_PAGE_EXPANDER_1 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U8 PhysicalPort; /* 0x08 */ U8 Reserved1; /* 0x09 */ U16 Reserved2; /* 0x0A */ U8 NumPhys; /* 0x0C */ U8 Phy; /* 0x0D */ U16 NumTableEntriesProgrammed; /* 0x0E */ U8 ProgrammedLinkRate; /* 0x10 */ U8 HwLinkRate; /* 0x11 */ U16 AttachedDevHandle; /* 0x12 */ U32 PhyInfo; /* 0x14 */ U32 AttachedDeviceInfo; /* 0x18 */ U16 ExpanderDevHandle; /* 0x1C */ U8 ChangeCount; /* 0x1E */ U8 NegotiatedLinkRate; /* 0x1F */ U8 PhyIdentifier; /* 0x20 */ U8 AttachedPhyIdentifier; /* 0x21 */ U8 Reserved3; /* 0x22 */ U8 DiscoveryInfo; /* 0x23 */ U32 AttachedPhyInfo; /* 0x24 */ U8 ZoneGroup; /* 0x28 */ U8 SelfConfigStatus; /* 0x29 */ U16 Reserved4; /* 0x2A */ } MPI2_CONFIG_PAGE_EXPANDER_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_EXPANDER_1, Mpi2ExpanderPage1_t, MPI2_POINTER pMpi2ExpanderPage1_t; #define MPI2_SASEXPANDER1_PAGEVERSION (0x02) /* use MPI2_SAS_PRATE_ defines for the ProgrammedLinkRate field */ /* use MPI2_SAS_HWRATE_ defines for the HwLinkRate field */ /* use MPI2_SAS_PHYINFO_ for the PhyInfo field */ /* see mpi2_sas.h for the MPI2_SAS_DEVICE_INFO_ defines used for the AttachedDeviceInfo field */ /* use MPI2_SAS_NEG_LINK_RATE_ defines for the NegotiatedLinkRate field */ /* values for SAS Expander Page 1 DiscoveryInfo field */ #define MPI2_SAS_EXPANDER1_DISCINFO_BAD_PHY_DISABLED (0x04) #define MPI2_SAS_EXPANDER1_DISCINFO_LINK_STATUS_CHANGE (0x02) #define MPI2_SAS_EXPANDER1_DISCINFO_NO_ROUTING_ENTRIES (0x01) /* use MPI2_SAS_APHYINFO_ defines for AttachedPhyInfo field */ /**************************************************************************** * SAS Device Config Pages 
****************************************************************************/ /* SAS Device Page 0 */ typedef struct _MPI2_CONFIG_PAGE_SAS_DEV_0 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U16 Slot; /* 0x08 */ U16 EnclosureHandle; /* 0x0A */ U64 SASAddress; /* 0x0C */ U16 ParentDevHandle; /* 0x14 */ U8 PhyNum; /* 0x16 */ U8 AccessStatus; /* 0x17 */ U16 DevHandle; /* 0x18 */ U8 AttachedPhyIdentifier; /* 0x1A */ U8 ZoneGroup; /* 0x1B */ U32 DeviceInfo; /* 0x1C */ U16 Flags; /* 0x20 */ U8 PhysicalPort; /* 0x22 */ U8 MaxPortConnections; /* 0x23 */ U64 DeviceName; /* 0x24 */ U8 PortGroups; /* 0x2C */ U8 DmaGroup; /* 0x2D */ U8 ControlGroup; /* 0x2E */ U8 EnclosureLevel; /* 0x2F */ U8 ConnectorName[4]; /* 0x30 */ U32 Reserved3; /* 0x34 */ } MPI2_CONFIG_PAGE_SAS_DEV_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_DEV_0, Mpi2SasDevicePage0_t, MPI2_POINTER pMpi2SasDevicePage0_t; #define MPI2_SASDEVICE0_PAGEVERSION (0x09) /* values for SAS Device Page 0 AccessStatus field */ #define MPI2_SAS_DEVICE0_ASTATUS_NO_ERRORS (0x00) #define MPI2_SAS_DEVICE0_ASTATUS_SATA_INIT_FAILED (0x01) #define MPI2_SAS_DEVICE0_ASTATUS_SATA_CAPABILITY_FAILED (0x02) #define MPI2_SAS_DEVICE0_ASTATUS_SATA_AFFILIATION_CONFLICT (0x03) #define MPI2_SAS_DEVICE0_ASTATUS_SATA_NEEDS_INITIALIZATION (0x04) #define MPI2_SAS_DEVICE0_ASTATUS_ROUTE_NOT_ADDRESSABLE (0x05) #define MPI2_SAS_DEVICE0_ASTATUS_SMP_ERROR_NOT_ADDRESSABLE (0x06) #define MPI2_SAS_DEVICE0_ASTATUS_DEVICE_BLOCKED (0x07) /* specific values for SATA Init failures */ #define MPI2_SAS_DEVICE0_ASTATUS_SIF_UNKNOWN (0x10) #define MPI2_SAS_DEVICE0_ASTATUS_SIF_AFFILIATION_CONFLICT (0x11) #define MPI2_SAS_DEVICE0_ASTATUS_SIF_DIAG (0x12) #define MPI2_SAS_DEVICE0_ASTATUS_SIF_IDENTIFICATION (0x13) #define MPI2_SAS_DEVICE0_ASTATUS_SIF_CHECK_POWER (0x14) #define MPI2_SAS_DEVICE0_ASTATUS_SIF_PIO_SN (0x15) #define MPI2_SAS_DEVICE0_ASTATUS_SIF_MDMA_SN (0x16) #define MPI2_SAS_DEVICE0_ASTATUS_SIF_UDMA_SN (0x17) #define 
MPI2_SAS_DEVICE0_ASTATUS_SIF_ZONING_VIOLATION (0x18) #define MPI2_SAS_DEVICE0_ASTATUS_SIF_NOT_ADDRESSABLE (0x19) #define MPI2_SAS_DEVICE0_ASTATUS_SIF_MAX (0x1F) /* see mpi2_sas.h for values for SAS Device Page 0 DeviceInfo values */ /* values for SAS Device Page 0 Flags field */ #define MPI2_SAS_DEVICE0_FLAGS_UNAUTHORIZED_DEVICE (0x8000) #define MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH (0x4000) #define MPI25_SAS_DEVICE0_FLAGS_FAST_PATH_CAPABLE (0x2000) #define MPI2_SAS_DEVICE0_FLAGS_SLUMBER_PM_CAPABLE (0x1000) #define MPI2_SAS_DEVICE0_FLAGS_PARTIAL_PM_CAPABLE (0x0800) #define MPI2_SAS_DEVICE0_FLAGS_SATA_ASYNCHRONOUS_NOTIFY (0x0400) #define MPI2_SAS_DEVICE0_FLAGS_SATA_SW_PRESERVE (0x0200) #define MPI2_SAS_DEVICE0_FLAGS_UNSUPPORTED_DEVICE (0x0100) #define MPI2_SAS_DEVICE0_FLAGS_SATA_48BIT_LBA_SUPPORTED (0x0080) #define MPI2_SAS_DEVICE0_FLAGS_SATA_SMART_SUPPORTED (0x0040) #define MPI2_SAS_DEVICE0_FLAGS_SATA_NCQ_SUPPORTED (0x0020) #define MPI2_SAS_DEVICE0_FLAGS_SATA_FUA_SUPPORTED (0x0010) #define MPI2_SAS_DEVICE0_FLAGS_PORT_SELECTOR_ATTACH (0x0008) #define MPI2_SAS_DEVICE0_FLAGS_PERSIST_CAPABLE (0x0004) #define MPI2_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID (0x0002) #define MPI2_SAS_DEVICE0_FLAGS_DEVICE_PRESENT (0x0001) /* SAS Device Page 1 */ typedef struct _MPI2_CONFIG_PAGE_SAS_DEV_1 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x08 */ U64 SASAddress; /* 0x0C */ U32 Reserved2; /* 0x14 */ U16 DevHandle; /* 0x18 */ U16 Reserved3; /* 0x1A */ U8 InitialRegDeviceFIS[20];/* 0x1C */ } MPI2_CONFIG_PAGE_SAS_DEV_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_DEV_1, Mpi2SasDevicePage1_t, MPI2_POINTER pMpi2SasDevicePage1_t; #define MPI2_SASDEVICE1_PAGEVERSION (0x01) /**************************************************************************** * SAS PHY Config Pages ****************************************************************************/ /* SAS PHY Page 0 */ typedef struct _MPI2_CONFIG_PAGE_SAS_PHY_0 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U16 
OwnerDevHandle; /* 0x08 */ U16 Reserved1; /* 0x0A */ U16 AttachedDevHandle; /* 0x0C */ U8 AttachedPhyIdentifier; /* 0x0E */ U8 Reserved2; /* 0x0F */ U32 AttachedPhyInfo; /* 0x10 */ U8 ProgrammedLinkRate; /* 0x14 */ U8 HwLinkRate; /* 0x15 */ U8 ChangeCount; /* 0x16 */ U8 Flags; /* 0x17 */ U32 PhyInfo; /* 0x18 */ U8 NegotiatedLinkRate; /* 0x1C */ U8 Reserved3; /* 0x1D */ U16 Reserved4; /* 0x1E */ } MPI2_CONFIG_PAGE_SAS_PHY_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_PHY_0, Mpi2SasPhyPage0_t, MPI2_POINTER pMpi2SasPhyPage0_t; #define MPI2_SASPHY0_PAGEVERSION (0x03) /* use MPI2_SAS_APHYINFO_ defines for AttachedPhyInfo field */ /* use MPI2_SAS_PRATE_ defines for the ProgrammedLinkRate field */ /* use MPI2_SAS_HWRATE_ defines for the HwLinkRate field */ /* values for SAS PHY Page 0 Flags field */ #define MPI2_SAS_PHY0_FLAGS_SGPIO_DIRECT_ATTACH_ENC (0x01) /* use MPI2_SAS_PHYINFO_ for the PhyInfo field */ /* use MPI2_SAS_NEG_LINK_RATE_ defines for the NegotiatedLinkRate field */ /* SAS PHY Page 1 */ typedef struct _MPI2_CONFIG_PAGE_SAS_PHY_1 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x08 */ U32 InvalidDwordCount; /* 0x0C */ U32 RunningDisparityErrorCount; /* 0x10 */ U32 LossDwordSynchCount; /* 0x14 */ U32 PhyResetProblemCount; /* 0x18 */ } MPI2_CONFIG_PAGE_SAS_PHY_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_PHY_1, Mpi2SasPhyPage1_t, MPI2_POINTER pMpi2SasPhyPage1_t; #define MPI2_SASPHY1_PAGEVERSION (0x01) /* SAS PHY Page 2 */ typedef struct _MPI2_SASPHY2_PHY_EVENT { U8 PhyEventCode; /* 0x00 */ U8 Reserved1; /* 0x01 */ U16 Reserved2; /* 0x02 */ U32 PhyEventInfo; /* 0x04 */ } MPI2_SASPHY2_PHY_EVENT, MPI2_POINTER PTR_MPI2_SASPHY2_PHY_EVENT, Mpi2SasPhy2PhyEvent_t, MPI2_POINTER pMpi2SasPhy2PhyEvent_t; /* use MPI2_SASPHY3_EVENT_CODE_ for the PhyEventCode field */ /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumPhyEvents at runtime. 
*/ #ifndef MPI2_SASPHY2_PHY_EVENT_MAX #define MPI2_SASPHY2_PHY_EVENT_MAX (1) #endif typedef struct _MPI2_CONFIG_PAGE_SAS_PHY_2 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x08 */ U8 NumPhyEvents; /* 0x0C */ U8 Reserved2; /* 0x0D */ U16 Reserved3; /* 0x0E */ MPI2_SASPHY2_PHY_EVENT PhyEvent[MPI2_SASPHY2_PHY_EVENT_MAX]; /* 0x10 */ } MPI2_CONFIG_PAGE_SAS_PHY_2, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_PHY_2, Mpi2SasPhyPage2_t, MPI2_POINTER pMpi2SasPhyPage2_t; #define MPI2_SASPHY2_PAGEVERSION (0x00) /* SAS PHY Page 3 */ typedef struct _MPI2_SASPHY3_PHY_EVENT_CONFIG { U8 PhyEventCode; /* 0x00 */ U8 Reserved1; /* 0x01 */ U16 Reserved2; /* 0x02 */ U8 CounterType; /* 0x04 */ U8 ThresholdWindow; /* 0x05 */ U8 TimeUnits; /* 0x06 */ U8 Reserved3; /* 0x07 */ U32 EventThreshold; /* 0x08 */ U16 ThresholdFlags; /* 0x0C */ U16 Reserved4; /* 0x0E */ } MPI2_SASPHY3_PHY_EVENT_CONFIG, MPI2_POINTER PTR_MPI2_SASPHY3_PHY_EVENT_CONFIG, Mpi2SasPhy3PhyEventConfig_t, MPI2_POINTER pMpi2SasPhy3PhyEventConfig_t; /* values for PhyEventCode field */ #define MPI2_SASPHY3_EVENT_CODE_NO_EVENT (0x00) #define MPI2_SASPHY3_EVENT_CODE_INVALID_DWORD (0x01) #define MPI2_SASPHY3_EVENT_CODE_RUNNING_DISPARITY_ERROR (0x02) #define MPI2_SASPHY3_EVENT_CODE_LOSS_DWORD_SYNC (0x03) #define MPI2_SASPHY3_EVENT_CODE_PHY_RESET_PROBLEM (0x04) #define MPI2_SASPHY3_EVENT_CODE_ELASTICITY_BUF_OVERFLOW (0x05) #define MPI2_SASPHY3_EVENT_CODE_RX_ERROR (0x06) #define MPI2_SASPHY3_EVENT_CODE_RX_ADDR_FRAME_ERROR (0x20) #define MPI2_SASPHY3_EVENT_CODE_TX_AC_OPEN_REJECT (0x21) #define MPI2_SASPHY3_EVENT_CODE_RX_AC_OPEN_REJECT (0x22) #define MPI2_SASPHY3_EVENT_CODE_TX_RC_OPEN_REJECT (0x23) #define MPI2_SASPHY3_EVENT_CODE_RX_RC_OPEN_REJECT (0x24) #define MPI2_SASPHY3_EVENT_CODE_RX_AIP_PARTIAL_WAITING_ON (0x25) #define MPI2_SASPHY3_EVENT_CODE_RX_AIP_CONNECT_WAITING_ON (0x26) #define MPI2_SASPHY3_EVENT_CODE_TX_BREAK (0x27) #define MPI2_SASPHY3_EVENT_CODE_RX_BREAK (0x28) #define 
MPI2_SASPHY3_EVENT_CODE_BREAK_TIMEOUT (0x29) #define MPI2_SASPHY3_EVENT_CODE_CONNECTION (0x2A) #define MPI2_SASPHY3_EVENT_CODE_PEAKTX_PATHWAY_BLOCKED (0x2B) #define MPI2_SASPHY3_EVENT_CODE_PEAKTX_ARB_WAIT_TIME (0x2C) #define MPI2_SASPHY3_EVENT_CODE_PEAK_ARB_WAIT_TIME (0x2D) #define MPI2_SASPHY3_EVENT_CODE_PEAK_CONNECT_TIME (0x2E) #define MPI2_SASPHY3_EVENT_CODE_TX_SSP_FRAMES (0x40) #define MPI2_SASPHY3_EVENT_CODE_RX_SSP_FRAMES (0x41) #define MPI2_SASPHY3_EVENT_CODE_TX_SSP_ERROR_FRAMES (0x42) #define MPI2_SASPHY3_EVENT_CODE_RX_SSP_ERROR_FRAMES (0x43) #define MPI2_SASPHY3_EVENT_CODE_TX_CREDIT_BLOCKED (0x44) #define MPI2_SASPHY3_EVENT_CODE_RX_CREDIT_BLOCKED (0x45) #define MPI2_SASPHY3_EVENT_CODE_TX_SATA_FRAMES (0x50) #define MPI2_SASPHY3_EVENT_CODE_RX_SATA_FRAMES (0x51) #define MPI2_SASPHY3_EVENT_CODE_SATA_OVERFLOW (0x52) #define MPI2_SASPHY3_EVENT_CODE_TX_SMP_FRAMES (0x60) #define MPI2_SASPHY3_EVENT_CODE_RX_SMP_FRAMES (0x61) #define MPI2_SASPHY3_EVENT_CODE_RX_SMP_ERROR_FRAMES (0x63) #define MPI2_SASPHY3_EVENT_CODE_HOTPLUG_TIMEOUT (0xD0) #define MPI2_SASPHY3_EVENT_CODE_MISALIGNED_MUX_PRIMITIVE (0xD1) #define MPI2_SASPHY3_EVENT_CODE_RX_AIP (0xD2) /* Following codes are product specific and in MPI v2.6 and later */ #define MPI2_SASPHY3_EVENT_CODE_LCARB_WAIT_TIME (0xD3) #define MPI2_SASPHY3_EVENT_CODE_RCVD_CONN_RESP_WAIT_TIME (0xD4) #define MPI2_SASPHY3_EVENT_CODE_LCCONN_TIME (0xD5) #define MPI2_SASPHY3_EVENT_CODE_SSP_TX_START_TRANSMIT (0xD6) #define MPI2_SASPHY3_EVENT_CODE_SATA_TX_START (0xD7) #define MPI2_SASPHY3_EVENT_CODE_SMP_TX_START_TRANSMT (0xD8) #define MPI2_SASPHY3_EVENT_CODE_TX_SMP_BREAK_CONN (0xD9) #define MPI2_SASPHY3_EVENT_CODE_SSP_RX_START_RECEIVE (0xDA) #define MPI2_SASPHY3_EVENT_CODE_SATA_RX_START_RECEIVE (0xDB) #define MPI2_SASPHY3_EVENT_CODE_SMP_RX_START_RECEIVE (0xDC) /* values for the CounterType field */ #define MPI2_SASPHY3_COUNTER_TYPE_WRAPPING (0x00) #define MPI2_SASPHY3_COUNTER_TYPE_SATURATING (0x01) #define MPI2_SASPHY3_COUNTER_TYPE_PEAK_VALUE 
(0x02) /* values for the TimeUnits field */ #define MPI2_SASPHY3_TIME_UNITS_10_MICROSECONDS (0x00) #define MPI2_SASPHY3_TIME_UNITS_100_MICROSECONDS (0x01) #define MPI2_SASPHY3_TIME_UNITS_1_MILLISECOND (0x02) #define MPI2_SASPHY3_TIME_UNITS_10_MILLISECONDS (0x03) /* values for the ThresholdFlags field */ #define MPI2_SASPHY3_TFLAGS_PHY_RESET (0x0002) #define MPI2_SASPHY3_TFLAGS_EVENT_NOTIFY (0x0001) /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumPhyEvents at runtime. */ #ifndef MPI2_SASPHY3_PHY_EVENT_MAX #define MPI2_SASPHY3_PHY_EVENT_MAX (1) #endif typedef struct _MPI2_CONFIG_PAGE_SAS_PHY_3 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x08 */ U8 NumPhyEvents; /* 0x0C */ U8 Reserved2; /* 0x0D */ U16 Reserved3; /* 0x0E */ MPI2_SASPHY3_PHY_EVENT_CONFIG PhyEventConfig[MPI2_SASPHY3_PHY_EVENT_MAX]; /* 0x10 */ } MPI2_CONFIG_PAGE_SAS_PHY_3, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_PHY_3, Mpi2SasPhyPage3_t, MPI2_POINTER pMpi2SasPhyPage3_t; #define MPI2_SASPHY3_PAGEVERSION (0x00) /* SAS PHY Page 4 */ typedef struct _MPI2_CONFIG_PAGE_SAS_PHY_4 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U16 Reserved1; /* 0x08 */ U8 Reserved2; /* 0x0A */ U8 Flags; /* 0x0B */ U8 InitialFrame[28]; /* 0x0C */ } MPI2_CONFIG_PAGE_SAS_PHY_4, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_PHY_4, Mpi2SasPhyPage4_t, MPI2_POINTER pMpi2SasPhyPage4_t; #define MPI2_SASPHY4_PAGEVERSION (0x00) /* values for the Flags field */ #define MPI2_SASPHY4_FLAGS_FRAME_VALID (0x02) #define MPI2_SASPHY4_FLAGS_SATA_FRAME (0x01) /**************************************************************************** * SAS Port Config Pages ****************************************************************************/ /* SAS Port Page 0 */ typedef struct _MPI2_CONFIG_PAGE_SAS_PORT_0 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U8 PortNumber; /* 0x08 */ U8 PhysicalPort; /* 0x09 */ U8 PortWidth; /* 0x0A */ U8 PhysicalPortWidth; 
 /* 0x0B */
     U8          ZoneGroup;              /* 0x0C */
     U8          Reserved1;              /* 0x0D */
     U16         Reserved2;              /* 0x0E */
     U64         SASAddress;             /* 0x10 */
     U32         DeviceInfo;             /* 0x18 */
     U32         Reserved3;              /* 0x1C */
     U32         Reserved4;              /* 0x20 */
 } MPI2_CONFIG_PAGE_SAS_PORT_0,
   MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_PORT_0,
   Mpi2SasPortPage0_t, MPI2_POINTER pMpi2SasPortPage0_t;
 
 #define MPI2_SASPORT0_PAGEVERSION           (0x00)
 
 /* see mpi2_sas.h for values for SAS Port Page 0 DeviceInfo values */
 
 
 /****************************************************************************
 *   SAS Enclosure Config Pages
 ****************************************************************************/
 
 /* SAS Enclosure Page 0, Enclosure Page 0 */
 
 typedef struct _MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0
 {
     MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header; /* 0x00 */
     U32         Reserved1;              /* 0x08 */
     U64         EnclosureLogicalID;     /* 0x0C */
     U16         Flags;                  /* 0x14 */
     U16         EnclosureHandle;        /* 0x16 */
     U16         NumSlots;               /* 0x18 */
     U16         StartSlot;              /* 0x1A */
     U8          ChassisSlot;            /* 0x1C */
     U8          EnclosureLevel;         /* 0x1D */
     U16         SEPDevHandle;           /* 0x1E */
-    U32         Reserved2;              /* 0x20 */
+    U8          OEMRD;                  /* 0x20 */
+    U8          Reserved1a;             /* 0x21 */
+    U16         Reserved2;              /* 0x22 */
     U32         Reserved3;              /* 0x24 */
 } MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0,
   MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0,
   Mpi2SasEnclosurePage0_t, MPI2_POINTER pMpi2SasEnclosurePage0_t,
   MPI26_CONFIG_PAGE_ENCLOSURE_0,
   MPI2_POINTER PTR_MPI26_CONFIG_PAGE_ENCLOSURE_0,
   Mpi26EnclosurePage0_t, MPI2_POINTER pMpi26EnclosurePage0_t;
 
 #define MPI2_SASENCLOSURE0_PAGEVERSION      (0x04)
 
 /* values for SAS Enclosure Page 0 Flags field */
+#define MPI26_SAS_ENCLS0_FLAGS_OEMRD_VALID          (0x0080)
+#define MPI26_SAS_ENCLS0_FLAGS_OEMRD_COLLECTING     (0x0040)
 #define MPI2_SAS_ENCLS0_FLAGS_CHASSIS_SLOT_VALID    (0x0020)
 #define MPI2_SAS_ENCLS0_FLAGS_ENCL_LEVEL_VALID      (0x0010)
 #define MPI2_SAS_ENCLS0_FLAGS_MNG_MASK              (0x000F)
 #define MPI2_SAS_ENCLS0_FLAGS_MNG_UNKNOWN           (0x0000)
 #define MPI2_SAS_ENCLS0_FLAGS_MNG_IOC_SES           (0x0001)
 #define MPI2_SAS_ENCLS0_FLAGS_MNG_IOC_SGPIO         (0x0002)
 #define MPI2_SAS_ENCLS0_FLAGS_MNG_EXP_SGPIO         (0x0003)
 #define MPI2_SAS_ENCLS0_FLAGS_MNG_SES_ENCLOSURE     (0x0004)
 #define MPI2_SAS_ENCLS0_FLAGS_MNG_IOC_GPIO          (0x0005)
 
 #define MPI26_ENCLOSURE0_PAGEVERSION        (0x04)
 
 /* Values for Enclosure Page 0 Flags field */
+#define MPI26_ENCLS0_FLAGS_OEMRD_VALID              (0x0080)
+#define MPI26_ENCLS0_FLAGS_OEMRD_COLLECTING         (0x0040)
 #define MPI26_ENCLS0_FLAGS_CHASSIS_SLOT_VALID       (0x0020)
 #define MPI26_ENCLS0_FLAGS_ENCL_LEVEL_VALID         (0x0010)
 #define MPI26_ENCLS0_FLAGS_MNG_MASK                 (0x000F)
 #define MPI26_ENCLS0_FLAGS_MNG_UNKNOWN              (0x0000)
 #define MPI26_ENCLS0_FLAGS_MNG_IOC_SES              (0x0001)
 #define MPI26_ENCLS0_FLAGS_MNG_IOC_SGPIO            (0x0002)
 #define MPI26_ENCLS0_FLAGS_MNG_EXP_SGPIO            (0x0003)
 #define MPI26_ENCLS0_FLAGS_MNG_SES_ENCLOSURE        (0x0004)
 #define MPI26_ENCLS0_FLAGS_MNG_IOC_GPIO             (0x0005)
 
 
 /****************************************************************************
 *   Log Config Page
 ****************************************************************************/
 
 /* Log Page 0 */
 
 /*
  * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
  * one and check the value returned for NumLogEntries at runtime.
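 */

The Enclosure Page 0 Flags word packs a validity bit for the new OEMRD byte alongside a 4-bit management-mode field. As a minimal, self-contained sketch (not part of the MPI headers; the constants below are local copies of the `MPI26_ENCLS0_FLAGS_` values defined above), a host driver would decode it like this:

```c
#include <stdbool.h>
#include <stdint.h>

/* Local copies of the Enclosure Page 0 Flags bits defined in the header. */
#define ENCLS0_FLAGS_OEMRD_VALID    0x0080
#define ENCLS0_FLAGS_MNG_MASK       0x000F

/* True when the OEMRD byte of Enclosure Page 0 holds valid data. */
static bool
encl_oemrd_valid(uint16_t flags)
{
    return (flags & ENCLS0_FLAGS_OEMRD_VALID) != 0;
}

/* Extracts the enclosure-management mode (the MNG_ values, 0..5). */
static unsigned
encl_mng_mode(uint16_t flags)
{
    return flags & ENCLS0_FLAGS_MNG_MASK;
}
```

The same mask-and-test pattern applies to the parallel `MPI26_SAS_ENCLS0_FLAGS_` names kept for the older page type.

/*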
*/ #ifndef MPI2_LOG_0_NUM_LOG_ENTRIES #define MPI2_LOG_0_NUM_LOG_ENTRIES (1) #endif #define MPI2_LOG_0_LOG_DATA_LENGTH (0x1C) typedef struct _MPI2_LOG_0_ENTRY { U64 TimeStamp; /* 0x00 */ U32 Reserved1; /* 0x08 */ U16 LogSequence; /* 0x0C */ U16 LogEntryQualifier; /* 0x0E */ U8 VP_ID; /* 0x10 */ U8 VF_ID; /* 0x11 */ U16 Reserved2; /* 0x12 */ U8 LogData[MPI2_LOG_0_LOG_DATA_LENGTH];/* 0x14 */ } MPI2_LOG_0_ENTRY, MPI2_POINTER PTR_MPI2_LOG_0_ENTRY, Mpi2Log0Entry_t, MPI2_POINTER pMpi2Log0Entry_t; /* values for Log Page 0 LogEntry LogEntryQualifier field */ #define MPI2_LOG_0_ENTRY_QUAL_ENTRY_UNUSED (0x0000) #define MPI2_LOG_0_ENTRY_QUAL_POWER_ON_RESET (0x0001) #define MPI2_LOG_0_ENTRY_QUAL_TIMESTAMP_UPDATE (0x0002) #define MPI2_LOG_0_ENTRY_QUAL_MIN_IMPLEMENT_SPEC (0x8000) #define MPI2_LOG_0_ENTRY_QUAL_MAX_IMPLEMENT_SPEC (0xFFFF) typedef struct _MPI2_CONFIG_PAGE_LOG_0 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U32 Reserved1; /* 0x08 */ U32 Reserved2; /* 0x0C */ U16 NumLogEntries; /* 0x10 */ U16 Reserved3; /* 0x12 */ MPI2_LOG_0_ENTRY LogEntry[MPI2_LOG_0_NUM_LOG_ENTRIES]; /* 0x14 */ } MPI2_CONFIG_PAGE_LOG_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_LOG_0, Mpi2LogPage0_t, MPI2_POINTER pMpi2LogPage0_t; #define MPI2_LOG_0_PAGEVERSION (0x02) /**************************************************************************** * RAID Config Page ****************************************************************************/ /* RAID Page 0 */ /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumElements at runtime. 
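 */

A comment like the one above precedes every variable-length config page in this header: the page is compiled with a one-element array, and host code must size its buffer from the count the IOC returns. As a minimal, self-contained sketch (not part of the MPI headers; the element type and fixed-size constant are local stand-ins), this is how that rule works out for RAID Configuration Page 0:

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for MPI2_RAIDCONFIG0_CONFIG_ELEMENT so the sketch compiles alone;
 * field sizes match the offsets documented in the header (8 bytes total). */
typedef struct {
    uint16_t ElementFlags;
    uint16_t VolDevHandle;
    uint8_t  HotSparePool;
    uint8_t  PhysDiskNum;
    uint16_t PhysDiskDevHandle;
} raidconfig0_element_t;

/* Offset of ConfigElement[] per the struct's offset comments. */
#define RAIDCONFIG0_FIXED_SIZE 0x30

/* Bytes to allocate for the full page once NumElements is known at runtime. */
static size_t
raidconfig0_page_size(uint8_t num_elements)
{
    return RAIDCONFIG0_FIXED_SIZE +
        (size_t)num_elements * sizeof(raidconfig0_element_t);
}
```

The same arithmetic applies to the other pages ending in a one-element array (`PhyEvent[]`, `LogEntry[]`, `PhyData[]`, `LinkEvent[]`); drivers typically read the fixed portion first, then re-read with the computed length.

/*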
*/ #ifndef MPI2_RAIDCONFIG0_MAX_ELEMENTS #define MPI2_RAIDCONFIG0_MAX_ELEMENTS (1) #endif typedef struct _MPI2_RAIDCONFIG0_CONFIG_ELEMENT { U16 ElementFlags; /* 0x00 */ U16 VolDevHandle; /* 0x02 */ U8 HotSparePool; /* 0x04 */ U8 PhysDiskNum; /* 0x05 */ U16 PhysDiskDevHandle; /* 0x06 */ } MPI2_RAIDCONFIG0_CONFIG_ELEMENT, MPI2_POINTER PTR_MPI2_RAIDCONFIG0_CONFIG_ELEMENT, Mpi2RaidConfig0ConfigElement_t, MPI2_POINTER pMpi2RaidConfig0ConfigElement_t; /* values for the ElementFlags field */ #define MPI2_RAIDCONFIG0_EFLAGS_MASK_ELEMENT_TYPE (0x000F) #define MPI2_RAIDCONFIG0_EFLAGS_VOLUME_ELEMENT (0x0000) #define MPI2_RAIDCONFIG0_EFLAGS_VOL_PHYS_DISK_ELEMENT (0x0001) #define MPI2_RAIDCONFIG0_EFLAGS_HOT_SPARE_ELEMENT (0x0002) #define MPI2_RAIDCONFIG0_EFLAGS_OCE_ELEMENT (0x0003) typedef struct _MPI2_CONFIG_PAGE_RAID_CONFIGURATION_0 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U8 NumHotSpares; /* 0x08 */ U8 NumPhysDisks; /* 0x09 */ U8 NumVolumes; /* 0x0A */ U8 ConfigNum; /* 0x0B */ U32 Flags; /* 0x0C */ U8 ConfigGUID[24]; /* 0x10 */ U32 Reserved1; /* 0x28 */ U8 NumElements; /* 0x2C */ U8 Reserved2; /* 0x2D */ U16 Reserved3; /* 0x2E */ MPI2_RAIDCONFIG0_CONFIG_ELEMENT ConfigElement[MPI2_RAIDCONFIG0_MAX_ELEMENTS]; /* 0x30 */ } MPI2_CONFIG_PAGE_RAID_CONFIGURATION_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_RAID_CONFIGURATION_0, Mpi2RaidConfigurationPage0_t, MPI2_POINTER pMpi2RaidConfigurationPage0_t; #define MPI2_RAIDCONFIG0_PAGEVERSION (0x00) /* values for RAID Configuration Page 0 Flags field */ #define MPI2_RAIDCONFIG0_FLAG_FOREIGN_CONFIG (0x00000001) /**************************************************************************** * Driver Persistent Mapping Config Pages ****************************************************************************/ /* Driver Persistent Mapping Page 0 */ typedef struct _MPI2_CONFIG_PAGE_DRIVER_MAP0_ENTRY { U64 PhysicalIdentifier; /* 0x00 */ U16 MappingInformation; /* 0x08 */ U16 DeviceIndex; /* 0x0A */ U32 PhysicalBitsMapping; /* 0x0C */ U32 
Reserved1; /* 0x10 */ } MPI2_CONFIG_PAGE_DRIVER_MAP0_ENTRY, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_DRIVER_MAP0_ENTRY, Mpi2DriverMap0Entry_t, MPI2_POINTER pMpi2DriverMap0Entry_t; typedef struct _MPI2_CONFIG_PAGE_DRIVER_MAPPING_0 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ MPI2_CONFIG_PAGE_DRIVER_MAP0_ENTRY Entry; /* 0x08 */ } MPI2_CONFIG_PAGE_DRIVER_MAPPING_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_DRIVER_MAPPING_0, Mpi2DriverMappingPage0_t, MPI2_POINTER pMpi2DriverMappingPage0_t; #define MPI2_DRIVERMAPPING0_PAGEVERSION (0x00) /* values for Driver Persistent Mapping Page 0 MappingInformation field */ #define MPI2_DRVMAP0_MAPINFO_SLOT_MASK (0x07F0) #define MPI2_DRVMAP0_MAPINFO_SLOT_SHIFT (4) #define MPI2_DRVMAP0_MAPINFO_MISSING_MASK (0x000F) /**************************************************************************** * Ethernet Config Pages ****************************************************************************/ /* Ethernet Page 0 */ /* IP address (union of IPv4 and IPv6) */ typedef union _MPI2_ETHERNET_IP_ADDR { U32 IPv4Addr; U32 IPv6Addr[4]; } MPI2_ETHERNET_IP_ADDR, MPI2_POINTER PTR_MPI2_ETHERNET_IP_ADDR, Mpi2EthernetIpAddr_t, MPI2_POINTER pMpi2EthernetIpAddr_t; #define MPI2_ETHERNET_HOST_NAME_LENGTH (32) typedef struct _MPI2_CONFIG_PAGE_ETHERNET_0 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U8 NumInterfaces; /* 0x08 */ U8 Reserved0; /* 0x09 */ U16 Reserved1; /* 0x0A */ U32 Status; /* 0x0C */ U8 MediaState; /* 0x10 */ U8 Reserved2; /* 0x11 */ U16 Reserved3; /* 0x12 */ U8 MacAddress[6]; /* 0x14 */ U8 Reserved4; /* 0x1A */ U8 Reserved5; /* 0x1B */ MPI2_ETHERNET_IP_ADDR IpAddress; /* 0x1C */ MPI2_ETHERNET_IP_ADDR SubnetMask; /* 0x2C */ MPI2_ETHERNET_IP_ADDR GatewayIpAddress; /* 0x3C */ MPI2_ETHERNET_IP_ADDR DNS1IpAddress; /* 0x4C */ MPI2_ETHERNET_IP_ADDR DNS2IpAddress; /* 0x5C */ MPI2_ETHERNET_IP_ADDR DhcpIpAddress; /* 0x6C */ U8 HostName[MPI2_ETHERNET_HOST_NAME_LENGTH];/* 0x7C */ } MPI2_CONFIG_PAGE_ETHERNET_0, MPI2_POINTER 
PTR_MPI2_CONFIG_PAGE_ETHERNET_0, Mpi2EthernetPage0_t, MPI2_POINTER pMpi2EthernetPage0_t; #define MPI2_ETHERNETPAGE0_PAGEVERSION (0x00) /* values for Ethernet Page 0 Status field */ #define MPI2_ETHPG0_STATUS_IPV6_CAPABLE (0x80000000) #define MPI2_ETHPG0_STATUS_IPV4_CAPABLE (0x40000000) #define MPI2_ETHPG0_STATUS_CONSOLE_CONNECTED (0x20000000) #define MPI2_ETHPG0_STATUS_DEFAULT_IF (0x00000100) #define MPI2_ETHPG0_STATUS_FW_DWNLD_ENABLED (0x00000080) #define MPI2_ETHPG0_STATUS_TELNET_ENABLED (0x00000040) #define MPI2_ETHPG0_STATUS_SSH2_ENABLED (0x00000020) #define MPI2_ETHPG0_STATUS_DHCP_CLIENT_ENABLED (0x00000010) #define MPI2_ETHPG0_STATUS_IPV6_ENABLED (0x00000008) #define MPI2_ETHPG0_STATUS_IPV4_ENABLED (0x00000004) #define MPI2_ETHPG0_STATUS_IPV6_ADDRESSES (0x00000002) #define MPI2_ETHPG0_STATUS_ETH_IF_ENABLED (0x00000001) /* values for Ethernet Page 0 MediaState field */ #define MPI2_ETHPG0_MS_DUPLEX_MASK (0x80) #define MPI2_ETHPG0_MS_HALF_DUPLEX (0x00) #define MPI2_ETHPG0_MS_FULL_DUPLEX (0x80) #define MPI2_ETHPG0_MS_CONNECT_SPEED_MASK (0x07) #define MPI2_ETHPG0_MS_NOT_CONNECTED (0x00) #define MPI2_ETHPG0_MS_10MBIT (0x01) #define MPI2_ETHPG0_MS_100MBIT (0x02) #define MPI2_ETHPG0_MS_1GBIT (0x03) /* Ethernet Page 1 */ typedef struct _MPI2_CONFIG_PAGE_ETHERNET_1 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U32 Reserved0; /* 0x08 */ U32 Flags; /* 0x0C */ U8 MediaState; /* 0x10 */ U8 Reserved1; /* 0x11 */ U16 Reserved2; /* 0x12 */ U8 MacAddress[6]; /* 0x14 */ U8 Reserved3; /* 0x1A */ U8 Reserved4; /* 0x1B */ MPI2_ETHERNET_IP_ADDR StaticIpAddress; /* 0x1C */ MPI2_ETHERNET_IP_ADDR StaticSubnetMask; /* 0x2C */ MPI2_ETHERNET_IP_ADDR StaticGatewayIpAddress; /* 0x3C */ MPI2_ETHERNET_IP_ADDR StaticDNS1IpAddress; /* 0x4C */ MPI2_ETHERNET_IP_ADDR StaticDNS2IpAddress; /* 0x5C */ U32 Reserved5; /* 0x6C */ U32 Reserved6; /* 0x70 */ U32 Reserved7; /* 0x74 */ U32 Reserved8; /* 0x78 */ U8 HostName[MPI2_ETHERNET_HOST_NAME_LENGTH];/* 0x7C */ } MPI2_CONFIG_PAGE_ETHERNET_1, 
MPI2_POINTER PTR_MPI2_CONFIG_PAGE_ETHERNET_1, Mpi2EthernetPage1_t, MPI2_POINTER pMpi2EthernetPage1_t; #define MPI2_ETHERNETPAGE1_PAGEVERSION (0x00) /* values for Ethernet Page 1 Flags field */ #define MPI2_ETHPG1_FLAG_SET_DEFAULT_IF (0x00000100) #define MPI2_ETHPG1_FLAG_ENABLE_FW_DOWNLOAD (0x00000080) #define MPI2_ETHPG1_FLAG_ENABLE_TELNET (0x00000040) #define MPI2_ETHPG1_FLAG_ENABLE_SSH2 (0x00000020) #define MPI2_ETHPG1_FLAG_ENABLE_DHCP_CLIENT (0x00000010) #define MPI2_ETHPG1_FLAG_ENABLE_IPV6 (0x00000008) #define MPI2_ETHPG1_FLAG_ENABLE_IPV4 (0x00000004) #define MPI2_ETHPG1_FLAG_USE_IPV6_ADDRESSES (0x00000002) #define MPI2_ETHPG1_FLAG_ENABLE_ETH_IF (0x00000001) /* values for Ethernet Page 1 MediaState field */ #define MPI2_ETHPG1_MS_DUPLEX_MASK (0x80) #define MPI2_ETHPG1_MS_HALF_DUPLEX (0x00) #define MPI2_ETHPG1_MS_FULL_DUPLEX (0x80) #define MPI2_ETHPG1_MS_DATA_RATE_MASK (0x07) #define MPI2_ETHPG1_MS_DATA_RATE_AUTO (0x00) #define MPI2_ETHPG1_MS_DATA_RATE_10MBIT (0x01) #define MPI2_ETHPG1_MS_DATA_RATE_100MBIT (0x02) #define MPI2_ETHPG1_MS_DATA_RATE_1GBIT (0x03) /**************************************************************************** * Extended Manufacturing Config Pages ****************************************************************************/ /* * Generic structure to use for product-specific extended manufacturing pages * (currently Extended Manufacturing Page 40 through Extended Manufacturing * Page 60). 
*/ typedef struct _MPI2_CONFIG_PAGE_EXT_MAN_PS { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U32 ProductSpecificInfo; /* 0x08 */ } MPI2_CONFIG_PAGE_EXT_MAN_PS, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_EXT_MAN_PS, Mpi2ExtManufacturingPagePS_t, MPI2_POINTER pMpi2ExtManufacturingPagePS_t; /* PageVersion should be provided by product-specific code */ /**************************************************************************** * values for fields used by several types of PCIe Config Pages ****************************************************************************/ /* values for NegotiatedLinkRates fields */ #define MPI26_PCIE_NEG_LINK_RATE_MASK_PHYSICAL (0x0F) /* link rates used for Negotiated Physical Link Rate */ #define MPI26_PCIE_NEG_LINK_RATE_UNKNOWN (0x00) #define MPI26_PCIE_NEG_LINK_RATE_PHY_DISABLED (0x01) #define MPI26_PCIE_NEG_LINK_RATE_2_5 (0x02) #define MPI26_PCIE_NEG_LINK_RATE_5_0 (0x03) #define MPI26_PCIE_NEG_LINK_RATE_8_0 (0x04) #define MPI26_PCIE_NEG_LINK_RATE_16_0 (0x05) /**************************************************************************** * PCIe IO Unit Config Pages (MPI v2.6 and later) ****************************************************************************/ /* PCIe IO Unit Page 0 */ typedef struct _MPI26_PCIE_IO_UNIT0_PHY_DATA { U8 Link; /* 0x00 */ U8 LinkFlags; /* 0x01 */ U8 PhyFlags; /* 0x02 */ U8 NegotiatedLinkRate; /* 0x03 */ U32 ControllerPhyDeviceInfo;/* 0x04 */ U16 AttachedDevHandle; /* 0x08 */ U16 ControllerDevHandle; /* 0x0A */ U32 EnumerationStatus; /* 0x0C */ U32 Reserved1; /* 0x10 */ } MPI26_PCIE_IO_UNIT0_PHY_DATA, MPI2_POINTER PTR_MPI26_PCIE_IO_UNIT0_PHY_DATA, Mpi26PCIeIOUnit0PhyData_t, MPI2_POINTER pMpi26PCIeIOUnit0PhyData_t; /* * Host code (drivers, BIOS, utilities, etc.) should leave this define set to * one and check the value returned for NumPhys at runtime. 
 */
 #ifndef MPI26_PCIE_IOUNIT0_PHY_MAX
 #define MPI26_PCIE_IOUNIT0_PHY_MAX      (1)
 #endif
 
 typedef struct _MPI26_CONFIG_PAGE_PIOUNIT_0
 {
     MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header; /* 0x00 */
     U32         Reserved1;              /* 0x08 */
     U8          NumPhys;                /* 0x0C */
     U8          InitStatus;             /* 0x0D */
     U16         Reserved3;              /* 0x0E */
     MPI26_PCIE_IO_UNIT0_PHY_DATA
                 PhyData[MPI26_PCIE_IOUNIT0_PHY_MAX];    /* 0x10 */
 } MPI26_CONFIG_PAGE_PIOUNIT_0,
   MPI2_POINTER PTR_MPI26_CONFIG_PAGE_PIOUNIT_0,
   Mpi26PCIeIOUnitPage0_t, MPI2_POINTER pMpi26PCIeIOUnitPage0_t;
 
 #define MPI26_PCIEIOUNITPAGE0_PAGEVERSION   (0x00)
 
 /* values for PCIe IO Unit Page 0 LinkFlags */
 #define MPI26_PCIEIOUNIT0_LINKFLAGS_ENUMERATION_IN_PROGRESS (0x08)
 
 /* values for PCIe IO Unit Page 0 PhyFlags */
 #define MPI26_PCIEIOUNIT0_PHYFLAGS_PHY_DISABLED     (0x08)
 
 /* use MPI26_PCIE_NEG_LINK_RATE_ defines for the NegotiatedLinkRate field */
 
 /* see mpi2_pci.h for values for PCIe IO Unit Page 0 ControllerPhyDeviceInfo
    values */
 
 /* values for PCIe IO Unit Page 0 EnumerationStatus */
 #define MPI26_PCIEIOUNIT0_ES_MAX_SWITCHES_EXCEEDED  (0x40000000)
 #define MPI26_PCIEIOUNIT0_ES_MAX_DEVICES_EXCEEDED   (0x20000000)
 
 
 /* PCIe IO Unit Page 1 */
 
 typedef struct _MPI26_PCIE_IO_UNIT1_PHY_DATA
 {
     U8          Link;                       /* 0x00 */
     U8          LinkFlags;                  /* 0x01 */
     U8          PhyFlags;                   /* 0x02 */
     U8          MaxMinLinkRate;             /* 0x03 */
     U32         ControllerPhyDeviceInfo;    /* 0x04 */
     U32         Reserved1;                  /* 0x08 */
 } MPI26_PCIE_IO_UNIT1_PHY_DATA,
   MPI2_POINTER PTR_MPI26_PCIE_IO_UNIT1_PHY_DATA,
   Mpi26PCIeIOUnit1PhyData_t, MPI2_POINTER pMpi26PCIeIOUnit1PhyData_t;
 
 /* values for LinkFlags */
-#define MPI26_PCIEIOUNIT1_LINKFLAGS_DIS_SRIS    (0x00)
-#define MPI26_PCIEIOUNIT1_LINKFLAGS_EN_SRIS     (0x01)
+#define MPI26_PCIEIOUNIT1_LINKFLAGS_DIS_SEPARATE_REFCLK     (0x00)
+#define MPI26_PCIEIOUNIT1_LINKFLAGS_SRIS_EN                 (0x01)
+#define MPI26_PCIEIOUNIT1_LINKFLAGS_SRNS_EN                 (0x02)
 
 /*
  * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
  * one and check the value returned for NumPhys at runtime.
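 */

The `NegotiatedLinkRate` byte carried in the PCIe IO Unit phy data encodes the physical rate in its low nibble, using the `MPI26_PCIE_NEG_LINK_RATE_` values defined earlier. As a minimal, self-contained sketch (not part of the MPI headers; the constants are local copies), a driver might translate the code into a transfer rate:

```c
#include <stdint.h>

/* Local copy of MPI26_PCIE_NEG_LINK_RATE_MASK_PHYSICAL. */
#define NEG_LINK_RATE_MASK_PHYSICAL 0x0F

/* Returns the negotiated rate in tenths of GT/s, or 0 when the
 * code is UNKNOWN (0x00) or PHY_DISABLED (0x01). */
static unsigned
pcie_rate_tenths(uint8_t negotiated_link_rate)
{
    switch (negotiated_link_rate & NEG_LINK_RATE_MASK_PHYSICAL) {
    case 0x02: return 25;   /* MPI26_PCIE_NEG_LINK_RATE_2_5  */
    case 0x03: return 50;   /* MPI26_PCIE_NEG_LINK_RATE_5_0  */
    case 0x04: return 80;   /* MPI26_PCIE_NEG_LINK_RATE_8_0  */
    case 0x05: return 160;  /* MPI26_PCIE_NEG_LINK_RATE_16_0 */
    default:   return 0;
    }
}
```

Masking first matters because the upper nibble of the byte is not part of the physical rate code.

/*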
 */
 #ifndef MPI26_PCIE_IOUNIT1_PHY_MAX
 #define MPI26_PCIE_IOUNIT1_PHY_MAX      (1)
 #endif
 
 typedef struct _MPI26_CONFIG_PAGE_PIOUNIT_1
 {
     MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header; /* 0x00 */
     U16         ControlFlags;               /* 0x08 */
     U16         Reserved;                   /* 0x0A */
     U16         AdditionalControlFlags;     /* 0x0C */
     U16         NVMeMaxQueueDepth;          /* 0x0E */
     U8          NumPhys;                    /* 0x10 */
-    U8          Reserved1;                  /* 0x11 */
+    U8          DMDReportPCIe;              /* 0x11 */
     U16         Reserved2;                  /* 0x12 */
     MPI26_PCIE_IO_UNIT1_PHY_DATA
                 PhyData[MPI26_PCIE_IOUNIT1_PHY_MAX];    /* 0x14 */
 } MPI26_CONFIG_PAGE_PIOUNIT_1,
   MPI2_POINTER PTR_MPI26_CONFIG_PAGE_PIOUNIT_1,
   Mpi26PCIeIOUnitPage1_t, MPI2_POINTER pMpi26PCIeIOUnitPage1_t;
 
 #define MPI26_PCIEIOUNITPAGE1_PAGEVERSION   (0x00)
 
 /* values for PCIe IO Unit Page 1 PhyFlags */
 #define MPI26_PCIEIOUNIT1_PHYFLAGS_PHY_DISABLE      (0x08)
 #define MPI26_PCIEIOUNIT1_PHYFLAGS_ENDPOINT_ONLY    (0x01)
 
 /* values for PCIe IO Unit Page 1 MaxMinLinkRate */
 #define MPI26_PCIEIOUNIT1_MAX_RATE_MASK             (0xF0)
 #define MPI26_PCIEIOUNIT1_MAX_RATE_SHIFT            (4)
 #define MPI26_PCIEIOUNIT1_MAX_RATE_2_5              (0x20)
 #define MPI26_PCIEIOUNIT1_MAX_RATE_5_0              (0x30)
 #define MPI26_PCIEIOUNIT1_MAX_RATE_8_0              (0x40)
 #define MPI26_PCIEIOUNIT1_MAX_RATE_16_0             (0x50)
 
+/* values for PCIe IO Unit Page 1 DMDReportPCIe */
+#define MPI26_PCIEIOUNIT1_DMD_REPORT_UNITS_MASK         (0x80)
+#define MPI26_PCIEIOUNIT1_DMD_REPORT_UNITS_1_SEC        (0x00)
+#define MPI26_PCIEIOUNIT1_DMD_REPORT_UNITS_16_SEC       (0x80)
+#define MPI26_PCIEIOUNIT1_DMD_REPORT_DELAY_TIME_MASK    (0x7F)
+
+
 /* see mpi2_pci.h for values for PCIe IO Unit Page 0 ControllerPhyDeviceInfo
    values */
 
 
 /****************************************************************************
 *   PCIe Switch Config Pages (MPI v2.6 and later)
 ****************************************************************************/
 
 /* PCIe Switch Page 0 */
 
 typedef struct _MPI26_CONFIG_PAGE_PSWITCH_0
 {
     MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header; /* 0x00 */
     U8          PhysicalPort;           /* 0x08 */
     U8          Reserved1;              /* 0x09 */
     U16         Reserved2;              /* 0x0A */
     U16         DevHandle;              /* 0x0C */
     U16         ParentDevHandle;        /* 0x0E */
     U8          NumPorts;               /* 0x10 */
     U8          PCIeLevel;              /* 0x11 */
     U16         Reserved3;              /* 0x12 */
     U32         Reserved4;              /* 0x14 */
     U32         Reserved5;              /* 0x18 */
     U32         Reserved6;              /* 0x1C */
 } MPI26_CONFIG_PAGE_PSWITCH_0,
   MPI2_POINTER PTR_MPI26_CONFIG_PAGE_PSWITCH_0,
   Mpi26PCIeSwitchPage0_t, MPI2_POINTER pMpi26PCIeSwitchPage0_t;
 
 #define MPI26_PCIESWITCH0_PAGEVERSION       (0x00)
 
 
 /* PCIe Switch Page 1 */
 
 typedef struct _MPI26_CONFIG_PAGE_PSWITCH_1
 {
     MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header; /* 0x00 */
     U8          PhysicalPort;           /* 0x08 */
     U8          Reserved1;              /* 0x09 */
     U16         Reserved2;              /* 0x0A */
     U8          NumPorts;               /* 0x0C */
     U8          PortNum;                /* 0x0D */
     U16         AttachedDevHandle;      /* 0x0E */
     U16         SwitchDevHandle;        /* 0x10 */
     U8          NegotiatedPortWidth;    /* 0x12 */
     U8          NegotiatedLinkRate;     /* 0x13 */
-    U32         Reserved4;              /* 0x14 */
+    U16         Flags;                  /* 0x14 */
+    U16         Reserved4;              /* 0x16 */
     U32         Reserved5;              /* 0x18 */
 } MPI26_CONFIG_PAGE_PSWITCH_1,
   MPI2_POINTER PTR_MPI26_CONFIG_PAGE_PSWITCH_1,
   Mpi26PCIeSwitchPage1_t, MPI2_POINTER pMpi26PCIeSwitchPage1_t;
 
-#define MPI26_PCIESWITCH1_PAGEVERSION       (0x00)
+#define MPI26_PCIESWITCH1_PAGEVERSION       (0x00)
 
 /* use MPI26_PCIE_NEG_LINK_RATE_ defines for the NegotiatedLinkRate field */
 
+/* defines for the Flags field */
+#define MPI26_PCIESWITCH1_2_RETIMER_PRESENCE    (0x0002)
+#define MPI26_PCIESWITCH1_RETIMER_PRESENCE      (0x0001)
+
+
 /****************************************************************************
 *   PCIe Device Config Pages (MPI v2.6 and later)
 ****************************************************************************/
 
 /* PCIe Device Page 0 */
 
 typedef struct _MPI26_CONFIG_PAGE_PCIEDEV_0
 {
     MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header; /* 0x00 */
     U16         Slot;                   /* 0x08 */
     U16         EnclosureHandle;        /* 0x0A */
     U64         WWID;                   /* 0x0C */
     U16         ParentDevHandle;        /* 0x14 */
     U8          PortNum;                /* 0x16 */
     U8          AccessStatus;           /* 0x17 */
     U16         DevHandle;              /* 0x18 */
     U8          PhysicalPort;           /* 0x1A */
     U8          Reserved1;              /* 0x1B */
     U32         DeviceInfo;             /* 0x1C */
     U32         Flags;                  /* 0x20 */
     U8          SupportedLinkRates;     /* 0x24 */
     U8          MaxPortWidth;           /* 0x25 */
     U8          NegotiatedPortWidth;    /* 0x26 */
     U8          NegotiatedLinkRate;     /* 0x27 */
     U8          EnclosureLevel;         /* 0x28 */
     U8          Reserved2;              /* 0x29 */
     U16         Reserved3;              /* 0x2A */
     U8          ConnectorName[4];       /* 0x2C */
     U32         Reserved4;              /* 0x30 */
     U32         Reserved5;              /* 0x34 */
 } MPI26_CONFIG_PAGE_PCIEDEV_0,
   MPI2_POINTER PTR_MPI26_CONFIG_PAGE_PCIEDEV_0,
   Mpi26PCIeDevicePage0_t, MPI2_POINTER pMpi26PCIeDevicePage0_t;
 
 #define MPI26_PCIEDEVICE0_PAGEVERSION       (0x01)
 
 /* values for PCIe Device Page 0 AccessStatus field */
 #define MPI26_PCIEDEV0_ASTATUS_NO_ERRORS                    (0x00)
 #define MPI26_PCIEDEV0_ASTATUS_NEEDS_INITIALIZATION         (0x04)
 #define MPI26_PCIEDEV0_ASTATUS_CAPABILITY_FAILED            (0x02)
 #define MPI26_PCIEDEV0_ASTATUS_DEVICE_BLOCKED               (0x07)
 #define MPI26_PCIEDEV0_ASTATUS_MEMORY_SPACE_ACCESS_FAILED   (0x08)
 #define MPI26_PCIEDEV0_ASTATUS_UNSUPPORTED_DEVICE           (0x09)
 #define MPI26_PCIEDEV0_ASTATUS_MSIX_REQUIRED                (0x0A)
 #define MPI26_PCIEDEV0_ASTATUS_UNKNOWN                      (0x10)
 #define MPI26_PCIEDEV0_ASTATUS_NVME_READY_TIMEOUT           (0x30)
 #define MPI26_PCIEDEV0_ASTATUS_NVME_DEVCFG_UNSUPPORTED      (0x31)
 #define MPI26_PCIEDEV0_ASTATUS_NVME_IDENTIFY_FAILED         (0x32)
 #define MPI26_PCIEDEV0_ASTATUS_NVME_QCONFIG_FAILED          (0x33)
 #define MPI26_PCIEDEV0_ASTATUS_NVME_QCREATION_FAILED        (0x34)
 #define MPI26_PCIEDEV0_ASTATUS_NVME_EVENTCFG_FAILED         (0x35)
 #define MPI26_PCIEDEV0_ASTATUS_NVME_GET_FEATURE_STAT_FAILED (0x36)
 #define MPI26_PCIEDEV0_ASTATUS_NVME_IDLE_TIMEOUT            (0x37)
 #define MPI26_PCIEDEV0_ASTATUS_NVME_FAILURE_STATUS          (0x38)
 #define MPI26_PCIEDEV0_ASTATUS_INIT_FAIL_MAX                (0x3F)
 
 /* see mpi2_pci.h for the MPI26_PCIE_DEVINFO_ defines used for the DeviceInfo
    field */
 
 /* values for PCIe Device Page 0 Flags field */
-#define MPI26_PCIEDEV0_FLAGS_UNAUTHORIZED_DEVICE        (0x8000)
-#define MPI26_PCIEDEV0_FLAGS_ENABLED_FAST_PATH          (0x4000)
-#define MPI26_PCIEDEV0_FLAGS_FAST_PATH_CAPABLE          (0x2000)
-#define MPI26_PCIEDEV0_FLAGS_ASYNCHRONOUS_NOTIFICATION  (0x0400)
-#define MPI26_PCIEDEV0_FLAGS_ATA_SW_PRESERVATION        (0x0200)
-#define MPI26_PCIEDEV0_FLAGS_UNSUPPORTED_DEVICE         (0x0100)
-#define MPI26_PCIEDEV0_FLAGS_ATA_48BIT_LBA_SUPPORTED    (0x0080)
-#define MPI26_PCIEDEV0_FLAGS_ATA_SMART_SUPPORTED        (0x0040)
-#define MPI26_PCIEDEV0_FLAGS_ATA_NCQ_SUPPORTED          (0x0020)
-#define MPI26_PCIEDEV0_FLAGS_ATA_FUA_SUPPORTED          (0x0010)
-#define MPI26_PCIEDEV0_FLAGS_ENCL_LEVEL_VALID           (0x0002)
-#define MPI26_PCIEDEV0_FLAGS_DEVICE_PRESENT             (0x0001)
+#define MPI26_PCIEDEV0_FLAGS_2_RETIMER_PRESENCE         (0x00020000)
+#define MPI26_PCIEDEV0_FLAGS_RETIMER_PRESENCE           (0x00010000)
+#define MPI26_PCIEDEV0_FLAGS_UNAUTHORIZED_DEVICE        (0x00008000)
+#define MPI26_PCIEDEV0_FLAGS_ENABLED_FAST_PATH          (0x00004000)
+#define MPI26_PCIEDEV0_FLAGS_FAST_PATH_CAPABLE          (0x00002000)
+#define MPI26_PCIEDEV0_FLAGS_ASYNCHRONOUS_NOTIFICATION  (0x00000400)
+#define MPI26_PCIEDEV0_FLAGS_ATA_SW_PRESERVATION        (0x00000200)
+#define MPI26_PCIEDEV0_FLAGS_UNSUPPORTED_DEVICE         (0x00000100)
+#define MPI26_PCIEDEV0_FLAGS_ATA_48BIT_LBA_SUPPORTED    (0x00000080)
+#define MPI26_PCIEDEV0_FLAGS_ATA_SMART_SUPPORTED        (0x00000040)
+#define MPI26_PCIEDEV0_FLAGS_ATA_NCQ_SUPPORTED          (0x00000020)
+#define MPI26_PCIEDEV0_FLAGS_ATA_FUA_SUPPORTED          (0x00000010)
+#define MPI26_PCIEDEV0_FLAGS_ENCL_LEVEL_VALID           (0x00000002)
+#define MPI26_PCIEDEV0_FLAGS_DEVICE_PRESENT             (0x00000001)
 
 /* values for PCIe Device Page 0 SupportedLinkRates field */
 #define MPI26_PCIEDEV0_LINK_RATE_16_0_SUPPORTED     (0x08)
 #define MPI26_PCIEDEV0_LINK_RATE_8_0_SUPPORTED      (0x04)
 #define MPI26_PCIEDEV0_LINK_RATE_5_0_SUPPORTED      (0x02)
 #define MPI26_PCIEDEV0_LINK_RATE_2_5_SUPPORTED      (0x01)
 
 /* use MPI26_PCIE_NEG_LINK_RATE_ defines for the NegotiatedLinkRate field */
 
 
 /* PCIe Device Page 2 */
 
 typedef struct _MPI26_CONFIG_PAGE_PCIEDEV_2
 {
     MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header; /* 0x00 */
     U16         DevHandle;              /* 0x08 */
-    U16         Reserved1;              /* 0x0A */
+    U8          ControllerResetTO;      /* 0x0A */
+    U8          Reserved1;              /* 0x0B */
     U32         MaximumDataTransferSize;    /* 0x0C */
     U32         Capabilities;           /* 0x10 */
-    U32         Reserved2;              /* 0x14 */
+    U16         NOIOB;                  /* 0x14 */
+    U16         Reserved2;              /* 0x16 */
 } MPI26_CONFIG_PAGE_PCIEDEV_2,
   MPI2_POINTER PTR_MPI26_CONFIG_PAGE_PCIEDEV_2,
   Mpi26PCIeDevicePage2_t, MPI2_POINTER pMpi26PCIeDevicePage2_t;
 
-#define MPI26_PCIEDEVICE2_PAGEVERSION       (0x00)
+#define MPI26_PCIEDEVICE2_PAGEVERSION       (0x01)
 
 /* defines for PCIe Device Page 2 Capabilities field */
-#define MPI26_PCIEDEV2_CAP_SGL_FORMAT           (0x00000004)
-#define MPI26_PCIEDEV2_CAP_BIT_BUCKET_SUPPORT   (0x00000002)
-#define MPI26_PCIEDEV2_CAP_SGL_SUPPORT          (0x00000001)
+#define MPI26_PCIEDEV2_CAP_DATA_BLK_ALIGN_AND_GRAN  (0x00000008)
+#define MPI26_PCIEDEV2_CAP_SGL_FORMAT               (0x00000004)
+#define MPI26_PCIEDEV2_CAP_BIT_BUCKET_SUPPORT       (0x00000002)
+#define MPI26_PCIEDEV2_CAP_SGL_SUPPORT              (0x00000001)
+
+/* Defines for the NOIOB field */
+#define MPI26_PCIEDEV2_NOIOB_UNSUPPORTED            (0x0000)
 
 
 /****************************************************************************
 *   PCIe Link Config Pages (MPI v2.6 and later)
 ****************************************************************************/
 
 /* PCIe Link Page 1 */
 
 typedef struct _MPI26_CONFIG_PAGE_PCIELINK_1
 {
     MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header; /* 0x00 */
     U8          Link;                   /* 0x08 */
     U8          Reserved1;              /* 0x09 */
     U16         Reserved2;              /* 0x0A */
     U32         CorrectableErrorCount;  /* 0x0C */
     U16         NonFatalErrorCount;     /* 0x10 */
     U16         Reserved3;              /* 0x12 */
     U16         FatalErrorCount;        /* 0x14 */
     U16         Reserved4;              /* 0x16 */
 } MPI26_CONFIG_PAGE_PCIELINK_1,
   MPI2_POINTER PTR_MPI26_CONFIG_PAGE_PCIELINK_1,
   Mpi26PcieLinkPage1_t, MPI2_POINTER pMpi26PcieLinkPage1_t;
 
 #define MPI26_PCIELINK1_PAGEVERSION         (0x00)
 
 
 /* PCIe Link Page 2 */
 
 typedef struct _MPI26_PCIELINK2_LINK_EVENT
 {
     U8          LinkEventCode;          /* 0x00 */
     U8          Reserved1;              /* 0x01 */
     U16         Reserved2;              /* 0x02 */
     U32         LinkEventInfo;          /* 0x04 */
 } MPI26_PCIELINK2_LINK_EVENT,
   MPI2_POINTER PTR_MPI26_PCIELINK2_LINK_EVENT,
   Mpi26PcieLink2LinkEvent_t, MPI2_POINTER pMpi26PcieLink2LinkEvent_t;
 
 /* use MPI26_PCIELINK3_EVTCODE_ for the LinkEventCode field */
 
 /*
  * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
  * one and check the value returned for NumLinkEvents at runtime.
*/ #ifndef MPI26_PCIELINK2_LINK_EVENT_MAX #define MPI26_PCIELINK2_LINK_EVENT_MAX (1) #endif typedef struct _MPI26_CONFIG_PAGE_PCIELINK_2 { MPI2_CONFIG_EXTENDED_PAGE_HEADER Header; /* 0x00 */ U8 Link; /* 0x08 */ U8 Reserved1; /* 0x09 */ U16 Reserved2; /* 0x0A */ U8 NumLinkEvents; /* 0x0C */ U8 Reserved3; /* 0x0D */ U16 Reserved4; /* 0x0E */ MPI26_PCIELINK2_LINK_EVENT LinkEvent[MPI26_PCIELINK2_LINK_EVENT_MAX]; /* 0x10 */ } MPI26_CONFIG_PAGE_PCIELINK_2, MPI2_POINTER PTR_MPI26_CONFIG_PAGE_PCIELINK_2, Mpi26PcieLinkPage2_t, MPI2_POINTER pMpi26PcieLinkPage2_t; #define MPI26_PCIELINK2_PAGEVERSION (0x00) /* PCIe Link Page 3 */ typedef struct _MPI26_PCIELINK3_LINK_EVENT_CONFIG { U8 LinkEventCode; /* 0x00 */ U8 Reserved1; /* 0x01 */ U16 Reserved2; /* 0x02 */ U8 CounterType; /* 0x04 */ U8 ThresholdWindow; /* 0x05 */ U8 TimeUnits; /* 0x06 */ U8 Reserved3; /* 0x07 */ U32 EventThreshold; /* 0x08 */ U16 ThresholdFlags; /* 0x0C */ U16 Reserved4; /* 0x0E */ } MPI26_PCIELINK3_LINK_EVENT_CONFIG, MPI2_POINTER PTR_MPI26_PCIELINK3_LINK_EVENT_CONFIG, Mpi26PcieLink3LinkEventConfig_t, MPI2_POINTER pMpi26PcieLink3LinkEventConfig_t; /* values for LinkEventCode field */ #define MPI26_PCIELINK3_EVTCODE_NO_EVENT (0x00) #define MPI26_PCIELINK3_EVTCODE_CORRECTABLE_ERROR_RECEIVED (0x01) #define MPI26_PCIELINK3_EVTCODE_NON_FATAL_ERROR_RECEIVED (0x02) #define MPI26_PCIELINK3_EVTCODE_FATAL_ERROR_RECEIVED (0x03) #define MPI26_PCIELINK3_EVTCODE_DATA_LINK_ERROR_DETECTED (0x04) #define MPI26_PCIELINK3_EVTCODE_TRANSACTION_LAYER_ERROR_DETECTED (0x05) #define MPI26_PCIELINK3_EVTCODE_TLP_ECRC_ERROR_DETECTED (0x06) #define MPI26_PCIELINK3_EVTCODE_POISONED_TLP (0x07) #define MPI26_PCIELINK3_EVTCODE_RECEIVED_NAK_DLLP (0x08) #define MPI26_PCIELINK3_EVTCODE_SENT_NAK_DLLP (0x09) #define MPI26_PCIELINK3_EVTCODE_LTSSM_RECOVERY_STATE (0x0A) #define MPI26_PCIELINK3_EVTCODE_LTSSM_RXL0S_STATE (0x0B) #define MPI26_PCIELINK3_EVTCODE_LTSSM_TXL0S_STATE (0x0C) #define MPI26_PCIELINK3_EVTCODE_LTSSM_L1_STATE (0x0D) #define 
MPI26_PCIELINK3_EVTCODE_LTSSM_DISABLED_STATE                (0x0E)
 #define MPI26_PCIELINK3_EVTCODE_LTSSM_HOT_RESET_STATE       (0x0F)
 #define MPI26_PCIELINK3_EVTCODE_SYSTEM_ERROR                (0x10)
 #define MPI26_PCIELINK3_EVTCODE_DECODE_ERROR                (0x11)
 #define MPI26_PCIELINK3_EVTCODE_DISPARITY_ERROR             (0x12)
 
 /* values for the CounterType field */
 #define MPI26_PCIELINK3_COUNTER_TYPE_WRAPPING       (0x00)
 #define MPI26_PCIELINK3_COUNTER_TYPE_SATURATING     (0x01)
 #define MPI26_PCIELINK3_COUNTER_TYPE_PEAK_VALUE     (0x02)
 
 /* values for the TimeUnits field */
 #define MPI26_PCIELINK3_TM_UNITS_10_MICROSECONDS    (0x00)
 #define MPI26_PCIELINK3_TM_UNITS_100_MICROSECONDS   (0x01)
 #define MPI26_PCIELINK3_TM_UNITS_1_MILLISECOND      (0x02)
 #define MPI26_PCIELINK3_TM_UNITS_10_MILLISECONDS    (0x03)
 
 /* values for the ThresholdFlags field */
 #define MPI26_PCIELINK3_TFLAGS_EVENT_NOTIFY         (0x0001)
 
 /*
  * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
  * one and check the value returned for NumLinkEvents at runtime.
  */
 #ifndef MPI26_PCIELINK3_LINK_EVENT_MAX
 #define MPI26_PCIELINK3_LINK_EVENT_MAX      (1)
 #endif
 
 typedef struct _MPI26_CONFIG_PAGE_PCIELINK_3
 {
     MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header; /* 0x00 */
     U8          Link;                   /* 0x08 */
     U8          Reserved1;              /* 0x09 */
     U16         Reserved2;              /* 0x0A */
     U8          NumLinkEvents;          /* 0x0C */
     U8          Reserved3;              /* 0x0D */
     U16         Reserved4;              /* 0x0E */
     MPI26_PCIELINK3_LINK_EVENT_CONFIG
                 LinkEventConfig[MPI26_PCIELINK3_LINK_EVENT_MAX]; /* 0x10 */
 } MPI26_CONFIG_PAGE_PCIELINK_3,
   MPI2_POINTER PTR_MPI26_CONFIG_PAGE_PCIELINK_3,
   Mpi26PcieLinkPage3_t, MPI2_POINTER pMpi26PcieLinkPage3_t;
 
 #define MPI26_PCIELINK3_PAGEVERSION         (0x00)
 
 #endif
Index: releng/12.1/sys/dev/mpr/mpi/mpi2_hbd.h
===================================================================
--- releng/12.1/sys/dev/mpr/mpi/mpi2_hbd.h	(revision 352760)
+++ releng/12.1/sys/dev/mpr/mpi/mpi2_hbd.h	(revision 352761)
@@ -1,158 +1,154 @@
 /*-
- * Copyright (c) 2012-2015 LSI Corp.
- * Copyright (c) 2013-2016 Avago Technologies
- * All rights reserved.
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
  * 1. Redistributions of source code must retain the above copyright
  *    notice, this list of conditions and the following disclaimer.
  * 2. Redistributions in binary form must reproduce the above copyright
  *    notice, this list of conditions and the following disclaimer in the
  *    documentation and/or other materials provided with the distribution.
  * 3. Neither the name of the author nor the names of any co-contributors
  *    may be used to endorse or promote products derived from this software
  *    without specific prior written permission.
  *
  * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
  * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
  * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
  * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
  * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
  * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
  * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
  * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
  * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
  * SUCH DAMAGE.
  *
- * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD
+ * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD
  *
  * $FreeBSD$
  */
 
 /*
- * Copyright (c) 2009-2015 LSI Corporation.
- * Copyright (c) 2013-2016 Avago Technologies
- * All rights reserved.
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved.
* * * Name: mpi2_hbd.h * Title: MPI Host Based Discovery messages and structures * Creation Date: October 21, 2009 * * mpi2_hbd.h Version: 02.00.04 * * NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25 * prefix are for use only on MPI v2.5 products, and must not be used * with MPI v2.0 products. Unless otherwise noted, names beginning with * MPI2 or Mpi2 are for use with both MPI v2.0 and MPI v2.5 products. * * Version History * --------------- * * Date Version Description * -------- -------- ------------------------------------------------------ * 10-28-09 02.00.00 Initial version. * 08-11-10 02.00.01 Removed PortGroups, DmaGroup, and ControlGroup from * HBD Action request, replaced by AdditionalInfo field. * 11-18-11 02.00.02 Incorporating additions for MPI v2.5. * 11-18-14 02.00.03 Updated copyright information. * 02-17-16 02.00.04 Added SAS 4 22.5 gbs speed support. * -------------------------------------------------------------------------- */ #ifndef MPI2_HBD_H #define MPI2_HBD_H /**************************************************************************** * Host Based Discovery Action messages ****************************************************************************/ /* Host Based Discovery Action Request Message */ typedef struct _MPI2_HBD_ACTION_REQUEST { U8 Operation; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 DevHandle; /* 0x04 */ U8 Reserved2; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U32 Reserved4; /* 0x0C */ U64 SASAddress; /* 0x10 */ U32 Reserved5; /* 0x18 */ U32 HbdDeviceInfo; /* 0x1C */ U16 ParentDevHandle; /* 0x20 */ U16 MaxQDepth; /* 0x22 */ U8 FirstPhyIdentifier; /* 0x24 */ U8 Port; /* 0x25 */ U8 MaxConnections; /* 0x26 */ U8 MaxRate; /* 0x27 */ U32 AdditionalInfo; /* 0x28 */ U16 InitialAWT; /* 0x2C */ U16 Reserved7; /* 0x2E */ U32 Reserved8; /* 0x30 */ } MPI2_HBD_ACTION_REQUEST, MPI2_POINTER 
PTR_MPI2_HBD_ACTION_REQUEST, Mpi2HbdActionRequest_t, MPI2_POINTER pMpi2HbdActionRequest_t; /* values for the Operation field */ #define MPI2_HBD_OP_ADD_DEVICE (0x01) #define MPI2_HBD_OP_REMOVE_DEVICE (0x02) #define MPI2_HBD_OP_UPDATE_DEVICE (0x03) /* values for the HbdDeviceInfo field */ #define MPI2_HBD_DEVICE_INFO_VIRTUAL_DEVICE (0x00004000) #define MPI2_HBD_DEVICE_INFO_ATAPI_DEVICE (0x00002000) #define MPI2_HBD_DEVICE_INFO_DIRECT_ATTACH (0x00000800) #define MPI2_HBD_DEVICE_INFO_SSP_TARGET (0x00000400) #define MPI2_HBD_DEVICE_INFO_STP_TARGET (0x00000200) #define MPI2_HBD_DEVICE_INFO_SMP_TARGET (0x00000100) #define MPI2_HBD_DEVICE_INFO_SATA_DEVICE (0x00000080) #define MPI2_HBD_DEVICE_INFO_SSP_INITIATOR (0x00000040) #define MPI2_HBD_DEVICE_INFO_STP_INITIATOR (0x00000020) #define MPI2_HBD_DEVICE_INFO_SMP_INITIATOR (0x00000010) #define MPI2_HBD_DEVICE_INFO_SATA_HOST (0x00000008) #define MPI2_HBD_DEVICE_INFO_MASK_DEVICE_TYPE (0x00000007) #define MPI2_HBD_DEVICE_INFO_NO_DEVICE (0x00000000) #define MPI2_HBD_DEVICE_INFO_END_DEVICE (0x00000001) #define MPI2_HBD_DEVICE_INFO_EDGE_EXPANDER (0x00000002) #define MPI2_HBD_DEVICE_INFO_FANOUT_EXPANDER (0x00000003) /* values for the MaxRate field */ #define MPI2_HBD_MAX_RATE_MASK (0x0F) #define MPI2_HBD_MAX_RATE_1_5 (0x08) #define MPI2_HBD_MAX_RATE_3_0 (0x09) #define MPI2_HBD_MAX_RATE_6_0 (0x0A) #define MPI25_HBD_MAX_RATE_12_0 (0x0B) #define MPI26_HBD_MAX_RATE_22_5 (0x0C) /* Host Based Discovery Action Reply Message */ typedef struct _MPI2_HBD_ACTION_REPLY { U8 Operation; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 DevHandle; /* 0x04 */ U8 Reserved2; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 Reserved4; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ } MPI2_HBD_ACTION_REPLY, MPI2_POINTER PTR_MPI2_HBD_ACTION_REPLY, Mpi2HbdActionReply_t, MPI2_POINTER pMpi2HbdActionReply_t; #endif Index: 
releng/12.1/sys/dev/mpr/mpi/mpi2_history.txt =================================================================== --- releng/12.1/sys/dev/mpr/mpi/mpi2_history.txt (revision 352760) +++ releng/12.1/sys/dev/mpr/mpi/mpi2_history.txt (revision 352761) @@ -1,814 +1,828 @@ /*- - * Copyright (c) 2012-2015 LSI Corp. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. 
(LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ ============================== Fusion-MPT MPI 2.0 / 2.5 Header File Change History ============================== - Copyright (c) 2000-2015 LSI Corporation. - Copyright (c) 2013-2016 Avago Technologies - All rights reserved. + Copyright 2000-2020 Broadcom Inc. All rights reserved. --------------------------------------- - Header Set Release Version: 02.00.48 - Header Set Release Date: 02-03-17 + Header Set Release Version: 02.00.50 + Header Set Release Date: 09-29-17 --------------------------------------- Filename Current version Prior version ---------- --------------- ------------- - mpi2.h 02.00.48 02.00.47 - mpi2_cnfg.h 02.00.40 02.00.39 + mpi2.h 02.00.50 02.00.49 + mpi2_cnfg.h 02.00.42 02.00.41 mpi2_init.h 02.00.21 02.00.21 - mpi2_ioc.h 02.00.32 02.00.31 + mpi2_ioc.h 02.00.34 02.00.33 mpi2_raid.h 02.00.11 02.00.11 mpi2_sas.h 02.00.10 02.00.10 mpi2_targ.h 02.00.09 02.00.09 mpi2_tool.h 02.00.14 02.00.13 mpi2_type.h 02.00.01 02.00.01 mpi2_ra.h 02.00.01 02.00.01 mpi2_hbd.h 02.00.04 02.00.04 mpi2_pci.h 02.00.02 02.00.02 - mpi2_history.txt 02.00.45 02.00.44 + mpi2_history.txt 02.00.46 02.00.45 * Date Version Description * -------- -------- ------------------------------------------------------ mpi2.h * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. * 06-04-07 02.00.01 Bumped MPI2_HEADER_VERSION_UNIT. * 06-26-07 02.00.02 Bumped MPI2_HEADER_VERSION_UNIT. * 08-31-07 02.00.03 Bumped MPI2_HEADER_VERSION_UNIT. * Moved ReplyPostHostIndex register to offset 0x6C of the * MPI2_SYSTEM_INTERFACE_REGS and modified the define for * MPI2_REPLY_POST_HOST_INDEX_OFFSET. * Added union of request descriptors. * Added union of reply descriptors. * 10-31-07 02.00.04 Bumped MPI2_HEADER_VERSION_UNIT. * Added define for MPI2_VERSION_02_00. * Fixed the size of the FunctionDependent5 field in the * MPI2_DEFAULT_REPLY structure. * 12-18-07 02.00.05 Bumped MPI2_HEADER_VERSION_UNIT. 
* Removed the MPI-defined Fault Codes and extended the * product specific codes up to 0xEFFF. * Added a sixth key value for the WriteSequence register * and changed the flush value to 0x0. * Added message function codes for Diagnostic Buffer Post * and Diagnostic Release. * New IOCStatus define: MPI2_IOCSTATUS_DIAGNOSTIC_RELEASED * Moved MPI2_VERSION_UNION from mpi2_ioc.h. * 02-29-08 02.00.06 Bumped MPI2_HEADER_VERSION_UNIT. * 03-03-08 02.00.07 Bumped MPI2_HEADER_VERSION_UNIT. * 05-21-08 02.00.08 Bumped MPI2_HEADER_VERSION_UNIT. * Added #defines for marking a reply descriptor as unused. * 06-27-08 02.00.09 Bumped MPI2_HEADER_VERSION_UNIT. * 10-02-08 02.00.10 Bumped MPI2_HEADER_VERSION_UNIT. * Moved LUN field defines from mpi2_init.h. * 01-19-09 02.00.11 Bumped MPI2_HEADER_VERSION_UNIT. * 05-06-09 02.00.12 Bumped MPI2_HEADER_VERSION_UNIT. * In all request and reply descriptors, replaced VF_ID * field with MSIxIndex field. * Removed DevHandle field from * MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR and made those * bytes reserved. * Added RAID Accelerator functionality. * 07-30-09 02.00.13 Bumped MPI2_HEADER_VERSION_UNIT. * 10-28-09 02.00.14 Bumped MPI2_HEADER_VERSION_UNIT. * Added MSI-x index mask and shift for Reply Post Host * Index register. * Added function code for Host Based Discovery Action. * 02-10-10 02.00.15 Bumped MPI2_HEADER_VERSION_UNIT. * Added define for MPI2_FUNCTION_PWR_MGMT_CONTROL. * Added defines for product-specific range of message * function codes, 0xF0 to 0xFF. * 05-12-10 02.00.16 Bumped MPI2_HEADER_VERSION_UNIT. * Added alternative defines for the SGE Direction bit. * 08-11-10 02.00.17 Bumped MPI2_HEADER_VERSION_UNIT. * 11-10-10 02.00.18 Bumped MPI2_HEADER_VERSION_UNIT. * Added MPI2_IEEE_SGE_FLAGS_SYSTEMPLBCPI_ADDR define. * 02-23-11 02.00.19 Bumped MPI2_HEADER_VERSION_UNIT. * Added MPI2_FUNCTION_SEND_HOST_MESSAGE. * 03-09-11 02.00.20 Bumped MPI2_HEADER_VERSION_UNIT. * 05-25-11 02.00.21 Bumped MPI2_HEADER_VERSION_UNIT.
* 08-24-11 02.00.22 Bumped MPI2_HEADER_VERSION_UNIT. * 11-18-11 02.00.23 Bumped MPI2_HEADER_VERSION_UNIT. * Incorporating additions for MPI v2.5. * 02-06-12 02.00.24 Bumped MPI2_HEADER_VERSION_UNIT. * 03-29-12 02.00.25 Bumped MPI2_HEADER_VERSION_UNIT. * Added Hard Reset delay timings. * 07-10-12 02.00.26 Bumped MPI2_HEADER_VERSION_UNIT. * 07-26-12 02.00.27 Bumped MPI2_HEADER_VERSION_UNIT. * 11-27-12 02.00.28 Bumped MPI2_HEADER_VERSION_UNIT. * 12-20-12 02.00.29 Bumped MPI2_HEADER_VERSION_UNIT. * Added MPI25_SUP_REPLY_POST_HOST_INDEX_OFFSET. * 04-09-13 02.00.30 Bumped MPI2_HEADER_VERSION_UNIT. * 04-17-13 02.00.31 Bumped MPI2_HEADER_VERSION_UNIT. * 08-19-13 02.00.32 Bumped MPI2_HEADER_VERSION_UNIT. * 12-05-13 02.00.33 Bumped MPI2_HEADER_VERSION_UNIT. * 01-08-14 02.00.34 Bumped MPI2_HEADER_VERSION_UNIT. * 06-13-14 02.00.35 Bumped MPI2_HEADER_VERSION_UNIT. * 11-18-14 02.00.36 Updated copyright information. * Bumped MPI2_HEADER_VERSION_UNIT. * 03-16-15 02.00.37 Updated for MPI v2.6. * Bumped MPI2_HEADER_VERSION_UNIT. * Added Scratchpad registers and * AtomicRequestDescriptorPost register to * MPI2_SYSTEM_INTERFACE_REGS. * Added MPI2_DIAG_SBR_RELOAD. * Added MPI2_IOCSTATUS_INSUFFICIENT_POWER. * 03-19-15 02.00.38 Bumped MPI2_HEADER_VERSION_UNIT. * 05-25-15 02.00.39 Bumped MPI2_HEADER_VERSION_UNIT. * 08-25-15 02.00.40 Bumped MPI2_HEADER_VERSION_UNIT. * Added V7 HostDiagnostic register defines. * 12-15-15 02.00.41 Bumped MPI2_HEADER_VERSION_UNIT. * 01-04-16 02.00.42 Bumped MPI2_HEADER_VERSION_UNIT. * 04-05-16 02.00.43 Modified MPI26_DIAG_BOOT_DEVICE_SELECT defines * to be unique within first 32 characters. * Removed AHCI support. * Removed SOP support. * Bumped MPI2_HEADER_VERSION_UNIT. * 04-10-16 02.00.44 Bumped MPI2_HEADER_VERSION_UNIT. * 07-06-16 02.00.45 Bumped MPI2_HEADER_VERSION_UNIT. * 09-02-16 02.00.46 Bumped MPI2_HEADER_VERSION_UNIT. * 11-23-16 02.00.47 Bumped MPI2_HEADER_VERSION_UNIT. * 02-03-17 02.00.48 Bumped MPI2_HEADER_VERSION_UNIT.
+ * 06-13-17 02.00.49 Bumped MPI2_HEADER_VERSION_UNIT. + * 09-29-17 02.00.50 Bumped MPI2_HEADER_VERSION_UNIT. * -------------------------------------------------------------------------- mpi2_cnfg.h * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. * 06-04-07 02.00.01 Added defines for SAS IO Unit Page 2 PhyFlags. * Added Manufacturing Page 11. * Added MPI2_SAS_EXPANDER0_FLAGS_CONNECTOR_END_DEVICE * define. * 06-26-07 02.00.02 Adding generic structure for product-specific * Manufacturing pages: MPI2_CONFIG_PAGE_MANUFACTURING_PS. * Rework of BIOS Page 2 configuration page. * Fixed MPI2_BIOSPAGE2_BOOT_DEVICE to be a union of the * forms. * Added configuration pages IOC Page 8 and Driver * Persistent Mapping Page 0. * 08-31-07 02.00.03 Modified configuration pages dealing with Integrated * RAID (Manufacturing Page 4, RAID Volume Pages 0 and 1, * RAID Physical Disk Pages 0 and 1, RAID Configuration * Page 0). * Added new value for AccessStatus field of SAS Device * Page 0 (_SATA_NEEDS_INITIALIZATION). * 10-31-07 02.00.04 Added missing SEPDevHandle field to * MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0. * 12-18-07 02.00.05 Modified IO Unit Page 0 to use 32-bit version fields for * NVDATA. * Modified IOC Page 7 to use masks and added field for * SASBroadcastPrimitiveMasks. * Added MPI2_CONFIG_PAGE_BIOS_4. * Added MPI2_CONFIG_PAGE_LOG_0. * 02-29-08 02.00.06 Modified various names to make them 32-character unique. * Added SAS Device IDs. * Updated Integrated RAID configuration pages including * Manufacturing Page 4, IOC Page 6, and RAID Configuration * Page 0. * 05-21-08 02.00.07 Added define MPI2_MANPAGE4_MIX_SSD_SAS_SATA. * Added define MPI2_MANPAGE4_PHYSDISK_128MB_COERCION. * Fixed define MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING. * Added missing MaxNumRoutedSasAddresses field to * MPI2_CONFIG_PAGE_EXPANDER_0. * Added SAS Port Page 0. * Modified structure layout for * MPI2_CONFIG_PAGE_DRIVER_MAPPING_0. 
* 06-27-08 02.00.08 Changed MPI2_CONFIG_PAGE_RD_PDISK_1 to use * MPI2_RAID_PHYS_DISK1_PATH_MAX to size the array. * 10-02-08 02.00.09 Changed MPI2_RAID_PGAD_CONFIGNUM_MASK from 0x0000FFFF * to 0x000000FF. * Added two new values for the Physical Disk Coercion Size * bits in the Flags field of Manufacturing Page 4. * Added product-specific Manufacturing pages 16 to 31. * Modified Flags bits for controlling write cache on SATA * drives in IO Unit Page 1. * Added new bit to AdditionalControlFlags of SAS IO Unit * Page 1 to control Invalid Topology Correction. * Added SupportedPhysDisks field to RAID Volume Page 1 and * added related defines. * Added additional defines for RAID Volume Page 0 * VolumeStatusFlags field. * Modified meaning of RAID Volume Page 0 VolumeSettings * define for auto-configure of hot-swap drives. * Added PhysDiskAttributes field (and related defines) to * RAID Physical Disk Page 0. * Added MPI2_SAS_PHYINFO_PHY_VACANT define. * Added three new DiscoveryStatus bits for SAS IO Unit * Page 0 and SAS Expander Page 0. * Removed multiplexing information from SAS IO Unit pages. * Added BootDeviceWaitTime field to SAS IO Unit Page 4. * Removed Zone Address Resolved bit from PhyInfo and from * Expander Page 0 Flags field. * Added two new AccessStatus values to SAS Device Page 0 * for indicating routing problems. Added 3 reserved words * to this page. * 01-19-09 02.00.10 Fixed defines for GPIOVal field of IO Unit Page 3. * Inserted missing reserved field into structure for IOC * Page 6. * Added more pending task bits to RAID Volume Page 0 * VolumeStatusFlags defines. * Added MPI2_PHYSDISK0_STATUS_FLAG_NOT_CERTIFIED define. * Added a new DiscoveryStatus bit for SAS IO Unit Page 0 * and SAS Expander Page 0 to flag a downstream initiator * when in simplified routing mode. * Removed SATA Init Failure defines for DiscoveryStatus * fields of SAS IO Unit Page 0 and SAS Expander Page 0. * Added MPI2_SAS_DEVICE0_ASTATUS_DEVICE_BLOCKED define. 
* Added PortGroups, DmaGroup, and ControlGroup fields to * SAS Device Page 0. * 05-06-09 02.00.11 Added structures and defines for IO Unit Page 5 and IO * Unit Page 6. * Added expander reduced functionality data to SAS * Expander Page 0. * Added SAS PHY Page 2 and SAS PHY Page 3. * 07-30-09 02.00.12 Added IO Unit Page 7. * Added new device ids. * Added SAS IO Unit Page 5. * Added partial and slumber power management capable flags * to SAS Device Page 0 Flags field. * Added PhyInfo defines for power condition. * Added Ethernet configuration pages. * 10-28-09 02.00.13 Added MPI2_IOUNITPAGE1_ENABLE_HOST_BASED_DISCOVERY. * Added SAS PHY Page 4 structure and defines. * 02-10-10 02.00.14 Modified the comments for the configuration page * structures that contain an array of data. The host * should use the "count" field in the page data (e.g. the * NumPhys field) to determine the number of valid elements * in the array. * Added/modified some MPI2_MFGPAGE_DEVID_SAS defines. * Added PowerManagementCapabilities to IO Unit Page 7. * Added PortWidthModGroup field to * MPI2_SAS_IO_UNIT5_PHY_PM_SETTINGS. * Added MPI2_CONFIG_PAGE_SASIOUNIT_6 and related defines. * Added MPI2_CONFIG_PAGE_SASIOUNIT_7 and related defines. * Added MPI2_CONFIG_PAGE_SASIOUNIT_8 and related defines. * 05-12-10 02.00.15 Added MPI2_RAIDVOL0_STATUS_FLAG_VOL_NOT_CONSISTENT * define. * Added MPI2_PHYSDISK0_INCOMPATIBLE_MEDIA_TYPE define. * Added MPI2_SAS_NEG_LINK_RATE_UNSUPPORTED_PHY define. * 08-11-10 02.00.16 Removed IO Unit Page 1 device path (multi-pathing) * defines. * 11-10-10 02.00.17 Added ReceptacleID field (replacing Reserved1) to * MPI2_MANPAGE7_CONNECTOR_INFO and reworked defines for * the Pinout field. * Added BoardTemperature and BoardTemperatureUnits fields * to MPI2_CONFIG_PAGE_IO_UNIT_7. * Added MPI2_CONFIG_EXTPAGETYPE_EXT_MANUFACTURING define * and MPI2_CONFIG_PAGE_EXT_MAN_PS structure. * 02-23-11 02.00.18 Added ProxyVF_ID field to MPI2_CONFIG_REQUEST. 
* Added IO Unit Page 8, IO Unit Page 9, * and IO Unit Page 10. * Added SASNotifyPrimitiveMasks field to * MPI2_CONFIG_PAGE_IOC_7. * 03-09-11 02.00.19 Fixed IO Unit Page 10 (to match the spec). * 05-25-11 02.00.20 Cleaned up a few comments. * 08-24-11 02.00.21 Marked the IO Unit Page 7 PowerManagementCapabilities * for PCIe link as obsolete. * Added SpinupFlags field containing a Disable Spin-up bit * to the MPI2_SAS_IOUNIT4_SPINUP_GROUP fields of SAS IO * Unit Page 4. * 11-18-11 02.00.22 Added define MPI2_IOCPAGE6_CAP_FLAGS_4K_SECTORS_SUPPORT. * Added UEFIVersion field to BIOS Page 1 and defined new * BiosOptions bits. * Incorporating additions for MPI v2.5. * 11-27-12 02.00.23 Added MPI2_MANPAGE7_FLAG_EVENTREPLAY_SLOT_ORDER. * Added MPI2_BIOSPAGE1_OPTIONS_MASK_OEM_ID. * 12-20-12 02.00.24 Marked MPI2_SASIOUNIT1_CONTROL_CLEAR_AFFILIATION as * obsolete for MPI v2.5 and later. * Added some defines for 12G SAS speeds. * 04-09-13 02.00.25 Added MPI2_IOUNITPAGE1_ATA_SECURITY_FREEZE_LOCK. * Fixed MPI2_IOUNITPAGE5_DMA_CAP_MASK_MAX_REQUESTS to * match the specification. * 08-19-13 02.00.26 Added reserved words to MPI2_CONFIG_PAGE_IO_UNIT_7 for * future use. * 12-05-13 02.00.27 Added MPI2_MANPAGE7_FLAG_BASE_ENCLOSURE_LEVEL for * MPI2_CONFIG_PAGE_MAN_7. * Added EnclosureLevel and ConnectorName fields to * MPI2_CONFIG_PAGE_SAS_DEV_0. * Added MPI2_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID for * MPI2_CONFIG_PAGE_SAS_DEV_0. * Added EnclosureLevel field to * MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0. * Added MPI2_SAS_ENCLS0_FLAGS_ENCL_LEVEL_VALID for * MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0. * 01-08-14 02.00.28 Added more defines for the BiosOptions field of * MPI2_CONFIG_PAGE_BIOS_1. * 06-13-14 02.00.29 Added SSUTimeout field to MPI2_CONFIG_PAGE_BIOS_1, and * more defines for the BiosOptions field. * 11-18-14 02.00.30 Updated copyright information. * Added MPI2_BIOSPAGE1_OPTIONS_ADVANCED_CONFIG. * Added AdapterOrderAux fields to BIOS Page 3. * 03-16-15 02.00.31 Updated for MPI v2.6.
* Added BoardPowerRequirement, PCISlotPowerAllocation, and * Flags field to IO Unit Page 7. * Added IO Unit Page 11. * Added new SAS Phy Event codes. * Added PCIe configuration pages. * 03-19-15 02.00.32 Fixed PCIe Link Config page structure names to be * unique in first 32 characters. * 05-25-15 02.00.33 Added more defines for the BiosOptions field of * MPI2_CONFIG_PAGE_BIOS_1. * 08-25-15 02.00.34 Added PCIe Device Page 2 SGL format capability. * 12-18-15 02.00.35 Added SATADeviceWaitTime to SAS IO Unit Page 4. * 01-21-16 02.00.36 Added/modified MPI2_MFGPAGE_DEVID_SAS defines. * Added Link field to PCIe Link Pages * Added EnclosureLevel and ConnectorName to PCIe * Device Page 0. * Added define for PCIE IoUnit page 1 max rate shift. * Added comment for reserved ExtPageTypes. * Added SAS 4 22.5 gbs speed support. * Added PCIe 4 16.0 GT/sec speed support. * Removed AHCI support. * Removed SOP support. * Added NegotiatedLinkRate and NegotiatedPortWidth to * PCIe device page 0. * 04-10-16 02.00.37 Fixed MPI2_MFGPAGE_DEVID_SAS3616/3708 defines * 07-01-16 02.00.38 Added Manufacturing page 7 Connector types. * Changed declaration of ConnectorName in PCIe DevicePage0 * to match SAS DevicePage 0. * Added SATADeviceWaitTime to IO Unit Page 11. * Added MPI26_MFGPAGE_DEVID_SAS4008 * Added x16 PCIe width to IO Unit Page 7 * Added LINKFLAGS to control SRIS in PCIe IO Unit page 1 * phy data. * Added InitStatus to PCIe IO Unit Page 1 header. * 09-01-16 02.00.39 Added MPI26_CONFIG_PAGE_ENCLOSURE_0 and related defines. * Added MPI26_ENCLOS_PGAD_FORM_GET_NEXT_HANDLE and * MPI26_ENCLOS_PGAD_FORM_HANDLE page address formats. * 02-02-17 02.00.40 Added MPI2_MANPAGE7_SLOT_UNKNOWN. * Added ChassisSlot field to SAS Enclosure Page 0. * Added ChassisSlot Valid bit (bit 5) to the Flags field * in SAS Enclosure Page 0. + * 06-13-17 02.00.41 Added MPI26_MFGPAGE_DEVID_SAS3816 and + * MPI26_MFGPAGE_DEVID_SAS3916 defines. + * Removed MPI26_MFGPAGE_DEVID_SAS4008 define.
+ * Added MPI26_PCIEIOUNIT1_LINKFLAGS_SRNS_EN define. + * Renamed MPI26_PCIEIOUNIT1_LINKFLAGS_EN_SRIS to + * MPI26_PCIEIOUNIT1_LINKFLAGS_SRIS_EN. + * Renamed MPI26_PCIEIOUNIT1_LINKFLAGS_DIS_SRIS to + * MPI26_PCIEIOUNIT1_LINKFLAGS_DIS_SEPARATE_REFCLK. + * 09-29-17 02.00.42 Added ControllerResetTO field to PCIe Device Page 2. + * Added NOIOB field to PCIe Device Page 2. + * Added MPI26_PCIEDEV2_CAP_DATA_BLK_ALIGN_AND_GRAN to + * the Capabilities field of PCIe Device Page 2. * -------------------------------------------------------------------------- mpi2_init.h * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. * 10-31-07 02.00.01 Fixed name for pMpi2SCSITaskManagementRequest_t. * 12-18-07 02.00.02 Modified Task Management Target Reset Method defines. * 02-29-08 02.00.03 Added Query Task Set and Query Unit Attention. * 03-03-08 02.00.04 Fixed name of struct _MPI2_SCSI_TASK_MANAGE_REPLY. * 05-21-08 02.00.05 Fixed typo in name of Mpi2SepRequest_t. * 10-02-08 02.00.06 Removed Untagged and No Disconnect values from SCSI IO * Control field Task Attribute flags. * Moved LUN field defines to mpi2.h because they are * common to many structures. * 05-06-09 02.00.07 Changed task management type of Query Unit Attention to * Query Asynchronous Event. * Defined two new bits in the SlotStatus field of the SCSI * Enclosure Processor Request and Reply. * 10-28-09 02.00.08 Added defines for decoding the ResponseInfo bytes for * both SCSI IO Error Reply and SCSI Task Management Reply. * Added ResponseInfo field to MPI2_SCSI_TASK_MANAGE_REPLY. * Added MPI2_SCSITASKMGMT_RSP_TM_OVERLAPPED_TAG define. * 02-10-10 02.00.09 Removed unused structure that had "#if 0" around it. * 05-12-10 02.00.10 Added optional vendor-unique region to SCSI IO Request. * 11-10-10 02.00.11 Added MPI2_SCSIIO_NUM_SGLOFFSETS define. * 11-18-11 02.00.12 Incorporating additions for MPI v2.5. * 02-06-12 02.00.13 Added alternate defines for Task Priority / Command * Priority to match SAM-4.
* Added EEDPErrorOffset to MPI2_SCSI_IO_REPLY. * 07-10-12 02.00.14 Added MPI2_SCSIIO_CONTROL_SHIFT_DATADIRECTION. * 04-09-13 02.00.15 Added SCSIStatusQualifier field to MPI2_SCSI_IO_REPLY, * replacing the Reserved4 field. * 11-18-14 02.00.16 Updated copyright information. * 03-16-15 02.00.17 Updated for MPI v2.6. * Added MPI26_SCSIIO_IOFLAGS_ESCAPE_PASSTHROUGH. * Added MPI2_SEP_REQ_SLOTSTATUS_DEV_OFF and * MPI2_SEP_REPLY_SLOTSTATUS_DEV_OFF. * 08-26-15 02.00.18 Added SCSITASKMGMT_MSGFLAGS for Target Reset. * 12-18-15 02.00.19 Added EEDPObservedValue to SCSI IO Reply message. * 01-04-16 02.00.20 Modified EEDP reported values in SCSI IO Reply message. * 01-21-16 02.00.21 Modified MPI26_SCSITASKMGMT_MSGFLAGS_PCIE* defines to * be unique within first 32 characters. * -------------------------------------------------------------------------- mpi2_ioc.h * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. * 06-04-07 02.00.01 In IOCFacts Reply structure, renamed MaxDevices to * MaxTargets. * Added TotalImageSize field to FWDownload Request. * Added reserved words to FWUpload Request. * 06-26-07 02.00.02 Added IR Configuration Change List Event. * 08-31-07 02.00.03 Removed SystemReplyQueueDepth field from the IOCInit * request and replaced it with * ReplyDescriptorPostQueueDepth and ReplyFreeQueueDepth. * Replaced the MinReplyQueueDepth field of the IOCFacts * reply with MaxReplyDescriptorPostQueueDepth. * Added MPI2_RDPQ_DEPTH_MIN define to specify the minimum * depth for the Reply Descriptor Post Queue. * Added SASAddress field to Initiator Device Table * Overflow Event data. * 10-31-07 02.00.04 Added ReasonCode MPI2_EVENT_SAS_INIT_RC_NOT_RESPONDING * for SAS Initiator Device Status Change Event data. * Modified Reason Code defines for SAS Topology Change * List Event data, including adding a bit for PHY Vacant * status, and adding a mask for the Reason Code. * Added define for * MPI2_EVENT_SAS_TOPO_ES_DELAY_NOT_RESPONDING.
* Added define for MPI2_EXT_IMAGE_TYPE_MEGARAID. * 12-18-07 02.00.05 Added Boot Status defines for the IOCExceptions field of * the IOCFacts Reply. * Removed MPI2_IOCFACTS_CAPABILITY_EXTENDED_BUFFER define. * Moved MPI2_VERSION_UNION to mpi2.h. * Changed MPI2_EVENT_NOTIFICATION_REQUEST to use masks * instead of enables, and added SASBroadcastPrimitiveMasks * field. * Added Log Entry Added Event and related structure. * 02-29-08 02.00.06 Added define MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID. * Removed define MPI2_IOCFACTS_PROTOCOL_SMP_TARGET. * Added MaxVolumes and MaxPersistentEntries fields to * IOCFacts reply. * Added ProtocalFlags and IOCCapabilities fields to * MPI2_FW_IMAGE_HEADER. * Removed MPI2_PORTENABLE_FLAGS_ENABLE_SINGLE_PORT. * 03-03-08 02.00.07 Fixed MPI2_FW_IMAGE_HEADER by changing Reserved26 to * a U16 (from a U32). * Removed extra 's' from EventMasks name. * 06-27-08 02.00.08 Fixed an offset in a comment. * 10-02-08 02.00.09 Removed SystemReplyFrameSize from MPI2_IOC_INIT_REQUEST. * Removed CurReplyFrameSize from MPI2_IOC_FACTS_REPLY and * renamed MinReplyFrameSize to ReplyFrameSize. * Added MPI2_IOCFACTS_EXCEPT_IR_FOREIGN_CONFIG_MAX. * Added two new RAIDOperation values for Integrated RAID * Operations Status Event data. * Added four new IR Configuration Change List Event data * ReasonCode values. * Added two new ReasonCode defines for SAS Device Status * Change Event data. * Added three new DiscoveryStatus bits for the SAS * Discovery event data. * Added Multiplexing Status Change bit to the PhyStatus * field of the SAS Topology Change List event data. * Removed define for MPI2_INIT_IMAGE_BOOTFLAGS_XMEMCOPY. * BootFlags are now product-specific. * Added defines for the individual signature bytes * for MPI2_INIT_IMAGE_FOOTER. * 01-19-09 02.00.10 Added MPI2_IOCFACTS_CAPABILITY_EVENT_REPLAY define. * Added MPI2_EVENT_SAS_DISC_DS_DOWNSTREAM_INITIATOR * define. * Added MPI2_EVENT_SAS_DEV_STAT_RC_SATA_INIT_FAILURE * define.
* Removed MPI2_EVENT_SAS_DISC_DS_SATA_INIT_FAILURE define. * 05-06-09 02.00.11 Added MPI2_IOCFACTS_CAPABILITY_RAID_ACCELERATOR define. * Added MPI2_IOCFACTS_CAPABILITY_MSI_X_INDEX define. * Added two new reason codes for SAS Device Status Change * Event. * Added new event: SAS PHY Counter. * 07-30-09 02.00.12 Added GPIO Interrupt event define and structure. * Added MPI2_IOCFACTS_CAPABILITY_EXTENDED_BUFFER define. * Added new product id family for 2208. * 10-28-09 02.00.13 Added HostMSIxVectors field to MPI2_IOC_INIT_REQUEST. * Added MaxMSIxVectors field to MPI2_IOC_FACTS_REPLY. * Added MinDevHandle field to MPI2_IOC_FACTS_REPLY. * Added MPI2_IOCFACTS_CAPABILITY_HOST_BASED_DISCOVERY. * Added MPI2_EVENT_HOST_BASED_DISCOVERY_PHY define. * Added MPI2_EVENT_SAS_TOPO_ES_NO_EXPANDER define. * Added Host Based Discovery Phy Event data. * Added defines for ProductID Product field * (MPI2_FW_HEADER_PID_). * Modified values for SAS ProductID Family * (MPI2_FW_HEADER_PID_FAMILY_). * 02-10-10 02.00.14 Added SAS Quiesce Event structure and defines. * Added PowerManagementControl Request structures and * defines. * 05-12-10 02.00.15 Marked Task Set Full Event as obsolete. * Added MPI2_EVENT_SAS_TOPO_LR_UNSUPPORTED_PHY define. * 11-10-10 02.00.16 Added MPI2_FW_DOWNLOAD_ITYPE_MIN_PRODUCT_SPECIFIC. * 02-23-11 02.00.17 Added SAS NOTIFY Primitive event, and added * SASNotifyPrimitiveMasks field to * MPI2_EVENT_NOTIFICATION_REQUEST. * Added Temperature Threshold Event. * Added Host Message Event. * Added Send Host Message request and reply. * 05-25-11 02.00.18 For Extended Image Header, added * MPI2_EXT_IMAGE_TYPE_MIN_PRODUCT_SPECIFIC and * MPI2_EXT_IMAGE_TYPE_MAX_PRODUCT_SPECIFIC defines. * Deprecated MPI2_EXT_IMAGE_TYPE_MAX define. * 08-24-11 02.00.19 Added PhysicalPort field to * MPI2_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE structure. * Marked MPI2_PM_CONTROL_FEATURE_PCIE_LINK as obsolete. * 11-18-11 02.00.20 Incorporating additions for MPI v2.5. 
* 03-29-12 02.00.21 Added a product specific range to event values. * 07-26-12 02.00.22 Added MPI2_IOCFACTS_EXCEPT_PARTIAL_MEMORY_FAILURE. * Added ElapsedSeconds field to * MPI2_EVENT_DATA_IR_OPERATION_STATUS. * 08-19-13 02.00.23 For IOCInit, added MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE * and MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY. * Added MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE. * Added MPI2_FW_DOWNLOAD_ITYPE_PUBLIC_KEY. * Added Encrypted Hash Extended Image. * 12-05-13 02.00.24 Added MPI25_HASH_IMAGE_TYPE_BIOS. * 11-18-14 02.00.25 Updated copyright information. * 03-16-15 02.00.26 Updated for MPI v2.6. * Added MPI2_EVENT_ACTIVE_CABLE_EXCEPTION and * MPI26_EVENT_DATA_ACTIVE_CABLE_EXCEPT. * Added MPI2_EVENT_PCIE_LINK_COUNTER and * MPI26_EVENT_DATA_PCIE_LINK_COUNTER. * Added MPI26_CTRL_OP_SHUTDOWN. * Added MPI26_CTRL_OP_LINK_CLEAR_ERROR_LOG * Added MPI26_FW_HEADER_PID_FAMILY_3324_SAS and * MPI26_FW_HEADER_PID_FAMILY_3516_SAS. * 08-25-15 02.00.27 Added IC ARCH Class based signature defines. * Added MPI26_EVENT_PCIE_ENUM_ES_RESOURCES_EXHAUSTED event. * Added ConfigurationFlags field to IOCInit message to * support NVMe SGL format control. * Added PCIe SRIOV support. * 02-17-16 02.00.28 Added SAS 4 22.5 gbs speed support. * Added PCIe 4 16.0 GT/sec speed support. * Removed AHCI support. * Removed SOP support. * 07-01-16 02.00.29 Added Archclass for 4008 product. * Added IOCException MPI2_IOCFACTS_EXCEPT_PCIE_DISABLED. * 08-23-16 02.00.30 Added new defines for the ImageType field of FWDownload * Request Message. * Added new defines for the ImageType field of FWUpload * Request Message. * Added new values for the RegionType field in the Layout * Data sections of the FLASH Layout Extended Image Data. * Added new defines for the ReasonCode field of * Active Cable Exception Event. * Added MPI2_EVENT_ENCL_DEVICE_STATUS_CHANGE and * MPI26_EVENT_DATA_ENCL_DEV_STATUS_CHANGE.
* 11-23-16 02.00.31 Added MPI2_EVENT_SAS_DEVICE_DISCOVERY_ERROR and * MPI25_EVENT_DATA_SAS_DEVICE_DISCOVERY_ERROR. * 02-02-17 02.00.32 Added MPI2_FW_DOWNLOAD_ITYPE_CBB_BACKUP. * Added MPI25_EVENT_DATA_ACTIVE_CABLE_EXCEPT and related * defines for the ReasonCode field. + * 06-13-17 02.00.33 Added MPI2_FW_DOWNLOAD_ITYPE_CPLD. + * 09-29-17 02.00.34 Added MPI26_EVENT_PCIDEV_STAT_RC_PCIE_HOT_RESET_FAILED + * to the ReasonCode field in PCIe Device Status Change + * Event Data. * -------------------------------------------------------------------------- mpi2_raid.h * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. * 08-31-07 02.00.01 Modifications to RAID Action request and reply, * including the Actions and ActionData. * 02-29-08 02.00.02 Added MPI2_RAID_ACTION_ADATA_DISABL_FULL_REBUILD. * 05-21-08 02.00.03 Added MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS so that * the PhysDisk array in MPI2_RAID_VOLUME_CREATION_STRUCT * can be sized by the build environment. * 07-30-09 02.00.04 Added proper define for the Use Default Settings bit of * VolumeCreationFlags and marked the old one as obsolete. * 05-12-10 02.00.05 Added MPI2_RAID_VOL_FLAGS_OP_MDC define. * 08-24-10 02.00.06 Added MPI2_RAID_ACTION_COMPATIBILITY_CHECK along with * related structures and defines. * Added product-specific range to RAID Action values. * 11-18-11 02.00.07 Incorporating additions for MPI v2.5. * 02-06-12 02.00.08 Added MPI2_RAID_ACTION_PHYSDISK_HIDDEN. * 07-26-12 02.00.09 Added ElapsedSeconds field to MPI2_RAID_VOL_INDICATOR. * Added MPI2_RAID_VOL_FLAGS_ELAPSED_SECONDS_VALID define. * 04-17-13 02.00.10 Added MPI25_RAID_ACTION_ADATA_ALLOW_PI. * -------------------------------------------------------------------------- mpi2_sas.h * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. * 06-26-07 02.00.01 Added Clear All Persistent Operation to SAS IO Unit * Control Request. * 10-02-08 02.00.02 Added Set IOC Parameter Operation to SAS IO Unit Control * Request. 
 * 10-28-09  02.00.03  Changed the type of SGL in MPI2_SATA_PASSTHROUGH_REQUEST
 *                     to MPI2_SGE_IO_UNION since it supports chained SGLs.
 * 05-12-10  02.00.04  Modified some comments.
 * 08-11-10  02.00.05  Added NCQ operations to SAS IO Unit Control.
 * 11-18-11  02.00.06  Incorporating additions for MPI v2.5.
 * 07-10-12  02.00.07  Added MPI2_SATA_PT_SGE_UNION for use in the SATA
 *                     Passthrough Request message.
 * 08-19-13  02.00.08  Made MPI2_SAS_OP_TRANSMIT_PORT_SELECT_SIGNAL obsolete
 *                     for anything newer than MPI v2.0.
 * 11-18-14  02.00.09  Updated copyright information.
 * 03-16-15  02.00.10  Updated for MPI v2.6.
 *                     Added MPI2_SATA_PT_REQ_PT_FLAGS_FPDMA.
 * --------------------------------------------------------------------------
 *
 * mpi2_targ.h
 * 04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
 * 08-31-07  02.00.01  Added Command Buffer Data Location Address Space bits to
 *                     BufferPostFlags field of CommandBufferPostBase Request.
 * 02-29-08  02.00.02  Modified various names to make them 32-character unique.
 * 10-02-08  02.00.03  Removed NextCmdBufferOffset from
 *                     MPI2_TARGET_CMD_BUF_POST_BASE_REQUEST.
 *                     Target Status Send Request only takes a single SGE for
 *                     response data.
 * 02-10-10  02.00.04  Added comment to MPI2_TARGET_SSP_RSP_IU structure.
 * 11-18-11  02.00.05  Incorporating additions for MPI v2.5.
 * 11-27-12  02.00.06  Added InitiatorDevHandle field to MPI2_TARGET_MODE_ABORT
 *                     request message structure.
 *                     Added AbortType MPI2_TARGET_MODE_ABORT_DEVHANDLE and
 *                     MPI2_TARGET_MODE_ABORT_ALL_COMMANDS.
 * 06-13-14  02.00.07  Added MinMSIxIndex and MaxMSIxIndex fields to
 *                     MPI2_TARGET_CMD_BUF_POST_BASE_REQUEST.
 * 11-18-14  02.00.08  Updated copyright information.
 * 03-16-15  02.00.09  Updated for MPI v2.6.
 *                     Added MPI26_TARGET_ASSIST_IOFLAGS_ESCAPE_PASSTHROUGH.
 * --------------------------------------------------------------------------
 *
 * mpi2_tool.h
 * 04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
 * 12-18-07  02.00.01  Added Diagnostic Buffer Post and Diagnostic Release
 *                     structures and defines.
 * 02-29-08  02.00.02  Modified various names to make them 32-character unique.
 * 05-06-09  02.00.03  Added ISTWI Read Write Tool and Diagnostic CLI Tool.
 * 07-30-09  02.00.04  Added ExtendedType field to DiagnosticBufferPost request
 *                     and reply messages.
 *                     Added MPI2_DIAG_BUF_TYPE_EXTENDED.
 *                     Incremented MPI2_DIAG_BUF_TYPE_COUNT.
 * 05-12-10  02.00.05  Added Diagnostic Data Upload tool.
 * 08-11-10  02.00.06  Added defines that were missing for Diagnostic Buffer
 *                     Post Request.
 * 05-25-11  02.00.07  Added Flags field and related defines to
 *                     MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST.
 * 11-18-11  02.00.08  Incorporating additions for MPI v2.5.
 * 07-10-12  02.00.09  Add MPI v2.5 Toolbox Diagnostic CLI Tool Request
 *                     message.
 * 07-26-12  02.00.10  Modified MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST so that
 *                     it uses MPI Chain SGE as well as MPI Simple SGE.
 * 08-19-13  02.00.11  Added MPI2_TOOLBOX_TEXT_DISPLAY_TOOL and related info.
 * 01-08-14  02.00.12  Added MPI2_TOOLBOX_CLEAN_BIT26_PRODUCT_SPECIFIC.
 * 11-18-14  02.00.13  Updated copyright information.
 * 08-25-16  02.00.14  Added new values for the Flags field of Toolbox Clean
 *                     Tool Request Message.
 * --------------------------------------------------------------------------
 *
 * mpi2_type.h
 * 04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
 * 11-18-14  02.00.01  Updated copyright information.
 * --------------------------------------------------------------------------
 *
 * mpi2_ra.h
 * 05-06-09  02.00.00  Initial version.
 * 11-18-14  02.00.01  Updated copyright information.
 * --------------------------------------------------------------------------
 *
 * mpi2_hbd.h
 * 10-28-09  02.00.00  Initial version.
 * 08-11-10  02.00.01  Removed PortGroups, DmaGroup, and ControlGroup from
 *                     HBD Action request, replaced by AdditionalInfo field.
 * 11-18-11  02.00.02  Incorporating additions for MPI v2.5.
 * 11-18-14  02.00.03  Updated copyright information.
 * 02-17-16  02.00.04  Added SAS 4 22.5 gbs speed support.
 * --------------------------------------------------------------------------
 *
 * mpi2_pci.h
 * 03-16-15  02.00.00  Initial version.
 * 02-17-16  02.00.01  Removed AHCI support.
 *                     Removed SOP support.
 * 07-01-16  02.00.02  Added MPI26_NVME_FLAGS_FORCE_ADMIN_ERR_RESP to
 *                     NVME Encapsulated Request.
 * --------------------------------------------------------------------------
 *
 * mpi2_history.txt         Parts list history

-Filename     02.00.48
------------  --------
-mpi2.h       02.00.48
-mpi2_cnfg.h  02.00.40
-mpi2_init.h  02.00.21
-mpi2_ioc.h   02.00.32
-mpi2_raid.h  02.00.11
-mpi2_sas.h   02.00.10
-mpi2_targ.h  02.00.09
-mpi2_tool.h  02.00.14
-mpi2_type.h  02.00.01
-mpi2_ra.h    02.00.01
-mpi2_hbd.h   02.00.04
-mpi2_pci.h   02.00.02
+Filename     02.00.50  02.00.49  02.00.48
+----------   --------  --------  --------
+mpi2.h       02.00.50  02.00.49  02.00.48
+mpi2_cnfg.h  02.00.42  02.00.41  02.00.40
+mpi2_init.h  02.00.21  02.00.21  02.00.21
+mpi2_ioc.h   02.00.34  02.00.33  02.00.32
+mpi2_raid.h  02.00.11  02.00.11  02.00.11
+mpi2_sas.h   02.00.10  02.00.10  02.00.10
+mpi2_targ.h  02.00.09  02.00.09  02.00.09
+mpi2_tool.h  02.00.14  02.00.14  02.00.14
+mpi2_type.h  02.00.01  02.00.01  02.00.01
+mpi2_ra.h    02.00.01  02.00.01  02.00.01
+mpi2_hbd.h   02.00.04  02.00.04  02.00.04
+mpi2_pci.h   02.00.02  02.00.02  02.00.02

Filename     02.00.47  02.00.46  02.00.45  02.00.44  02.00.43  02.00.42
----------   --------  --------  --------  --------  --------  --------
mpi2.h       02.00.47  02.00.46  02.00.45  02.00.44  02.00.43  02.00.42
mpi2_cnfg.h  02.00.39  02.00.39  02.00.38  02.00.37  02.00.36  02.00.35
mpi2_init.h  02.00.21  02.00.21  02.00.21  02.00.21  02.00.21  02.00.20
mpi2_ioc.h   02.00.31  02.00.30  02.00.29  02.00.28  02.00.28  02.00.27
mpi2_raid.h  02.00.11  02.00.11  02.00.11  02.00.11  02.00.11  02.00.11
mpi2_sas.h   02.00.10  02.00.10  02.00.10  02.00.10  02.00.10  02.00.10
mpi2_targ.h  02.00.09  02.00.09  02.00.09  02.00.09  02.00.09  02.00.09
mpi2_tool.h  02.00.14  02.00.14  02.00.13  02.00.13  02.00.13  02.00.13
mpi2_type.h  02.00.01  02.00.01  02.00.01  02.00.01  02.00.01  02.00.01
mpi2_ra.h    02.00.01  02.00.01  02.00.01  02.00.01  02.00.01  02.00.01
mpi2_hbd.h   02.00.04  02.00.04  02.00.04  02.00.04  02.00.04  02.00.03
mpi2_pci.h   02.00.02  02.00.02  02.00.02  02.00.01  02.00.01  02.00.00

Filename     02.00.41  02.00.40  02.00.39  02.00.38  02.00.37  02.00.36
----------   --------  --------  --------  --------  --------  --------
mpi2.h       02.00.41  02.00.40  02.00.39  02.00.38  02.00.37  02.00.36
mpi2_cnfg.h  02.00.35  02.00.34  02.00.33  02.00.32  02.00.31  02.00.30
mpi2_init.h  02.00.19  02.00.18  02.00.17  02.00.17  02.00.17  02.00.16
mpi2_ioc.h   02.00.27  02.00.27  02.00.26  02.00.26  02.00.26  02.00.25
mpi2_raid.h  02.00.11  02.00.11  02.00.11  02.00.11  02.00.11  02.00.11
mpi2_sas.h   02.00.10  02.00.10  02.00.10  02.00.10  02.00.10  02.00.09
mpi2_targ.h  02.00.09  02.00.09  02.00.09  02.00.09  02.00.09  02.00.08
mpi2_tool.h  02.00.13  02.00.13  02.00.13  02.00.13  02.00.13  02.00.13
mpi2_type.h  02.00.01  02.00.01  02.00.01  02.00.01  02.00.01  02.00.01
mpi2_ra.h    02.00.01  02.00.01  02.00.01  02.00.01  02.00.01  02.00.01
mpi2_hbd.h   02.00.03  02.00.03  02.00.03  02.00.03  02.00.03  02.00.03
mpi2_pci.h   02.00.00  02.00.00  02.00.00  02.00.00  02.00.00

Filename     02.00.35  02.00.34  02.00.33  02.00.32  02.00.31  02.00.30
----------   --------  --------  --------  --------  --------  --------
mpi2.h       02.00.35  02.00.34  02.00.33  02.00.32  02.00.31  02.00.30
mpi2_cnfg.h  02.00.29  02.00.28  02.00.27  02.00.26  02.00.25  02.00.25
mpi2_init.h  02.00.15  02.00.15  02.00.15  02.00.15  02.00.15  02.00.15
mpi2_ioc.h   02.00.24  02.00.24  02.00.24  02.00.23  02.00.22  02.00.22
mpi2_raid.h  02.00.10  02.00.10  02.00.10  02.00.10  02.00.10  02.00.09
mpi2_sas.h   02.00.08  02.00.08  02.00.08  02.00.08  02.00.07  02.00.07
mpi2_targ.h  02.00.07  02.00.06  02.00.06  02.00.06  02.00.06  02.00.06
mpi2_tool.h  02.00.12  02.00.12  02.00.11  02.00.11  02.00.10  02.00.10
mpi2_type.h  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00
mpi2_ra.h    02.00.00  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00
mpi2_hbd.h   02.00.02  02.00.02  02.00.02  02.00.02  02.00.02  02.00.02

Filename     02.00.29  02.00.28  02.00.27  02.00.26  02.00.25  02.00.24
----------   --------  --------  --------  --------  --------  --------
mpi2.h       02.00.29  02.00.28  02.00.27  02.00.26  02.00.25  02.00.24
mpi2_cnfg.h  02.00.24  02.00.23  02.00.22  02.00.22  02.00.22  02.00.22
mpi2_init.h  02.00.14  02.00.14  02.00.14  02.00.14  02.00.13  02.00.13
mpi2_ioc.h   02.00.22  02.00.22  02.00.22  02.00.21  02.00.21  02.00.20
mpi2_raid.h  02.00.09  02.00.09  02.00.09  02.00.08  02.00.08  02.00.08
mpi2_sas.h   02.00.07  02.00.07  02.00.07  02.00.07  02.00.06  02.00.06
mpi2_targ.h  02.00.06  02.00.06  02.00.05  02.00.05  02.00.05  02.00.05
mpi2_tool.h  02.00.10  02.00.10  02.00.10  02.00.09  02.00.08  02.00.08
mpi2_type.h  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00
mpi2_ra.h    02.00.00  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00
mpi2_hbd.h   02.00.02  02.00.02  02.00.02  02.00.02  02.00.02  02.00.02

Filename     02.00.23  02.00.22  02.00.21  02.00.20  02.00.19  02.00.18
----------   --------  --------  --------  --------  --------  --------
mpi2.h       02.00.23  02.00.22  02.00.21  02.00.20  02.00.19  02.00.18
mpi2_cnfg.h  02.00.22  02.00.21  02.00.20  02.00.19  02.00.18  02.00.17
mpi2_init.h  02.00.12  02.00.11  02.00.11  02.00.11  02.00.11  02.00.11
mpi2_ioc.h   02.00.20  02.00.19  02.00.18  02.00.17  02.00.17  02.00.16
mpi2_raid.h  02.00.07  02.00.06  02.00.05  02.00.05  02.00.05  02.00.05
mpi2_sas.h   02.00.06  02.00.05  02.00.05  02.00.05  02.00.05  02.00.05
mpi2_targ.h  02.00.05  02.00.04  02.00.04  02.00.04  02.00.04  02.00.04
mpi2_tool.h  02.00.08  02.00.07  02.00.07  02.00.06  02.00.06  02.00.06
mpi2_type.h  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00
mpi2_ra.h    02.00.00  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00
mpi2_hbd.h   02.00.02  02.00.01  02.00.01  02.00.01  02.00.01  02.00.01

Filename     02.00.17  02.00.16  02.00.15  02.00.14  02.00.13  02.00.12
----------   --------  --------  --------  --------  --------  --------
mpi2.h       02.00.17  02.00.16  02.00.15  02.00.14  02.00.13  02.00.12
mpi2_cnfg.h  02.00.16  02.00.15  02.00.14  02.00.13  02.00.12  02.00.11
mpi2_init.h  02.00.10  02.00.10  02.00.09  02.00.08  02.00.07  02.00.07
mpi2_ioc.h   02.00.15  02.00.15  02.00.14  02.00.13  02.00.12  02.00.11
mpi2_raid.h  02.00.05  02.00.05  02.00.04  02.00.04  02.00.04  02.00.03
mpi2_sas.h   02.00.05  02.00.04  02.00.03  02.00.03  02.00.02  02.00.02
mpi2_targ.h  02.00.04  02.00.04  02.00.04  02.00.03  02.00.03  02.00.03
mpi2_tool.h  02.00.06  02.00.05  02.00.04  02.00.04  02.00.04  02.00.03
mpi2_type.h  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00
mpi2_ra.h    02.00.00  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00
mpi2_hbd.h   02.00.01  02.00.00  02.00.00  02.00.00

Filename     02.00.11  02.00.10  02.00.09  02.00.08  02.00.07  02.00.06
----------   --------  --------  --------  --------  --------  --------
mpi2.h       02.00.11  02.00.10  02.00.09  02.00.08  02.00.07  02.00.06
mpi2_cnfg.h  02.00.10  02.00.09  02.00.08  02.00.07  02.00.06  02.00.06
mpi2_init.h  02.00.06  02.00.06  02.00.05  02.00.05  02.00.04  02.00.03
mpi2_ioc.h   02.00.10  02.00.09  02.00.08  02.00.07  02.00.07  02.00.06
mpi2_raid.h  02.00.03  02.00.03  02.00.03  02.00.03  02.00.02  02.00.02
mpi2_sas.h   02.00.02  02.00.02  02.00.01  02.00.01  02.00.01  02.00.01
mpi2_targ.h  02.00.03  02.00.03  02.00.02  02.00.02  02.00.02  02.00.02
mpi2_tool.h  02.00.02  02.00.02  02.00.02  02.00.02  02.00.02  02.00.02
mpi2_type.h  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00

Filename     02.00.05  02.00.04  02.00.03  02.00.02  02.00.01  02.00.00
----------   --------  --------  --------  --------  --------  --------
mpi2.h       02.00.05  02.00.04  02.00.03  02.00.02  02.00.01  02.00.00
mpi2_cnfg.h  02.00.05  02.00.04  02.00.03  02.00.02  02.00.01  02.00.00
mpi2_init.h  02.00.02  02.00.01  02.00.00  02.00.00  02.00.00  02.00.00
mpi2_ioc.h   02.00.05  02.00.04  02.00.03  02.00.02  02.00.01  02.00.00
mpi2_raid.h  02.00.01  02.00.01  02.00.01  02.00.00  02.00.00  02.00.00
mpi2_sas.h   02.00.01  02.00.01  02.00.01  02.00.01  02.00.00  02.00.00
mpi2_targ.h  02.00.01  02.00.01  02.00.01  02.00.00  02.00.00  02.00.00
mpi2_tool.h  02.00.01  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00
mpi2_type.h  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00  02.00.00

Index: releng/12.1/sys/dev/mpr/mpi/mpi2_init.h
===================================================================
--- releng/12.1/sys/dev/mpr/mpi/mpi2_init.h	(revision 352760)
+++ releng/12.1/sys/dev/mpr/mpi/mpi2_init.h	(revision 352761)
@@ -1,639 +1,635 @@
 /*-
- * Copyright (c) 2012-2015 LSI Corp.
- * Copyright (c) 2013-2016 Avago Technologies
- * All rights reserved.
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the author nor the names of any co-contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
- * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD
+ * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD
 *
 * $FreeBSD$
 */

 /*
- * Copyright (c) 2000-2015 LSI Corporation.
- * Copyright (c) 2013-2016 Avago Technologies
- * All rights reserved.
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved.
 *
 *
 * Name:  mpi2_init.h
 * Title:  MPI SCSI initiator mode messages and structures
 * Creation Date:  June 23, 2006
 *
 * mpi2_init.h Version:  02.00.21
 *
 * NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25
 *       prefix are for use only on MPI v2.5 products, and must not be used
 *       with MPI v2.0 products. Unless otherwise noted, names beginning with
 *       MPI2 or Mpi2 are for use with both MPI v2.0 and MPI v2.5 products.
 *
 * Version History
 * ---------------
 *
 * Date      Version   Description
 * --------  --------  ------------------------------------------------------
 * 04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
 * 10-31-07  02.00.01  Fixed name for pMpi2SCSITaskManagementRequest_t.
 * 12-18-07  02.00.02  Modified Task Management Target Reset Method defines.
 * 02-29-08  02.00.03  Added Query Task Set and Query Unit Attention.
 * 03-03-08  02.00.04  Fixed name of struct _MPI2_SCSI_TASK_MANAGE_REPLY.
 * 05-21-08  02.00.05  Fixed typo in name of Mpi2SepRequest_t.
 * 10-02-08  02.00.06  Removed Untagged and No Disconnect values from SCSI IO
 *                     Control field Task Attribute flags.
- *                     Moved LUN field defines to mpi2.h because they are
+ *                     Moved LUN field defines to mpi2.h becasue they are
 *                     common to many structures.
 * 05-06-09  02.00.07  Changed task management type of Query Unit Attention to
 *                     Query Asynchronous Event.
 *                     Defined two new bits in the SlotStatus field of the SCSI
 *                     Enclosure Processor Request and Reply.
 * 10-28-09  02.00.08  Added defines for decoding the ResponseInfo bytes for
 *                     both SCSI IO Error Reply and SCSI Task Management Reply.
 *                     Added ResponseInfo field to MPI2_SCSI_TASK_MANAGE_REPLY.
 *                     Added MPI2_SCSITASKMGMT_RSP_TM_OVERLAPPED_TAG define.
 * 02-10-10  02.00.09  Removed unused structure that had "#if 0" around it.
 * 05-12-10  02.00.10  Added optional vendor-unique region to SCSI IO Request.
 * 11-10-10  02.00.11  Added MPI2_SCSIIO_NUM_SGLOFFSETS define.
 * 11-18-11  02.00.12  Incorporating additions for MPI v2.5.
 * 02-06-12  02.00.13  Added alternate defines for Task Priority / Command
 *                     Priority to match SAM-4.
 *                     Added EEDPErrorOffset to MPI2_SCSI_IO_REPLY.
 * 07-10-12  02.00.14  Added MPI2_SCSIIO_CONTROL_SHIFT_DATADIRECTION.
 * 04-09-13  02.00.15  Added SCSIStatusQualifier field to MPI2_SCSI_IO_REPLY,
 *                     replacing the Reserved4 field.
 * 11-18-14  02.00.16  Updated copyright information.
 * 03-16-15  02.00.17  Updated for MPI v2.6.
 *                     Added MPI26_SCSIIO_IOFLAGS_ESCAPE_PASSTHROUGH.
 *                     Added MPI2_SEP_REQ_SLOTSTATUS_DEV_OFF and
 *                     MPI2_SEP_REPLY_SLOTSTATUS_DEV_OFF.
 * 08-26-15  02.00.18  Added SCSITASKMGMT_MSGFLAGS for Target Reset.
 * 12-18-15  02.00.19  Added EEDPObservedValue to SCSI IO Reply message.
 * 01-04-16  02.00.20  Modified EEDP reported values in SCSI IO Reply message.
 * 01-21-16  02.00.21  Modified MPI26_SCSITASKMGMT_MSGFLAGS_PCIE* defines to
 *                     be unique within first 32 characters.
 * --------------------------------------------------------------------------
 */

#ifndef MPI2_INIT_H
#define MPI2_INIT_H

/*****************************************************************************
*
*               SCSI Initiator Messages
*
*****************************************************************************/

/****************************************************************************
*  SCSI IO messages and associated structures
****************************************************************************/

typedef struct _MPI2_SCSI_IO_CDB_EEDP32
{
    U8                      CDB[20];                    /* 0x00 */
    U32                     PrimaryReferenceTag;        /* 0x14 */
    U16                     PrimaryApplicationTag;      /* 0x18 */
    U16                     PrimaryApplicationTagMask;  /* 0x1A */
    U32                     TransferLength;             /* 0x1C */
} MPI2_SCSI_IO_CDB_EEDP32, MPI2_POINTER PTR_MPI2_SCSI_IO_CDB_EEDP32,
  Mpi2ScsiIoCdbEedp32_t, MPI2_POINTER pMpi2ScsiIoCdbEedp32_t;

/* MPI v2.0 CDB field */
typedef union _MPI2_SCSI_IO_CDB_UNION
{
    U8                      CDB32[32];
    MPI2_SCSI_IO_CDB_EEDP32 EEDP32;
    MPI2_SGE_SIMPLE_UNION   SGE;
} MPI2_SCSI_IO_CDB_UNION, MPI2_POINTER PTR_MPI2_SCSI_IO_CDB_UNION,
  Mpi2ScsiIoCdb_t, MPI2_POINTER pMpi2ScsiIoCdb_t;

/* MPI v2.0 SCSI IO Request Message */
typedef struct _MPI2_SCSI_IO_REQUEST
{
    U16                     DevHandle;                      /* 0x00 */
    U8                      ChainOffset;                    /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved1;                      /* 0x04 */
    U8                      Reserved2;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved3;                      /* 0x0A */
    U32                     SenseBufferLowAddress;          /* 0x0C */
    U16                     SGLFlags;                       /* 0x10 */
    U8                      SenseBufferLength;              /* 0x12 */
    U8                      Reserved4;                      /* 0x13 */
    U8                      SGLOffset0;                     /* 0x14 */
    U8                      SGLOffset1;                     /* 0x15 */
    U8                      SGLOffset2;                     /* 0x16 */
    U8                      SGLOffset3;                     /* 0x17 */
    U32                     SkipCount;                      /* 0x18 */
    U32                     DataLength;                     /* 0x1C */
    U32                     BidirectionalDataLength;        /* 0x20 */
    U16                     IoFlags;                        /* 0x24 */
    U16                     EEDPFlags;                      /* 0x26 */
    U32                     EEDPBlockSize;                  /* 0x28 */
    U32                     SecondaryReferenceTag;          /* 0x2C */
    U16                     SecondaryApplicationTag;        /* 0x30 */
    U16                     ApplicationTagTranslationMask;  /* 0x32 */
    U8                      LUN[8];                         /* 0x34 */
    U32                     Control;                        /* 0x3C */
    MPI2_SCSI_IO_CDB_UNION  CDB;                            /* 0x40 */

#ifdef MPI2_SCSI_IO_VENDOR_UNIQUE_REGION /* typically this is left undefined */
    MPI2_SCSI_IO_VENDOR_UNIQUE VendorRegion;
#endif

    MPI2_SGE_IO_UNION       SGL;                            /* 0x60 */

} MPI2_SCSI_IO_REQUEST, MPI2_POINTER PTR_MPI2_SCSI_IO_REQUEST,
  Mpi2SCSIIORequest_t, MPI2_POINTER pMpi2SCSIIORequest_t;

/* SCSI IO MsgFlags bits */

/* MsgFlags for SenseBufferAddressSpace */
#define MPI2_SCSIIO_MSGFLAGS_MASK_SENSE_ADDR        (0x0C)
#define MPI2_SCSIIO_MSGFLAGS_SYSTEM_SENSE_ADDR      (0x00)
#define MPI2_SCSIIO_MSGFLAGS_IOCDDR_SENSE_ADDR      (0x04)
#define MPI2_SCSIIO_MSGFLAGS_IOCPLB_SENSE_ADDR      (0x08) /* for MPI v2.5 and earlier only */
#define MPI2_SCSIIO_MSGFLAGS_IOCPLBNTA_SENSE_ADDR   (0x0C) /* for MPI v2.5 and earlier only */
#define MPI26_SCSIIO_MSGFLAGS_IOCCTL_SENSE_ADDR     (0x08) /* for MPI v2.6 only */

/* SCSI IO SGLFlags bits */

/* base values for Data Location Address Space */
#define MPI2_SCSIIO_SGLFLAGS_ADDR_MASK              (0x0C)
#define MPI2_SCSIIO_SGLFLAGS_SYSTEM_ADDR            (0x00)
#define MPI2_SCSIIO_SGLFLAGS_IOCDDR_ADDR            (0x04)
#define MPI2_SCSIIO_SGLFLAGS_IOCPLB_ADDR            (0x08)
#define MPI2_SCSIIO_SGLFLAGS_IOCPLBNTA_ADDR         (0x0C)

/* base values for Type */
#define MPI2_SCSIIO_SGLFLAGS_TYPE_MASK              (0x03)
#define MPI2_SCSIIO_SGLFLAGS_TYPE_MPI               (0x00)
#define MPI2_SCSIIO_SGLFLAGS_TYPE_IEEE32            (0x01)
#define MPI2_SCSIIO_SGLFLAGS_TYPE_IEEE64            (0x02)

/* shift values for each sub-field */
#define MPI2_SCSIIO_SGLFLAGS_SGL3_SHIFT             (12)
#define MPI2_SCSIIO_SGLFLAGS_SGL2_SHIFT             (8)
#define MPI2_SCSIIO_SGLFLAGS_SGL1_SHIFT             (4)
#define MPI2_SCSIIO_SGLFLAGS_SGL0_SHIFT             (0)

/* number of SGLOffset fields */
#define MPI2_SCSIIO_NUM_SGLOFFSETS                  (4)

/* SCSI IO IoFlags bits */

/* Large CDB Address Space */
#define MPI2_SCSIIO_CDB_ADDR_MASK                   (0x6000)
#define MPI2_SCSIIO_CDB_ADDR_SYSTEM                 (0x0000)
#define MPI2_SCSIIO_CDB_ADDR_IOCDDR                 (0x2000)
#define MPI2_SCSIIO_CDB_ADDR_IOCPLB                 (0x4000)
#define MPI2_SCSIIO_CDB_ADDR_IOCPLBNTA              (0x6000)

#define MPI2_SCSIIO_IOFLAGS_LARGE_CDB               (0x1000)
#define MPI2_SCSIIO_IOFLAGS_BIDIRECTIONAL           (0x0800)
#define MPI2_SCSIIO_IOFLAGS_MULTICAST               (0x0400)
#define MPI2_SCSIIO_IOFLAGS_CMD_DETERMINES_DATA_DIR (0x0200)
#define MPI2_SCSIIO_IOFLAGS_CDBLENGTH_MASK          (0x01FF)

/* SCSI IO EEDPFlags bits */
#define MPI2_SCSIIO_EEDPFLAGS_INC_PRI_REFTAG        (0x8000)
#define MPI2_SCSIIO_EEDPFLAGS_INC_SEC_REFTAG        (0x4000)
#define MPI2_SCSIIO_EEDPFLAGS_INC_PRI_APPTAG        (0x2000)
#define MPI2_SCSIIO_EEDPFLAGS_INC_SEC_APPTAG        (0x1000)

#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REFTAG          (0x0400)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_APPTAG          (0x0200)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_GUARD           (0x0100)

#define MPI2_SCSIIO_EEDPFLAGS_PASSTHRU_REFTAG       (0x0008)

#define MPI2_SCSIIO_EEDPFLAGS_MASK_OP               (0x0007)
#define MPI2_SCSIIO_EEDPFLAGS_NOOP_OP               (0x0000)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_OP              (0x0001)
#define MPI2_SCSIIO_EEDPFLAGS_STRIP_OP              (0x0002)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REMOVE_OP       (0x0003)
#define MPI2_SCSIIO_EEDPFLAGS_INSERT_OP             (0x0004)
#define MPI2_SCSIIO_EEDPFLAGS_REPLACE_OP            (0x0006)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REGEN_OP        (0x0007)

/* SCSI IO LUN fields: use MPI2_LUN_ from mpi2.h */

/* SCSI IO Control bits */
#define MPI2_SCSIIO_CONTROL_ADDCDBLEN_MASK          (0xFC000000)
#define MPI2_SCSIIO_CONTROL_ADDCDBLEN_SHIFT         (26)

#define MPI2_SCSIIO_CONTROL_DATADIRECTION_MASK      (0x03000000)
#define MPI2_SCSIIO_CONTROL_SHIFT_DATADIRECTION     (24)
#define MPI2_SCSIIO_CONTROL_NODATATRANSFER          (0x00000000)
#define MPI2_SCSIIO_CONTROL_WRITE                   (0x01000000)
#define MPI2_SCSIIO_CONTROL_READ                    (0x02000000)
#define MPI2_SCSIIO_CONTROL_BIDIRECTIONAL           (0x03000000)

#define MPI2_SCSIIO_CONTROL_TASKPRI_MASK            (0x00007800)
#define MPI2_SCSIIO_CONTROL_TASKPRI_SHIFT           (11)
/* alternate name for the previous field; called Command Priority in SAM-4 */
#define MPI2_SCSIIO_CONTROL_CMDPRI_MASK             (0x00007800)
#define MPI2_SCSIIO_CONTROL_CMDPRI_SHIFT            (11)

#define MPI2_SCSIIO_CONTROL_TASKATTRIBUTE_MASK      (0x00000700)
#define MPI2_SCSIIO_CONTROL_SIMPLEQ                 (0x00000000)
#define MPI2_SCSIIO_CONTROL_HEADOFQ                 (0x00000100)
#define MPI2_SCSIIO_CONTROL_ORDEREDQ                (0x00000200)
#define MPI2_SCSIIO_CONTROL_ACAQ                    (0x00000400)

#define MPI2_SCSIIO_CONTROL_TLR_MASK                (0x000000C0)
#define MPI2_SCSIIO_CONTROL_NO_TLR                  (0x00000000)
#define MPI2_SCSIIO_CONTROL_TLR_ON                  (0x00000040)
#define MPI2_SCSIIO_CONTROL_TLR_OFF                 (0x00000080)

/* MPI v2.5 CDB field */
typedef union _MPI25_SCSI_IO_CDB_UNION
{
    U8                      CDB32[32];
    MPI2_SCSI_IO_CDB_EEDP32 EEDP32;
    MPI2_IEEE_SGE_SIMPLE64  SGE;
} MPI25_SCSI_IO_CDB_UNION, MPI2_POINTER PTR_MPI25_SCSI_IO_CDB_UNION,
  Mpi25ScsiIoCdb_t, MPI2_POINTER pMpi25ScsiIoCdb_t;

/* MPI v2.5/2.6 SCSI IO Request Message */
typedef struct _MPI25_SCSI_IO_REQUEST
{
    U16                     DevHandle;                      /* 0x00 */
    U8                      ChainOffset;                    /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved1;                      /* 0x04 */
    U8                      Reserved2;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved3;                      /* 0x0A */
    U32                     SenseBufferLowAddress;          /* 0x0C */
    U8                      DMAFlags;                       /* 0x10 */
    U8                      Reserved5;                      /* 0x11 */
    U8                      SenseBufferLength;              /* 0x12 */
    U8                      Reserved4;                      /* 0x13 */
    U8                      SGLOffset0;                     /* 0x14 */
    U8                      SGLOffset1;                     /* 0x15 */
    U8                      SGLOffset2;                     /* 0x16 */
    U8                      SGLOffset3;                     /* 0x17 */
    U32                     SkipCount;                      /* 0x18 */
    U32                     DataLength;                     /* 0x1C */
    U32                     BidirectionalDataLength;        /* 0x20 */
    U16                     IoFlags;                        /* 0x24 */
    U16                     EEDPFlags;                      /* 0x26 */
    U16                     EEDPBlockSize;                  /* 0x28 */
    U16                     Reserved6;                      /* 0x2A */
    U32                     SecondaryReferenceTag;          /* 0x2C */
    U16                     SecondaryApplicationTag;        /* 0x30 */
    U16                     ApplicationTagTranslationMask;  /* 0x32 */
    U8                      LUN[8];                         /* 0x34 */
    U32                     Control;                        /* 0x3C */
    MPI25_SCSI_IO_CDB_UNION CDB;                            /* 0x40 */

#ifdef MPI25_SCSI_IO_VENDOR_UNIQUE_REGION /* typically this is left undefined */
    MPI25_SCSI_IO_VENDOR_UNIQUE VendorRegion;
#endif

    MPI25_SGE_IO_UNION      SGL;                            /* 0x60 */

} MPI25_SCSI_IO_REQUEST, MPI2_POINTER PTR_MPI25_SCSI_IO_REQUEST,
  Mpi25SCSIIORequest_t, MPI2_POINTER pMpi25SCSIIORequest_t;

/* use MPI2_SCSIIO_MSGFLAGS_ defines for the MsgFlags field */

/* Defines for the DMAFlags field
 * Each setting affects 4 SGLS, from SGL0 to SGL3.
 *     D = Data
 *     C = Cache DIF
 *     I = Interleaved
 *     H = Host DIF
 */
#define MPI25_SCSIIO_DMAFLAGS_OP_MASK               (0x0F)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_D_D_D            (0x00)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_D_D_C            (0x01)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_D_D_I            (0x02)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_D_C_C            (0x03)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_D_C_I            (0x04)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_D_I_I            (0x05)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_C_C_C            (0x06)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_C_C_I            (0x07)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_C_I_I            (0x08)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_I_I_I            (0x09)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_H_D_D            (0x0A)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_H_D_C            (0x0B)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_H_D_I            (0x0C)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_H_C_C            (0x0D)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_H_C_I            (0x0E)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_H_I_I            (0x0F)

/* number of SGLOffset fields */
#define MPI25_SCSIIO_NUM_SGLOFFSETS                 (4)

/* defines for the IoFlags field */
#define MPI25_SCSIIO_IOFLAGS_IO_PATH_MASK           (0xC000)
#define MPI25_SCSIIO_IOFLAGS_NORMAL_PATH            (0x0000)
#define MPI25_SCSIIO_IOFLAGS_FAST_PATH              (0x4000)

#define MPI26_SCSIIO_IOFLAGS_ESCAPE_PASSTHROUGH     (0x2000) /* MPI v2.6 and later */
#define MPI25_SCSIIO_IOFLAGS_LARGE_CDB              (0x1000)
#define MPI25_SCSIIO_IOFLAGS_BIDIRECTIONAL          (0x0800)
#define MPI26_SCSIIO_IOFLAGS_PORT_REQUEST           (0x0400) /* MPI v2.6 and later; IOC use only */
#define MPI25_SCSIIO_IOFLAGS_CDBLENGTH_MASK         (0x01FF)

/* MPI v2.5 defines for the EEDPFlags bits */
/* use MPI2_SCSIIO_EEDPFLAGS_ defines for the other EEDPFlags bits */
#define MPI25_SCSIIO_EEDPFLAGS_ESCAPE_MODE_MASK             (0x00C0)
#define MPI25_SCSIIO_EEDPFLAGS_COMPATIBLE_MODE              (0x0000)
#define MPI25_SCSIIO_EEDPFLAGS_DO_NOT_DISABLE_MODE          (0x0040)
#define MPI25_SCSIIO_EEDPFLAGS_APPTAG_DISABLE_MODE          (0x0080)
#define MPI25_SCSIIO_EEDPFLAGS_APPTAG_REFTAG_DISABLE_MODE   (0x00C0)

#define MPI25_SCSIIO_EEDPFLAGS_HOST_GUARD_METHOD_MASK       (0x0030)
#define MPI25_SCSIIO_EEDPFLAGS_T10_CRC_HOST_GUARD           (0x0000)
#define MPI25_SCSIIO_EEDPFLAGS_IP_CHKSUM_HOST_GUARD         (0x0010)

/* use MPI2_LUN_ defines from mpi2.h for the LUN field */

/* use MPI2_SCSIIO_CONTROL_ defines for the Control field */

/* NOTE: The SCSI IO Reply is nearly the same for MPI 2.0 and MPI 2.5, so
 *       MPI2_SCSI_IO_REPLY is used for both.
 */

/* SCSI IO Error Reply Message */
typedef struct _MPI2_SCSI_IO_REPLY
{
    U16                     DevHandle;                      /* 0x00 */
    U8                      MsgLength;                      /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved1;                      /* 0x04 */
    U8                      Reserved2;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved3;                      /* 0x0A */
    U8                      SCSIStatus;                     /* 0x0C */
    U8                      SCSIState;                      /* 0x0D */
    U16                     IOCStatus;                      /* 0x0E */
    U32                     IOCLogInfo;                     /* 0x10 */
    U32                     TransferCount;                  /* 0x14 */
    U32                     SenseCount;                     /* 0x18 */
    U32                     ResponseInfo;                   /* 0x1C */
    U16                     TaskTag;                        /* 0x20 */
    U16                     SCSIStatusQualifier;            /* 0x22 */
    U32                     BidirectionalTransferCount;     /* 0x24 */
    U32                     EEDPErrorOffset;                /* 0x28 */ /* MPI 2.5+ only; Reserved in MPI 2.0 */
    U16                     EEDPObservedAppTag;             /* 0x2C */ /* MPI 2.5+ only; Reserved in MPI 2.0 */
    U16                     EEDPObservedGuard;              /* 0x2E */ /* MPI 2.5+ only; Reserved in MPI 2.0 */
    U32                     EEDPObservedRefTag;             /* 0x30 */ /* MPI 2.5+ only; Reserved in MPI 2.0 */
} MPI2_SCSI_IO_REPLY, MPI2_POINTER PTR_MPI2_SCSI_IO_REPLY,
  Mpi2SCSIIOReply_t, MPI2_POINTER pMpi2SCSIIOReply_t;

/* SCSI IO Reply MsgFlags bits */
#define MPI26_SCSIIO_REPLY_MSGFLAGS_REFTAG_OBSERVED_VALID   (0x01)
#define MPI26_SCSIIO_REPLY_MSGFLAGS_GUARD_OBSERVED_VALID    (0x02)
#define MPI26_SCSIIO_REPLY_MSGFLAGS_APPTAG_OBSERVED_VALID   (0x04)

/* SCSI IO Reply SCSIStatus values (SAM-4 status codes) */
#define MPI2_SCSI_STATUS_GOOD                   (0x00)
#define MPI2_SCSI_STATUS_CHECK_CONDITION        (0x02)
#define MPI2_SCSI_STATUS_CONDITION_MET          (0x04)
#define MPI2_SCSI_STATUS_BUSY                   (0x08)
#define MPI2_SCSI_STATUS_INTERMEDIATE           (0x10)
#define MPI2_SCSI_STATUS_INTERMEDIATE_CONDMET   (0x14)
#define MPI2_SCSI_STATUS_RESERVATION_CONFLICT   (0x18)
#define MPI2_SCSI_STATUS_COMMAND_TERMINATED     (0x22) /* obsolete */
#define MPI2_SCSI_STATUS_TASK_SET_FULL          (0x28)
#define MPI2_SCSI_STATUS_ACA_ACTIVE             (0x30)
#define MPI2_SCSI_STATUS_TASK_ABORTED           (0x40)

/* SCSI IO Reply SCSIState flags */
#define MPI2_SCSI_STATE_RESPONSE_INFO_VALID     (0x10)
#define MPI2_SCSI_STATE_TERMINATED              (0x08)
#define MPI2_SCSI_STATE_NO_SCSI_STATUS          (0x04)
#define MPI2_SCSI_STATE_AUTOSENSE_FAILED        (0x02)
#define MPI2_SCSI_STATE_AUTOSENSE_VALID         (0x01)

/* masks and shifts for the ResponseInfo field */
#define MPI2_SCSI_RI_MASK_REASONCODE            (0x000000FF)
#define MPI2_SCSI_RI_SHIFT_REASONCODE           (0)

#define MPI2_SCSI_TASKTAG_UNKNOWN               (0xFFFF)

/****************************************************************************
*  SCSI Task Management messages
****************************************************************************/

/* SCSI Task Management Request Message */
typedef struct _MPI2_SCSI_TASK_MANAGE_REQUEST
{
    U16                     DevHandle;                      /* 0x00 */
    U8                      ChainOffset;                    /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U8                      Reserved1;                      /* 0x04 */
    U8                      TaskType;                       /* 0x05 */
    U8                      Reserved2;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved3;                      /* 0x0A */
    U8                      LUN[8];                         /* 0x0C */
    U32                     Reserved4[7];                   /* 0x14 */
    U16                     TaskMID;                        /* 0x30 */
    U16                     Reserved5;                      /* 0x32 */
} MPI2_SCSI_TASK_MANAGE_REQUEST,
  MPI2_POINTER PTR_MPI2_SCSI_TASK_MANAGE_REQUEST,
  Mpi2SCSITaskManagementRequest_t,
  MPI2_POINTER pMpi2SCSITaskManagementRequest_t;

/* TaskType values */
#define MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK           (0x01)
#define MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET        (0x02)
#define MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET         (0x03)
#define MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET   (0x05)
#define MPI2_SCSITASKMGMT_TASKTYPE_CLEAR_TASK_SET       (0x06)
#define MPI2_SCSITASKMGMT_TASKTYPE_QUERY_TASK           (0x07)
#define MPI2_SCSITASKMGMT_TASKTYPE_CLR_ACA              (0x08)
#define MPI2_SCSITASKMGMT_TASKTYPE_QRY_TASK_SET         (0x09)
#define MPI2_SCSITASKMGMT_TASKTYPE_QRY_ASYNC_EVENT      (0x0A)

/* obsolete TaskType name */
#define MPI2_SCSITASKMGMT_TASKTYPE_QRY_UNIT_ATTENTION \
    (MPI2_SCSITASKMGMT_TASKTYPE_QRY_ASYNC_EVENT)

/* MsgFlags bits */
#define MPI2_SCSITASKMGMT_MSGFLAGS_MASK_TARGET_RESET        (0x18)
#define MPI26_SCSITASKMGMT_MSGFLAGS_HOT_RESET_PCIE          (0x00)
#define MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET               (0x00)
#define MPI2_SCSITASKMGMT_MSGFLAGS_DO_NOT_SEND_TASK_IU      (0x01)
#define MPI2_SCSITASKMGMT_MSGFLAGS_NEXUS_RESET_SRST         (0x08)
#define MPI2_SCSITASKMGMT_MSGFLAGS_SAS_HARD_LINK_RESET      (0x10)
#define MPI26_SCSITASKMGMT_MSGFLAGS_PROTOCOL_LVL_RST_PCIE   (0x18)

/* SCSI Task Management Reply Message */
typedef struct _MPI2_SCSI_TASK_MANAGE_REPLY
{
    U16                     DevHandle;                      /* 0x00 */
    U8                      MsgLength;                      /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U8                      ResponseCode;                   /* 0x04 */
    U8                      TaskType;                       /* 0x05 */
    U8                      Reserved1;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved2;                      /* 0x0A */
    U16                     Reserved3;                      /* 0x0C */
    U16                     IOCStatus;                      /* 0x0E */
    U32                     IOCLogInfo;                     /* 0x10 */
    U32                     TerminationCount;               /* 0x14 */
    U32                     ResponseInfo;                   /* 0x18 */
} MPI2_SCSI_TASK_MANAGE_REPLY,
  MPI2_POINTER PTR_MPI2_SCSI_TASK_MANAGE_REPLY,
  Mpi2SCSITaskManagementReply_t, MPI2_POINTER pMpi2SCSIManagementReply_t;

/* ResponseCode values */
#define MPI2_SCSITASKMGMT_RSP_TM_COMPLETE               (0x00)
#define MPI2_SCSITASKMGMT_RSP_INVALID_FRAME             (0x02)
#define MPI2_SCSITASKMGMT_RSP_TM_NOT_SUPPORTED          (0x04)
#define MPI2_SCSITASKMGMT_RSP_TM_FAILED                 (0x05)
#define MPI2_SCSITASKMGMT_RSP_TM_SUCCEEDED              (0x08)
#define MPI2_SCSITASKMGMT_RSP_TM_INVALID_LUN            (0x09)
#define MPI2_SCSITASKMGMT_RSP_TM_OVERLAPPED_TAG         (0x0A)
#define MPI2_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC          (0x80)

/* masks and shifts for the ResponseInfo field */
#define MPI2_SCSITASKMGMT_RI_MASK_REASONCODE            (0x000000FF)
#define MPI2_SCSITASKMGMT_RI_SHIFT_REASONCODE           (0)
#define MPI2_SCSITASKMGMT_RI_MASK_ARI2                  (0x0000FF00)
#define MPI2_SCSITASKMGMT_RI_SHIFT_ARI2                 (8)
#define MPI2_SCSITASKMGMT_RI_MASK_ARI1                  (0x00FF0000)
#define MPI2_SCSITASKMGMT_RI_SHIFT_ARI1                 (16)
#define MPI2_SCSITASKMGMT_RI_MASK_ARI0                  (0xFF000000)
#define MPI2_SCSITASKMGMT_RI_SHIFT_ARI0                 (24)
/**************************************************************************** * SCSI Enclosure Processor messages ****************************************************************************/ /* SCSI Enclosure Processor Request Message */ typedef struct _MPI2_SEP_REQUEST { U16 DevHandle; /* 0x00 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U8 Action; /* 0x04 */ U8 Flags; /* 0x05 */ U8 Reserved1; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved2; /* 0x0A */ U32 SlotStatus; /* 0x0C */ U32 Reserved3; /* 0x10 */ U32 Reserved4; /* 0x14 */ U32 Reserved5; /* 0x18 */ U16 Slot; /* 0x1C */ U16 EnclosureHandle; /* 0x1E */ } MPI2_SEP_REQUEST, MPI2_POINTER PTR_MPI2_SEP_REQUEST, Mpi2SepRequest_t, MPI2_POINTER pMpi2SepRequest_t; /* Action defines */ #define MPI2_SEP_REQ_ACTION_WRITE_STATUS (0x00) #define MPI2_SEP_REQ_ACTION_READ_STATUS (0x01) /* Flags defines */ #define MPI2_SEP_REQ_FLAGS_DEVHANDLE_ADDRESS (0x00) #define MPI2_SEP_REQ_FLAGS_ENCLOSURE_SLOT_ADDRESS (0x01) /* SlotStatus defines */ #define MPI2_SEP_REQ_SLOTSTATUS_DEV_OFF (0x00080000) /* MPI v2.6 and newer */ #define MPI2_SEP_REQ_SLOTSTATUS_REQUEST_REMOVE (0x00040000) #define MPI2_SEP_REQ_SLOTSTATUS_IDENTIFY_REQUEST (0x00020000) #define MPI2_SEP_REQ_SLOTSTATUS_REBUILD_STOPPED (0x00000200) #define MPI2_SEP_REQ_SLOTSTATUS_HOT_SPARE (0x00000100) #define MPI2_SEP_REQ_SLOTSTATUS_UNCONFIGURED (0x00000080) #define MPI2_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT (0x00000040) #define MPI2_SEP_REQ_SLOTSTATUS_IN_CRITICAL_ARRAY (0x00000010) #define MPI2_SEP_REQ_SLOTSTATUS_IN_FAILED_ARRAY (0x00000008) #define MPI2_SEP_REQ_SLOTSTATUS_DEV_REBUILDING (0x00000004) #define MPI2_SEP_REQ_SLOTSTATUS_DEV_FAULTY (0x00000002) #define MPI2_SEP_REQ_SLOTSTATUS_NO_ERROR (0x00000001) /* SCSI Enclosure Processor Reply Message */ typedef struct _MPI2_SEP_REPLY { U16 DevHandle; /* 0x00 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U8 Action; /* 0x04 */ U8 Flags; /* 0x05 */ U8 Reserved1; /* 0x06 */ U8 
MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved2; /* 0x0A */ U16 Reserved3; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ U32 SlotStatus; /* 0x14 */ U32 Reserved4; /* 0x18 */ U16 Slot; /* 0x1C */ U16 EnclosureHandle; /* 0x1E */ } MPI2_SEP_REPLY, MPI2_POINTER PTR_MPI2_SEP_REPLY, Mpi2SepReply_t, MPI2_POINTER pMpi2SepReply_t; /* SlotStatus defines */ #define MPI2_SEP_REPLY_SLOTSTATUS_DEV_OFF (0x00080000) /* MPI v2.6 and newer */ #define MPI2_SEP_REPLY_SLOTSTATUS_REMOVE_READY (0x00040000) #define MPI2_SEP_REPLY_SLOTSTATUS_IDENTIFY_REQUEST (0x00020000) #define MPI2_SEP_REPLY_SLOTSTATUS_REBUILD_STOPPED (0x00000200) #define MPI2_SEP_REPLY_SLOTSTATUS_HOT_SPARE (0x00000100) #define MPI2_SEP_REPLY_SLOTSTATUS_UNCONFIGURED (0x00000080) #define MPI2_SEP_REPLY_SLOTSTATUS_PREDICTED_FAULT (0x00000040) #define MPI2_SEP_REPLY_SLOTSTATUS_IN_CRITICAL_ARRAY (0x00000010) #define MPI2_SEP_REPLY_SLOTSTATUS_IN_FAILED_ARRAY (0x00000008) #define MPI2_SEP_REPLY_SLOTSTATUS_DEV_REBUILDING (0x00000004) #define MPI2_SEP_REPLY_SLOTSTATUS_DEV_FAULTY (0x00000002) #define MPI2_SEP_REPLY_SLOTSTATUS_NO_ERROR (0x00000001) #endif Index: releng/12.1/sys/dev/mpr/mpi/mpi2_ioc.h =================================================================== --- releng/12.1/sys/dev/mpr/mpi/mpi2_ioc.h (revision 352760) +++ releng/12.1/sys/dev/mpr/mpi/mpi2_ioc.h (revision 352761) @@ -1,2255 +1,1902 @@ /*- - * Copyright (c) 2012-2015 LSI Corp. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ /* - * Copyright (c) 2000-2015 LSI Corporation. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * * Name: mpi2_ioc.h * Title: MPI IOC, Port, Event, FW Download, and FW Upload messages * Creation Date: October 11, 2006 * - * mpi2_ioc.h Version: 02.00.32 + * mpi2_ioc.h Version: 02.00.36 * * NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25 * prefix are for use only on MPI v2.5 products, and must not be used * with MPI v2.0 products. Unless otherwise noted, names beginning with * MPI2 or Mpi2 are for use with both MPI v2.0 and MPI v2.5 products. 
* * Version History * --------------- * * Date Version Description * -------- -------- ------------------------------------------------------ * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. * 06-04-07 02.00.01 In IOCFacts Reply structure, renamed MaxDevices to * MaxTargets. * Added TotalImageSize field to FWDownload Request. * Added reserved words to FWUpload Request. * 06-26-07 02.00.02 Added IR Configuration Change List Event. * 08-31-07 02.00.03 Removed SystemReplyQueueDepth field from the IOCInit * request and replaced it with * ReplyDescriptorPostQueueDepth and ReplyFreeQueueDepth. * Replaced the MinReplyQueueDepth field of the IOCFacts * reply with MaxReplyDescriptorPostQueueDepth. * Added MPI2_RDPQ_DEPTH_MIN define to specify the minimum * depth for the Reply Descriptor Post Queue. * Added SASAddress field to Initiator Device Table * Overflow Event data. * 10-31-07 02.00.04 Added ReasonCode MPI2_EVENT_SAS_INIT_RC_NOT_RESPONDING * for SAS Initiator Device Status Change Event data. * Modified Reason Code defines for SAS Topology Change * List Event data, including adding a bit for PHY Vacant * status, and adding a mask for the Reason Code. * Added define for * MPI2_EVENT_SAS_TOPO_ES_DELAY_NOT_RESPONDING. * Added define for MPI2_EXT_IMAGE_TYPE_MEGARAID. * 12-18-07 02.00.05 Added Boot Status defines for the IOCExceptions field of * the IOCFacts Reply. * Removed MPI2_IOCFACTS_CAPABILITY_EXTENDED_BUFFER define. * Moved MPI2_VERSION_UNION to mpi2.h. * Changed MPI2_EVENT_NOTIFICATION_REQUEST to use masks * instead of enables, and added SASBroadcastPrimitiveMasks * field. * Added Log Entry Added Event and related structure. * 02-29-08 02.00.06 Added define MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID. * Removed define MPI2_IOCFACTS_PROTOCOL_SMP_TARGET. * Added MaxVolumes and MaxPersistentEntries fields to * IOCFacts reply. * Added ProtocalFlags and IOCCapabilities fields to * MPI2_FW_IMAGE_HEADER. * Removed MPI2_PORTENABLE_FLAGS_ENABLE_SINGLE_PORT. 
* 03-03-08 02.00.07 Fixed MPI2_FW_IMAGE_HEADER by changing Reserved26 to * a U16 (from a U32). * Removed extra 's' from EventMasks name. * 06-27-08 02.00.08 Fixed an offset in a comment. * 10-02-08 02.00.09 Removed SystemReplyFrameSize from MPI2_IOC_INIT_REQUEST. * Removed CurReplyFrameSize from MPI2_IOC_FACTS_REPLY and * renamed MinReplyFrameSize to ReplyFrameSize. * Added MPI2_IOCFACTS_EXCEPT_IR_FOREIGN_CONFIG_MAX. * Added two new RAIDOperation values for Integrated RAID * Operations Status Event data. * Added four new IR Configuration Change List Event data * ReasonCode values. * Added two new ReasonCode defines for SAS Device Status * Change Event data. * Added three new DiscoveryStatus bits for the SAS * Discovery event data. * Added Multiplexing Status Change bit to the PhyStatus * field of the SAS Topology Change List event data. * Removed define for MPI2_INIT_IMAGE_BOOTFLAGS_XMEMCOPY. * BootFlags are now product-specific. * Added defines for the individual signature bytes * for MPI2_INIT_IMAGE_FOOTER. * 01-19-09 02.00.10 Added MPI2_IOCFACTS_CAPABILITY_EVENT_REPLAY define. * Added MPI2_EVENT_SAS_DISC_DS_DOWNSTREAM_INITIATOR * define. * Added MPI2_EVENT_SAS_DEV_STAT_RC_SATA_INIT_FAILURE * define. * Removed MPI2_EVENT_SAS_DISC_DS_SATA_INIT_FAILURE define. * 05-06-09 02.00.11 Added MPI2_IOCFACTS_CAPABILITY_RAID_ACCELERATOR define. * Added MPI2_IOCFACTS_CAPABILITY_MSI_X_INDEX define. * Added two new reason codes for SAS Device Status Change * Event. * Added new event: SAS PHY Counter. * 07-30-09 02.00.12 Added GPIO Interrupt event define and structure. * Added MPI2_IOCFACTS_CAPABILITY_EXTENDED_BUFFER define. * Added new product id family for 2208. * 10-28-09 02.00.13 Added HostMSIxVectors field to MPI2_IOC_INIT_REQUEST. * Added MaxMSIxVectors field to MPI2_IOC_FACTS_REPLY. * Added MinDevHandle field to MPI2_IOC_FACTS_REPLY. * Added MPI2_IOCFACTS_CAPABILITY_HOST_BASED_DISCOVERY. * Added MPI2_EVENT_HOST_BASED_DISCOVERY_PHY define. 
* Added MPI2_EVENT_SAS_TOPO_ES_NO_EXPANDER define. * Added Host Based Discovery Phy Event data. * Added defines for ProductID Product field * (MPI2_FW_HEADER_PID_). * Modified values for SAS ProductID Family * (MPI2_FW_HEADER_PID_FAMILY_). * 02-10-10 02.00.14 Added SAS Quiesce Event structure and defines. * Added PowerManagementControl Request structures and * defines. * 05-12-10 02.00.15 Marked Task Set Full Event as obsolete. * Added MPI2_EVENT_SAS_TOPO_LR_UNSUPPORTED_PHY define. * 11-10-10 02.00.16 Added MPI2_FW_DOWNLOAD_ITYPE_MIN_PRODUCT_SPECIFIC. * 02-23-11 02.00.17 Added SAS NOTIFY Primitive event, and added * SASNotifyPrimitiveMasks field to * MPI2_EVENT_NOTIFICATION_REQUEST. * Added Temperature Threshold Event. * Added Host Message Event. * Added Send Host Message request and reply. * 05-25-11 02.00.18 For Extended Image Header, added * MPI2_EXT_IMAGE_TYPE_MIN_PRODUCT_SPECIFIC and * MPI2_EXT_IMAGE_TYPE_MAX_PRODUCT_SPECIFIC defines. * Deprecated MPI2_EXT_IMAGE_TYPE_MAX define. * 08-24-11 02.00.19 Added PhysicalPort field to * MPI2_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE structure. * Marked MPI2_PM_CONTROL_FEATURE_PCIE_LINK as obsolete. * 11-18-11 02.00.20 Incorporating additions for MPI v2.5. * 03-29-12 02.00.21 Added a product specific range to event values. * 07-26-12 02.00.22 Added MPI2_IOCFACTS_EXCEPT_PARTIAL_MEMORY_FAILURE. * Added ElapsedSeconds field to * MPI2_EVENT_DATA_IR_OPERATION_STATUS. * 08-19-13 02.00.23 For IOCInit, added MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE * and MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY. * Added MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE. * Added MPI2_FW_DOWNLOAD_ITYPE_PUBLIC_KEY. * Added Encrypted Hash Extended Image. * 12-05-13 02.00.24 Added MPI25_HASH_IMAGE_TYPE_BIOS. * 11-18-14 02.00.25 Updated copyright information. * 03-16-15 02.00.26 Updated for MPI v2.6. * Added MPI2_EVENT_ACTIVE_CABLE_EXCEPTION and * MPI26_EVENT_DATA_ACTIVE_CABLE_EXCEPT. * Added MPI2_EVENT_PCIE_LINK_COUNTER and * MPI26_EVENT_DATA_PCIE_LINK_COUNTER. 
* Added MPI26_CTRL_OP_SHUTDOWN. * Added MPI26_CTRL_OP_LINK_CLEAR_ERROR_LOG * Added MPI26_FW_HEADER_PID_FAMILY_3324_SAS and * MPI26_FW_HEADER_PID_FAMILY_3516_SAS. * 08-25-15 02.00.27 Added IC ARCH Class based signature defines. * Added MPI26_EVENT_PCIE_ENUM_ES_RESOURCES_EXHAUSTED event. * Added ConfigurationFlags field to IOCInit message to * support NVMe SGL format control. * Added PCIe SRIOV support. * 02-17-16 02.00.28 Added SAS 4 22.5 Gb/s speed support. * Added PCIe 4 16.0 GT/sec speed support. * Removed AHCI support. * Removed SOP support. * 07-01-16 02.00.29 Added Archclass for 4008 product. * Added IOCException MPI2_IOCFACTS_EXCEPT_PCIE_DISABLED * 08-23-16 02.00.30 Added new defines for the ImageType field of FWDownload * Request Message. * Added new defines for the ImageType field of FWUpload * Request Message. * Added new values for the RegionType field in the Layout * Data sections of the FLASH Layout Extended Image Data. * Added new defines for the ReasonCode field of * Active Cable Exception Event. * Added MPI2_EVENT_ENCL_DEVICE_STATUS_CHANGE and * MPI26_EVENT_DATA_ENCL_DEV_STATUS_CHANGE. * 11-23-16 02.00.31 Added MPI2_EVENT_SAS_DEVICE_DISCOVERY_ERROR and * MPI25_EVENT_DATA_SAS_DEVICE_DISCOVERY_ERROR. * 02-02-17 02.00.32 Added MPI2_FW_DOWNLOAD_ITYPE_CBB_BACKUP. * Added MPI25_EVENT_DATA_ACTIVE_CABLE_EXCEPT and related * defines for the ReasonCode field. + * 06-13-17 02.00.33 Added MPI2_FW_DOWNLOAD_ITYPE_CPLD. + * 09-29-17 02.00.34 Added MPI26_EVENT_PCIDEV_STAT_RC_PCIE_HOT_RESET_FAILED + * to the ReasonCode field in PCIe Device Status Change + * Event Data. + * 07-22-18 02.00.35 Added FW_DOWNLOAD_ITYPE_CPLD and _PSOC. 
+ * Moved FW image definitions into new mpi2_image.h + * 08-14-18 02.00.36 Fixed definition of MPI2_FW_DOWNLOAD_ITYPE_PSOC (0x16) * -------------------------------------------------------------------------- */ #ifndef MPI2_IOC_H #define MPI2_IOC_H /***************************************************************************** * * IOC Messages * *****************************************************************************/ /**************************************************************************** * IOCInit message ****************************************************************************/ /* IOCInit Request message */ typedef struct _MPI2_IOC_INIT_REQUEST { U8 WhoInit; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U16 MsgVersion; /* 0x0C */ U16 HeaderVersion; /* 0x0E */ U32 Reserved5; /* 0x10 */ U16 ConfigurationFlags; /* 0x14 */ U8 HostPageSize; /* 0x16 */ U8 HostMSIxVectors; /* 0x17 */ U16 Reserved8; /* 0x18 */ U16 SystemRequestFrameSize; /* 0x1A */ U16 ReplyDescriptorPostQueueDepth; /* 0x1C */ U16 ReplyFreeQueueDepth; /* 0x1E */ U32 SenseBufferAddressHigh; /* 0x20 */ U32 SystemReplyAddressHigh; /* 0x24 */ U64 SystemRequestFrameBaseAddress; /* 0x28 */ U64 ReplyDescriptorPostQueueAddress;/* 0x30 */ U64 ReplyFreeQueueAddress; /* 0x38 */ U64 TimeStamp; /* 0x40 */ } MPI2_IOC_INIT_REQUEST, MPI2_POINTER PTR_MPI2_IOC_INIT_REQUEST, Mpi2IOCInitRequest_t, MPI2_POINTER pMpi2IOCInitRequest_t; /* WhoInit values */ #define MPI2_WHOINIT_NOT_INITIALIZED (0x00) #define MPI2_WHOINIT_SYSTEM_BIOS (0x01) #define MPI2_WHOINIT_ROM_BIOS (0x02) #define MPI2_WHOINIT_PCI_PEER (0x03) #define MPI2_WHOINIT_HOST_DRIVER (0x04) #define MPI2_WHOINIT_MANUFACTURER (0x05) /* MsgFlags */ #define MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE (0x01) /* MsgVersion */ #define MPI2_IOCINIT_MSGVERSION_MAJOR_MASK (0xFF00) #define 
MPI2_IOCINIT_MSGVERSION_MAJOR_SHIFT (8) #define MPI2_IOCINIT_MSGVERSION_MINOR_MASK (0x00FF) #define MPI2_IOCINIT_MSGVERSION_MINOR_SHIFT (0) /* HeaderVersion */ #define MPI2_IOCINIT_HDRVERSION_UNIT_MASK (0xFF00) #define MPI2_IOCINIT_HDRVERSION_UNIT_SHIFT (8) #define MPI2_IOCINIT_HDRVERSION_DEV_MASK (0x00FF) #define MPI2_IOCINIT_HDRVERSION_DEV_SHIFT (0) /* ConfigurationFlags */ #define MPI26_IOCINIT_CFGFLAGS_NVME_SGL_FORMAT (0x0001) /* minimum depth for a Reply Descriptor Post Queue */ #define MPI2_RDPQ_DEPTH_MIN (16) /* Reply Descriptor Post Queue Array Entry */ typedef struct _MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY { U64 RDPQBaseAddress; /* 0x00 */ U32 Reserved1; /* 0x08 */ U32 Reserved2; /* 0x0C */ } MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY, MPI2_POINTER PTR_MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY, Mpi2IOCInitRDPQArrayEntry, MPI2_POINTER pMpi2IOCInitRDPQArrayEntry; /* IOCInit Reply message */ typedef struct _MPI2_IOC_INIT_REPLY { U8 WhoInit; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U16 Reserved5; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ } MPI2_IOC_INIT_REPLY, MPI2_POINTER PTR_MPI2_IOC_INIT_REPLY, Mpi2IOCInitReply_t, MPI2_POINTER pMpi2IOCInitReply_t; /**************************************************************************** * IOCFacts message ****************************************************************************/ /* IOCFacts Request message */ typedef struct _MPI2_IOC_FACTS_REQUEST { U16 Reserved1; /* 0x00 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ } MPI2_IOC_FACTS_REQUEST, MPI2_POINTER PTR_MPI2_IOC_FACTS_REQUEST, Mpi2IOCFactsRequest_t, MPI2_POINTER pMpi2IOCFactsRequest_t; /* IOCFacts Reply message */ typedef struct 
_MPI2_IOC_FACTS_REPLY { U16 MsgVersion; /* 0x00 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 HeaderVersion; /* 0x04 */ U8 IOCNumber; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved1; /* 0x0A */ U16 IOCExceptions; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ U8 MaxChainDepth; /* 0x14 */ U8 WhoInit; /* 0x15 */ U8 NumberOfPorts; /* 0x16 */ U8 MaxMSIxVectors; /* 0x17 */ U16 RequestCredit; /* 0x18 */ U16 ProductID; /* 0x1A */ U32 IOCCapabilities; /* 0x1C */ MPI2_VERSION_UNION FWVersion; /* 0x20 */ U16 IOCRequestFrameSize; /* 0x24 */ U16 IOCMaxChainSegmentSize; /* 0x26 */ /* MPI 2.5 only; Reserved in MPI 2.0 */ U16 MaxInitiators; /* 0x28 */ U16 MaxTargets; /* 0x2A */ U16 MaxSasExpanders; /* 0x2C */ U16 MaxEnclosures; /* 0x2E */ U16 ProtocolFlags; /* 0x30 */ U16 HighPriorityCredit; /* 0x32 */ U16 MaxReplyDescriptorPostQueueDepth; /* 0x34 */ U8 ReplyFrameSize; /* 0x36 */ U8 MaxVolumes; /* 0x37 */ U16 MaxDevHandle; /* 0x38 */ U16 MaxPersistentEntries; /* 0x3A */ U16 MinDevHandle; /* 0x3C */ U8 CurrentHostPageSize; /* 0x3E */ U8 Reserved4; /* 0x3F */ U8 SGEModifierMask; /* 0x40 */ U8 SGEModifierValue; /* 0x41 */ U8 SGEModifierShift; /* 0x42 */ U8 Reserved5; /* 0x43 */ } MPI2_IOC_FACTS_REPLY, MPI2_POINTER PTR_MPI2_IOC_FACTS_REPLY, Mpi2IOCFactsReply_t, MPI2_POINTER pMpi2IOCFactsReply_t; /* MsgVersion */ #define MPI2_IOCFACTS_MSGVERSION_MAJOR_MASK (0xFF00) #define MPI2_IOCFACTS_MSGVERSION_MAJOR_SHIFT (8) #define MPI2_IOCFACTS_MSGVERSION_MINOR_MASK (0x00FF) #define MPI2_IOCFACTS_MSGVERSION_MINOR_SHIFT (0) /* HeaderVersion */ #define MPI2_IOCFACTS_HDRVERSION_UNIT_MASK (0xFF00) #define MPI2_IOCFACTS_HDRVERSION_UNIT_SHIFT (8) #define MPI2_IOCFACTS_HDRVERSION_DEV_MASK (0x00FF) #define MPI2_IOCFACTS_HDRVERSION_DEV_SHIFT (0) /* IOCExceptions */ #define MPI2_IOCFACTS_EXCEPT_PCIE_DISABLED (0x0400) #define MPI2_IOCFACTS_EXCEPT_PARTIAL_MEMORY_FAILURE (0x0200) #define MPI2_IOCFACTS_EXCEPT_IR_FOREIGN_CONFIG_MAX (0x0100) 
#define MPI2_IOCFACTS_EXCEPT_BOOTSTAT_MASK (0x00E0) #define MPI2_IOCFACTS_EXCEPT_BOOTSTAT_GOOD (0x0000) #define MPI2_IOCFACTS_EXCEPT_BOOTSTAT_BACKUP (0x0020) #define MPI2_IOCFACTS_EXCEPT_BOOTSTAT_RESTORED (0x0040) #define MPI2_IOCFACTS_EXCEPT_BOOTSTAT_CORRUPT_BACKUP (0x0060) #define MPI2_IOCFACTS_EXCEPT_METADATA_UNSUPPORTED (0x0010) #define MPI2_IOCFACTS_EXCEPT_MANUFACT_CHECKSUM_FAIL (0x0008) #define MPI2_IOCFACTS_EXCEPT_FW_CHECKSUM_FAIL (0x0004) #define MPI2_IOCFACTS_EXCEPT_RAID_CONFIG_INVALID (0x0002) #define MPI2_IOCFACTS_EXCEPT_CONFIG_CHECKSUM_FAIL (0x0001) /* defines for WhoInit field are after the IOCInit Request */ /* ProductID field uses MPI2_FW_HEADER_PID_ */ /* IOCCapabilities */ #define MPI26_IOCFACTS_CAPABILITY_PCIE_SRIOV (0x00100000) #define MPI26_IOCFACTS_CAPABILITY_ATOMIC_REQ (0x00080000) #define MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE (0x00040000) #define MPI25_IOCFACTS_CAPABILITY_FAST_PATH_CAPABLE (0x00020000) #define MPI2_IOCFACTS_CAPABILITY_HOST_BASED_DISCOVERY (0x00010000) #define MPI2_IOCFACTS_CAPABILITY_MSI_X_INDEX (0x00008000) #define MPI2_IOCFACTS_CAPABILITY_RAID_ACCELERATOR (0x00004000) #define MPI2_IOCFACTS_CAPABILITY_EVENT_REPLAY (0x00002000) #define MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID (0x00001000) #define MPI2_IOCFACTS_CAPABILITY_TLR (0x00000800) #define MPI2_IOCFACTS_CAPABILITY_MULTICAST (0x00000100) #define MPI2_IOCFACTS_CAPABILITY_BIDIRECTIONAL_TARGET (0x00000080) #define MPI2_IOCFACTS_CAPABILITY_EEDP (0x00000040) #define MPI2_IOCFACTS_CAPABILITY_EXTENDED_BUFFER (0x00000020) #define MPI2_IOCFACTS_CAPABILITY_SNAPSHOT_BUFFER (0x00000010) #define MPI2_IOCFACTS_CAPABILITY_DIAG_TRACE_BUFFER (0x00000008) #define MPI2_IOCFACTS_CAPABILITY_TASK_SET_FULL_HANDLING (0x00000004) /* ProtocolFlags */ #define MPI2_IOCFACTS_PROTOCOL_NVME_DEVICES (0x0008) /* MPI v2.6 and later */ #define MPI2_IOCFACTS_PROTOCOL_SCSI_INITIATOR (0x0002) #define MPI2_IOCFACTS_PROTOCOL_SCSI_TARGET (0x0001) 
/**************************************************************************** * PortFacts message ****************************************************************************/ /* PortFacts Request message */ typedef struct _MPI2_PORT_FACTS_REQUEST { U16 Reserved1; /* 0x00 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 PortNumber; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ } MPI2_PORT_FACTS_REQUEST, MPI2_POINTER PTR_MPI2_PORT_FACTS_REQUEST, Mpi2PortFactsRequest_t, MPI2_POINTER pMpi2PortFactsRequest_t; /* PortFacts Reply message */ typedef struct _MPI2_PORT_FACTS_REPLY { U16 Reserved1; /* 0x00 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 PortNumber; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 Reserved4; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ U8 Reserved5; /* 0x14 */ U8 PortType; /* 0x15 */ U16 Reserved6; /* 0x16 */ U16 MaxPostedCmdBuffers; /* 0x18 */ U16 Reserved7; /* 0x1A */ } MPI2_PORT_FACTS_REPLY, MPI2_POINTER PTR_MPI2_PORT_FACTS_REPLY, Mpi2PortFactsReply_t, MPI2_POINTER pMpi2PortFactsReply_t; /* PortType values */ #define MPI2_PORTFACTS_PORTTYPE_INACTIVE (0x00) #define MPI2_PORTFACTS_PORTTYPE_FC (0x10) #define MPI2_PORTFACTS_PORTTYPE_ISCSI (0x20) #define MPI2_PORTFACTS_PORTTYPE_SAS_PHYSICAL (0x30) #define MPI2_PORTFACTS_PORTTYPE_SAS_VIRTUAL (0x31) #define MPI2_PORTFACTS_PORTTYPE_TRI_MODE (0x40) /* MPI v2.6 and later */ /**************************************************************************** * PortEnable message ****************************************************************************/ /* PortEnable Request message */ typedef struct _MPI2_PORT_ENABLE_REQUEST { U16 Reserved1; /* 0x00 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U8 Reserved2; /* 0x04 */ U8 PortFlags; /* 0x05 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 
VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ } MPI2_PORT_ENABLE_REQUEST, MPI2_POINTER PTR_MPI2_PORT_ENABLE_REQUEST, Mpi2PortEnableRequest_t, MPI2_POINTER pMpi2PortEnableRequest_t; /* PortEnable Reply message */ typedef struct _MPI2_PORT_ENABLE_REPLY { U16 Reserved1; /* 0x00 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U8 Reserved2; /* 0x04 */ U8 PortFlags; /* 0x05 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U16 Reserved5; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ } MPI2_PORT_ENABLE_REPLY, MPI2_POINTER PTR_MPI2_PORT_ENABLE_REPLY, Mpi2PortEnableReply_t, MPI2_POINTER pMpi2PortEnableReply_t; /**************************************************************************** * EventNotification message ****************************************************************************/ /* EventNotification Request message */ #define MPI2_EVENT_NOTIFY_EVENTMASK_WORDS (4) typedef struct _MPI2_EVENT_NOTIFICATION_REQUEST { U16 Reserved1; /* 0x00 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U32 Reserved5; /* 0x0C */ U32 Reserved6; /* 0x10 */ U32 EventMasks[MPI2_EVENT_NOTIFY_EVENTMASK_WORDS];/* 0x14 */ U16 SASBroadcastPrimitiveMasks; /* 0x24 */ U16 SASNotifyPrimitiveMasks; /* 0x26 */ U32 Reserved8; /* 0x28 */ } MPI2_EVENT_NOTIFICATION_REQUEST, MPI2_POINTER PTR_MPI2_EVENT_NOTIFICATION_REQUEST, Mpi2EventNotificationRequest_t, MPI2_POINTER pMpi2EventNotificationRequest_t; /* EventNotification Reply message */ typedef struct _MPI2_EVENT_NOTIFICATION_REPLY { U16 EventDataLength; /* 0x00 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved1; /* 0x04 */ U8 AckRequired; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved2; /* 0x0A */ U16 Reserved3; /* 0x0C */ U16 IOCStatus; 
/* 0x0E */ U32 IOCLogInfo; /* 0x10 */ U16 Event; /* 0x14 */ U16 Reserved4; /* 0x16 */ U32 EventContext; /* 0x18 */ U32 EventData[1]; /* 0x1C */ } MPI2_EVENT_NOTIFICATION_REPLY, MPI2_POINTER PTR_MPI2_EVENT_NOTIFICATION_REPLY, Mpi2EventNotificationReply_t, MPI2_POINTER pMpi2EventNotificationReply_t; /* AckRequired */ #define MPI2_EVENT_NOTIFICATION_ACK_NOT_REQUIRED (0x00) #define MPI2_EVENT_NOTIFICATION_ACK_REQUIRED (0x01) /* Event */ #define MPI2_EVENT_LOG_DATA (0x0001) #define MPI2_EVENT_STATE_CHANGE (0x0002) #define MPI2_EVENT_HARD_RESET_RECEIVED (0x0005) #define MPI2_EVENT_EVENT_CHANGE (0x000A) #define MPI2_EVENT_TASK_SET_FULL (0x000E) /* obsolete */ #define MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE (0x000F) #define MPI2_EVENT_IR_OPERATION_STATUS (0x0014) #define MPI2_EVENT_SAS_DISCOVERY (0x0016) #define MPI2_EVENT_SAS_BROADCAST_PRIMITIVE (0x0017) #define MPI2_EVENT_SAS_INIT_DEVICE_STATUS_CHANGE (0x0018) #define MPI2_EVENT_SAS_INIT_TABLE_OVERFLOW (0x0019) #define MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST (0x001C) #define MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE (0x001D) #define MPI2_EVENT_ENCL_DEVICE_STATUS_CHANGE (0x001D) /* MPI v2.6 and later */ #define MPI2_EVENT_IR_VOLUME (0x001E) #define MPI2_EVENT_IR_PHYSICAL_DISK (0x001F) #define MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST (0x0020) #define MPI2_EVENT_LOG_ENTRY_ADDED (0x0021) #define MPI2_EVENT_SAS_PHY_COUNTER (0x0022) #define MPI2_EVENT_GPIO_INTERRUPT (0x0023) #define MPI2_EVENT_HOST_BASED_DISCOVERY_PHY (0x0024) #define MPI2_EVENT_SAS_QUIESCE (0x0025) #define MPI2_EVENT_SAS_NOTIFY_PRIMITIVE (0x0026) #define MPI2_EVENT_TEMP_THRESHOLD (0x0027) #define MPI2_EVENT_HOST_MESSAGE (0x0028) #define MPI2_EVENT_POWER_PERFORMANCE_CHANGE (0x0029) #define MPI2_EVENT_PCIE_DEVICE_STATUS_CHANGE (0x0030) /* MPI v2.6 and later */ #define MPI2_EVENT_PCIE_ENUMERATION (0x0031) /* MPI v2.6 and later */ #define MPI2_EVENT_PCIE_TOPOLOGY_CHANGE_LIST (0x0032) /* MPI v2.6 and later */ #define MPI2_EVENT_PCIE_LINK_COUNTER (0x0033) /* MPI v2.6 and 
later */ #define MPI2_EVENT_ACTIVE_CABLE_EXCEPTION (0x0034) /* MPI v2.6 and later */ #define MPI2_EVENT_SAS_DEVICE_DISCOVERY_ERROR (0x0035) /* MPI v2.5 and later */ #define MPI2_EVENT_MIN_PRODUCT_SPECIFIC (0x006E) #define MPI2_EVENT_MAX_PRODUCT_SPECIFIC (0x007F) /* Log Entry Added Event data */ /* the following structure matches MPI2_LOG_0_ENTRY in mpi2_cnfg.h */ #define MPI2_EVENT_DATA_LOG_DATA_LENGTH (0x1C) typedef struct _MPI2_EVENT_DATA_LOG_ENTRY_ADDED { U64 TimeStamp; /* 0x00 */ U32 Reserved1; /* 0x08 */ U16 LogSequence; /* 0x0C */ U16 LogEntryQualifier; /* 0x0E */ U8 VP_ID; /* 0x10 */ U8 VF_ID; /* 0x11 */ U16 Reserved2; /* 0x12 */ U8 LogData[MPI2_EVENT_DATA_LOG_DATA_LENGTH];/* 0x14 */ } MPI2_EVENT_DATA_LOG_ENTRY_ADDED, MPI2_POINTER PTR_MPI2_EVENT_DATA_LOG_ENTRY_ADDED, Mpi2EventDataLogEntryAdded_t, MPI2_POINTER pMpi2EventDataLogEntryAdded_t; /* GPIO Interrupt Event data */ typedef struct _MPI2_EVENT_DATA_GPIO_INTERRUPT { U8 GPIONum; /* 0x00 */ U8 Reserved1; /* 0x01 */ U16 Reserved2; /* 0x02 */ } MPI2_EVENT_DATA_GPIO_INTERRUPT, MPI2_POINTER PTR_MPI2_EVENT_DATA_GPIO_INTERRUPT, Mpi2EventDataGpioInterrupt_t, MPI2_POINTER pMpi2EventDataGpioInterrupt_t; /* Temperature Threshold Event data */ typedef struct _MPI2_EVENT_DATA_TEMPERATURE { U16 Status; /* 0x00 */ U8 SensorNum; /* 0x02 */ U8 Reserved1; /* 0x03 */ U16 CurrentTemperature; /* 0x04 */ U16 Reserved2; /* 0x06 */ U32 Reserved3; /* 0x08 */ U32 Reserved4; /* 0x0C */ } MPI2_EVENT_DATA_TEMPERATURE, MPI2_POINTER PTR_MPI2_EVENT_DATA_TEMPERATURE, Mpi2EventDataTemperature_t, MPI2_POINTER pMpi2EventDataTemperature_t; /* Temperature Threshold Event data Status bits */ #define MPI2_EVENT_TEMPERATURE3_EXCEEDED (0x0008) #define MPI2_EVENT_TEMPERATURE2_EXCEEDED (0x0004) #define MPI2_EVENT_TEMPERATURE1_EXCEEDED (0x0002) #define MPI2_EVENT_TEMPERATURE0_EXCEEDED (0x0001) /* Host Message Event data */ typedef struct _MPI2_EVENT_DATA_HOST_MESSAGE { U8 SourceVF_ID; /* 0x00 */ U8 Reserved1; /* 0x01 */ U16 Reserved2; /* 0x02 */ U32 
Reserved3; /* 0x04 */ U32 HostData[1]; /* 0x08 */ } MPI2_EVENT_DATA_HOST_MESSAGE, MPI2_POINTER PTR_MPI2_EVENT_DATA_HOST_MESSAGE, Mpi2EventDataHostMessage_t, MPI2_POINTER pMpi2EventDataHostMessage_t; /* Power Performance Change Event data */ typedef struct _MPI2_EVENT_DATA_POWER_PERF_CHANGE { U8 CurrentPowerMode; /* 0x00 */ U8 PreviousPowerMode; /* 0x01 */ U16 Reserved1; /* 0x02 */ } MPI2_EVENT_DATA_POWER_PERF_CHANGE, MPI2_POINTER PTR_MPI2_EVENT_DATA_POWER_PERF_CHANGE, Mpi2EventDataPowerPerfChange_t, MPI2_POINTER pMpi2EventDataPowerPerfChange_t; /* defines for CurrentPowerMode and PreviousPowerMode fields */ #define MPI2_EVENT_PM_INIT_MASK (0xC0) #define MPI2_EVENT_PM_INIT_UNAVAILABLE (0x00) #define MPI2_EVENT_PM_INIT_HOST (0x40) #define MPI2_EVENT_PM_INIT_IO_UNIT (0x80) #define MPI2_EVENT_PM_INIT_PCIE_DPA (0xC0) #define MPI2_EVENT_PM_MODE_MASK (0x07) #define MPI2_EVENT_PM_MODE_UNAVAILABLE (0x00) #define MPI2_EVENT_PM_MODE_UNKNOWN (0x01) #define MPI2_EVENT_PM_MODE_FULL_POWER (0x04) #define MPI2_EVENT_PM_MODE_REDUCED_POWER (0x05) #define MPI2_EVENT_PM_MODE_STANDBY (0x06) /* Active Cable Exception Event data */ typedef struct _MPI26_EVENT_DATA_ACTIVE_CABLE_EXCEPT { U32 ActiveCablePowerRequirement; /* 0x00 */ U8 ReasonCode; /* 0x04 */ U8 ReceptacleID; /* 0x05 */ U16 Reserved1; /* 0x06 */ } MPI25_EVENT_DATA_ACTIVE_CABLE_EXCEPT, MPI2_POINTER PTR_MPI25_EVENT_DATA_ACTIVE_CABLE_EXCEPT, Mpi25EventDataActiveCableExcept_t, MPI2_POINTER pMpi25EventDataActiveCableExcept_t, MPI26_EVENT_DATA_ACTIVE_CABLE_EXCEPT, MPI2_POINTER PTR_MPI26_EVENT_DATA_ACTIVE_CABLE_EXCEPT, Mpi26EventDataActiveCableExcept_t, MPI2_POINTER pMpi26EventDataActiveCableExcept_t; /* MPI2.5 defines for the ReasonCode field */ #define MPI25_EVENT_ACTIVE_CABLE_INSUFFICIENT_POWER (0x00) #define MPI25_EVENT_ACTIVE_CABLE_PRESENT (0x01) #define MPI25_EVENT_ACTIVE_CABLE_DEGRADED (0x02) /* MPI2.6 defines for the ReasonCode field */ #define MPI26_EVENT_ACTIVE_CABLE_INSUFFICIENT_POWER (0x00) #define 
MPI26_EVENT_ACTIVE_CABLE_PRESENT     (0x01)
#define MPI26_EVENT_ACTIVE_CABLE_DEGRADED    (0x02)

/* Hard Reset Received Event data */

typedef struct _MPI2_EVENT_DATA_HARD_RESET_RECEIVED
{
    U8                      Reserved1;                      /* 0x00 */
    U8                      Port;                           /* 0x01 */
    U16                     Reserved2;                      /* 0x02 */
} MPI2_EVENT_DATA_HARD_RESET_RECEIVED,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_HARD_RESET_RECEIVED,
  Mpi2EventDataHardResetReceived_t,
  MPI2_POINTER pMpi2EventDataHardResetReceived_t;

/* Task Set Full Event data */
/* this event is obsolete */

typedef struct _MPI2_EVENT_DATA_TASK_SET_FULL
{
    U16                     DevHandle;                      /* 0x00 */
    U16                     CurrentDepth;                   /* 0x02 */
} MPI2_EVENT_DATA_TASK_SET_FULL,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_TASK_SET_FULL,
  Mpi2EventDataTaskSetFull_t,
  MPI2_POINTER pMpi2EventDataTaskSetFull_t;

/* SAS Device Status Change Event data */

typedef struct _MPI2_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE
{
    U16                     TaskTag;                        /* 0x00 */
    U8                      ReasonCode;                     /* 0x02 */
    U8                      PhysicalPort;                   /* 0x03 */
    U8                      ASC;                            /* 0x04 */
    U8                      ASCQ;                           /* 0x05 */
    U16                     DevHandle;                      /* 0x06 */
    U32                     Reserved2;                      /* 0x08 */
    U64                     SASAddress;                     /* 0x0C */
    U8                      LUN[8];                         /* 0x14 */
} MPI2_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE,
  Mpi2EventDataSasDeviceStatusChange_t,
  MPI2_POINTER pMpi2EventDataSasDeviceStatusChange_t;

/* SAS Device Status Change Event data ReasonCode values */
#define MPI2_EVENT_SAS_DEV_STAT_RC_SMART_DATA                           (0x05)
#define MPI2_EVENT_SAS_DEV_STAT_RC_UNSUPPORTED                          (0x07)
#define MPI2_EVENT_SAS_DEV_STAT_RC_INTERNAL_DEVICE_RESET                (0x08)
#define MPI2_EVENT_SAS_DEV_STAT_RC_TASK_ABORT_INTERNAL                  (0x09)
#define MPI2_EVENT_SAS_DEV_STAT_RC_ABORT_TASK_SET_INTERNAL              (0x0A)
#define MPI2_EVENT_SAS_DEV_STAT_RC_CLEAR_TASK_SET_INTERNAL              (0x0B)
#define MPI2_EVENT_SAS_DEV_STAT_RC_QUERY_TASK_INTERNAL                  (0x0C)
#define MPI2_EVENT_SAS_DEV_STAT_RC_ASYNC_NOTIFICATION                   (0x0D)
#define MPI2_EVENT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET               (0x0E)
#define MPI2_EVENT_SAS_DEV_STAT_RC_CMP_TASK_ABORT_INTERNAL              (0x0F)
#define MPI2_EVENT_SAS_DEV_STAT_RC_SATA_INIT_FAILURE                    (0x10)
#define MPI2_EVENT_SAS_DEV_STAT_RC_EXPANDER_REDUCED_FUNCTIONALITY       (0x11)
#define MPI2_EVENT_SAS_DEV_STAT_RC_CMP_EXPANDER_REDUCED_FUNCTIONALITY   (0x12)

/* Integrated RAID Operation Status Event data */

typedef struct _MPI2_EVENT_DATA_IR_OPERATION_STATUS
{
    U16                     VolDevHandle;                   /* 0x00 */
    U16                     Reserved1;                      /* 0x02 */
    U8                      RAIDOperation;                  /* 0x04 */
    U8                      PercentComplete;                /* 0x05 */
    U16                     Reserved2;                      /* 0x06 */
    U32                     ElapsedSeconds;                 /* 0x08 */
} MPI2_EVENT_DATA_IR_OPERATION_STATUS,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_IR_OPERATION_STATUS,
  Mpi2EventDataIrOperationStatus_t,
  MPI2_POINTER pMpi2EventDataIrOperationStatus_t;

/* Integrated RAID Operation Status Event data RAIDOperation values */
#define MPI2_EVENT_IR_RAIDOP_RESYNC                 (0x00)
#define MPI2_EVENT_IR_RAIDOP_ONLINE_CAP_EXPANSION   (0x01)
#define MPI2_EVENT_IR_RAIDOP_CONSISTENCY_CHECK      (0x02)
#define MPI2_EVENT_IR_RAIDOP_BACKGROUND_INIT        (0x03)
#define MPI2_EVENT_IR_RAIDOP_MAKE_DATA_CONSISTENT   (0x04)

/* Integrated RAID Volume Event data */

typedef struct _MPI2_EVENT_DATA_IR_VOLUME
{
    U16                     VolDevHandle;                   /* 0x00 */
    U8                      ReasonCode;                     /* 0x02 */
    U8                      Reserved1;                      /* 0x03 */
    U32                     NewValue;                       /* 0x04 */
    U32                     PreviousValue;                  /* 0x08 */
} MPI2_EVENT_DATA_IR_VOLUME, MPI2_POINTER PTR_MPI2_EVENT_DATA_IR_VOLUME,
  Mpi2EventDataIrVolume_t, MPI2_POINTER pMpi2EventDataIrVolume_t;

/* Integrated RAID Volume Event data ReasonCode values */
#define MPI2_EVENT_IR_VOLUME_RC_SETTINGS_CHANGED        (0x01)
#define MPI2_EVENT_IR_VOLUME_RC_STATUS_FLAGS_CHANGED    (0x02)
#define MPI2_EVENT_IR_VOLUME_RC_STATE_CHANGED           (0x03)

/* Integrated RAID Physical Disk Event data */

typedef struct _MPI2_EVENT_DATA_IR_PHYSICAL_DISK
{
    U16                     Reserved1;                      /* 0x00 */
    U8                      ReasonCode;                     /* 0x02 */
    U8                      PhysDiskNum;                    /* 0x03 */
    U16                     PhysDiskDevHandle;              /* 0x04 */
    U16                     Reserved2;                      /* 0x06 */
    U16                     Slot;                           /* 0x08 */
    U16                     EnclosureHandle;                /* 0x0A */
    U32                     NewValue;                       /* 0x0C */
    U32                     PreviousValue;                  /* 0x10 */
} MPI2_EVENT_DATA_IR_PHYSICAL_DISK,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_IR_PHYSICAL_DISK,
  Mpi2EventDataIrPhysicalDisk_t, MPI2_POINTER
  pMpi2EventDataIrPhysicalDisk_t;

/* Integrated RAID Physical Disk Event data ReasonCode values */
#define MPI2_EVENT_IR_PHYSDISK_RC_SETTINGS_CHANGED      (0x01)
#define MPI2_EVENT_IR_PHYSDISK_RC_STATUS_FLAGS_CHANGED  (0x02)
#define MPI2_EVENT_IR_PHYSDISK_RC_STATE_CHANGED         (0x03)

/* Integrated RAID Configuration Change List Event data */

/*
 * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
 * one and check NumElements at runtime.
 */
#ifndef MPI2_EVENT_IR_CONFIG_ELEMENT_COUNT
#define MPI2_EVENT_IR_CONFIG_ELEMENT_COUNT      (1)
#endif

typedef struct _MPI2_EVENT_IR_CONFIG_ELEMENT
{
    U16                     ElementFlags;                   /* 0x00 */
    U16                     VolDevHandle;                   /* 0x02 */
    U8                      ReasonCode;                     /* 0x04 */
    U8                      PhysDiskNum;                    /* 0x05 */
    U16                     PhysDiskDevHandle;              /* 0x06 */
} MPI2_EVENT_IR_CONFIG_ELEMENT, MPI2_POINTER PTR_MPI2_EVENT_IR_CONFIG_ELEMENT,
  Mpi2EventIrConfigElement_t, MPI2_POINTER pMpi2EventIrConfigElement_t;

/* IR Configuration Change List Event data ElementFlags values */
#define MPI2_EVENT_IR_CHANGE_EFLAGS_ELEMENT_TYPE_MASK   (0x000F)
#define MPI2_EVENT_IR_CHANGE_EFLAGS_VOLUME_ELEMENT      (0x0000)
#define MPI2_EVENT_IR_CHANGE_EFLAGS_VOLPHYSDISK_ELEMENT (0x0001)
#define MPI2_EVENT_IR_CHANGE_EFLAGS_HOTSPARE_ELEMENT    (0x0002)

/* IR Configuration Change List Event data ReasonCode values */
#define MPI2_EVENT_IR_CHANGE_RC_ADDED                   (0x01)
#define MPI2_EVENT_IR_CHANGE_RC_REMOVED                 (0x02)
#define MPI2_EVENT_IR_CHANGE_RC_NO_CHANGE               (0x03)
#define MPI2_EVENT_IR_CHANGE_RC_HIDE                    (0x04)
#define MPI2_EVENT_IR_CHANGE_RC_UNHIDE                  (0x05)
#define MPI2_EVENT_IR_CHANGE_RC_VOLUME_CREATED          (0x06)
#define MPI2_EVENT_IR_CHANGE_RC_VOLUME_DELETED          (0x07)
#define MPI2_EVENT_IR_CHANGE_RC_PD_CREATED              (0x08)
#define MPI2_EVENT_IR_CHANGE_RC_PD_DELETED              (0x09)

typedef struct _MPI2_EVENT_DATA_IR_CONFIG_CHANGE_LIST
{
    U8                              NumElements;            /* 0x00 */
    U8                              Reserved1;              /* 0x01 */
    U8                              Reserved2;              /* 0x02 */
    U8                              ConfigNum;              /* 0x03 */
    U32                             Flags;                  /* 0x04 */
    MPI2_EVENT_IR_CONFIG_ELEMENT    ConfigElement[MPI2_EVENT_IR_CONFIG_ELEMENT_COUNT];  /* 0x08 */
}
MPI2_EVENT_DATA_IR_CONFIG_CHANGE_LIST,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_IR_CONFIG_CHANGE_LIST,
  Mpi2EventDataIrConfigChangeList_t,
  MPI2_POINTER pMpi2EventDataIrConfigChangeList_t;

/* IR Configuration Change List Event data Flags values */
#define MPI2_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG   (0x00000001)

/* SAS Discovery Event data */

typedef struct _MPI2_EVENT_DATA_SAS_DISCOVERY
{
    U8                      Flags;                          /* 0x00 */
    U8                      ReasonCode;                     /* 0x01 */
    U8                      PhysicalPort;                   /* 0x02 */
    U8                      Reserved1;                      /* 0x03 */
    U32                     DiscoveryStatus;                /* 0x04 */
} MPI2_EVENT_DATA_SAS_DISCOVERY,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_DISCOVERY,
  Mpi2EventDataSasDiscovery_t, MPI2_POINTER pMpi2EventDataSasDiscovery_t;

/* SAS Discovery Event data Flags values */
#define MPI2_EVENT_SAS_DISC_DEVICE_CHANGE               (0x02)
#define MPI2_EVENT_SAS_DISC_IN_PROGRESS                 (0x01)

/* SAS Discovery Event data ReasonCode values */
#define MPI2_EVENT_SAS_DISC_RC_STARTED                  (0x01)
#define MPI2_EVENT_SAS_DISC_RC_COMPLETED                (0x02)

/* SAS Discovery Event data DiscoveryStatus values */
#define MPI2_EVENT_SAS_DISC_DS_MAX_ENCLOSURES_EXCEED            (0x80000000)
#define MPI2_EVENT_SAS_DISC_DS_MAX_EXPANDERS_EXCEED             (0x40000000)
#define MPI2_EVENT_SAS_DISC_DS_MAX_DEVICES_EXCEED               (0x20000000)
#define MPI2_EVENT_SAS_DISC_DS_MAX_TOPO_PHYS_EXCEED             (0x10000000)
#define MPI2_EVENT_SAS_DISC_DS_DOWNSTREAM_INITIATOR             (0x08000000)
#define MPI2_EVENT_SAS_DISC_DS_MULTI_SUBTRACTIVE_SUBTRACTIVE    (0x00008000)
#define MPI2_EVENT_SAS_DISC_DS_EXP_MULTI_SUBTRACTIVE            (0x00004000)
#define MPI2_EVENT_SAS_DISC_DS_MULTI_PORT_DOMAIN                (0x00002000)
#define MPI2_EVENT_SAS_DISC_DS_TABLE_TO_SUBTRACTIVE_LINK        (0x00001000)
#define MPI2_EVENT_SAS_DISC_DS_UNSUPPORTED_DEVICE               (0x00000800)
#define MPI2_EVENT_SAS_DISC_DS_TABLE_LINK                       (0x00000400)
#define MPI2_EVENT_SAS_DISC_DS_SUBTRACTIVE_LINK                 (0x00000200)
#define MPI2_EVENT_SAS_DISC_DS_SMP_CRC_ERROR                    (0x00000100)
#define MPI2_EVENT_SAS_DISC_DS_SMP_FUNCTION_FAILED              (0x00000080)
#define MPI2_EVENT_SAS_DISC_DS_INDEX_NOT_EXIST                  (0x00000040)
#define MPI2_EVENT_SAS_DISC_DS_OUT_ROUTE_ENTRIES                (0x00000020)
#define MPI2_EVENT_SAS_DISC_DS_SMP_TIMEOUT                      (0x00000010)
#define MPI2_EVENT_SAS_DISC_DS_MULTIPLE_PORTS                   (0x00000004)
#define MPI2_EVENT_SAS_DISC_DS_UNADDRESSABLE_DEVICE             (0x00000002)
#define MPI2_EVENT_SAS_DISC_DS_LOOP_DETECTED                    (0x00000001)

/* SAS Broadcast Primitive Event data */

typedef struct _MPI2_EVENT_DATA_SAS_BROADCAST_PRIMITIVE
{
    U8                      PhyNum;                         /* 0x00 */
    U8                      Port;                           /* 0x01 */
    U8                      PortWidth;                      /* 0x02 */
    U8                      Primitive;                      /* 0x03 */
} MPI2_EVENT_DATA_SAS_BROADCAST_PRIMITIVE,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_BROADCAST_PRIMITIVE,
  Mpi2EventDataSasBroadcastPrimitive_t,
  MPI2_POINTER pMpi2EventDataSasBroadcastPrimitive_t;

/* defines for the Primitive field */
#define MPI2_EVENT_PRIMITIVE_CHANGE                     (0x01)
#define MPI2_EVENT_PRIMITIVE_SES                        (0x02)
#define MPI2_EVENT_PRIMITIVE_EXPANDER                   (0x03)
#define MPI2_EVENT_PRIMITIVE_ASYNCHRONOUS_EVENT         (0x04)
#define MPI2_EVENT_PRIMITIVE_RESERVED3                  (0x05)
#define MPI2_EVENT_PRIMITIVE_RESERVED4                  (0x06)
#define MPI2_EVENT_PRIMITIVE_CHANGE0_RESERVED           (0x07)
#define MPI2_EVENT_PRIMITIVE_CHANGE1_RESERVED           (0x08)

/* SAS Notify Primitive Event data */

typedef struct _MPI2_EVENT_DATA_SAS_NOTIFY_PRIMITIVE
{
    U8                      PhyNum;                         /* 0x00 */
    U8                      Port;                           /* 0x01 */
    U8                      Reserved1;                      /* 0x02 */
    U8                      Primitive;                      /* 0x03 */
} MPI2_EVENT_DATA_SAS_NOTIFY_PRIMITIVE,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_NOTIFY_PRIMITIVE,
  Mpi2EventDataSasNotifyPrimitive_t,
  MPI2_POINTER pMpi2EventDataSasNotifyPrimitive_t;

/* defines for the Primitive field */
#define MPI2_EVENT_NOTIFY_ENABLE_SPINUP                 (0x01)
#define MPI2_EVENT_NOTIFY_POWER_LOSS_EXPECTED           (0x02)
#define MPI2_EVENT_NOTIFY_RESERVED1                     (0x03)
#define MPI2_EVENT_NOTIFY_RESERVED2                     (0x04)

/* SAS Initiator Device Status Change Event data */

typedef struct _MPI2_EVENT_DATA_SAS_INIT_DEV_STATUS_CHANGE
{
    U8                      ReasonCode;                     /* 0x00 */
    U8                      PhysicalPort;                   /* 0x01 */
    U16                     DevHandle;                      /* 0x02 */
    U64                     SASAddress;                     /* 0x04 */
} MPI2_EVENT_DATA_SAS_INIT_DEV_STATUS_CHANGE, MPI2_POINTER
  PTR_MPI2_EVENT_DATA_SAS_INIT_DEV_STATUS_CHANGE,
  Mpi2EventDataSasInitDevStatusChange_t,
  MPI2_POINTER pMpi2EventDataSasInitDevStatusChange_t;

/* SAS Initiator Device Status Change event ReasonCode values */
#define MPI2_EVENT_SAS_INIT_RC_ADDED                (0x01)
#define MPI2_EVENT_SAS_INIT_RC_NOT_RESPONDING       (0x02)

/* SAS Initiator Device Table Overflow Event data */

typedef struct _MPI2_EVENT_DATA_SAS_INIT_TABLE_OVERFLOW
{
    U16                     MaxInit;                        /* 0x00 */
    U16                     CurrentInit;                    /* 0x02 */
    U64                     SASAddress;                     /* 0x04 */
} MPI2_EVENT_DATA_SAS_INIT_TABLE_OVERFLOW,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_INIT_TABLE_OVERFLOW,
  Mpi2EventDataSasInitTableOverflow_t,
  MPI2_POINTER pMpi2EventDataSasInitTableOverflow_t;

/* SAS Topology Change List Event data */

/*
 * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
 * one and check NumEntries at runtime.
 */
#ifndef MPI2_EVENT_SAS_TOPO_PHY_COUNT
#define MPI2_EVENT_SAS_TOPO_PHY_COUNT           (1)
#endif

typedef struct _MPI2_EVENT_SAS_TOPO_PHY_ENTRY
{
    U16                     AttachedDevHandle;              /* 0x00 */
    U8                      LinkRate;                       /* 0x02 */
    U8                      PhyStatus;                      /* 0x03 */
} MPI2_EVENT_SAS_TOPO_PHY_ENTRY, MPI2_POINTER PTR_MPI2_EVENT_SAS_TOPO_PHY_ENTRY,
  Mpi2EventSasTopoPhyEntry_t, MPI2_POINTER pMpi2EventSasTopoPhyEntry_t;

typedef struct _MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST
{
    U16                             EnclosureHandle;        /* 0x00 */
    U16                             ExpanderDevHandle;      /* 0x02 */
    U8                              NumPhys;                /* 0x04 */
    U8                              Reserved1;              /* 0x05 */
    U16                             Reserved2;              /* 0x06 */
    U8                              NumEntries;             /* 0x08 */
    U8                              StartPhyNum;            /* 0x09 */
    U8                              ExpStatus;              /* 0x0A */
    U8                              PhysicalPort;           /* 0x0B */
    MPI2_EVENT_SAS_TOPO_PHY_ENTRY   PHY[MPI2_EVENT_SAS_TOPO_PHY_COUNT]; /* 0x0C*/
} MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST,
  Mpi2EventDataSasTopologyChangeList_t,
  MPI2_POINTER pMpi2EventDataSasTopologyChangeList_t;

/* values for the ExpStatus field */
#define MPI2_EVENT_SAS_TOPO_ES_NO_EXPANDER              (0x00)
#define MPI2_EVENT_SAS_TOPO_ES_ADDED                    (0x01)
#define MPI2_EVENT_SAS_TOPO_ES_NOT_RESPONDING           (0x02)
#define MPI2_EVENT_SAS_TOPO_ES_RESPONDING               (0x03)
#define MPI2_EVENT_SAS_TOPO_ES_DELAY_NOT_RESPONDING     (0x04)

/* defines for the LinkRate field */
#define MPI2_EVENT_SAS_TOPO_LR_CURRENT_MASK             (0xF0)
#define MPI2_EVENT_SAS_TOPO_LR_CURRENT_SHIFT            (4)
#define MPI2_EVENT_SAS_TOPO_LR_PREV_MASK                (0x0F)
#define MPI2_EVENT_SAS_TOPO_LR_PREV_SHIFT               (0)

#define MPI2_EVENT_SAS_TOPO_LR_UNKNOWN_LINK_RATE        (0x00)
#define MPI2_EVENT_SAS_TOPO_LR_PHY_DISABLED             (0x01)
#define MPI2_EVENT_SAS_TOPO_LR_NEGOTIATION_FAILED       (0x02)
#define MPI2_EVENT_SAS_TOPO_LR_SATA_OOB_COMPLETE        (0x03)
#define MPI2_EVENT_SAS_TOPO_LR_PORT_SELECTOR            (0x04)
#define MPI2_EVENT_SAS_TOPO_LR_SMP_RESET_IN_PROGRESS    (0x05)
#define MPI2_EVENT_SAS_TOPO_LR_UNSUPPORTED_PHY          (0x06)
#define MPI2_EVENT_SAS_TOPO_LR_RATE_1_5                 (0x08)
#define MPI2_EVENT_SAS_TOPO_LR_RATE_3_0                 (0x09)
#define MPI2_EVENT_SAS_TOPO_LR_RATE_6_0                 (0x0A)
#define MPI25_EVENT_SAS_TOPO_LR_RATE_12_0               (0x0B)
#define MPI26_EVENT_SAS_TOPO_LR_RATE_22_5               (0x0C)

/* values for the PhyStatus field */
#define MPI2_EVENT_SAS_TOPO_PHYSTATUS_VACANT            (0x80)
#define MPI2_EVENT_SAS_TOPO_PS_MULTIPLEX_CHANGE         (0x10)
/* values for the PhyStatus ReasonCode sub-field */
#define MPI2_EVENT_SAS_TOPO_RC_MASK                     (0x0F)
#define MPI2_EVENT_SAS_TOPO_RC_TARG_ADDED               (0x01)
#define MPI2_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING      (0x02)
#define MPI2_EVENT_SAS_TOPO_RC_PHY_CHANGED              (0x03)
#define MPI2_EVENT_SAS_TOPO_RC_NO_CHANGE                (0x04)
#define MPI2_EVENT_SAS_TOPO_RC_DELAY_NOT_RESPONDING     (0x05)

/* SAS Enclosure Device Status Change Event data */

typedef struct _MPI2_EVENT_DATA_SAS_ENCL_DEV_STATUS_CHANGE
{
    U16                     EnclosureHandle;                /* 0x00 */
    U8                      ReasonCode;                     /* 0x02 */
    U8                      PhysicalPort;                   /* 0x03 */
    U64                     EnclosureLogicalID;             /* 0x04 */
    U16                     NumSlots;                       /* 0x0C */
    U16                     StartSlot;                      /* 0x0E */
    U32                     PhyBits;                        /* 0x10 */
} MPI2_EVENT_DATA_SAS_ENCL_DEV_STATUS_CHANGE,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_ENCL_DEV_STATUS_CHANGE,
  Mpi2EventDataSasEnclDevStatusChange_t,
  MPI2_POINTER pMpi2EventDataSasEnclDevStatusChange_t,
  MPI26_EVENT_DATA_ENCL_DEV_STATUS_CHANGE,
  MPI2_POINTER PTR_MPI26_EVENT_DATA_ENCL_DEV_STATUS_CHANGE,
  Mpi26EventDataEnclDevStatusChange_t,
  MPI2_POINTER pMpi26EventDataEnclDevStatusChange_t;

/* SAS Enclosure Device Status Change event ReasonCode values */
#define MPI2_EVENT_SAS_ENCL_RC_ADDED                (0x01)
#define MPI2_EVENT_SAS_ENCL_RC_NOT_RESPONDING       (0x02)

/* Enclosure Device Status Change event ReasonCode values */
#define MPI26_EVENT_ENCL_RC_ADDED                   (0x01)
#define MPI26_EVENT_ENCL_RC_NOT_RESPONDING          (0x02)

/* SAS PHY Counter Event data */

typedef struct _MPI2_EVENT_DATA_SAS_PHY_COUNTER
{
    U64                     TimeStamp;                      /* 0x00 */
    U32                     Reserved1;                      /* 0x08 */
    U8                      PhyEventCode;                   /* 0x0C */
    U8                      PhyNum;                         /* 0x0D */
    U16                     Reserved2;                      /* 0x0E */
    U32                     PhyEventInfo;                   /* 0x10 */
    U8                      CounterType;                    /* 0x14 */
    U8                      ThresholdWindow;                /* 0x15 */
    U8                      TimeUnits;                      /* 0x16 */
    U8                      Reserved3;                      /* 0x17 */
    U32                     EventThreshold;                 /* 0x18 */
    U16                     ThresholdFlags;                 /* 0x1C */
    U16                     Reserved4;                      /* 0x1E */
} MPI2_EVENT_DATA_SAS_PHY_COUNTER,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_PHY_COUNTER,
  Mpi2EventDataSasPhyCounter_t,
  MPI2_POINTER pMpi2EventDataSasPhyCounter_t;

/* use MPI2_SASPHY3_EVENT_CODE_ values from mpi2_cnfg.h for the PhyEventCode field */
/* use MPI2_SASPHY3_COUNTER_TYPE_ values from mpi2_cnfg.h for the CounterType field */
/* use MPI2_SASPHY3_TIME_UNITS_ values from mpi2_cnfg.h for the TimeUnits field */
/* use MPI2_SASPHY3_TFLAGS_ values from mpi2_cnfg.h for the ThresholdFlags field */

/* SAS Quiesce Event data */

typedef struct _MPI2_EVENT_DATA_SAS_QUIESCE
{
    U8                      ReasonCode;                     /* 0x00 */
    U8                      Reserved1;                      /* 0x01 */
    U16                     Reserved2;                      /* 0x02 */
    U32                     Reserved3;                      /* 0x04 */
} MPI2_EVENT_DATA_SAS_QUIESCE,
  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_QUIESCE,
  Mpi2EventDataSasQuiesce_t, MPI2_POINTER pMpi2EventDataSasQuiesce_t;

/* SAS Quiesce Event data ReasonCode values */
#define MPI2_EVENT_SAS_QUIESCE_RC_STARTED           (0x01)
#define MPI2_EVENT_SAS_QUIESCE_RC_COMPLETED         (0x02)

typedef struct _MPI25_EVENT_DATA_SAS_DEVICE_DISCOVERY_ERROR
{
    U16                     DevHandle;                      /* 0x00 */
    U8
                            ReasonCode;                     /* 0x02 */
    U8                      PhysicalPort;                   /* 0x03 */
    U32                     Reserved1[2];                   /* 0x04 */
    U64                     SASAddress;                     /* 0x0C */
    U32                     Reserved2[2];                   /* 0x14 */
} MPI25_EVENT_DATA_SAS_DEVICE_DISCOVERY_ERROR,
  MPI2_POINTER PTR_MPI25_EVENT_DATA_SAS_DEVICE_DISCOVERY_ERROR,
  Mpi25EventDataSasDeviceDiscoveryError_t,
  MPI2_POINTER pMpi25EventDataSasDeviceDiscoveryError_t;

/* SAS Device Discovery Error Event data ReasonCode values */
#define MPI25_EVENT_SAS_DISC_ERR_SMP_FAILED         (0x01)
#define MPI25_EVENT_SAS_DISC_ERR_SMP_TIMEOUT        (0x02)

/* Host Based Discovery Phy Event data */

typedef struct _MPI2_EVENT_HBD_PHY_SAS
{
    U8                      Flags;                          /* 0x00 */
    U8                      NegotiatedLinkRate;             /* 0x01 */
    U8                      PhyNum;                         /* 0x02 */
    U8                      PhysicalPort;                   /* 0x03 */
    U32                     Reserved1;                      /* 0x04 */
    U8                      InitialFrame[28];               /* 0x08 */
} MPI2_EVENT_HBD_PHY_SAS, MPI2_POINTER PTR_MPI2_EVENT_HBD_PHY_SAS,
  Mpi2EventHbdPhySas_t, MPI2_POINTER pMpi2EventHbdPhySas_t;

/* values for the Flags field */
#define MPI2_EVENT_HBD_SAS_FLAGS_FRAME_VALID        (0x02)
#define MPI2_EVENT_HBD_SAS_FLAGS_SATA_FRAME         (0x01)

/* use MPI2_SAS_NEG_LINK_RATE_ defines from mpi2_cnfg.h for the NegotiatedLinkRate field */

typedef union _MPI2_EVENT_HBD_DESCRIPTOR
{
    MPI2_EVENT_HBD_PHY_SAS      Sas;
} MPI2_EVENT_HBD_DESCRIPTOR, MPI2_POINTER PTR_MPI2_EVENT_HBD_DESCRIPTOR,
  Mpi2EventHbdDescriptor_t, MPI2_POINTER pMpi2EventHbdDescriptor_t;

typedef struct _MPI2_EVENT_DATA_HBD_PHY
{
    U8                          DescriptorType;             /* 0x00 */
    U8                          Reserved1;                  /* 0x01 */
    U16                         Reserved2;                  /* 0x02 */
    U32                         Reserved3;                  /* 0x04 */
    MPI2_EVENT_HBD_DESCRIPTOR   Descriptor;                 /* 0x08 */
} MPI2_EVENT_DATA_HBD_PHY, MPI2_POINTER PTR_MPI2_EVENT_DATA_HBD_PHY,
  Mpi2EventDataHbdPhy_t, MPI2_POINTER pMpi2EventDataMpi2EventDataHbdPhy_t;

/* values for the DescriptorType field */
#define MPI2_EVENT_HBD_DT_SAS               (0x01)

/* PCIe Device Status Change Event data (MPI v2.6 and later) */

typedef struct _MPI26_EVENT_DATA_PCIE_DEVICE_STATUS_CHANGE
{
    U16                     TaskTag;                        /* 0x00 */
    U8                      ReasonCode;                     /* 0x02 */
    U8                      PhysicalPort;                   /* 0x03 */
    U8                      ASC;                            /* 0x04 */
    U8                      ASCQ;                           /* 0x05 */
    U16                     DevHandle;                      /* 0x06
                                                               */
    U32                     Reserved2;                      /* 0x08 */
    U64                     WWID;                           /* 0x0C */
    U8                      LUN[8];                         /* 0x14 */
} MPI26_EVENT_DATA_PCIE_DEVICE_STATUS_CHANGE,
  MPI2_POINTER PTR_MPI26_EVENT_DATA_PCIE_DEVICE_STATUS_CHANGE,
  Mpi26EventDataPCIeDeviceStatusChange_t,
  MPI2_POINTER pMpi26EventDataPCIeDeviceStatusChange_t;

/* PCIe Device Status Change Event data ReasonCode values */
#define MPI26_EVENT_PCIDEV_STAT_RC_SMART_DATA               (0x05)
#define MPI26_EVENT_PCIDEV_STAT_RC_UNSUPPORTED              (0x07)
#define MPI26_EVENT_PCIDEV_STAT_RC_INTERNAL_DEVICE_RESET    (0x08)
#define MPI26_EVENT_PCIDEV_STAT_RC_TASK_ABORT_INTERNAL      (0x09)
#define MPI26_EVENT_PCIDEV_STAT_RC_ABORT_TASK_SET_INTERNAL  (0x0A)
#define MPI26_EVENT_PCIDEV_STAT_RC_CLEAR_TASK_SET_INTERNAL  (0x0B)
#define MPI26_EVENT_PCIDEV_STAT_RC_QUERY_TASK_INTERNAL      (0x0C)
#define MPI26_EVENT_PCIDEV_STAT_RC_ASYNC_NOTIFICATION       (0x0D)
#define MPI26_EVENT_PCIDEV_STAT_RC_CMP_INTERNAL_DEV_RESET   (0x0E)
#define MPI26_EVENT_PCIDEV_STAT_RC_CMP_TASK_ABORT_INTERNAL  (0x0F)
#define MPI26_EVENT_PCIDEV_STAT_RC_DEV_INIT_FAILURE         (0x10)
+#define MPI26_EVENT_PCIDEV_STAT_RC_PCIE_HOT_RESET_FAILED    (0x11)

/* PCIe Enumeration Event data (MPI v2.6 and later) */

typedef struct _MPI26_EVENT_DATA_PCIE_ENUMERATION
{
    U8                      Flags;                          /* 0x00 */
    U8                      ReasonCode;                     /* 0x01 */
    U8                      PhysicalPort;                   /* 0x02 */
    U8                      Reserved1;                      /* 0x03 */
    U32                     EnumerationStatus;              /* 0x04 */
} MPI26_EVENT_DATA_PCIE_ENUMERATION,
  MPI2_POINTER PTR_MPI26_EVENT_DATA_PCIE_ENUMERATION,
  Mpi26EventDataPCIeEnumeration_t,
  MPI2_POINTER pMpi26EventDataPCIeEnumeration_t;

/* PCIe Enumeration Event data Flags values */
#define MPI26_EVENT_PCIE_ENUM_DEVICE_CHANGE             (0x02)
#define MPI26_EVENT_PCIE_ENUM_IN_PROGRESS               (0x01)

/* PCIe Enumeration Event data ReasonCode values */
#define MPI26_EVENT_PCIE_ENUM_RC_STARTED                (0x01)
#define MPI26_EVENT_PCIE_ENUM_RC_COMPLETED              (0x02)

/* PCIe Enumeration Event data EnumerationStatus values */
#define MPI26_EVENT_PCIE_ENUM_ES_MAX_SWITCHES_EXCEED    (0x40000000)
#define MPI26_EVENT_PCIE_ENUM_ES_MAX_DEVICES_EXCEED     (0x20000000)
#define MPI26_EVENT_PCIE_ENUM_ES_RESOURCES_EXHAUSTED    (0x10000000)

/* PCIe Topology Change List Event data (MPI v2.6 and later) */

/*
 * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
 * one and check NumEntries at runtime.
 */
#ifndef MPI26_EVENT_PCIE_TOPO_PORT_COUNT
#define MPI26_EVENT_PCIE_TOPO_PORT_COUNT        (1)
#endif

typedef struct _MPI26_EVENT_PCIE_TOPO_PORT_ENTRY
{
    U16                     AttachedDevHandle;              /* 0x00 */
    U8                      PortStatus;                     /* 0x02 */
    U8                      Reserved1;                      /* 0x03 */
    U8                      CurrentPortInfo;                /* 0x04 */
    U8                      Reserved2;                      /* 0x05 */
    U8                      PreviousPortInfo;               /* 0x06 */
    U8                      Reserved3;                      /* 0x07 */
} MPI26_EVENT_PCIE_TOPO_PORT_ENTRY,
  MPI2_POINTER PTR_MPI26_EVENT_PCIE_TOPO_PORT_ENTRY,
  Mpi26EventPCIeTopoPortEntry_t,
  MPI2_POINTER pMpi26EventPCIeTopoPortEntry_t;

/* PCIe Topology Change List Event data PortStatus values */
#define MPI26_EVENT_PCIE_TOPO_PS_DEV_ADDED              (0x01)
#define MPI26_EVENT_PCIE_TOPO_PS_NOT_RESPONDING         (0x02)
#define MPI26_EVENT_PCIE_TOPO_PS_PORT_CHANGED           (0x03)
#define MPI26_EVENT_PCIE_TOPO_PS_NO_CHANGE              (0x04)
#define MPI26_EVENT_PCIE_TOPO_PS_DELAY_NOT_RESPONDING   (0x05)

/* PCIe Topology Change List Event data defines for CurrentPortInfo and PreviousPortInfo */
#define MPI26_EVENT_PCIE_TOPO_PI_LANE_MASK              (0xF0)
#define MPI26_EVENT_PCIE_TOPO_PI_LANES_UNKNOWN          (0x00)
#define MPI26_EVENT_PCIE_TOPO_PI_1_LANE                 (0x10)
#define MPI26_EVENT_PCIE_TOPO_PI_2_LANES                (0x20)
#define MPI26_EVENT_PCIE_TOPO_PI_4_LANES                (0x30)
#define MPI26_EVENT_PCIE_TOPO_PI_8_LANES                (0x40)

#define MPI26_EVENT_PCIE_TOPO_PI_RATE_MASK              (0x0F)
#define MPI26_EVENT_PCIE_TOPO_PI_RATE_UNKNOWN           (0x00)
#define MPI26_EVENT_PCIE_TOPO_PI_RATE_DISABLED          (0x01)
#define MPI26_EVENT_PCIE_TOPO_PI_RATE_2_5               (0x02)
#define MPI26_EVENT_PCIE_TOPO_PI_RATE_5_0               (0x03)
#define MPI26_EVENT_PCIE_TOPO_PI_RATE_8_0               (0x04)
#define MPI26_EVENT_PCIE_TOPO_PI_RATE_16_0              (0x05)

typedef struct _MPI26_EVENT_DATA_PCIE_TOPOLOGY_CHANGE_LIST
{
    U16                                 EnclosureHandle;    /* 0x00 */
    U16                                 SwitchDevHandle;    /* 0x02 */
    U8                                  NumPorts;           /* 0x04 */
    U8                                  Reserved1;          /* 0x05 */
    U16
                                        Reserved2;          /* 0x06 */
    U8                                  NumEntries;         /* 0x08 */
    U8                                  StartPortNum;       /* 0x09 */
    U8                                  SwitchStatus;       /* 0x0A */
    U8                                  PhysicalPort;       /* 0x0B */
    MPI26_EVENT_PCIE_TOPO_PORT_ENTRY    PortEntry[MPI26_EVENT_PCIE_TOPO_PORT_COUNT]; /* 0x0C */
} MPI26_EVENT_DATA_PCIE_TOPOLOGY_CHANGE_LIST,
  MPI2_POINTER PTR_MPI26_EVENT_DATA_PCIE_TOPOLOGY_CHANGE_LIST,
  Mpi26EventDataPCIeTopologyChangeList_t,
  MPI2_POINTER pMpi26EventDataPCIeTopologyChangeList_t;

/* PCIe Topology Change List Event data SwitchStatus values */
#define MPI26_EVENT_PCIE_TOPO_SS_NO_PCIE_SWITCH         (0x00)
#define MPI26_EVENT_PCIE_TOPO_SS_ADDED                  (0x01)
#define MPI26_EVENT_PCIE_TOPO_SS_NOT_RESPONDING         (0x02)
#define MPI26_EVENT_PCIE_TOPO_SS_RESPONDING             (0x03)
#define MPI26_EVENT_PCIE_TOPO_SS_DELAY_NOT_RESPONDING   (0x04)

/* PCIe Link Counter Event data (MPI v2.6 and later) */

typedef struct _MPI26_EVENT_DATA_PCIE_LINK_COUNTER
{
    U64                     TimeStamp;                      /* 0x00 */
    U32                     Reserved1;                      /* 0x08 */
    U8                      LinkEventCode;                  /* 0x0C */
    U8                      LinkNum;                        /* 0x0D */
    U16                     Reserved2;                      /* 0x0E */
    U32                     LinkEventInfo;                  /* 0x10 */
    U8                      CounterType;                    /* 0x14 */
    U8                      ThresholdWindow;                /* 0x15 */
    U8                      TimeUnits;                      /* 0x16 */
    U8                      Reserved3;                      /* 0x17 */
    U32                     EventThreshold;                 /* 0x18 */
    U16                     ThresholdFlags;                 /* 0x1C */
    U16                     Reserved4;                      /* 0x1E */
} MPI26_EVENT_DATA_PCIE_LINK_COUNTER,
  MPI2_POINTER PTR_MPI26_EVENT_DATA_PCIE_LINK_COUNTER,
  Mpi26EventDataPcieLinkCounter_t,
  MPI2_POINTER pMpi26EventDataPcieLinkCounter_t;

/* use MPI26_PCIELINK3_EVTCODE_ values from mpi2_cnfg.h for the LinkEventCode field */
/* use MPI26_PCIELINK3_COUNTER_TYPE_ values from mpi2_cnfg.h for the CounterType field */
/* use MPI26_PCIELINK3_TIME_UNITS_ values from mpi2_cnfg.h for the TimeUnits field */
/* use MPI26_PCIELINK3_TFLAGS_ values from mpi2_cnfg.h for the ThresholdFlags field */

/****************************************************************************
*  EventAck message
****************************************************************************/

/* EventAck Request message */

typedef struct _MPI2_EVENT_ACK_REQUEST
{
    U16
                            Reserved1;                      /* 0x00 */
    U8                      ChainOffset;                    /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved2;                      /* 0x04 */
    U8                      Reserved3;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved4;                      /* 0x0A */
    U16                     Event;                          /* 0x0C */
    U16                     Reserved5;                      /* 0x0E */
    U32                     EventContext;                   /* 0x10 */
} MPI2_EVENT_ACK_REQUEST, MPI2_POINTER PTR_MPI2_EVENT_ACK_REQUEST,
  Mpi2EventAckRequest_t, MPI2_POINTER pMpi2EventAckRequest_t;

/* EventAck Reply message */

typedef struct _MPI2_EVENT_ACK_REPLY
{
    U16                     Reserved1;                      /* 0x00 */
    U8                      MsgLength;                      /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved2;                      /* 0x04 */
    U8                      Reserved3;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved4;                      /* 0x0A */
    U16                     Reserved5;                      /* 0x0C */
    U16                     IOCStatus;                      /* 0x0E */
    U32                     IOCLogInfo;                     /* 0x10 */
} MPI2_EVENT_ACK_REPLY, MPI2_POINTER PTR_MPI2_EVENT_ACK_REPLY,
  Mpi2EventAckReply_t, MPI2_POINTER pMpi2EventAckReply_t;

/****************************************************************************
*  SendHostMessage message
****************************************************************************/

/* SendHostMessage Request message */

typedef struct _MPI2_SEND_HOST_MESSAGE_REQUEST
{
    U16                     HostDataLength;                 /* 0x00 */
    U8                      ChainOffset;                    /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved1;                      /* 0x04 */
    U8                      Reserved2;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved3;                      /* 0x0A */
    U8                      Reserved4;                      /* 0x0C */
    U8                      DestVF_ID;                      /* 0x0D */
    U16                     Reserved5;                      /* 0x0E */
    U32                     Reserved6;                      /* 0x10 */
    U32                     Reserved7;                      /* 0x14 */
    U32                     Reserved8;                      /* 0x18 */
    U32                     Reserved9;                      /* 0x1C */
    U32                     Reserved10;                     /* 0x20 */
    U32                     HostData[1];                    /* 0x24 */
} MPI2_SEND_HOST_MESSAGE_REQUEST,
  MPI2_POINTER PTR_MPI2_SEND_HOST_MESSAGE_REQUEST,
  Mpi2SendHostMessageRequest_t, MPI2_POINTER pMpi2SendHostMessageRequest_t;

/* SendHostMessage Reply message */

typedef struct _MPI2_SEND_HOST_MESSAGE_REPLY
{
    U16                     HostDataLength;                 /* 0x00 */
    U8                      MsgLength;                      /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved1;                      /* 0x04 */
    U8                      Reserved2;                      /*
                                                               0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved3;                      /* 0x0A */
    U16                     Reserved4;                      /* 0x0C */
    U16                     IOCStatus;                      /* 0x0E */
    U32                     IOCLogInfo;                     /* 0x10 */
} MPI2_SEND_HOST_MESSAGE_REPLY, MPI2_POINTER PTR_MPI2_SEND_HOST_MESSAGE_REPLY,
  Mpi2SendHostMessageReply_t, MPI2_POINTER pMpi2SendHostMessageReply_t;

/****************************************************************************
*  FWDownload message
****************************************************************************/

/* MPI v2.0 FWDownload Request message */

typedef struct _MPI2_FW_DOWNLOAD_REQUEST
{
    U8                      ImageType;                      /* 0x00 */
    U8                      Reserved1;                      /* 0x01 */
    U8                      ChainOffset;                    /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved2;                      /* 0x04 */
    U8                      Reserved3;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved4;                      /* 0x0A */
    U32                     TotalImageSize;                 /* 0x0C */
    U32                     Reserved5;                      /* 0x10 */
    MPI2_MPI_SGE_UNION      SGL;                            /* 0x14 */
} MPI2_FW_DOWNLOAD_REQUEST, MPI2_POINTER PTR_MPI2_FW_DOWNLOAD_REQUEST,
  Mpi2FWDownloadRequest, MPI2_POINTER pMpi2FWDownloadRequest;

#define MPI2_FW_DOWNLOAD_MSGFLGS_LAST_SEGMENT       (0x01)

#define MPI2_FW_DOWNLOAD_ITYPE_FW                   (0x01)
#define MPI2_FW_DOWNLOAD_ITYPE_BIOS                 (0x02)
#define MPI2_FW_DOWNLOAD_ITYPE_MANUFACTURING        (0x06)
#define MPI2_FW_DOWNLOAD_ITYPE_CONFIG_1             (0x07)
#define MPI2_FW_DOWNLOAD_ITYPE_CONFIG_2             (0x08)
#define MPI2_FW_DOWNLOAD_ITYPE_MEGARAID             (0x09)
#define MPI2_FW_DOWNLOAD_ITYPE_COMPLETE             (0x0A)
#define MPI2_FW_DOWNLOAD_ITYPE_COMMON_BOOT_BLOCK    (0x0B)
#define MPI2_FW_DOWNLOAD_ITYPE_PUBLIC_KEY           (0x0C)    /* MPI v2.5 and newer */
#define MPI2_FW_DOWNLOAD_ITYPE_CBB_BACKUP           (0x0D)
#define MPI2_FW_DOWNLOAD_ITYPE_SBR                  (0x0E)
#define MPI2_FW_DOWNLOAD_ITYPE_SBR_BACKUP           (0x0F)
#define MPI2_FW_DOWNLOAD_ITYPE_HIIM                 (0x10)
#define MPI2_FW_DOWNLOAD_ITYPE_HIIA                 (0x11)
#define MPI2_FW_DOWNLOAD_ITYPE_CTLR                 (0x12)
#define MPI2_FW_DOWNLOAD_ITYPE_IMR_FIRMWARE         (0x13)
#define MPI2_FW_DOWNLOAD_ITYPE_MR_NVDATA            (0x14)
+#define MPI2_FW_DOWNLOAD_ITYPE_CPLD                 (0x15)    /* MPI v2.6 and newer
                                                              */
+#define MPI2_FW_DOWNLOAD_ITYPE_PSOC                 (0x16)    /* MPI v2.6 and newer */
#define MPI2_FW_DOWNLOAD_ITYPE_MIN_PRODUCT_SPECIFIC (0xF0)
+#define MPI2_FW_DOWNLOAD_ITYPE_TERMINATE            (0xFF)    /* MPI v2.6 and newer */
+

/* MPI v2.0 FWDownload TransactionContext Element */

typedef struct _MPI2_FW_DOWNLOAD_TCSGE
{
    U8                      Reserved1;                      /* 0x00 */
    U8                      ContextSize;                    /* 0x01 */
    U8                      DetailsLength;                  /* 0x02 */
    U8                      Flags;                          /* 0x03 */
    U32                     Reserved2;                      /* 0x04 */
    U32                     ImageOffset;                    /* 0x08 */
    U32                     ImageSize;                      /* 0x0C */
} MPI2_FW_DOWNLOAD_TCSGE, MPI2_POINTER PTR_MPI2_FW_DOWNLOAD_TCSGE,
  Mpi2FWDownloadTCSGE_t, MPI2_POINTER pMpi2FWDownloadTCSGE_t;

/* MPI v2.5 FWDownload Request message */

typedef struct _MPI25_FW_DOWNLOAD_REQUEST
{
    U8                      ImageType;                      /* 0x00 */
    U8                      Reserved1;                      /* 0x01 */
    U8                      ChainOffset;                    /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved2;                      /* 0x04 */
    U8                      Reserved3;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved4;                      /* 0x0A */
    U32                     TotalImageSize;                 /* 0x0C */
    U32                     Reserved5;                      /* 0x10 */
    U32                     Reserved6;                      /* 0x14 */
    U32                     ImageOffset;                    /* 0x18 */
    U32                     ImageSize;                      /* 0x1C */
    MPI25_SGE_IO_UNION      SGL;                            /* 0x20 */
} MPI25_FW_DOWNLOAD_REQUEST, MPI2_POINTER PTR_MPI25_FW_DOWNLOAD_REQUEST,
  Mpi25FWDownloadRequest, MPI2_POINTER pMpi25FWDownloadRequest;

/* FWDownload Reply message */

typedef struct _MPI2_FW_DOWNLOAD_REPLY
{
    U8                      ImageType;                      /* 0x00 */
    U8                      Reserved1;                      /* 0x01 */
    U8                      MsgLength;                      /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved2;                      /* 0x04 */
    U8                      Reserved3;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved4;                      /* 0x0A */
    U16                     Reserved5;                      /* 0x0C */
    U16                     IOCStatus;                      /* 0x0E */
    U32                     IOCLogInfo;                     /* 0x10 */
} MPI2_FW_DOWNLOAD_REPLY, MPI2_POINTER PTR_MPI2_FW_DOWNLOAD_REPLY,
  Mpi2FWDownloadReply_t, MPI2_POINTER pMpi2FWDownloadReply_t;

/****************************************************************************
*  FWUpload message
****************************************************************************/

/* MPI v2.0 FWUpload Request message */

typedef struct
       _MPI2_FW_UPLOAD_REQUEST
{
    U8                      ImageType;                      /* 0x00 */
    U8                      Reserved1;                      /* 0x01 */
    U8                      ChainOffset;                    /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved2;                      /* 0x04 */
    U8                      Reserved3;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved4;                      /* 0x0A */
    U32                     Reserved5;                      /* 0x0C */
    U32                     Reserved6;                      /* 0x10 */
    MPI2_MPI_SGE_UNION      SGL;                            /* 0x14 */
} MPI2_FW_UPLOAD_REQUEST, MPI2_POINTER PTR_MPI2_FW_UPLOAD_REQUEST,
  Mpi2FWUploadRequest_t, MPI2_POINTER pMpi2FWUploadRequest_t;

#define MPI2_FW_UPLOAD_ITYPE_FW_CURRENT         (0x00)
#define MPI2_FW_UPLOAD_ITYPE_FW_FLASH           (0x01)
#define MPI2_FW_UPLOAD_ITYPE_BIOS_FLASH         (0x02)
#define MPI2_FW_UPLOAD_ITYPE_FW_BACKUP          (0x05)
#define MPI2_FW_UPLOAD_ITYPE_MANUFACTURING      (0x06)
#define MPI2_FW_UPLOAD_ITYPE_CONFIG_1           (0x07)
#define MPI2_FW_UPLOAD_ITYPE_CONFIG_2           (0x08)
#define MPI2_FW_UPLOAD_ITYPE_MEGARAID           (0x09)
#define MPI2_FW_UPLOAD_ITYPE_COMPLETE           (0x0A)
#define MPI2_FW_UPLOAD_ITYPE_COMMON_BOOT_BLOCK  (0x0B)
#define MPI2_FW_UPLOAD_ITYPE_CBB_BACKUP         (0x0D)
#define MPI2_FW_UPLOAD_ITYPE_SBR                (0x0E)
#define MPI2_FW_UPLOAD_ITYPE_SBR_BACKUP         (0x0F)
#define MPI2_FW_UPLOAD_ITYPE_HIIM               (0x10)
#define MPI2_FW_UPLOAD_ITYPE_HIIA               (0x11)
#define MPI2_FW_UPLOAD_ITYPE_CTLR               (0x12)
#define MPI2_FW_UPLOAD_ITYPE_IMR_FIRMWARE       (0x13)
#define MPI2_FW_UPLOAD_ITYPE_MR_NVDATA          (0x14)

/* MPI v2.0 FWUpload TransactionContext Element */

typedef struct _MPI2_FW_UPLOAD_TCSGE
{
    U8                      Reserved1;                      /* 0x00 */
    U8                      ContextSize;                    /* 0x01 */
    U8                      DetailsLength;                  /* 0x02 */
    U8                      Flags;                          /* 0x03 */
    U32                     Reserved2;                      /* 0x04 */
    U32                     ImageOffset;                    /* 0x08 */
    U32                     ImageSize;                      /* 0x0C */
} MPI2_FW_UPLOAD_TCSGE, MPI2_POINTER PTR_MPI2_FW_UPLOAD_TCSGE,
  Mpi2FWUploadTCSGE_t, MPI2_POINTER pMpi2FWUploadTCSGE_t;

/* MPI v2.5 FWUpload Request message */

typedef struct _MPI25_FW_UPLOAD_REQUEST
{
    U8                      ImageType;                      /* 0x00 */
    U8                      Reserved1;                      /* 0x01 */
    U8                      ChainOffset;                    /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved2;                      /* 0x04 */
    U8                      Reserved3;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16
                            Reserved4;                      /* 0x0A */
    U32                     Reserved5;                      /* 0x0C */
    U32                     Reserved6;                      /* 0x10 */
    U32                     Reserved7;                      /* 0x14 */
    U32                     ImageOffset;                    /* 0x18 */
    U32                     ImageSize;                      /* 0x1C */
    MPI25_SGE_IO_UNION      SGL;                            /* 0x20 */
} MPI25_FW_UPLOAD_REQUEST, MPI2_POINTER PTR_MPI25_FW_UPLOAD_REQUEST,
  Mpi25FWUploadRequest_t, MPI2_POINTER pMpi25FWUploadRequest_t;

/* FWUpload Reply message */

typedef struct _MPI2_FW_UPLOAD_REPLY
{
    U8                      ImageType;                      /* 0x00 */
    U8                      Reserved1;                      /* 0x01 */
    U8                      MsgLength;                      /* 0x02 */
    U8                      Function;                       /* 0x03 */
    U16                     Reserved2;                      /* 0x04 */
    U8                      Reserved3;                      /* 0x06 */
    U8                      MsgFlags;                       /* 0x07 */
    U8                      VP_ID;                          /* 0x08 */
    U8                      VF_ID;                          /* 0x09 */
    U16                     Reserved4;                      /* 0x0A */
    U16                     Reserved5;                      /* 0x0C */
    U16                     IOCStatus;                      /* 0x0E */
    U32                     IOCLogInfo;                     /* 0x10 */
    U32                     ActualImageSize;                /* 0x14 */
} MPI2_FW_UPLOAD_REPLY, MPI2_POINTER PTR_MPI2_FW_UPLOAD_REPLY,
  Mpi2FWUploadReply_t, MPI2_POINTER pMPi2FWUploadReply_t;
-
-/* FW Image Header */
-typedef struct _MPI2_FW_IMAGE_HEADER
-{
-    U32                     Signature;                      /* 0x00 */
-    U32                     Signature0;                     /* 0x04 */
-    U32                     Signature1;                     /* 0x08 */
-    U32                     Signature2;                     /* 0x0C */
-    MPI2_VERSION_UNION      MPIVersion;                     /* 0x10 */
-    MPI2_VERSION_UNION      FWVersion;                      /* 0x14 */
-    MPI2_VERSION_UNION      NVDATAVersion;                  /* 0x18 */
-    MPI2_VERSION_UNION      PackageVersion;                 /* 0x1C */
-    U16                     VendorID;                       /* 0x20 */
-    U16                     ProductID;                      /* 0x22 */
-    U16                     ProtocolFlags;                  /* 0x24 */
-    U16                     Reserved26;                     /* 0x26 */
-    U32                     IOCCapabilities;                /* 0x28 */
-    U32                     ImageSize;                      /* 0x2C */
-    U32                     NextImageHeaderOffset;          /* 0x30 */
-    U32                     Checksum;                       /* 0x34 */
-    U32                     Reserved38;                     /* 0x38 */
-    U32                     Reserved3C;                     /* 0x3C */
-    U32                     Reserved40;                     /* 0x40 */
-    U32                     Reserved44;                     /* 0x44 */
-    U32                     Reserved48;                     /* 0x48 */
-    U32                     Reserved4C;                     /* 0x4C */
-    U32                     Reserved50;                     /* 0x50 */
-    U32                     Reserved54;                     /* 0x54 */
-    U32                     Reserved58;                     /* 0x58 */
-    U32                     Reserved5C;                     /* 0x5C */
-    U32                     BootFlags;                      /* 0x60 */ /* reserved in MPI v2.5 and earlier */
-    U32                     FirmwareVersionNameWhat;        /* 0x64 */
-    U8                      FirmwareVersionName[32];        /* 0x68 */
-    U32                     VendorNameWhat;                 /* 0x88 */
-    U8                      VendorName[32];                 /* 0x8C */
-    U32                     PackageNameWhat;                /* 
0x88 */
-    U8                      PackageName[32];                /* 0x8C */
-    U32                     ReservedD0;                     /* 0xD0 */
-    U32                     ReservedD4;                     /* 0xD4 */
-    U32                     ReservedD8;                     /* 0xD8 */
-    U32                     ReservedDC;                     /* 0xDC */
-    U32                     ReservedE0;                     /* 0xE0 */
-    U32                     ReservedE4;                     /* 0xE4 */
-    U32                     ReservedE8;                     /* 0xE8 */
-    U32                     ReservedEC;                     /* 0xEC */
-    U32                     ReservedF0;                     /* 0xF0 */
-    U32                     ReservedF4;                     /* 0xF4 */
-    U32                     ReservedF8;                     /* 0xF8 */
-    U32                     ReservedFC;                     /* 0xFC */
-} MPI2_FW_IMAGE_HEADER, MPI2_POINTER PTR_MPI2_FW_IMAGE_HEADER,
-  Mpi2FWImageHeader_t, MPI2_POINTER pMpi2FWImageHeader_t;
-
-/* Signature field */
-#define MPI2_FW_HEADER_SIGNATURE_OFFSET         (0x00)
-#define MPI2_FW_HEADER_SIGNATURE_MASK           (0xFF000000)
-#define MPI2_FW_HEADER_SIGNATURE                (0xEA000000)
-#define MPI26_FW_HEADER_SIGNATURE               (0xEB000000)
-
-/* Signature0 field */
-#define MPI2_FW_HEADER_SIGNATURE0_OFFSET        (0x04)
-#define MPI2_FW_HEADER_SIGNATURE0               (0x5AFAA55A)
-#define MPI26_FW_HEADER_SIGNATURE0_BASE         (0x5AEAA500)    /* Last byte is defined by architecture */
-#define MPI26_FW_HEADER_SIGNATURE0_ARC_0        (0x5A)
-#define MPI26_FW_HEADER_SIGNATURE0_ARC_1        (0x00)
-#define MPI26_FW_HEADER_SIGNATURE0_ARC_2        (0x01)
-#define MPI26_FW_HEADER_SIGNATURE0_ARC_3        (0x02)
-#define MPI26_FW_HEADER_SIGNATURE0              (MPI26_FW_HEADER_SIGNATURE0_BASE+MPI26_FW_HEADER_SIGNATURE0_ARC_0) // legacy (0x5AEAA55A)
-#define MPI26_FW_HEADER_SIGNATURE0_3516         (MPI26_FW_HEADER_SIGNATURE0_BASE+MPI26_FW_HEADER_SIGNATURE0_ARC_1)
-#define MPI26_FW_HEADER_SIGNATURE0_4008         (MPI26_FW_HEADER_SIGNATURE0_BASE+MPI26_FW_HEADER_SIGNATURE0_ARC_3)
-
-/* Signature1 field */
-#define MPI2_FW_HEADER_SIGNATURE1_OFFSET        (0x08)
-#define MPI2_FW_HEADER_SIGNATURE1               (0xA55AFAA5)
-#define MPI26_FW_HEADER_SIGNATURE1              (0xA55AEAA5)
-
-/* Signature2 field */
-#define MPI2_FW_HEADER_SIGNATURE2_OFFSET        (0x0C)
-#define MPI2_FW_HEADER_SIGNATURE2               (0x5AA55AFA)
-#define MPI26_FW_HEADER_SIGNATURE2              (0x5AA55AEA)
-
-
-/* defines for using the ProductID field */
-#define MPI2_FW_HEADER_PID_TYPE_MASK            (0xF000)
-#define MPI2_FW_HEADER_PID_TYPE_SAS             (0x2000)
-
-#define MPI2_FW_HEADER_PID_PROD_MASK                    (0x0F00)
-#define MPI2_FW_HEADER_PID_PROD_A                       (0x0000)
-#define MPI2_FW_HEADER_PID_PROD_TARGET_INITIATOR_SCSI   (0x0200)
-#define MPI2_FW_HEADER_PID_PROD_IR_SCSI                 (0x0700)
-
-
-#define MPI2_FW_HEADER_PID_FAMILY_MASK          (0x00FF)
-/* SAS ProductID Family bits */
-#define MPI2_FW_HEADER_PID_FAMILY_2108_SAS      (0x0013)
-#define MPI2_FW_HEADER_PID_FAMILY_2208_SAS      (0x0014)
-#define MPI25_FW_HEADER_PID_FAMILY_3108_SAS     (0x0021)
-#define MPI26_FW_HEADER_PID_FAMILY_3324_SAS     (0x0028)
-#define MPI26_FW_HEADER_PID_FAMILY_3516_SAS     (0x0031)
-
-/* use MPI2_IOCFACTS_PROTOCOL_ defines for ProtocolFlags field */
-
-/* use MPI2_IOCFACTS_CAPABILITY_ defines for IOCCapabilities field */
-
-
-#define MPI2_FW_HEADER_IMAGESIZE_OFFSET         (0x2C)
-#define MPI2_FW_HEADER_NEXTIMAGE_OFFSET         (0x30)
-#define MPI26_FW_HEADER_BOOTFLAGS_OFFSET        (0x60)
-#define MPI2_FW_HEADER_VERNMHWAT_OFFSET         (0x64)
-
-#define MPI2_FW_HEADER_WHAT_SIGNATURE           (0x29232840)
-
-#define MPI2_FW_HEADER_SIZE                     (0x100)
-
-
-/* Extended Image Header */
-typedef struct _MPI2_EXT_IMAGE_HEADER
-
-{
-    U8                      ImageType;                      /* 0x00 */
-    U8                      Reserved1;                      /* 0x01 */
-    U16                     Reserved2;                      /* 0x02 */
-    U32                     Checksum;                       /* 0x04 */
-    U32                     ImageSize;                      /* 0x08 */
-    U32                     NextImageHeaderOffset;          /* 0x0C */
-    U32                     PackageVersion;                 /* 0x10 */
-    U32                     Reserved3;                      /* 0x14 */
-    U32                     Reserved4;                      /* 0x18 */
-    U32                     Reserved5;                      /* 0x1C */
-    U8                      IdentifyString[32];             /* 0x20 */
-} MPI2_EXT_IMAGE_HEADER, MPI2_POINTER PTR_MPI2_EXT_IMAGE_HEADER,
-  Mpi2ExtImageHeader_t, MPI2_POINTER pMpi2ExtImageHeader_t;
-
-/* useful offsets */
-#define MPI2_EXT_IMAGE_IMAGETYPE_OFFSET         (0x00)
-#define MPI2_EXT_IMAGE_IMAGESIZE_OFFSET         (0x08)
-#define MPI2_EXT_IMAGE_NEXTIMAGE_OFFSET         (0x0C)
-
-#define MPI2_EXT_IMAGE_HEADER_SIZE              (0x40)
-
-/* defines for the ImageType field */
-#define MPI2_EXT_IMAGE_TYPE_UNSPECIFIED             (0x00)
-#define MPI2_EXT_IMAGE_TYPE_FW                      (0x01)
-#define MPI2_EXT_IMAGE_TYPE_NVDATA                  (0x03)
-#define MPI2_EXT_IMAGE_TYPE_BOOTLOADER              (0x04)
-#define MPI2_EXT_IMAGE_TYPE_INITIALIZATION          (0x05)
-#define MPI2_EXT_IMAGE_TYPE_FLASH_LAYOUT (0x06) -#define MPI2_EXT_IMAGE_TYPE_SUPPORTED_DEVICES (0x07) -#define MPI2_EXT_IMAGE_TYPE_MEGARAID (0x08) -#define MPI2_EXT_IMAGE_TYPE_ENCRYPTED_HASH (0x09) /* MPI v2.5 and newer */ -#define MPI2_EXT_IMAGE_TYPE_MIN_PRODUCT_SPECIFIC (0x80) -#define MPI2_EXT_IMAGE_TYPE_MAX_PRODUCT_SPECIFIC (0xFF) - -#define MPI2_EXT_IMAGE_TYPE_MAX (MPI2_EXT_IMAGE_TYPE_MAX_PRODUCT_SPECIFIC) /* deprecated */ - - - -/* FLASH Layout Extended Image Data */ - -/* - * Host code (drivers, BIOS, utilities, etc.) should leave this define set to - * one and check RegionsPerLayout at runtime. - */ -#ifndef MPI2_FLASH_NUMBER_OF_REGIONS -#define MPI2_FLASH_NUMBER_OF_REGIONS (1) -#endif - -/* - * Host code (drivers, BIOS, utilities, etc.) should leave this define set to - * one and check NumberOfLayouts at runtime. - */ -#ifndef MPI2_FLASH_NUMBER_OF_LAYOUTS -#define MPI2_FLASH_NUMBER_OF_LAYOUTS (1) -#endif - -typedef struct _MPI2_FLASH_REGION -{ - U8 RegionType; /* 0x00 */ - U8 Reserved1; /* 0x01 */ - U16 Reserved2; /* 0x02 */ - U32 RegionOffset; /* 0x04 */ - U32 RegionSize; /* 0x08 */ - U32 Reserved3; /* 0x0C */ -} MPI2_FLASH_REGION, MPI2_POINTER PTR_MPI2_FLASH_REGION, - Mpi2FlashRegion_t, MPI2_POINTER pMpi2FlashRegion_t; - -typedef struct _MPI2_FLASH_LAYOUT -{ - U32 FlashSize; /* 0x00 */ - U32 Reserved1; /* 0x04 */ - U32 Reserved2; /* 0x08 */ - U32 Reserved3; /* 0x0C */ - MPI2_FLASH_REGION Region[MPI2_FLASH_NUMBER_OF_REGIONS];/* 0x10 */ -} MPI2_FLASH_LAYOUT, MPI2_POINTER PTR_MPI2_FLASH_LAYOUT, - Mpi2FlashLayout_t, MPI2_POINTER pMpi2FlashLayout_t; - -typedef struct _MPI2_FLASH_LAYOUT_DATA -{ - U8 ImageRevision; /* 0x00 */ - U8 Reserved1; /* 0x01 */ - U8 SizeOfRegion; /* 0x02 */ - U8 Reserved2; /* 0x03 */ - U16 NumberOfLayouts; /* 0x04 */ - U16 RegionsPerLayout; /* 0x06 */ - U16 MinimumSectorAlignment; /* 0x08 */ - U16 Reserved3; /* 0x0A */ - U32 Reserved4; /* 0x0C */ - MPI2_FLASH_LAYOUT Layout[MPI2_FLASH_NUMBER_OF_LAYOUTS];/* 0x10 */ -} 
MPI2_FLASH_LAYOUT_DATA, MPI2_POINTER PTR_MPI2_FLASH_LAYOUT_DATA, - Mpi2FlashLayoutData_t, MPI2_POINTER pMpi2FlashLayoutData_t; - -/* defines for the RegionType field */ -#define MPI2_FLASH_REGION_UNUSED (0x00) -#define MPI2_FLASH_REGION_FIRMWARE (0x01) -#define MPI2_FLASH_REGION_BIOS (0x02) -#define MPI2_FLASH_REGION_NVDATA (0x03) -#define MPI2_FLASH_REGION_FIRMWARE_BACKUP (0x05) -#define MPI2_FLASH_REGION_MFG_INFORMATION (0x06) -#define MPI2_FLASH_REGION_CONFIG_1 (0x07) -#define MPI2_FLASH_REGION_CONFIG_2 (0x08) -#define MPI2_FLASH_REGION_MEGARAID (0x09) -#define MPI2_FLASH_REGION_COMMON_BOOT_BLOCK (0x0A) -#define MPI2_FLASH_REGION_INIT (MPI2_FLASH_REGION_COMMON_BOOT_BLOCK) /* older name */ -#define MPI2_FLASH_REGION_CBB_BACKUP (0x0D) -#define MPI2_FLASH_REGION_SBR (0x0E) -#define MPI2_FLASH_REGION_SBR_BACKUP (0x0F) -#define MPI2_FLASH_REGION_HIIM (0x10) -#define MPI2_FLASH_REGION_HIIA (0x11) -#define MPI2_FLASH_REGION_CTLR (0x12) -#define MPI2_FLASH_REGION_IMR_FIRMWARE (0x13) -#define MPI2_FLASH_REGION_MR_NVDATA (0x14) - -/* ImageRevision */ -#define MPI2_FLASH_LAYOUT_IMAGE_REVISION (0x00) - - - -/* Supported Devices Extended Image Data */ - -/* - * Host code (drivers, BIOS, utilities, etc.) should leave this define set to - * one and check NumberOfDevices at runtime. 
- */ -#ifndef MPI2_SUPPORTED_DEVICES_IMAGE_NUM_DEVICES -#define MPI2_SUPPORTED_DEVICES_IMAGE_NUM_DEVICES (1) -#endif - -typedef struct _MPI2_SUPPORTED_DEVICE -{ - U16 DeviceID; /* 0x00 */ - U16 VendorID; /* 0x02 */ - U16 DeviceIDMask; /* 0x04 */ - U16 Reserved1; /* 0x06 */ - U8 LowPCIRev; /* 0x08 */ - U8 HighPCIRev; /* 0x09 */ - U16 Reserved2; /* 0x0A */ - U32 Reserved3; /* 0x0C */ -} MPI2_SUPPORTED_DEVICE, MPI2_POINTER PTR_MPI2_SUPPORTED_DEVICE, - Mpi2SupportedDevice_t, MPI2_POINTER pMpi2SupportedDevice_t; - -typedef struct _MPI2_SUPPORTED_DEVICES_DATA -{ - U8 ImageRevision; /* 0x00 */ - U8 Reserved1; /* 0x01 */ - U8 NumberOfDevices; /* 0x02 */ - U8 Reserved2; /* 0x03 */ - U32 Reserved3; /* 0x04 */ - MPI2_SUPPORTED_DEVICE SupportedDevice[MPI2_SUPPORTED_DEVICES_IMAGE_NUM_DEVICES]; /* 0x08 */ -} MPI2_SUPPORTED_DEVICES_DATA, MPI2_POINTER PTR_MPI2_SUPPORTED_DEVICES_DATA, - Mpi2SupportedDevicesData_t, MPI2_POINTER pMpi2SupportedDevicesData_t; - -/* ImageRevision */ -#define MPI2_SUPPORTED_DEVICES_IMAGE_REVISION (0x00) - - -/* Init Extended Image Data */ - -typedef struct _MPI2_INIT_IMAGE_FOOTER - -{ - U32 BootFlags; /* 0x00 */ - U32 ImageSize; /* 0x04 */ - U32 Signature0; /* 0x08 */ - U32 Signature1; /* 0x0C */ - U32 Signature2; /* 0x10 */ - U32 ResetVector; /* 0x14 */ -} MPI2_INIT_IMAGE_FOOTER, MPI2_POINTER PTR_MPI2_INIT_IMAGE_FOOTER, - Mpi2InitImageFooter_t, MPI2_POINTER pMpi2InitImageFooter_t; - -/* defines for the BootFlags field */ -#define MPI2_INIT_IMAGE_BOOTFLAGS_OFFSET (0x00) - -/* defines for the ImageSize field */ -#define MPI2_INIT_IMAGE_IMAGESIZE_OFFSET (0x04) - -/* defines for the Signature0 field */ -#define MPI2_INIT_IMAGE_SIGNATURE0_OFFSET (0x08) -#define MPI2_INIT_IMAGE_SIGNATURE0 (0x5AA55AEA) - -/* defines for the Signature1 field */ -#define MPI2_INIT_IMAGE_SIGNATURE1_OFFSET (0x0C) -#define MPI2_INIT_IMAGE_SIGNATURE1 (0xA55AEAA5) - -/* defines for the Signature2 field */ -#define MPI2_INIT_IMAGE_SIGNATURE2_OFFSET (0x10) -#define 
MPI2_INIT_IMAGE_SIGNATURE2 (0x5AEAA55A) - -/* Signature fields as individual bytes */ -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_0 (0xEA) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_1 (0x5A) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_2 (0xA5) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_3 (0x5A) - -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_4 (0xA5) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_5 (0xEA) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_6 (0x5A) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_7 (0xA5) - -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_8 (0x5A) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_9 (0xA5) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_A (0xEA) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_B (0x5A) - -/* defines for the ResetVector field */ -#define MPI2_INIT_IMAGE_RESETVECTOR_OFFSET (0x14) - - -/* Encrypted Hash Extended Image Data */ - -typedef struct _MPI25_ENCRYPTED_HASH_ENTRY -{ - U8 HashImageType; /* 0x00 */ - U8 HashAlgorithm; /* 0x01 */ - U8 EncryptionAlgorithm; /* 0x02 */ - U8 Reserved1; /* 0x03 */ - U32 Reserved2; /* 0x04 */ - U32 EncryptedHash[1]; /* 0x08 */ /* variable length */ -} MPI25_ENCRYPTED_HASH_ENTRY, MPI2_POINTER PTR_MPI25_ENCRYPTED_HASH_ENTRY, - Mpi25EncryptedHashEntry_t, MPI2_POINTER pMpi25EncryptedHashEntry_t; - -/* values for HashImageType */ -#define MPI25_HASH_IMAGE_TYPE_UNUSED (0x00) -#define MPI25_HASH_IMAGE_TYPE_FIRMWARE (0x01) -#define MPI25_HASH_IMAGE_TYPE_BIOS (0x02) - -/* values for HashAlgorithm */ -#define MPI25_HASH_ALGORITHM_UNUSED (0x00) -#define MPI25_HASH_ALGORITHM_SHA256 (0x01) - -/* values for EncryptionAlgorithm */ -#define MPI25_ENCRYPTION_ALG_UNUSED (0x00) -#define MPI25_ENCRYPTION_ALG_RSA256 (0x01) - -typedef struct _MPI25_ENCRYPTED_HASH_DATA -{ - U8 ImageVersion; /* 0x00 */ - U8 NumHash; /* 0x01 */ - U16 Reserved1; /* 0x02 */ - U32 Reserved2; /* 0x04 */ - MPI25_ENCRYPTED_HASH_ENTRY EncryptedHashEntry[1]; /* 0x08 */ /* variable number of entries */ -} MPI25_ENCRYPTED_HASH_DATA, MPI2_POINTER PTR_MPI25_ENCRYPTED_HASH_DATA, - Mpi25EncryptedHashData_t, 
MPI2_POINTER pMpi25EncryptedHashData_t; /**************************************************************************** * PowerManagementControl message ****************************************************************************/ /* PowerManagementControl Request message */ typedef struct _MPI2_PWR_MGMT_CONTROL_REQUEST { U8 Feature; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U8 Parameter1; /* 0x0C */ U8 Parameter2; /* 0x0D */ U8 Parameter3; /* 0x0E */ U8 Parameter4; /* 0x0F */ U32 Reserved5; /* 0x10 */ U32 Reserved6; /* 0x14 */ } MPI2_PWR_MGMT_CONTROL_REQUEST, MPI2_POINTER PTR_MPI2_PWR_MGMT_CONTROL_REQUEST, Mpi2PwrMgmtControlRequest_t, MPI2_POINTER pMpi2PwrMgmtControlRequest_t; /* defines for the Feature field */ #define MPI2_PM_CONTROL_FEATURE_DA_PHY_POWER_COND (0x01) #define MPI2_PM_CONTROL_FEATURE_PORT_WIDTH_MODULATION (0x02) #define MPI2_PM_CONTROL_FEATURE_PCIE_LINK (0x03) /* obsolete */ #define MPI2_PM_CONTROL_FEATURE_IOC_SPEED (0x04) #define MPI2_PM_CONTROL_FEATURE_GLOBAL_PWR_MGMT_MODE (0x05) /* reserved in MPI 2.0 */ #define MPI2_PM_CONTROL_FEATURE_MIN_PRODUCT_SPECIFIC (0x80) #define MPI2_PM_CONTROL_FEATURE_MAX_PRODUCT_SPECIFIC (0xFF) /* parameter usage for the MPI2_PM_CONTROL_FEATURE_DA_PHY_POWER_COND Feature */ /* Parameter1 contains a PHY number */ /* Parameter2 indicates power condition action using these defines */ #define MPI2_PM_CONTROL_PARAM2_PARTIAL (0x01) #define MPI2_PM_CONTROL_PARAM2_SLUMBER (0x02) #define MPI2_PM_CONTROL_PARAM2_EXIT_PWR_MGMT (0x03) /* Parameter3 and Parameter4 are reserved */ /* parameter usage for the MPI2_PM_CONTROL_FEATURE_PORT_WIDTH_MODULATION Feature */ /* Parameter1 contains SAS port width modulation group number */ /* Parameter2 indicates IOC action using these defines */ #define MPI2_PM_CONTROL_PARAM2_REQUEST_OWNERSHIP (0x01) #define 
MPI2_PM_CONTROL_PARAM2_CHANGE_MODULATION (0x02) #define MPI2_PM_CONTROL_PARAM2_RELINQUISH_OWNERSHIP (0x03) /* Parameter3 indicates desired modulation level using these defines */ #define MPI2_PM_CONTROL_PARAM3_25_PERCENT (0x00) #define MPI2_PM_CONTROL_PARAM3_50_PERCENT (0x01) #define MPI2_PM_CONTROL_PARAM3_75_PERCENT (0x02) #define MPI2_PM_CONTROL_PARAM3_100_PERCENT (0x03) /* Parameter4 is reserved */ /* this next set (_PCIE_LINK) is obsolete */ /* parameter usage for the MPI2_PM_CONTROL_FEATURE_PCIE_LINK Feature */ /* Parameter1 indicates desired PCIe link speed using these defines */ #define MPI2_PM_CONTROL_PARAM1_PCIE_2_5_GBPS (0x00) /* obsolete */ #define MPI2_PM_CONTROL_PARAM1_PCIE_5_0_GBPS (0x01) /* obsolete */ #define MPI2_PM_CONTROL_PARAM1_PCIE_8_0_GBPS (0x02) /* obsolete */ /* Parameter2 indicates desired PCIe link width using these defines */ #define MPI2_PM_CONTROL_PARAM2_WIDTH_X1 (0x01) /* obsolete */ #define MPI2_PM_CONTROL_PARAM2_WIDTH_X2 (0x02) /* obsolete */ #define MPI2_PM_CONTROL_PARAM2_WIDTH_X4 (0x04) /* obsolete */ #define MPI2_PM_CONTROL_PARAM2_WIDTH_X8 (0x08) /* obsolete */ /* Parameter3 and Parameter4 are reserved */ /* parameter usage for the MPI2_PM_CONTROL_FEATURE_IOC_SPEED Feature */ /* Parameter1 indicates desired IOC hardware clock speed using these defines */ #define MPI2_PM_CONTROL_PARAM1_FULL_IOC_SPEED (0x01) #define MPI2_PM_CONTROL_PARAM1_HALF_IOC_SPEED (0x02) #define MPI2_PM_CONTROL_PARAM1_QUARTER_IOC_SPEED (0x04) #define MPI2_PM_CONTROL_PARAM1_EIGHTH_IOC_SPEED (0x08) /* Parameter2, Parameter3, and Parameter4 are reserved */ /* parameter usage for the MPI2_PM_CONTROL_FEATURE_GLOBAL_PWR_MGMT_MODE Feature */ /* Parameter1 indicates host action regarding global power management mode */ #define MPI2_PM_CONTROL_PARAM1_TAKE_CONTROL (0x01) #define MPI2_PM_CONTROL_PARAM1_CHANGE_GLOBAL_MODE (0x02) #define MPI2_PM_CONTROL_PARAM1_RELEASE_CONTROL (0x03) /* Parameter2 indicates the requested global power management mode */ #define 
MPI2_PM_CONTROL_PARAM2_FULL_PWR_PERF (0x01) #define MPI2_PM_CONTROL_PARAM2_REDUCED_PWR_PERF (0x08) #define MPI2_PM_CONTROL_PARAM2_STANDBY (0x40) /* Parameter3 and Parameter4 are reserved */ /* PowerManagementControl Reply message */ typedef struct _MPI2_PWR_MGMT_CONTROL_REPLY { U8 Feature; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U16 Reserved5; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ } MPI2_PWR_MGMT_CONTROL_REPLY, MPI2_POINTER PTR_MPI2_PWR_MGMT_CONTROL_REPLY, Mpi2PwrMgmtControlReply_t, MPI2_POINTER pMpi2PwrMgmtControlReply_t; /**************************************************************************** * IO Unit Control messages (MPI v2.6 and later only.) ****************************************************************************/ /* IO Unit Control Request Message */ typedef struct _MPI26_IOUNIT_CONTROL_REQUEST { U8 Operation; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 DevHandle; /* 0x04 */ U8 IOCParameter; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 Reserved4; /* 0x0C */ U8 PhyNum; /* 0x0E */ U8 PrimFlags; /* 0x0F */ U32 Primitive; /* 0x10 */ U8 LookupMethod; /* 0x14 */ U8 Reserved5; /* 0x15 */ U16 SlotNumber; /* 0x16 */ U64 LookupAddress; /* 0x18 */ U32 IOCParameterValue; /* 0x20 */ U32 Reserved7; /* 0x24 */ U32 Reserved8; /* 0x28 */ } MPI26_IOUNIT_CONTROL_REQUEST, MPI2_POINTER PTR_MPI26_IOUNIT_CONTROL_REQUEST, Mpi26IoUnitControlRequest_t, MPI2_POINTER pMpi26IoUnitControlRequest_t; /* values for the Operation field */ #define MPI26_CTRL_OP_CLEAR_ALL_PERSISTENT (0x02) #define MPI26_CTRL_OP_SAS_PHY_LINK_RESET (0x06) #define MPI26_CTRL_OP_SAS_PHY_HARD_RESET (0x07) #define MPI26_CTRL_OP_PHY_CLEAR_ERROR_LOG (0x08) #define 
MPI26_CTRL_OP_LINK_CLEAR_ERROR_LOG (0x09) #define MPI26_CTRL_OP_SAS_SEND_PRIMITIVE (0x0A) #define MPI26_CTRL_OP_FORCE_FULL_DISCOVERY (0x0B) #define MPI26_CTRL_OP_REMOVE_DEVICE (0x0D) #define MPI26_CTRL_OP_LOOKUP_MAPPING (0x0E) #define MPI26_CTRL_OP_SET_IOC_PARAMETER (0x0F) #define MPI26_CTRL_OP_ENABLE_FP_DEVICE (0x10) #define MPI26_CTRL_OP_DISABLE_FP_DEVICE (0x11) #define MPI26_CTRL_OP_ENABLE_FP_ALL (0x12) #define MPI26_CTRL_OP_DISABLE_FP_ALL (0x13) #define MPI26_CTRL_OP_DEV_ENABLE_NCQ (0x14) #define MPI26_CTRL_OP_DEV_DISABLE_NCQ (0x15) #define MPI26_CTRL_OP_SHUTDOWN (0x16) #define MPI26_CTRL_OP_DEV_ENABLE_PERSIST_CONNECTION (0x17) #define MPI26_CTRL_OP_DEV_DISABLE_PERSIST_CONNECTION (0x18) #define MPI26_CTRL_OP_DEV_CLOSE_PERSIST_CONNECTION (0x19) #define MPI26_CTRL_OP_ENABLE_NVME_SGL_FORMAT (0x1A) #define MPI26_CTRL_OP_DISABLE_NVME_SGL_FORMAT (0x1B) #define MPI26_CTRL_OP_PRODUCT_SPECIFIC_MIN (0x80) /* values for the PrimFlags field */ #define MPI26_CTRL_PRIMFLAGS_SINGLE (0x08) #define MPI26_CTRL_PRIMFLAGS_TRIPLE (0x02) #define MPI26_CTRL_PRIMFLAGS_REDUNDANT (0x01) /* values for the LookupMethod field */ #define MPI26_CTRL_LOOKUP_METHOD_WWID_ADDRESS (0x01) #define MPI26_CTRL_LOOKUP_METHOD_ENCLOSURE_SLOT (0x02) #define MPI26_CTRL_LOOKUP_METHOD_SAS_DEVICE_NAME (0x03) /* IO Unit Control Reply Message */ typedef struct _MPI26_IOUNIT_CONTROL_REPLY { U8 Operation; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 DevHandle; /* 0x04 */ U8 IOCParameter; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 Reserved4; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ } MPI26_IOUNIT_CONTROL_REPLY, MPI2_POINTER PTR_MPI26_IOUNIT_CONTROL_REPLY, Mpi26IoUnitControlReply_t, MPI2_POINTER pMpi26IoUnitControlReply_t; #endif Index: releng/12.1/sys/dev/mpr/mpi/mpi2_pci.h =================================================================== --- releng/12.1/sys/dev/mpr/mpi/mpi2_pci.h 
(revision 352760) +++ releng/12.1/sys/dev/mpr/mpi/mpi2_pci.h (revision 352761) @@ -1,151 +1,148 @@ /*- - * Copyright (c) 2012-2015 LSI Corp. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ /* - * Copyright (c) 2000-2015 LSI Corporation. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. 
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * * Name: mpi2_pci.h * Title: MPI PCIe Attached Devices structures and definitions. * Creation Date: October 9, 2012 * - * mpi2_pci.h Version: 02.00.02 + * mpi2_pci.h Version: 02.00.03 * * NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25 * prefix are for use only on MPI v2.5 products, and must not be used * with MPI v2.0 products. Unless otherwise noted, names beginning with * MPI2 or Mpi2 are for use with both MPI v2.0 and MPI v2.5 products. * * Version History * --------------- * * Date Version Description * -------- -------- ------------------------------------------------------ * 03-16-15 02.00.00 Initial version. * 02-17-16 02.00.01 Removed AHCI support. * Removed SOP support. * 07-01-16 02.00.02 Added MPI26_NVME_FLAGS_FORCE_ADMIN_ERR_RESP to * NVME Encapsulated Request. + * 07-22-18 02.00.03 Updated flags field for NVME Encapsulated req * -------------------------------------------------------------------------- */ #ifndef MPI2_PCI_H #define MPI2_PCI_H /* * Values for the PCIe DeviceInfo field used in PCIe Device Status Change Event * data and PCIe Configuration pages. 
*/ #define MPI26_PCIE_DEVINFO_DIRECT_ATTACH (0x00000010) #define MPI26_PCIE_DEVINFO_MASK_DEVICE_TYPE (0x0000000F) #define MPI26_PCIE_DEVINFO_NO_DEVICE (0x00000000) #define MPI26_PCIE_DEVINFO_PCI_SWITCH (0x00000001) #define MPI26_PCIE_DEVINFO_NVME (0x00000003) /**************************************************************************** * NVMe Encapsulated message ****************************************************************************/ /* NVME Encapsulated Request Message */ typedef struct _MPI26_NVME_ENCAPSULATED_REQUEST { U16 DevHandle; /* 0x00 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 EncapsulatedCommandLength; /* 0x04 */ U8 Reserved1; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved2; /* 0x0A */ U32 Reserved3; /* 0x0C */ U64 ErrorResponseBaseAddress; /* 0x10 */ U16 ErrorResponseAllocationLength; /* 0x18 */ U16 Flags; /* 0x1A */ U32 DataLength; /* 0x1C */ U8 NVMe_Command[4]; /* 0x20 */ /* variable length */ } MPI26_NVME_ENCAPSULATED_REQUEST, MPI2_POINTER PTR_MPI26_NVME_ENCAPSULATED_REQUEST, Mpi26NVMeEncapsulatedRequest_t, MPI2_POINTER pMpi26NVMeEncapsulatedRequest_t; /* defines for the Flags field */ #define MPI26_NVME_FLAGS_FORCE_ADMIN_ERR_RESP (0x0020) /* Submission Queue Type*/ #define MPI26_NVME_FLAGS_SUBMISSIONQ_MASK (0x0010) #define MPI26_NVME_FLAGS_SUBMISSIONQ_IO (0x0000) #define MPI26_NVME_FLAGS_SUBMISSIONQ_ADMIN (0x0010) /* Error Response Address Space */ #define MPI26_NVME_FLAGS_MASK_ERROR_RSP_ADDR (0x000C) +#define MPI26_NVME_FLAGS_MASK_ERROR_RSP_ADDR_MASK (0x000C) #define MPI26_NVME_FLAGS_SYSTEM_RSP_ADDR (0x0000) -#define MPI26_NVME_FLAGS_IOCPLB_RSP_ADDR (0x0008) -#define MPI26_NVME_FLAGS_IOCPLBNTA_RSP_ADDR (0x000C) +#define MPI26_NVME_FLAGS_IOCCTL_RSP_ADDR (0x0008) /* Data Direction*/ #define MPI26_NVME_FLAGS_DATADIRECTION_MASK (0x0003) #define MPI26_NVME_FLAGS_NODATATRANSFER (0x0000) #define MPI26_NVME_FLAGS_WRITE (0x0001) #define MPI26_NVME_FLAGS_READ (0x0002) #define 
MPI26_NVME_FLAGS_BIDIRECTIONAL (0x0003) /* NVMe Encapsulated Reply Message */ typedef struct _MPI26_NVME_ENCAPSULATED_ERROR_REPLY { U16 DevHandle; /* 0x00 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 EncapsulatedCommandLength; /* 0x04 */ U8 Reserved1; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved2; /* 0x0A */ U16 Reserved3; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ U16 ErrorResponseCount; /* 0x14 */ U16 Reserved4; /* 0x16 */ } MPI26_NVME_ENCAPSULATED_ERROR_REPLY, MPI2_POINTER PTR_MPI26_NVME_ENCAPSULATED_ERROR_REPLY, Mpi26NVMeEncapsulatedErrorReply_t, MPI2_POINTER pMpi26NVMeEncapsulatedErrorReply_t; #endif Index: releng/12.1/sys/dev/mpr/mpi/mpi2_ra.h =================================================================== --- releng/12.1/sys/dev/mpr/mpi/mpi2_ra.h (revision 352760) +++ releng/12.1/sys/dev/mpr/mpi/mpi2_ra.h (revision 352761) @@ -1,122 +1,118 @@ /*- - * Copyright (c) 2012-2015 LSI Corp. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ /* - * Copyright (c) 2012-2015 LSI Corporation. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * * Name: mpi2_ra.h * Title: MPI RAID Accelerator messages and structures * Creation Date: April 13, 2009 * * mpi2_ra.h Version: 02.00.01 * * Version History * --------------- * * Date Version Description * -------- -------- ------------------------------------------------------ * 05-06-09 02.00.00 Initial version. * 11-18-14 02.00.01 Updated copyright information. 
* -------------------------------------------------------------------------- */ #ifndef MPI2_RA_H #define MPI2_RA_H /* generic structure for RAID Accelerator Control Block */ typedef struct _MPI2_RAID_ACCELERATOR_CONTROL_BLOCK { U32 Reserved[8]; /* 0x00 */ U32 RaidAcceleratorCDB[1]; /* 0x20 */ } MPI2_RAID_ACCELERATOR_CONTROL_BLOCK, MPI2_POINTER PTR_MPI2_RAID_ACCELERATOR_CONTROL_BLOCK, Mpi2RAIDAcceleratorControlBlock_t, MPI2_POINTER pMpi2RAIDAcceleratorControlBlock_t; /****************************************************************************** * * RAID Accelerator Messages * *******************************************************************************/ /* RAID Accelerator Request Message */ typedef struct _MPI2_RAID_ACCELERATOR_REQUEST { U16 Reserved0; /* 0x00 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved1; /* 0x04 */ U8 Reserved2; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U64 RaidAcceleratorControlBlockAddress; /* 0x0C */ U8 DmaEngineNumber; /* 0x14 */ U8 Reserved4; /* 0x15 */ U16 Reserved5; /* 0x16 */ U32 Reserved6; /* 0x18 */ U32 Reserved7; /* 0x1C */ U32 Reserved8; /* 0x20 */ } MPI2_RAID_ACCELERATOR_REQUEST, MPI2_POINTER PTR_MPI2_RAID_ACCELERATOR_REQUEST, Mpi2RAIDAcceleratorRequest_t, MPI2_POINTER pMpi2RAIDAcceleratorRequest_t; /* RAID Accelerator Error Reply Message */ typedef struct _MPI2_RAID_ACCELERATOR_REPLY { U16 Reserved0; /* 0x00 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved1; /* 0x04 */ U8 Reserved2; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 Reserved4; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ U32 ProductSpecificData[3]; /* 0x14 */ } MPI2_RAID_ACCELERATOR_REPLY, MPI2_POINTER PTR_MPI2_RAID_ACCELERATOR_REPLY, Mpi2RAIDAcceleratorReply_t, MPI2_POINTER pMpi2RAIDAcceleratorReply_t; #endif Index: releng/12.1/sys/dev/mpr/mpi/mpi2_raid.h 
=================================================================== --- releng/12.1/sys/dev/mpr/mpi/mpi2_raid.h (revision 352760) +++ releng/12.1/sys/dev/mpr/mpi/mpi2_raid.h (revision 352761) @@ -1,410 +1,406 @@ /*- - * Copyright (c) 2012-2015 LSI Corp. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. 
(LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ /* - * Copyright (c) 2000-2015 LSI Corporation. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * * Name: mpi2_raid.h * Title: MPI Integrated RAID messages and structures * Creation Date: April 26, 2007 * * mpi2_raid.h Version: 02.00.11 * * Version History * --------------- * * Date Version Description * -------- -------- ------------------------------------------------------ * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. * 08-31-07 02.00.01 Modifications to RAID Action request and reply, * including the Actions and ActionData. * 02-29-08 02.00.02 Added MPI2_RAID_ACTION_ADATA_DISABL_FULL_REBUILD. * 05-21-08 02.00.03 Added MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS so that * the PhysDisk array in MPI2_RAID_VOLUME_CREATION_STRUCT * can be sized by the build environment. * 07-30-09 02.00.04 Added proper define for the Use Default Settings bit of * VolumeCreationFlags and marked the old one as obsolete. * 05-12-10 02.00.05 Added MPI2_RAID_VOL_FLAGS_OP_MDC define. * 08-24-10 02.00.06 Added MPI2_RAID_ACTION_COMPATIBILITY_CHECK along with * related structures and defines. * Added product-specific range to RAID Action values. * 11-18-11 02.00.07 Incorporating additions for MPI v2.5. * 02-06-12 02.00.08 Added MPI2_RAID_ACTION_PHYSDISK_HIDDEN. * 07-26-12 02.00.09 Added ElapsedSeconds field to MPI2_RAID_VOL_INDICATOR. * Added MPI2_RAID_VOL_FLAGS_ELAPSED_SECONDS_VALID define. * 04-17-13 02.00.10 Added MPI25_RAID_ACTION_ADATA_ALLOW_PI. * 11-18-14 02.00.11 Updated copyright information. 
* -------------------------------------------------------------------------- */ #ifndef MPI2_RAID_H #define MPI2_RAID_H /***************************************************************************** * * Integrated RAID Messages * *****************************************************************************/ /**************************************************************************** * RAID Action messages ****************************************************************************/ /* ActionDataWord defines for use with MPI2_RAID_ACTION_CREATE_VOLUME action */ #define MPI25_RAID_ACTION_ADATA_ALLOW_PI (0x80000000) /* ActionDataWord defines for use with MPI2_RAID_ACTION_DELETE_VOLUME action */ #define MPI2_RAID_ACTION_ADATA_KEEP_LBA0 (0x00000000) #define MPI2_RAID_ACTION_ADATA_ZERO_LBA0 (0x00000001) /* use MPI2_RAIDVOL0_SETTING_ defines from mpi2_cnfg.h for MPI2_RAID_ACTION_CHANGE_VOL_WRITE_CACHE action */ /* ActionDataWord defines for use with MPI2_RAID_ACTION_DISABLE_ALL_VOLUMES action */ #define MPI2_RAID_ACTION_ADATA_DISABL_FULL_REBUILD (0x00000001) /* ActionDataWord for MPI2_RAID_ACTION_SET_RAID_FUNCTION_RATE Action */ typedef struct _MPI2_RAID_ACTION_RATE_DATA { U8 RateToChange; /* 0x00 */ U8 RateOrMode; /* 0x01 */ U16 DataScrubDuration; /* 0x02 */ } MPI2_RAID_ACTION_RATE_DATA, MPI2_POINTER PTR_MPI2_RAID_ACTION_RATE_DATA, Mpi2RaidActionRateData_t, MPI2_POINTER pMpi2RaidActionRateData_t; #define MPI2_RAID_ACTION_SET_RATE_RESYNC (0x00) #define MPI2_RAID_ACTION_SET_RATE_DATA_SCRUB (0x01) #define MPI2_RAID_ACTION_SET_RATE_POWERSAVE_MODE (0x02) /* ActionDataWord for MPI2_RAID_ACTION_START_RAID_FUNCTION Action */ typedef struct _MPI2_RAID_ACTION_START_RAID_FUNCTION { U8 RAIDFunction; /* 0x00 */ U8 Flags; /* 0x01 */ U16 Reserved1; /* 0x02 */ } MPI2_RAID_ACTION_START_RAID_FUNCTION, MPI2_POINTER PTR_MPI2_RAID_ACTION_START_RAID_FUNCTION, Mpi2RaidActionStartRaidFunction_t, MPI2_POINTER pMpi2RaidActionStartRaidFunction_t; /* defines for the RAIDFunction field */ #define 
MPI2_RAID_ACTION_START_BACKGROUND_INIT (0x00) #define MPI2_RAID_ACTION_START_ONLINE_CAP_EXPANSION (0x01) #define MPI2_RAID_ACTION_START_CONSISTENCY_CHECK (0x02) /* defines for the Flags field */ #define MPI2_RAID_ACTION_START_NEW (0x00) #define MPI2_RAID_ACTION_START_RESUME (0x01) /* ActionDataWord for MPI2_RAID_ACTION_STOP_RAID_FUNCTION Action */ typedef struct _MPI2_RAID_ACTION_STOP_RAID_FUNCTION { U8 RAIDFunction; /* 0x00 */ U8 Flags; /* 0x01 */ U16 Reserved1; /* 0x02 */ } MPI2_RAID_ACTION_STOP_RAID_FUNCTION, MPI2_POINTER PTR_MPI2_RAID_ACTION_STOP_RAID_FUNCTION, Mpi2RaidActionStopRaidFunction_t, MPI2_POINTER pMpi2RaidActionStopRaidFunction_t; /* defines for the RAIDFunction field */ #define MPI2_RAID_ACTION_STOP_BACKGROUND_INIT (0x00) #define MPI2_RAID_ACTION_STOP_ONLINE_CAP_EXPANSION (0x01) #define MPI2_RAID_ACTION_STOP_CONSISTENCY_CHECK (0x02) /* defines for the Flags field */ #define MPI2_RAID_ACTION_STOP_ABORT (0x00) #define MPI2_RAID_ACTION_STOP_PAUSE (0x01) /* ActionDataWord for MPI2_RAID_ACTION_CREATE_HOT_SPARE Action */ typedef struct _MPI2_RAID_ACTION_HOT_SPARE { U8 HotSparePool; /* 0x00 */ U8 Reserved1; /* 0x01 */ U16 DevHandle; /* 0x02 */ } MPI2_RAID_ACTION_HOT_SPARE, MPI2_POINTER PTR_MPI2_RAID_ACTION_HOT_SPARE, Mpi2RaidActionHotSpare_t, MPI2_POINTER pMpi2RaidActionHotSpare_t; /* ActionDataWord for MPI2_RAID_ACTION_DEVICE_FW_UPDATE_MODE Action */ typedef struct _MPI2_RAID_ACTION_FW_UPDATE_MODE { U8 Flags; /* 0x00 */ U8 DeviceFirmwareUpdateModeTimeout; /* 0x01 */ U16 Reserved1; /* 0x02 */ } MPI2_RAID_ACTION_FW_UPDATE_MODE, MPI2_POINTER PTR_MPI2_RAID_ACTION_FW_UPDATE_MODE, Mpi2RaidActionFwUpdateMode_t, MPI2_POINTER pMpi2RaidActionFwUpdateMode_t; /* ActionDataWord defines for use with MPI2_RAID_ACTION_DEVICE_FW_UPDATE_MODE action */ #define MPI2_RAID_ACTION_ADATA_DISABLE_FW_UPDATE (0x00) #define MPI2_RAID_ACTION_ADATA_ENABLE_FW_UPDATE (0x01) typedef union _MPI2_RAID_ACTION_DATA { U32 Word; MPI2_RAID_ACTION_RATE_DATA Rates; 
MPI2_RAID_ACTION_START_RAID_FUNCTION StartRaidFunction; MPI2_RAID_ACTION_STOP_RAID_FUNCTION StopRaidFunction; MPI2_RAID_ACTION_HOT_SPARE HotSpare; MPI2_RAID_ACTION_FW_UPDATE_MODE FwUpdateMode; } MPI2_RAID_ACTION_DATA, MPI2_POINTER PTR_MPI2_RAID_ACTION_DATA, Mpi2RaidActionData_t, MPI2_POINTER pMpi2RaidActionData_t; /* RAID Action Request Message */ typedef struct _MPI2_RAID_ACTION_REQUEST { U8 Action; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 VolDevHandle; /* 0x04 */ U8 PhysDiskNum; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved2; /* 0x0A */ U32 Reserved3; /* 0x0C */ MPI2_RAID_ACTION_DATA ActionDataWord; /* 0x10 */ MPI2_SGE_SIMPLE_UNION ActionDataSGE; /* 0x14 */ } MPI2_RAID_ACTION_REQUEST, MPI2_POINTER PTR_MPI2_RAID_ACTION_REQUEST, Mpi2RaidActionRequest_t, MPI2_POINTER pMpi2RaidActionRequest_t; /* RAID Action request Action values */ #define MPI2_RAID_ACTION_INDICATOR_STRUCT (0x01) #define MPI2_RAID_ACTION_CREATE_VOLUME (0x02) #define MPI2_RAID_ACTION_DELETE_VOLUME (0x03) #define MPI2_RAID_ACTION_DISABLE_ALL_VOLUMES (0x04) #define MPI2_RAID_ACTION_ENABLE_ALL_VOLUMES (0x05) #define MPI2_RAID_ACTION_PHYSDISK_OFFLINE (0x0A) #define MPI2_RAID_ACTION_PHYSDISK_ONLINE (0x0B) #define MPI2_RAID_ACTION_FAIL_PHYSDISK (0x0F) #define MPI2_RAID_ACTION_ACTIVATE_VOLUME (0x11) #define MPI2_RAID_ACTION_DEVICE_FW_UPDATE_MODE (0x15) #define MPI2_RAID_ACTION_CHANGE_VOL_WRITE_CACHE (0x17) #define MPI2_RAID_ACTION_SET_VOLUME_NAME (0x18) #define MPI2_RAID_ACTION_SET_RAID_FUNCTION_RATE (0x19) #define MPI2_RAID_ACTION_ENABLE_FAILED_VOLUME (0x1C) #define MPI2_RAID_ACTION_CREATE_HOT_SPARE (0x1D) #define MPI2_RAID_ACTION_DELETE_HOT_SPARE (0x1E) #define MPI2_RAID_ACTION_SYSTEM_SHUTDOWN_INITIATED (0x20) #define MPI2_RAID_ACTION_START_RAID_FUNCTION (0x21) #define MPI2_RAID_ACTION_STOP_RAID_FUNCTION (0x22) #define MPI2_RAID_ACTION_COMPATIBILITY_CHECK (0x23) #define MPI2_RAID_ACTION_PHYSDISK_HIDDEN (0x24) 
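The MPI2_RAID_ACTION_INDICATOR_STRUCT action (0x01) above returns an MPI2_RAID_VOL_INDICATOR structure (defined later in this header) in the reply's ActionData. A minimal sketch of turning its block counters into a progress percentage; the types, defines, and helper name here are local illustrative copies, not part of the header:

```c
#include <assert.h>
#include <stdint.h>

/* Local, illustrative copies of the types and defines from mpi2_raid.h;
 * a real consumer would include the header instead of redefining these. */
typedef uint32_t U32;
typedef uint64_t U64;

#define MPI2_RAID_VOL_FLAGS_OP_MASK    (0x0000000F)
#define MPI2_RAID_VOL_FLAGS_OP_RESYNC  (0x00000003)

typedef struct
{
    U64 TotalBlocks;      /* 0x00 */
    U64 BlocksRemaining;  /* 0x08 */
    U32 Flags;            /* 0x10 */
    U32 ElapsedSeconds;   /* 0x14 */
} Mpi2RaidVolIndicator_t;

/* Hypothetical helper: percent complete of the in-progress volume
 * operation, derived from the indicator's block counters. */
static unsigned
raid_indicator_percent(const Mpi2RaidVolIndicator_t *vi)
{
    if (vi->TotalBlocks == 0)
        return 100;
    return (unsigned)(((vi->TotalBlocks - vi->BlocksRemaining) * 100) /
        vi->TotalBlocks);
}
```

The operation type is recovered separately by masking Flags with MPI2_RAID_VOL_FLAGS_OP_MASK (resync, background init, consistency check, and so on).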
#define MPI2_RAID_ACTION_MIN_PRODUCT_SPECIFIC (0x80) #define MPI2_RAID_ACTION_MAX_PRODUCT_SPECIFIC (0xFF) /* RAID Volume Creation Structure */ /* * The following define can be customized for the targeted product. */ #ifndef MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS #define MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS (1) #endif typedef struct _MPI2_RAID_VOLUME_PHYSDISK { U8 RAIDSetNum; /* 0x00 */ U8 PhysDiskMap; /* 0x01 */ U16 PhysDiskDevHandle; /* 0x02 */ } MPI2_RAID_VOLUME_PHYSDISK, MPI2_POINTER PTR_MPI2_RAID_VOLUME_PHYSDISK, Mpi2RaidVolumePhysDisk_t, MPI2_POINTER pMpi2RaidVolumePhysDisk_t; /* defines for the PhysDiskMap field */ #define MPI2_RAIDACTION_PHYSDISK_PRIMARY (0x01) #define MPI2_RAIDACTION_PHYSDISK_SECONDARY (0x02) typedef struct _MPI2_RAID_VOLUME_CREATION_STRUCT { U8 NumPhysDisks; /* 0x00 */ U8 VolumeType; /* 0x01 */ U16 Reserved1; /* 0x02 */ U32 VolumeCreationFlags; /* 0x04 */ U32 VolumeSettings; /* 0x08 */ U8 Reserved2; /* 0x0C */ U8 ResyncRate; /* 0x0D */ U16 DataScrubDuration; /* 0x0E */ U64 VolumeMaxLBA; /* 0x10 */ U32 StripeSize; /* 0x18 */ U8 Name[16]; /* 0x1C */ MPI2_RAID_VOLUME_PHYSDISK PhysDisk[MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS];/* 0x2C */ } MPI2_RAID_VOLUME_CREATION_STRUCT, MPI2_POINTER PTR_MPI2_RAID_VOLUME_CREATION_STRUCT, Mpi2RaidVolumeCreationStruct_t, MPI2_POINTER pMpi2RaidVolumeCreationStruct_t; /* use MPI2_RAID_VOL_TYPE_ defines from mpi2_cnfg.h for VolumeType */ /* defines for the VolumeCreationFlags field */ #define MPI2_RAID_VOL_CREATION_DEFAULT_SETTINGS (0x80000000) #define MPI2_RAID_VOL_CREATION_BACKGROUND_INIT (0x00000004) /* MPI 2.0 only */ #define MPI2_RAID_VOL_CREATION_LOW_LEVEL_INIT (0x00000002) #define MPI2_RAID_VOL_CREATION_MIGRATE_DATA (0x00000001) /* The following is an obsolete define. * It must be shifted left 24 bits in order to set the proper bit. 
*/ #define MPI2_RAID_VOL_CREATION_USE_DEFAULT_SETTINGS (0x80) /* RAID Online Capacity Expansion Structure */ typedef struct _MPI2_RAID_ONLINE_CAPACITY_EXPANSION { U32 Flags; /* 0x00 */ U16 DevHandle0; /* 0x04 */ U16 Reserved1; /* 0x06 */ U16 DevHandle1; /* 0x08 */ U16 Reserved2; /* 0x0A */ } MPI2_RAID_ONLINE_CAPACITY_EXPANSION, MPI2_POINTER PTR_MPI2_RAID_ONLINE_CAPACITY_EXPANSION, Mpi2RaidOnlineCapacityExpansion_t, MPI2_POINTER pMpi2RaidOnlineCapacityExpansion_t; /* RAID Compatibility Input Structure */ typedef struct _MPI2_RAID_COMPATIBILITY_INPUT_STRUCT { U16 SourceDevHandle; /* 0x00 */ U16 CandidateDevHandle; /* 0x02 */ U32 Flags; /* 0x04 */ U32 Reserved1; /* 0x08 */ U32 Reserved2; /* 0x0C */ } MPI2_RAID_COMPATIBILITY_INPUT_STRUCT, MPI2_POINTER PTR_MPI2_RAID_COMPATIBILITY_INPUT_STRUCT, Mpi2RaidCompatibilityInputStruct_t, MPI2_POINTER pMpi2RaidCompatibilityInputStruct_t; /* defines for RAID Compatibility Structure Flags field */ #define MPI2_RAID_COMPAT_SOURCE_IS_VOLUME_FLAG (0x00000002) #define MPI2_RAID_COMPAT_REPORT_SOURCE_INFO_FLAG (0x00000001) /* RAID Volume Indicator Structure */ typedef struct _MPI2_RAID_VOL_INDICATOR { U64 TotalBlocks; /* 0x00 */ U64 BlocksRemaining; /* 0x08 */ U32 Flags; /* 0x10 */ U32 ElapsedSeconds; /* 0x14 */ } MPI2_RAID_VOL_INDICATOR, MPI2_POINTER PTR_MPI2_RAID_VOL_INDICATOR, Mpi2RaidVolIndicator_t, MPI2_POINTER pMpi2RaidVolIndicator_t; /* defines for RAID Volume Indicator Flags field */ #define MPI2_RAID_VOL_FLAGS_ELAPSED_SECONDS_VALID (0x80000000) #define MPI2_RAID_VOL_FLAGS_OP_MASK (0x0000000F) #define MPI2_RAID_VOL_FLAGS_OP_BACKGROUND_INIT (0x00000000) #define MPI2_RAID_VOL_FLAGS_OP_ONLINE_CAP_EXPANSION (0x00000001) #define MPI2_RAID_VOL_FLAGS_OP_CONSISTENCY_CHECK (0x00000002) #define MPI2_RAID_VOL_FLAGS_OP_RESYNC (0x00000003) #define MPI2_RAID_VOL_FLAGS_OP_MDC (0x00000004) /* RAID Compatibility Result Structure */ typedef struct _MPI2_RAID_COMPATIBILITY_RESULT_STRUCT { U8 State; /* 0x00 */ U8 Reserved1; /* 0x01 */ U16 Reserved2; 
/* 0x02 */ U32 GenericAttributes; /* 0x04 */ U32 OEMSpecificAttributes; /* 0x08 */ U32 Reserved3; /* 0x0C */ U32 Reserved4; /* 0x10 */ } MPI2_RAID_COMPATIBILITY_RESULT_STRUCT, MPI2_POINTER PTR_MPI2_RAID_COMPATIBILITY_RESULT_STRUCT, Mpi2RaidCompatibilityResultStruct_t, MPI2_POINTER pMpi2RaidCompatibilityResultStruct_t; /* defines for RAID Compatibility Result Structure State field */ #define MPI2_RAID_COMPAT_STATE_COMPATIBLE (0x00) #define MPI2_RAID_COMPAT_STATE_NOT_COMPATIBLE (0x01) /* defines for RAID Compatibility Result Structure GenericAttributes field */ #define MPI2_RAID_COMPAT_GENATTRIB_4K_SECTOR (0x00000010) #define MPI2_RAID_COMPAT_GENATTRIB_MEDIA_MASK (0x0000000C) #define MPI2_RAID_COMPAT_GENATTRIB_SOLID_STATE_DRIVE (0x00000008) #define MPI2_RAID_COMPAT_GENATTRIB_HARD_DISK_DRIVE (0x00000004) #define MPI2_RAID_COMPAT_GENATTRIB_PROTOCOL_MASK (0x00000003) #define MPI2_RAID_COMPAT_GENATTRIB_SAS_PROTOCOL (0x00000002) #define MPI2_RAID_COMPAT_GENATTRIB_SATA_PROTOCOL (0x00000001) /* RAID Action Reply ActionData union */ typedef union _MPI2_RAID_ACTION_REPLY_DATA { U32 Word[6]; MPI2_RAID_VOL_INDICATOR RaidVolumeIndicator; U16 VolDevHandle; U8 VolumeState; U8 PhysDiskNum; MPI2_RAID_COMPATIBILITY_RESULT_STRUCT RaidCompatibilityResult; } MPI2_RAID_ACTION_REPLY_DATA, MPI2_POINTER PTR_MPI2_RAID_ACTION_REPLY_DATA, Mpi2RaidActionReplyData_t, MPI2_POINTER pMpi2RaidActionReplyData_t; /* use MPI2_RAIDVOL0_SETTING_ defines from mpi2_cnfg.h for MPI2_RAID_ACTION_CHANGE_VOL_WRITE_CACHE action */ /* RAID Action Reply Message */ typedef struct _MPI2_RAID_ACTION_REPLY { U8 Action; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 VolDevHandle; /* 0x04 */ U8 PhysDiskNum; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved2; /* 0x0A */ U16 Reserved3; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ MPI2_RAID_ACTION_REPLY_DATA ActionData; /* 0x14 */ } MPI2_RAID_ACTION_REPLY, MPI2_POINTER 
PTR_MPI2_RAID_ACTION_REPLY, Mpi2RaidActionReply_t, MPI2_POINTER pMpi2RaidActionReply_t; #endif Index: releng/12.1/sys/dev/mpr/mpi/mpi2_sas.h =================================================================== --- releng/12.1/sys/dev/mpr/mpi/mpi2_sas.h (revision 352760) +++ releng/12.1/sys/dev/mpr/mpi/mpi2_sas.h (revision 352761) @@ -1,355 +1,351 @@ /*- - * Copyright (c) 2012-2015 LSI Corp. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ /* - * Copyright (c) 2000-2015 LSI Corporation. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * * Name: mpi2_sas.h * Title: MPI Serial Attached SCSI structures and definitions * Creation Date: February 9, 2007 * * mpi2_sas.h Version: 02.00.10 * * NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25 * prefix are for use only on MPI v2.5 products, and must not be used * with MPI v2.0 products. Unless otherwise noted, names beginning with * MPI2 or Mpi2 are for use with both MPI v2.0 and MPI v2.5 products. * * Version History * --------------- * * Date Version Description * -------- -------- ------------------------------------------------------ * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. * 06-26-07 02.00.01 Added Clear All Persistent Operation to SAS IO Unit * Control Request. * 10-02-08 02.00.02 Added Set IOC Parameter Operation to SAS IO Unit Control * Request. * 10-28-09 02.00.03 Changed the type of SGL in MPI2_SATA_PASSTHROUGH_REQUEST * to MPI2_SGE_IO_UNION since it supports chained SGLs. * 05-12-10 02.00.04 Modified some comments. * 08-11-10 02.00.05 Added NCQ operations to SAS IO Unit Control. * 11-18-11 02.00.06 Incorporating additions for MPI v2.5. * 07-10-12 02.00.07 Added MPI2_SATA_PT_SGE_UNION for use in the SATA * Passthrough Request message. * 08-19-13 02.00.08 Made MPI2_SAS_OP_TRANSMIT_PORT_SELECT_SIGNAL obsolete * for anything newer than MPI v2.0. * 11-18-14 02.00.09 Updated copyright information. * 03-16-15 02.00.10 Updated for MPI v2.6. * Added MPI2_SATA_PT_REQ_PT_FLAGS_FPDMA. * -------------------------------------------------------------------------- */ #ifndef MPI2_SAS_H #define MPI2_SAS_H /* * Values for SASStatus. 
 */
#define MPI2_SASSTATUS_SUCCESS                          (0x00)
#define MPI2_SASSTATUS_UNKNOWN_ERROR                    (0x01)
#define MPI2_SASSTATUS_INVALID_FRAME                    (0x02)
#define MPI2_SASSTATUS_UTC_BAD_DEST                     (0x03)
#define MPI2_SASSTATUS_UTC_BREAK_RECEIVED               (0x04)
#define MPI2_SASSTATUS_UTC_CONNECT_RATE_NOT_SUPPORTED   (0x05)
#define MPI2_SASSTATUS_UTC_PORT_LAYER_REQUEST           (0x06)
#define MPI2_SASSTATUS_UTC_PROTOCOL_NOT_SUPPORTED       (0x07)
#define MPI2_SASSTATUS_UTC_STP_RESOURCES_BUSY           (0x08)
#define MPI2_SASSTATUS_UTC_WRONG_DESTINATION            (0x09)
#define MPI2_SASSTATUS_SHORT_INFORMATION_UNIT           (0x0A)
#define MPI2_SASSTATUS_LONG_INFORMATION_UNIT            (0x0B)
#define MPI2_SASSTATUS_XFER_RDY_INCORRECT_WRITE_DATA    (0x0C)
#define MPI2_SASSTATUS_XFER_RDY_REQUEST_OFFSET_ERROR    (0x0D)
#define MPI2_SASSTATUS_XFER_RDY_NOT_EXPECTED            (0x0E)
#define MPI2_SASSTATUS_DATA_INCORRECT_DATA_LENGTH       (0x0F)
#define MPI2_SASSTATUS_DATA_TOO_MUCH_READ_DATA          (0x10)
#define MPI2_SASSTATUS_DATA_OFFSET_ERROR                (0x11)
#define MPI2_SASSTATUS_SDSF_NAK_RECEIVED                (0x12)
#define MPI2_SASSTATUS_SDSF_CONNECTION_FAILED           (0x13)
#define MPI2_SASSTATUS_INITIATOR_RESPONSE_TIMEOUT       (0x14)

/*
 * Values for the SAS DeviceInfo field used in SAS Device Status Change Event
 * data and SAS Configuration pages.
 */
#define MPI2_SAS_DEVICE_INFO_SEP                (0x00004000)
#define MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE       (0x00002000)
#define MPI2_SAS_DEVICE_INFO_LSI_DEVICE         (0x00001000)
#define MPI2_SAS_DEVICE_INFO_DIRECT_ATTACH      (0x00000800)
#define MPI2_SAS_DEVICE_INFO_SSP_TARGET         (0x00000400)
#define MPI2_SAS_DEVICE_INFO_STP_TARGET         (0x00000200)
#define MPI2_SAS_DEVICE_INFO_SMP_TARGET         (0x00000100)
#define MPI2_SAS_DEVICE_INFO_SATA_DEVICE        (0x00000080)
#define MPI2_SAS_DEVICE_INFO_SSP_INITIATOR      (0x00000040)
#define MPI2_SAS_DEVICE_INFO_STP_INITIATOR      (0x00000020)
#define MPI2_SAS_DEVICE_INFO_SMP_INITIATOR      (0x00000010)
#define MPI2_SAS_DEVICE_INFO_SATA_HOST          (0x00000008)

#define MPI2_SAS_DEVICE_INFO_MASK_DEVICE_TYPE   (0x00000007)
#define MPI2_SAS_DEVICE_INFO_NO_DEVICE          (0x00000000)
#define MPI2_SAS_DEVICE_INFO_END_DEVICE         (0x00000001)
#define MPI2_SAS_DEVICE_INFO_EDGE_EXPANDER      (0x00000002)
#define MPI2_SAS_DEVICE_INFO_FANOUT_EXPANDER    (0x00000003)

/*****************************************************************************
*
*        SAS Messages
*
*****************************************************************************/

/****************************************************************************
*  SMP Passthrough messages
****************************************************************************/

/* SMP Passthrough Request Message */
typedef struct _MPI2_SMP_PASSTHROUGH_REQUEST
{
    U8                      PassthroughFlags;   /* 0x00 */
    U8                      PhysicalPort;       /* 0x01 */
    U8                      ChainOffset;        /* 0x02 */
    U8                      Function;           /* 0x03 */
    U16                     RequestDataLength;  /* 0x04 */
    U8                      SGLFlags;           /* 0x06 */ /* MPI v2.0 only. Reserved on MPI v2.5.
*/ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved1; /* 0x0A */ U32 Reserved2; /* 0x0C */ U64 SASAddress; /* 0x10 */ U32 Reserved3; /* 0x18 */ U32 Reserved4; /* 0x1C */ MPI2_SIMPLE_SGE_UNION SGL; /* 0x20 */ /* MPI v2.5: IEEE Simple 64 elements only */ } MPI2_SMP_PASSTHROUGH_REQUEST, MPI2_POINTER PTR_MPI2_SMP_PASSTHROUGH_REQUEST, Mpi2SmpPassthroughRequest_t, MPI2_POINTER pMpi2SmpPassthroughRequest_t; /* values for PassthroughFlags field */ #define MPI2_SMP_PT_REQ_PT_FLAGS_IMMEDIATE (0x80) /* MPI v2.0: use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */ /* SMP Passthrough Reply Message */ typedef struct _MPI2_SMP_PASSTHROUGH_REPLY { U8 PassthroughFlags; /* 0x00 */ U8 PhysicalPort; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 ResponseDataLength; /* 0x04 */ U8 SGLFlags; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved1; /* 0x0A */ U8 Reserved2; /* 0x0C */ U8 SASStatus; /* 0x0D */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ U32 Reserved3; /* 0x14 */ U8 ResponseData[4]; /* 0x18 */ } MPI2_SMP_PASSTHROUGH_REPLY, MPI2_POINTER PTR_MPI2_SMP_PASSTHROUGH_REPLY, Mpi2SmpPassthroughReply_t, MPI2_POINTER pMpi2SmpPassthroughReply_t; /* values for PassthroughFlags field */ #define MPI2_SMP_PT_REPLY_PT_FLAGS_IMMEDIATE (0x80) /* values for SASStatus field are at the top of this file */ /**************************************************************************** * SATA Passthrough messages ****************************************************************************/ typedef union _MPI2_SATA_PT_SGE_UNION { MPI2_SGE_SIMPLE_UNION MpiSimple; /* MPI v2.0 only */ MPI2_SGE_CHAIN_UNION MpiChain; /* MPI v2.0 only */ MPI2_IEEE_SGE_SIMPLE_UNION IeeeSimple; MPI2_IEEE_SGE_CHAIN_UNION IeeeChain; /* MPI v2.0 only */ MPI25_IEEE_SGE_CHAIN64 IeeeChain64; /* MPI v2.5 only */ } MPI2_SATA_PT_SGE_UNION, MPI2_POINTER PTR_MPI2_SATA_PT_SGE_UNION, Mpi2SataPTSGEUnion_t, MPI2_POINTER 
  pMpi2SataPTSGEUnion_t;

/* SATA Passthrough Request Message */
typedef struct _MPI2_SATA_PASSTHROUGH_REQUEST
{
    U16                     DevHandle;          /* 0x00 */
    U8                      ChainOffset;        /* 0x02 */
    U8                      Function;           /* 0x03 */
    U16                     PassthroughFlags;   /* 0x04 */
    U8                      SGLFlags;           /* 0x06 */ /* MPI v2.0 only. Reserved on MPI v2.5. */
    U8                      MsgFlags;           /* 0x07 */
    U8                      VP_ID;              /* 0x08 */
    U8                      VF_ID;              /* 0x09 */
    U16                     Reserved1;          /* 0x0A */
    U32                     Reserved2;          /* 0x0C */
    U32                     Reserved3;          /* 0x10 */
    U32                     Reserved4;          /* 0x14 */
    U32                     DataLength;         /* 0x18 */
    U8                      CommandFIS[20];     /* 0x1C */
    MPI2_SATA_PT_SGE_UNION  SGL;                /* 0x30 */ /* MPI v2.5: IEEE 64 elements only */
} MPI2_SATA_PASSTHROUGH_REQUEST, MPI2_POINTER PTR_MPI2_SATA_PASSTHROUGH_REQUEST,
  Mpi2SataPassthroughRequest_t, MPI2_POINTER pMpi2SataPassthroughRequest_t;

/* values for PassthroughFlags field */
#define MPI2_SATA_PT_REQ_PT_FLAGS_EXECUTE_DIAG      (0x0100)
#define MPI2_SATA_PT_REQ_PT_FLAGS_FPDMA             (0x0040) /* MPI v2.6 and newer */
#define MPI2_SATA_PT_REQ_PT_FLAGS_DMA               (0x0020)
#define MPI2_SATA_PT_REQ_PT_FLAGS_PIO               (0x0010)
#define MPI2_SATA_PT_REQ_PT_FLAGS_UNSPECIFIED_VU    (0x0004)
#define MPI2_SATA_PT_REQ_PT_FLAGS_WRITE             (0x0002)
#define MPI2_SATA_PT_REQ_PT_FLAGS_READ              (0x0001)

/* MPI v2.0: use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */

/* SATA Passthrough Reply Message */
typedef struct _MPI2_SATA_PASSTHROUGH_REPLY
{
    U16                     DevHandle;          /* 0x00 */
    U8                      MsgLength;          /* 0x02 */
    U8                      Function;           /* 0x03 */
    U16                     PassthroughFlags;   /* 0x04 */
    U8                      SGLFlags;           /* 0x06 */
    U8                      MsgFlags;           /* 0x07 */
    U8                      VP_ID;              /* 0x08 */
    U8                      VF_ID;              /* 0x09 */
    U16                     Reserved1;          /* 0x0A */
    U8                      Reserved2;          /* 0x0C */
    U8                      SASStatus;          /* 0x0D */
    U16                     IOCStatus;          /* 0x0E */
    U32                     IOCLogInfo;         /* 0x10 */
    U8                      StatusFIS[20];      /* 0x14 */
    U32                     StatusControlRegisters; /* 0x28 */
    U32                     TransferCount;      /* 0x2C */
} MPI2_SATA_PASSTHROUGH_REPLY, MPI2_POINTER PTR_MPI2_SATA_PASSTHROUGH_REPLY,
  Mpi2SataPassthroughReply_t, MPI2_POINTER pMpi2SataPassthroughReply_t;

/* values for SASStatus field are at the top of this file */
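As a quick sanity check on how the SATA PassthroughFlags bits combine, here is a minimal sketch; the helper function is invented for illustration, and the flag values are local copies of the defines in this header:

```c
#include <assert.h>
#include <stdint.h>

/* Local copies of the PassthroughFlags bits from mpi2_sas.h (illustration
 * only; a real driver includes the header). */
#define MPI2_SATA_PT_REQ_PT_FLAGS_DMA    (0x0020)
#define MPI2_SATA_PT_REQ_PT_FLAGS_PIO    (0x0010)
#define MPI2_SATA_PT_REQ_PT_FLAGS_WRITE  (0x0002)
#define MPI2_SATA_PT_REQ_PT_FLAGS_READ   (0x0001)

/* Hypothetical helper: compose PassthroughFlags for a simple data transfer.
 * Exactly one protocol bit (DMA or PIO) and one direction bit is set. */
static uint16_t
sata_pt_flags(int is_write, int use_dma)
{
    uint16_t flags = use_dma ? MPI2_SATA_PT_REQ_PT_FLAGS_DMA
                             : MPI2_SATA_PT_REQ_PT_FLAGS_PIO;
    flags |= is_write ? MPI2_SATA_PT_REQ_PT_FLAGS_WRITE
                      : MPI2_SATA_PT_REQ_PT_FLAGS_READ;
    return flags;
}
```

The composed value goes into the PassthroughFlags field of MPI2_SATA_PASSTHROUGH_REQUEST alongside the 20-byte CommandFIS.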
/**************************************************************************** * SAS IO Unit Control messages * (MPI v2.5 and earlier only. * Replaced by IO Unit Control messages in MPI v2.6 and later.) ****************************************************************************/ /* SAS IO Unit Control Request Message */ typedef struct _MPI2_SAS_IOUNIT_CONTROL_REQUEST { U8 Operation; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 DevHandle; /* 0x04 */ U8 IOCParameter; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 Reserved4; /* 0x0C */ U8 PhyNum; /* 0x0E */ U8 PrimFlags; /* 0x0F */ U32 Primitive; /* 0x10 */ U8 LookupMethod; /* 0x14 */ U8 Reserved5; /* 0x15 */ U16 SlotNumber; /* 0x16 */ U64 LookupAddress; /* 0x18 */ U32 IOCParameterValue; /* 0x20 */ U32 Reserved7; /* 0x24 */ U32 Reserved8; /* 0x28 */ } MPI2_SAS_IOUNIT_CONTROL_REQUEST, MPI2_POINTER PTR_MPI2_SAS_IOUNIT_CONTROL_REQUEST, Mpi2SasIoUnitControlRequest_t, MPI2_POINTER pMpi2SasIoUnitControlRequest_t; /* values for the Operation field */ #define MPI2_SAS_OP_CLEAR_ALL_PERSISTENT (0x02) #define MPI2_SAS_OP_PHY_LINK_RESET (0x06) #define MPI2_SAS_OP_PHY_HARD_RESET (0x07) #define MPI2_SAS_OP_PHY_CLEAR_ERROR_LOG (0x08) #define MPI2_SAS_OP_SEND_PRIMITIVE (0x0A) #define MPI2_SAS_OP_FORCE_FULL_DISCOVERY (0x0B) #define MPI2_SAS_OP_TRANSMIT_PORT_SELECT_SIGNAL (0x0C) /* MPI v2.0 only */ #define MPI2_SAS_OP_REMOVE_DEVICE (0x0D) #define MPI2_SAS_OP_LOOKUP_MAPPING (0x0E) #define MPI2_SAS_OP_SET_IOC_PARAMETER (0x0F) #define MPI25_SAS_OP_ENABLE_FP_DEVICE (0x10) #define MPI25_SAS_OP_DISABLE_FP_DEVICE (0x11) #define MPI25_SAS_OP_ENABLE_FP_ALL (0x12) #define MPI25_SAS_OP_DISABLE_FP_ALL (0x13) #define MPI2_SAS_OP_DEV_ENABLE_NCQ (0x14) #define MPI2_SAS_OP_DEV_DISABLE_NCQ (0x15) #define MPI2_SAS_OP_PRODUCT_SPECIFIC_MIN (0x80) /* values for the PrimFlags field */ #define MPI2_SAS_PRIMFLAGS_SINGLE (0x08) #define 
MPI2_SAS_PRIMFLAGS_TRIPLE (0x02) #define MPI2_SAS_PRIMFLAGS_REDUNDANT (0x01) /* values for the LookupMethod field */ #define MPI2_SAS_LOOKUP_METHOD_SAS_ADDRESS (0x01) #define MPI2_SAS_LOOKUP_METHOD_SAS_ENCLOSURE_SLOT (0x02) #define MPI2_SAS_LOOKUP_METHOD_SAS_DEVICE_NAME (0x03) /* SAS IO Unit Control Reply Message */ typedef struct _MPI2_SAS_IOUNIT_CONTROL_REPLY { U8 Operation; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 DevHandle; /* 0x04 */ U8 IOCParameter; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 Reserved4; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ } MPI2_SAS_IOUNIT_CONTROL_REPLY, MPI2_POINTER PTR_MPI2_SAS_IOUNIT_CONTROL_REPLY, Mpi2SasIoUnitControlReply_t, MPI2_POINTER pMpi2SasIoUnitControlReply_t; #endif Index: releng/12.1/sys/dev/mpr/mpi/mpi2_targ.h =================================================================== --- releng/12.1/sys/dev/mpr/mpi/mpi2_targ.h (revision 352760) +++ releng/12.1/sys/dev/mpr/mpi/mpi2_targ.h (revision 352761) @@ -1,611 +1,607 @@ /*- - * Copyright (c) 2012-2015 LSI Corp. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ /* - * Copyright (c) 2000-2015 LSI Corporation. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * * Name: mpi2_targ.h * Title: MPI Target mode messages and structures * Creation Date: September 8, 2006 * * mpi2_targ.h Version: 02.00.09 * * NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25 * prefix are for use only on MPI v2.5 products, and must not be used * with MPI v2.0 products. Unless otherwise noted, names beginning with * MPI2 or Mpi2 are for use with both MPI v2.0 and MPI v2.5 products. * * Version History * --------------- * * Date Version Description * -------- -------- ------------------------------------------------------ * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. * 08-31-07 02.00.01 Added Command Buffer Data Location Address Space bits to * BufferPostFlags field of CommandBufferPostBase Request. * 02-29-08 02.00.02 Modified various names to make them 32-character unique. 
* 10-02-08 02.00.03 Removed NextCmdBufferOffset from * MPI2_TARGET_CMD_BUF_POST_BASE_REQUEST. * Target Status Send Request only takes a single SGE for * response data. * 02-10-10 02.00.04 Added comment to MPI2_TARGET_SSP_RSP_IU structure. * 11-18-11 02.00.05 Incorporating additions for MPI v2.5. * 11-27-12 02.00.06 Added InitiatorDevHandle field to MPI2_TARGET_MODE_ABORT * request message structure. * Added AbortType MPI2_TARGET_MODE_ABORT_DEVHANDLE and * MPI2_TARGET_MODE_ABORT_ALL_COMMANDS. * 06-13-14 02.00.07 Added MinMSIxIndex and MaxMSIxIndex fields to * MPI2_TARGET_CMD_BUF_POST_BASE_REQUEST. * 11-18-14 02.00.08 Updated copyright information. * 03-16-15 02.00.09 Updated for MPI v2.6. * Added MPI26_TARGET_ASSIST_IOFLAGS_ESCAPE_PASSTHROUGH. * -------------------------------------------------------------------------- */ #ifndef MPI2_TARG_H #define MPI2_TARG_H /****************************************************************************** * * SCSI Target Messages * *******************************************************************************/ /**************************************************************************** * Target Command Buffer Post Base Request ****************************************************************************/ typedef struct _MPI2_TARGET_CMD_BUF_POST_BASE_REQUEST { U8 BufferPostFlags; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 TotalCmdBuffers; /* 0x04 */ U8 Reserved; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved2; /* 0x0A */ U32 Reserved3; /* 0x0C */ U16 CmdBufferLength; /* 0x10 */ U8 MinMSIxIndex; /* 0x12 */ /* MPI 2.5 and newer only; Reserved in MPI 2.0 */ U8 MaxMSIxIndex; /* 0x13 */ /* MPI 2.5 and newer only; Reserved in MPI 2.0 */ U32 BaseAddressLow; /* 0x14 */ U32 BaseAddressHigh; /* 0x18 */ } MPI2_TARGET_CMD_BUF_POST_BASE_REQUEST, MPI2_POINTER PTR_MPI2_TARGET_CMD_BUF_POST_BASE_REQUEST, Mpi2TargetCmdBufferPostBaseRequest_t, MPI2_POINTER 
pMpi2TargetCmdBufferPostBaseRequest_t; /* values for the BufferPostflags field */ #define MPI2_CMD_BUF_POST_BASE_ADDRESS_SPACE_MASK (0x0C) #define MPI2_CMD_BUF_POST_BASE_SYSTEM_ADDRESS_SPACE (0x00) #define MPI2_CMD_BUF_POST_BASE_IOCDDR_ADDRESS_SPACE (0x04) #define MPI2_CMD_BUF_POST_BASE_IOCPLB_ADDRESS_SPACE (0x08) /* only for MPI v2.5 and earlier */ #define MPI26_CMD_BUF_POST_BASE_IOCCTL_ADDRESS_SPACE (0x08) /* for MPI v2.6 only */ #define MPI2_CMD_BUF_POST_BASE_IOCPLBNTA_ADDRESS_SPACE (0x0C) /* only for MPI v2.5 and earlier */ #define MPI2_CMD_BUF_POST_BASE_FLAGS_AUTO_POST_ALL (0x01) /**************************************************************************** * Target Command Buffer Post List Request ****************************************************************************/ typedef struct _MPI2_TARGET_CMD_BUF_POST_LIST_REQUEST { U16 Reserved; /* 0x00 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 CmdBufferCount; /* 0x04 */ U8 Reserved1; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved2; /* 0x0A */ U32 Reserved3; /* 0x0C */ U16 IoIndex[2]; /* 0x10 */ } MPI2_TARGET_CMD_BUF_POST_LIST_REQUEST, MPI2_POINTER PTR_MPI2_TARGET_CMD_BUF_POST_LIST_REQUEST, Mpi2TargetCmdBufferPostListRequest_t, MPI2_POINTER pMpi2TargetCmdBufferPostListRequest_t; /**************************************************************************** * Target Command Buffer Post Base List Reply ****************************************************************************/ typedef struct _MPI2_TARGET_BUF_POST_BASE_LIST_REPLY { U8 Flags; /* 0x00 */ U8 Reserved; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved1; /* 0x04 */ U8 Reserved2; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 Reserved4; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ U16 IoIndex; /* 0x14 */ U16 Reserved5; /* 0x16 */ U32 Reserved6; /* 0x18 */ } MPI2_TARGET_BUF_POST_BASE_LIST_REPLY, 
MPI2_POINTER PTR_MPI2_TARGET_BUF_POST_BASE_LIST_REPLY, Mpi2TargetCmdBufferPostBaseListReply_t, MPI2_POINTER pMpi2TargetCmdBufferPostBaseListReply_t; /* Flags defines */ #define MPI2_CMD_BUF_POST_REPLY_IOINDEX_VALID (0x01) /**************************************************************************** * Command Buffer Formats (with 16 byte CDB) ****************************************************************************/ typedef struct _MPI2_TARGET_SSP_CMD_BUFFER { U8 FrameType; /* 0x00 */ U8 Reserved1; /* 0x01 */ U16 InitiatorConnectionTag; /* 0x02 */ U32 HashedSourceSASAddress; /* 0x04 */ U16 Reserved2; /* 0x08 */ U16 Flags; /* 0x0A */ U32 Reserved3; /* 0x0C */ U16 Tag; /* 0x10 */ U16 TargetPortTransferTag; /* 0x12 */ U32 DataOffset; /* 0x14 */ /* COMMAND information unit starts here */ U8 LogicalUnitNumber[8]; /* 0x18 */ U8 Reserved4; /* 0x20 */ U8 TaskAttribute; /* lower 3 bits */ /* 0x21 */ U8 Reserved5; /* 0x22 */ U8 AdditionalCDBLength; /* upper 5 bits */ /* 0x23 */ U8 CDB[16]; /* 0x24 */ /* Additional CDB bytes extend past the CDB field */ } MPI2_TARGET_SSP_CMD_BUFFER, MPI2_POINTER PTR_MPI2_TARGET_SSP_CMD_BUFFER, Mpi2TargetSspCmdBuffer, MPI2_POINTER pMp2iTargetSspCmdBuffer; typedef struct _MPI2_TARGET_SSP_TASK_BUFFER { U8 FrameType; /* 0x00 */ U8 Reserved1; /* 0x01 */ U16 InitiatorConnectionTag; /* 0x02 */ U32 HashedSourceSASAddress; /* 0x04 */ U16 Reserved2; /* 0x08 */ U16 Flags; /* 0x0A */ U32 Reserved3; /* 0x0C */ U16 Tag; /* 0x10 */ U16 TargetPortTransferTag; /* 0x12 */ U32 DataOffset; /* 0x14 */ /* TASK information unit starts here */ U8 LogicalUnitNumber[8]; /* 0x18 */ U16 Reserved4; /* 0x20 */ U8 TaskManagementFunction; /* 0x22 */ U8 Reserved5; /* 0x23 */ U16 ManagedTaskTag; /* 0x24 */ U16 Reserved6; /* 0x26 */ U32 Reserved7; /* 0x28 */ U32 Reserved8; /* 0x2C */ U32 Reserved9; /* 0x30 */ } MPI2_TARGET_SSP_TASK_BUFFER, MPI2_POINTER PTR_MPI2_TARGET_SSP_TASK_BUFFER, Mpi2TargetSspTaskBuffer, MPI2_POINTER pMpi2TargetSspTaskBuffer; /* mask and shift for 
HashedSourceSASAddress field */ #define MPI2_TARGET_HASHED_SAS_ADDRESS_MASK (0xFFFFFF00) #define MPI2_TARGET_HASHED_SAS_ADDRESS_SHIFT (8) /**************************************************************************** * MPI v2.0 Target Assist Request ****************************************************************************/ typedef struct _MPI2_TARGET_ASSIST_REQUEST { U8 Reserved1; /* 0x00 */ U8 TargetAssistFlags; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 QueueTag; /* 0x04 */ U8 Reserved2; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 IoIndex; /* 0x0C */ U16 InitiatorConnectionTag; /* 0x0E */ U16 SGLFlags; /* 0x10 */ U8 SequenceNumber; /* 0x12 */ U8 Reserved4; /* 0x13 */ U8 SGLOffset0; /* 0x14 */ U8 SGLOffset1; /* 0x15 */ U8 SGLOffset2; /* 0x16 */ U8 SGLOffset3; /* 0x17 */ U32 SkipCount; /* 0x18 */ U32 DataLength; /* 0x1C */ U32 BidirectionalDataLength; /* 0x20 */ U16 IoFlags; /* 0x24 */ U16 EEDPFlags; /* 0x26 */ U32 EEDPBlockSize; /* 0x28 */ U32 SecondaryReferenceTag; /* 0x2C */ U16 SecondaryApplicationTag; /* 0x30 */ U16 ApplicationTagTranslationMask; /* 0x32 */ U32 PrimaryReferenceTag; /* 0x34 */ U16 PrimaryApplicationTag; /* 0x38 */ U16 PrimaryApplicationTagMask; /* 0x3A */ U32 RelativeOffset; /* 0x3C */ U32 Reserved5; /* 0x40 */ U32 Reserved6; /* 0x44 */ U32 Reserved7; /* 0x48 */ U32 Reserved8; /* 0x4C */ MPI2_SGE_IO_UNION SGL[1]; /* 0x50 */ } MPI2_TARGET_ASSIST_REQUEST, MPI2_POINTER PTR_MPI2_TARGET_ASSIST_REQUEST, Mpi2TargetAssistRequest_t, MPI2_POINTER pMpi2TargetAssistRequest_t; /* Target Assist TargetAssistFlags bits */ #define MPI2_TARGET_ASSIST_FLAGS_REPOST_CMD_BUFFER (0x80) #define MPI2_TARGET_ASSIST_FLAGS_TLR (0x10) #define MPI2_TARGET_ASSIST_FLAGS_RETRANSMIT (0x04) #define MPI2_TARGET_ASSIST_FLAGS_AUTO_STATUS (0x02) #define MPI2_TARGET_ASSIST_FLAGS_DATA_DIRECTION (0x01) /* Target Assist SGLFlags bits */ /* base values for Data Location Address Space */ #define 
MPI2_TARGET_ASSIST_SGLFLAGS_ADDR_MASK (0x0C) #define MPI2_TARGET_ASSIST_SGLFLAGS_SYSTEM_ADDR (0x00) #define MPI2_TARGET_ASSIST_SGLFLAGS_IOCDDR_ADDR (0x04) #define MPI2_TARGET_ASSIST_SGLFLAGS_IOCPLB_ADDR (0x08) #define MPI2_TARGET_ASSIST_SGLFLAGS_PLBNTA_ADDR (0x0C) /* base values for Type */ #define MPI2_TARGET_ASSIST_SGLFLAGS_TYPE_MASK (0x03) #define MPI2_TARGET_ASSIST_SGLFLAGS_MPI_TYPE (0x00) #define MPI2_TARGET_ASSIST_SGLFLAGS_32IEEE_TYPE (0x01) #define MPI2_TARGET_ASSIST_SGLFLAGS_64IEEE_TYPE (0x02) /* shift values for each sub-field */ #define MPI2_TARGET_ASSIST_SGLFLAGS_SGL3_SHIFT (12) #define MPI2_TARGET_ASSIST_SGLFLAGS_SGL2_SHIFT (8) #define MPI2_TARGET_ASSIST_SGLFLAGS_SGL1_SHIFT (4) #define MPI2_TARGET_ASSIST_SGLFLAGS_SGL0_SHIFT (0) /* Target Assist IoFlags bits */ #define MPI2_TARGET_ASSIST_IOFLAGS_BIDIRECTIONAL (0x0800) #define MPI2_TARGET_ASSIST_IOFLAGS_MULTICAST (0x0400) #define MPI2_TARGET_ASSIST_IOFLAGS_RECEIVE_FIRST (0x0200) /* Target Assist EEDPFlags bits */ #define MPI2_TA_EEDPFLAGS_INC_PRI_REFTAG (0x8000) #define MPI2_TA_EEDPFLAGS_INC_SEC_REFTAG (0x4000) #define MPI2_TA_EEDPFLAGS_INC_PRI_APPTAG (0x2000) #define MPI2_TA_EEDPFLAGS_INC_SEC_APPTAG (0x1000) #define MPI2_TA_EEDPFLAGS_CHECK_REFTAG (0x0400) #define MPI2_TA_EEDPFLAGS_CHECK_APPTAG (0x0200) #define MPI2_TA_EEDPFLAGS_CHECK_GUARD (0x0100) #define MPI2_TA_EEDPFLAGS_PASSTHRU_REFTAG (0x0008) #define MPI2_TA_EEDPFLAGS_MASK_OP (0x0007) #define MPI2_TA_EEDPFLAGS_NOOP_OP (0x0000) #define MPI2_TA_EEDPFLAGS_CHECK_OP (0x0001) #define MPI2_TA_EEDPFLAGS_STRIP_OP (0x0002) #define MPI2_TA_EEDPFLAGS_CHECK_REMOVE_OP (0x0003) #define MPI2_TA_EEDPFLAGS_INSERT_OP (0x0004) #define MPI2_TA_EEDPFLAGS_REPLACE_OP (0x0006) #define MPI2_TA_EEDPFLAGS_CHECK_REGEN_OP (0x0007) /**************************************************************************** * MPI v2.5 Target Assist Request ****************************************************************************/ typedef struct _MPI25_TARGET_ASSIST_REQUEST { U8 Reserved1; /* 
0x00 */ U8 TargetAssistFlags; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 QueueTag; /* 0x04 */ U8 Reserved2; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 IoIndex; /* 0x0C */ U16 InitiatorConnectionTag; /* 0x0E */ U8 DMAFlags; /* 0x10 */ U8 Reserved9; /* 0x11 */ U8 SequenceNumber; /* 0x12 */ U8 Reserved4; /* 0x13 */ U8 SGLOffset0; /* 0x14 */ U8 SGLOffset1; /* 0x15 */ U8 SGLOffset2; /* 0x16 */ U8 SGLOffset3; /* 0x17 */ U32 SkipCount; /* 0x18 */ U32 DataLength; /* 0x1C */ U32 BidirectionalDataLength; /* 0x20 */ U16 IoFlags; /* 0x24 */ U16 EEDPFlags; /* 0x26 */ U16 EEDPBlockSize; /* 0x28 */ U16 Reserved10; /* 0x2A */ U32 SecondaryReferenceTag; /* 0x2C */ U16 SecondaryApplicationTag; /* 0x30 */ U16 ApplicationTagTranslationMask; /* 0x32 */ U32 PrimaryReferenceTag; /* 0x34 */ U16 PrimaryApplicationTag; /* 0x38 */ U16 PrimaryApplicationTagMask; /* 0x3A */ U32 RelativeOffset; /* 0x3C */ U32 Reserved5; /* 0x40 */ U32 Reserved6; /* 0x44 */ U32 Reserved7; /* 0x48 */ U32 Reserved8; /* 0x4C */ MPI25_SGE_IO_UNION SGL; /* 0x50 */ } MPI25_TARGET_ASSIST_REQUEST, MPI2_POINTER PTR_MPI25_TARGET_ASSIST_REQUEST, Mpi25TargetAssistRequest_t, MPI2_POINTER pMpi25TargetAssistRequest_t; /* use MPI2_TARGET_ASSIST_FLAGS_ defines for the Flags field */ /* Defines for the DMAFlags field * Each setting affects 4 SGLS, from SGL0 to SGL3. 
* D = Data * C = Cache DIF * I = Interleaved * H = Host DIF */ #define MPI25_TA_DMAFLAGS_OP_MASK (0x0F) #define MPI25_TA_DMAFLAGS_OP_D_D_D_D (0x00) #define MPI25_TA_DMAFLAGS_OP_D_D_D_C (0x01) #define MPI25_TA_DMAFLAGS_OP_D_D_D_I (0x02) #define MPI25_TA_DMAFLAGS_OP_D_D_C_C (0x03) #define MPI25_TA_DMAFLAGS_OP_D_D_C_I (0x04) #define MPI25_TA_DMAFLAGS_OP_D_D_I_I (0x05) #define MPI25_TA_DMAFLAGS_OP_D_C_C_C (0x06) #define MPI25_TA_DMAFLAGS_OP_D_C_C_I (0x07) #define MPI25_TA_DMAFLAGS_OP_D_C_I_I (0x08) #define MPI25_TA_DMAFLAGS_OP_D_I_I_I (0x09) #define MPI25_TA_DMAFLAGS_OP_D_H_D_D (0x0A) #define MPI25_TA_DMAFLAGS_OP_D_H_D_C (0x0B) #define MPI25_TA_DMAFLAGS_OP_D_H_D_I (0x0C) #define MPI25_TA_DMAFLAGS_OP_D_H_C_C (0x0D) #define MPI25_TA_DMAFLAGS_OP_D_H_C_I (0x0E) #define MPI25_TA_DMAFLAGS_OP_D_H_I_I (0x0F) /* defines for the IoFlags field */ #define MPI26_TARGET_ASSIST_IOFLAGS_ESCAPE_PASSTHROUGH (0x2000) /* MPI v2.6 and later */ #define MPI25_TARGET_ASSIST_IOFLAGS_BIDIRECTIONAL (0x0800) #define MPI25_TARGET_ASSIST_IOFLAGS_RECEIVE_FIRST (0x0200) /* defines for the EEDPFlags field */ #define MPI25_TA_EEDPFLAGS_INC_PRI_REFTAG (0x8000) #define MPI25_TA_EEDPFLAGS_INC_SEC_REFTAG (0x4000) #define MPI25_TA_EEDPFLAGS_INC_PRI_APPTAG (0x2000) #define MPI25_TA_EEDPFLAGS_INC_SEC_APPTAG (0x1000) #define MPI25_TA_EEDPFLAGS_CHECK_REFTAG (0x0400) #define MPI25_TA_EEDPFLAGS_CHECK_APPTAG (0x0200) #define MPI25_TA_EEDPFLAGS_CHECK_GUARD (0x0100) #define MPI25_TA_EEDPFLAGS_ESCAPE_MODE_MASK (0x00C0) #define MPI25_TA_EEDPFLAGS_COMPATIBLE_MODE (0x0000) #define MPI25_TA_EEDPFLAGS_DO_NOT_DISABLE_MODE (0x0040) #define MPI25_TA_EEDPFLAGS_APPTAG_DISABLE_MODE (0x0080) #define MPI25_TA_EEDPFLAGS_APPTAG_REFTAG_DISABLE_MODE (0x00C0) #define MPI25_TA_EEDPFLAGS_HOST_GUARD_METHOD_MASK (0x0030) #define MPI25_TA_EEDPFLAGS_T10_CRC_HOST_GUARD (0x0000) #define MPI25_TA_EEDPFLAGS_IP_CHKSUM_HOST_GUARD (0x0010) #define MPI25_TA_EEDPFLAGS_PASSTHRU_REFTAG (0x0008) #define MPI25_TA_EEDPFLAGS_MASK_OP (0x0007) #define 
MPI25_TA_EEDPFLAGS_NOOP_OP (0x0000) #define MPI25_TA_EEDPFLAGS_CHECK_OP (0x0001) #define MPI25_TA_EEDPFLAGS_STRIP_OP (0x0002) #define MPI25_TA_EEDPFLAGS_CHECK_REMOVE_OP (0x0003) #define MPI25_TA_EEDPFLAGS_INSERT_OP (0x0004) #define MPI25_TA_EEDPFLAGS_REPLACE_OP (0x0006) #define MPI25_TA_EEDPFLAGS_CHECK_REGEN_OP (0x0007) /**************************************************************************** * Target Status Send Request ****************************************************************************/ typedef struct _MPI2_TARGET_STATUS_SEND_REQUEST { U8 Reserved1; /* 0x00 */ U8 StatusFlags; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 QueueTag; /* 0x04 */ U8 Reserved2; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 IoIndex; /* 0x0C */ U16 InitiatorConnectionTag; /* 0x0E */ U16 SGLFlags; /* 0x10 */ /* MPI v2.0 only. Reserved on MPI v2.5. */ U16 Reserved4; /* 0x12 */ U8 SGLOffset0; /* 0x14 */ U8 Reserved5; /* 0x15 */ U16 Reserved6; /* 0x16 */ U32 Reserved7; /* 0x18 */ U32 Reserved8; /* 0x1C */ MPI2_SIMPLE_SGE_UNION StatusDataSGE; /* 0x20 */ /* MPI v2.5: This must be an IEEE Simple Element 64. 
*/ } MPI2_TARGET_STATUS_SEND_REQUEST, MPI2_POINTER PTR_MPI2_TARGET_STATUS_SEND_REQUEST, Mpi2TargetStatusSendRequest_t, MPI2_POINTER pMpi2TargetStatusSendRequest_t; /* Target Status Send StatusFlags bits */ #define MPI2_TSS_FLAGS_REPOST_CMD_BUFFER (0x80) #define MPI2_TSS_FLAGS_RETRANSMIT (0x04) #define MPI2_TSS_FLAGS_AUTO_GOOD_STATUS (0x01) /* Target Status Send SGLFlags bits - MPI v2.0 only */ /* Data Location Address Space */ #define MPI2_TSS_SGLFLAGS_ADDR_MASK (0x0C) #define MPI2_TSS_SGLFLAGS_SYSTEM_ADDR (0x00) #define MPI2_TSS_SGLFLAGS_IOCDDR_ADDR (0x04) #define MPI2_TSS_SGLFLAGS_IOCPLB_ADDR (0x08) #define MPI2_TSS_SGLFLAGS_IOCPLBNTA_ADDR (0x0C) /* Type */ #define MPI2_TSS_SGLFLAGS_TYPE_MASK (0x03) #define MPI2_TSS_SGLFLAGS_MPI_TYPE (0x00) #define MPI2_TSS_SGLFLAGS_IEEE32_TYPE (0x01) #define MPI2_TSS_SGLFLAGS_IEEE64_TYPE (0x02) /* * NOTE: The SSP status IU is big-endian. When used on a little-endian system, * this structure properly orders the bytes. */ typedef struct _MPI2_TARGET_SSP_RSP_IU { U32 Reserved0[6]; /* reserved for SSP header */ /* 0x00 */ /* start of RESPONSE information unit */ U32 Reserved1; /* 0x18 */ U32 Reserved2; /* 0x1C */ U16 Reserved3; /* 0x20 */ U8 DataPres; /* lower 2 bits */ /* 0x22 */ U8 Status; /* 0x23 */ U32 Reserved4; /* 0x24 */ U32 SenseDataLength; /* 0x28 */ U32 ResponseDataLength; /* 0x2C */ /* start of Response or Sense Data (size may vary dynamically) */ U8 ResponseSenseData[4]; /* 0x30 */ } MPI2_TARGET_SSP_RSP_IU, MPI2_POINTER PTR_MPI2_TARGET_SSP_RSP_IU, Mpi2TargetSspRspIu_t, MPI2_POINTER pMpi2TargetSspRspIu_t; /**************************************************************************** * Target Standard Reply - used with Target Assist or Target Status Send ****************************************************************************/ typedef struct _MPI2_TARGET_STANDARD_REPLY { U16 Reserved; /* 0x00 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved1; /* 0x04 */ U8 Reserved2; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 
VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 Reserved4; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ U16 IoIndex; /* 0x14 */ U16 Reserved5; /* 0x16 */ U32 TransferCount; /* 0x18 */ U32 BidirectionalTransferCount; /* 0x1C */ } MPI2_TARGET_STANDARD_REPLY, MPI2_POINTER PTR_MPI2_TARGET_STANDARD_REPLY, Mpi2TargetErrorReply_t, MPI2_POINTER pMpi2TargetErrorReply_t; /**************************************************************************** * Target Mode Abort Request ****************************************************************************/ typedef struct _MPI2_TARGET_MODE_ABORT_REQUEST { U8 AbortType; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U16 IoIndexToAbort; /* 0x0C */ U16 InitiatorDevHandle; /* 0x0E */ U32 MidToAbort; /* 0x10 */ } MPI2_TARGET_MODE_ABORT, MPI2_POINTER PTR_MPI2_TARGET_MODE_ABORT, Mpi2TargetModeAbort_t, MPI2_POINTER pMpi2TargetModeAbort_t; /* Target Mode Abort AbortType values */ #define MPI2_TARGET_MODE_ABORT_ALL_CMD_BUFFERS (0x00) #define MPI2_TARGET_MODE_ABORT_ALL_IO (0x01) #define MPI2_TARGET_MODE_ABORT_EXACT_IO (0x02) #define MPI2_TARGET_MODE_ABORT_EXACT_IO_REQUEST (0x03) #define MPI2_TARGET_MODE_ABORT_IO_REQUEST_AND_IO (0x04) #define MPI2_TARGET_MODE_ABORT_DEVHANDLE (0x05) #define MPI2_TARGET_MODE_ABORT_ALL_COMMANDS (0x06) /**************************************************************************** * Target Mode Abort Reply ****************************************************************************/ typedef struct _MPI2_TARGET_MODE_ABORT_REPLY { U16 Reserved; /* 0x00 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved1; /* 0x04 */ U8 Reserved2; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved3; /* 0x0A */ U16 Reserved4; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 
IOCLogInfo; /* 0x10 */ U32 AbortCount; /* 0x14 */ } MPI2_TARGET_MODE_ABORT_REPLY, MPI2_POINTER PTR_MPI2_TARGET_MODE_ABORT_REPLY, Mpi2TargetModeAbortReply_t, MPI2_POINTER pMpi2TargetModeAbortReply_t; #endif Index: releng/12.1/sys/dev/mpr/mpi/mpi2_tool.h =================================================================== --- releng/12.1/sys/dev/mpr/mpi/mpi2_tool.h (revision 352760) +++ releng/12.1/sys/dev/mpr/mpi/mpi2_tool.h (revision 352761) @@ -1,564 +1,623 @@ /*- - * Copyright (c) 2012-2015 LSI Corp. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ /* - * Copyright (c) 2000-2015 LSI Corporation. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * * Name: mpi2_tool.h * Title: MPI diagnostic tool structures and definitions * Creation Date: March 26, 2007 * - * mpi2_tool.h Version: 02.00.14 + * mpi2_tool.h Version: 02.00.15 * * Version History * --------------- * * Date Version Description * -------- -------- ------------------------------------------------------ * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. * 12-18-07 02.00.01 Added Diagnostic Buffer Post and Diagnostic Release * structures and defines. * 02-29-08 02.00.02 Modified various names to make them 32-character unique. * 05-06-09 02.00.03 Added ISTWI Read Write Tool and Diagnostic CLI Tool. * 07-30-09 02.00.04 Added ExtendedType field to DiagnosticBufferPost request * and reply messages. * Added MPI2_DIAG_BUF_TYPE_EXTENDED. * Incremented MPI2_DIAG_BUF_TYPE_COUNT. * 05-12-10 02.00.05 Added Diagnostic Data Upload tool. * 08-11-10 02.00.06 Added defines that were missing for Diagnostic Buffer * Post Request. * 05-25-11 02.00.07 Added Flags field and related defines to * MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST. * 11-18-11 02.00.08 Incorporating additions for MPI v2.5. 
* 07-10-12 02.00.09 Add MPI v2.5 Toolbox Diagnostic CLI Tool Request * message. * 07-26-12 02.00.10 Modified MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST so that * it uses MPI Chain SGE as well as MPI Simple SGE. * 08-19-13 02.00.11 Added MPI2_TOOLBOX_TEXT_DISPLAY_TOOL and related info. * 01-08-14 02.00.12 Added MPI2_TOOLBOX_CLEAN_BIT26_PRODUCT_SPECIFIC. * 11-18-14 02.00.13 Updated copyright information. * 08-25-16 02.00.14 Added new values for the Flags field of Toolbox Clean * Tool Request Message. + * 07-22-18 02.00.15 Added defines for new TOOLBOX_PCIE_LANE_MARGINING tool. + * Added option for DeviceInfo field in ISTWI tool. * -------------------------------------------------------------------------- */ #ifndef MPI2_TOOL_H #define MPI2_TOOL_H /***************************************************************************** * * Toolbox Messages * *****************************************************************************/ /* defines for the Tools */ #define MPI2_TOOLBOX_CLEAN_TOOL (0x00) #define MPI2_TOOLBOX_MEMORY_MOVE_TOOL (0x01) #define MPI2_TOOLBOX_DIAG_DATA_UPLOAD_TOOL (0x02) #define MPI2_TOOLBOX_ISTWI_READ_WRITE_TOOL (0x03) #define MPI2_TOOLBOX_BEACON_TOOL (0x05) #define MPI2_TOOLBOX_DIAGNOSTIC_CLI_TOOL (0x06) #define MPI2_TOOLBOX_TEXT_DISPLAY_TOOL (0x07) +#define MPI26_TOOLBOX_BACKEND_PCIE_LANE_MARGIN (0x08) /**************************************************************************** * Toolbox reply ****************************************************************************/ typedef struct _MPI2_TOOLBOX_REPLY { U8 Tool; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U16 Reserved5; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ } MPI2_TOOLBOX_REPLY, MPI2_POINTER PTR_MPI2_TOOLBOX_REPLY, Mpi2ToolboxReply_t, MPI2_POINTER pMpi2ToolboxReply_t; 
/**************************************************************************** * Toolbox Clean Tool request ****************************************************************************/ typedef struct _MPI2_TOOLBOX_CLEAN_REQUEST { U8 Tool; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U32 Flags; /* 0x0C */ } MPI2_TOOLBOX_CLEAN_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_CLEAN_REQUEST, Mpi2ToolboxCleanRequest_t, MPI2_POINTER pMpi2ToolboxCleanRequest_t; /* values for the Flags field */ #define MPI2_TOOLBOX_CLEAN_BOOT_SERVICES (0x80000000) #define MPI2_TOOLBOX_CLEAN_PERSIST_MANUFACT_PAGES (0x40000000) #define MPI2_TOOLBOX_CLEAN_OTHER_PERSIST_PAGES (0x20000000) #define MPI2_TOOLBOX_CLEAN_FW_CURRENT (0x10000000) #define MPI2_TOOLBOX_CLEAN_FW_BACKUP (0x08000000) #define MPI2_TOOLBOX_CLEAN_BIT26_PRODUCT_SPECIFIC (0x04000000) #define MPI2_TOOLBOX_CLEAN_MEGARAID (0x02000000) #define MPI2_TOOLBOX_CLEAN_INITIALIZATION (0x01000000) #define MPI2_TOOLBOX_CLEAN_SBR (0x00800000) #define MPI2_TOOLBOX_CLEAN_SBR_BACKUP (0x00400000) #define MPI2_TOOLBOX_CLEAN_HIIM (0x00200000) #define MPI2_TOOLBOX_CLEAN_HIIA (0x00100000) #define MPI2_TOOLBOX_CLEAN_CTLR (0x00080000) #define MPI2_TOOLBOX_CLEAN_IMR_FIRMWARE (0x00040000) #define MPI2_TOOLBOX_CLEAN_MR_NVDATA (0x00020000) #define MPI2_TOOLBOX_CLEAN_RESERVED_5_16 (0x0001FFE0) #define MPI2_TOOLBOX_CLEAN_ALL_BUT_MPB (0x00000010) #define MPI2_TOOLBOX_CLEAN_ENTIRE_FLASH (0x00000008) #define MPI2_TOOLBOX_CLEAN_FLASH (0x00000004) #define MPI2_TOOLBOX_CLEAN_SEEPROM (0x00000002) #define MPI2_TOOLBOX_CLEAN_NVSRAM (0x00000001) /**************************************************************************** * Toolbox Memory Move request ****************************************************************************/ typedef struct _MPI2_TOOLBOX_MEM_MOVE_REQUEST { U8 Tool; /* 0x00 */ 
U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ MPI2_SGE_SIMPLE_UNION SGL; /* 0x0C */ } MPI2_TOOLBOX_MEM_MOVE_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_MEM_MOVE_REQUEST, Mpi2ToolboxMemMoveRequest_t, MPI2_POINTER pMpi2ToolboxMemMoveRequest_t; /**************************************************************************** * Toolbox Diagnostic Data Upload request ****************************************************************************/ typedef struct _MPI2_TOOLBOX_DIAG_DATA_UPLOAD_REQUEST { U8 Tool; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U8 SGLFlags; /* 0x0C */ U8 Reserved5; /* 0x0D */ U16 Reserved6; /* 0x0E */ U32 Flags; /* 0x10 */ U32 DataLength; /* 0x14 */ MPI2_SGE_SIMPLE_UNION SGL; /* 0x18 */ } MPI2_TOOLBOX_DIAG_DATA_UPLOAD_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_DIAG_DATA_UPLOAD_REQUEST, Mpi2ToolboxDiagDataUploadRequest_t, MPI2_POINTER pMpi2ToolboxDiagDataUploadRequest_t; /* use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */ typedef struct _MPI2_DIAG_DATA_UPLOAD_HEADER { U32 DiagDataLength; /* 00h */ U8 FormatCode; /* 04h */ U8 Reserved1; /* 05h */ U16 Reserved2; /* 06h */ } MPI2_DIAG_DATA_UPLOAD_HEADER, MPI2_POINTER PTR_MPI2_DIAG_DATA_UPLOAD_HEADER, Mpi2DiagDataUploadHeader_t, MPI2_POINTER pMpi2DiagDataUploadHeader_t; /**************************************************************************** * Toolbox ISTWI Read Write Tool ****************************************************************************/ /* Toolbox ISTWI Read Write Tool request message */ typedef struct _MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST { U8 Tool; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 
0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U32 Reserved5; /* 0x0C */ U32 Reserved6; /* 0x10 */ U8 DevIndex; /* 0x14 */ U8 Action; /* 0x15 */ U8 SGLFlags; /* 0x16 */ U8 Flags; /* 0x17 */ U16 TxDataLength; /* 0x18 */ U16 RxDataLength; /* 0x1A */ - U32 Reserved8; /* 0x1C */ - U32 Reserved9; /* 0x20 */ - U32 Reserved10; /* 0x24 */ + U32 DeviceInfo[3]; /* 0x1C */ U32 Reserved11; /* 0x28 */ U32 Reserved12; /* 0x2C */ MPI2_SGE_SIMPLE_UNION SGL; /* 0x30 */ } MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST, Mpi2ToolboxIstwiReadWriteRequest_t, MPI2_POINTER pMpi2ToolboxIstwiReadWriteRequest_t; /* values for the Action field */ #define MPI2_TOOL_ISTWI_ACTION_READ_DATA (0x01) #define MPI2_TOOL_ISTWI_ACTION_WRITE_DATA (0x02) #define MPI2_TOOL_ISTWI_ACTION_SEQUENCE (0x03) #define MPI2_TOOL_ISTWI_ACTION_RESERVE_BUS (0x10) #define MPI2_TOOL_ISTWI_ACTION_RELEASE_BUS (0x11) #define MPI2_TOOL_ISTWI_ACTION_RESET (0x12) /* use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */ /* values for the Flags field */ #define MPI2_TOOL_ISTWI_FLAG_AUTO_RESERVE_RELEASE (0x80) #define MPI2_TOOL_ISTWI_FLAG_PAGE_ADDR_MASK (0x07) +/* MPI26 TOOLBOX Request MsgFlags defines */ +#define MPI26_TOOLBOX_REQ_MSGFLAGS_ADDRESSING_MASK (0x01) +#define MPI26_TOOLBOX_REQ_MSGFLAGS_ADDRESSING_DEVINDEX (0x00) /* Request uses Man Page 43 device index addressing */ +#define MPI26_TOOLBOX_REQ_MSGFLAGS_ADDRESSING_DEVINFO (0x01) /* Request uses Man Page 43 device info struct addressing */ + /* Toolbox ISTWI Read Write Tool reply message */ typedef struct _MPI2_TOOLBOX_ISTWI_REPLY { U8 Tool; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U16 Reserved5; /* 0x0C */ U16 IOCStatus; /* 
0x0E */ U32 IOCLogInfo; /* 0x10 */ U8 DevIndex; /* 0x14 */ U8 Action; /* 0x15 */ U8 IstwiStatus; /* 0x16 */ U8 Reserved6; /* 0x17 */ U16 TxDataCount; /* 0x18 */ U16 RxDataCount; /* 0x1A */ } MPI2_TOOLBOX_ISTWI_REPLY, MPI2_POINTER PTR_MPI2_TOOLBOX_ISTWI_REPLY, Mpi2ToolboxIstwiReply_t, MPI2_POINTER pMpi2ToolboxIstwiReply_t; /**************************************************************************** * Toolbox Beacon Tool request ****************************************************************************/ typedef struct _MPI2_TOOLBOX_BEACON_REQUEST { U8 Tool; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U8 Reserved5; /* 0x0C */ U8 PhysicalPort; /* 0x0D */ U8 Reserved6; /* 0x0E */ U8 Flags; /* 0x0F */ } MPI2_TOOLBOX_BEACON_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_BEACON_REQUEST, Mpi2ToolboxBeaconRequest_t, MPI2_POINTER pMpi2ToolboxBeaconRequest_t; /* values for the Flags field */ #define MPI2_TOOLBOX_FLAGS_BEACONMODE_OFF (0x00) #define MPI2_TOOLBOX_FLAGS_BEACONMODE_ON (0x01) /**************************************************************************** * Toolbox Diagnostic CLI Tool ****************************************************************************/ #define MPI2_TOOLBOX_DIAG_CLI_CMD_LENGTH (0x5C) /* MPI v2.0 Toolbox Diagnostic CLI Tool request message */ typedef struct _MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST { U8 Tool; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U8 SGLFlags; /* 0x0C */ U8 Reserved5; /* 0x0D */ U16 Reserved6; /* 0x0E */ U32 DataLength; /* 0x10 */ U8 DiagnosticCliCommand[MPI2_TOOLBOX_DIAG_CLI_CMD_LENGTH]; /* 0x14 */ MPI2_MPI_SGE_IO_UNION SGL; /* 0x70 */ } 
MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST, Mpi2ToolboxDiagnosticCliRequest_t, MPI2_POINTER pMpi2ToolboxDiagnosticCliRequest_t; /* use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */ /* MPI v2.5 Toolbox Diagnostic CLI Tool request message */ typedef struct _MPI25_TOOLBOX_DIAGNOSTIC_CLI_REQUEST { U8 Tool; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U32 Reserved5; /* 0x0C */ U32 DataLength; /* 0x10 */ U8 DiagnosticCliCommand[MPI2_TOOLBOX_DIAG_CLI_CMD_LENGTH]; /* 0x14 */ MPI25_SGE_IO_UNION SGL; /* 0x70 */ } MPI25_TOOLBOX_DIAGNOSTIC_CLI_REQUEST, MPI2_POINTER PTR_MPI25_TOOLBOX_DIAGNOSTIC_CLI_REQUEST, Mpi25ToolboxDiagnosticCliRequest_t, MPI2_POINTER pMpi25ToolboxDiagnosticCliRequest_t; /* Toolbox Diagnostic CLI Tool reply message */ typedef struct _MPI2_TOOLBOX_DIAGNOSTIC_CLI_REPLY { U8 Tool; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U16 Reserved5; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ U32 ReturnedDataLength; /* 0x14 */ } MPI2_TOOLBOX_DIAGNOSTIC_CLI_REPLY, MPI2_POINTER PTR_MPI2_TOOLBOX_DIAG_CLI_REPLY, Mpi2ToolboxDiagnosticCliReply_t, MPI2_POINTER pMpi2ToolboxDiagnosticCliReply_t; /**************************************************************************** * Toolbox Console Text Display Tool ****************************************************************************/ /* Toolbox Console Text Display Tool request message */ typedef struct _MPI2_TOOLBOX_TEXT_DISPLAY_REQUEST { U8 Tool; /* 0x00 */ U8 Reserved1; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 
MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U8 Console; /* 0x0C */ U8 Flags; /* 0x0D */ U16 Reserved6; /* 0x0E */ U8 TextToDisplay[4]; /* 0x10 */ /* actual length determined at runtime based on frame size */ } MPI2_TOOLBOX_TEXT_DISPLAY_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_TEXT_DISPLAY_REQUEST, Mpi2ToolboxTextDisplayRequest_t, MPI2_POINTER pMpi2ToolboxTextDisplayRequest_t; /* defines for the Console field */ #define MPI2_TOOLBOX_CONSOLE_TYPE_MASK (0xF0) #define MPI2_TOOLBOX_CONSOLE_TYPE_DEFAULT (0x00) #define MPI2_TOOLBOX_CONSOLE_TYPE_UART (0x10) #define MPI2_TOOLBOX_CONSOLE_TYPE_ETHERNET (0x20) #define MPI2_TOOLBOX_CONSOLE_NUMBER_MASK (0x0F) /* defines for the Flags field */ #define MPI2_TOOLBOX_CONSOLE_FLAG_TIMESTAMP (0x01) + +/**************************************************************************** +* Toolbox Backend Lane Margining Tool +****************************************************************************/ + +/* Toolbox Backend Lane Margining Tool request message */ +typedef struct _MPI26_TOOLBOX_LANE_MARGINING_REQUEST +{ + U8 Tool; /* 0x00 */ + U8 Reserved1; /* 0x01 */ + U8 ChainOffset; /* 0x02 */ + U8 Function; /* 0x03 */ + U16 Reserved2; /* 0x04 */ + U8 Reserved3; /* 0x06 */ + U8 MsgFlags; /* 0x07 */ + U8 VP_ID; /* 0x08 */ + U8 VF_ID; /* 0x09 */ + U16 Reserved4; /* 0x0A */ + U8 Command; /* 0x0C */ + U8 SwitchPort; /* 0x0D */ + U16 DevHandle; /* 0x0E */ + U8 RegisterOffset; /* 0x10 */ + U8 Reserved5; /* 0x11 */ + U16 DataLength; /* 0x12 */ + MPI2_SGE_SIMPLE_UNION SGL; /* 0x14 */ +} MPI26_TOOLBOX_LANE_MARGINING_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_LANE_MARGINING_REQUEST, + Mpi26ToolboxLaneMarginingRequest_t, MPI2_POINTER pMpi2ToolboxLaneMarginingRequest_t; + +/* defines for the Command field */ +#define MPI26_TOOL_MARGIN_COMMAND_ENTER_MARGIN_MODE (0x01) +#define MPI26_TOOL_MARGIN_COMMAND_READ_REGISTER_DATA (0x02) +#define MPI26_TOOL_MARGIN_COMMAND_WRITE_REGISTER_DATA (0x03) +#define 
MPI26_TOOL_MARGIN_COMMAND_EXIT_MARGIN_MODE (0x04) + + +/* Toolbox Backend Lane Margining Tool reply message */ +typedef struct _MPI26_TOOLBOX_LANE_MARGINING_REPLY +{ + U8 Tool; /* 0x00 */ + U8 Reserved1; /* 0x01 */ + U8 MsgLength; /* 0x02 */ + U8 Function; /* 0x03 */ + U16 Reserved2; /* 0x04 */ + U8 Reserved3; /* 0x06 */ + U8 MsgFlags; /* 0x07 */ + U8 VP_ID; /* 0x08 */ + U8 VF_ID; /* 0x09 */ + U16 Reserved4; /* 0x0A */ + U16 Reserved5; /* 0x0C */ + U16 IOCStatus; /* 0x0E */ + U32 IOCLogInfo; /* 0x10 */ + U16 ReturnedDataLength; /* 0x14 */ + U16 Reserved6; /* 0x16 */ +} MPI26_TOOLBOX_LANE_MARGINING_REPLY, + MPI2_POINTER PTR_MPI26_TOOLBOX_LANE_MARGINING_REPLY, + Mpi26ToolboxLaneMarginingReply_t, + MPI2_POINTER pMpi26ToolboxLaneMarginingReply_t; /***************************************************************************** * * Diagnostic Buffer Messages * *****************************************************************************/ /**************************************************************************** * Diagnostic Buffer Post request ****************************************************************************/ typedef struct _MPI2_DIAG_BUFFER_POST_REQUEST { U8 ExtendedType; /* 0x00 */ U8 BufferType; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U64 BufferAddress; /* 0x0C */ U32 BufferLength; /* 0x14 */ U32 Reserved5; /* 0x18 */ U32 Reserved6; /* 0x1C */ U32 Flags; /* 0x20 */ U32 ProductSpecific[23]; /* 0x24 */ } MPI2_DIAG_BUFFER_POST_REQUEST, MPI2_POINTER PTR_MPI2_DIAG_BUFFER_POST_REQUEST, Mpi2DiagBufferPostRequest_t, MPI2_POINTER pMpi2DiagBufferPostRequest_t; /* values for the ExtendedType field */ #define MPI2_DIAG_EXTENDED_TYPE_UTILIZATION (0x02) /* values for the BufferType field */ #define MPI2_DIAG_BUF_TYPE_TRACE (0x00) #define MPI2_DIAG_BUF_TYPE_SNAPSHOT (0x01) #define MPI2_DIAG_BUF_TYPE_EXTENDED 
(0x02) /* count of the number of buffer types */ #define MPI2_DIAG_BUF_TYPE_COUNT (0x03) /* values for the Flags field */ #define MPI2_DIAG_BUF_FLAG_RELEASE_ON_FULL (0x00000002) /* for MPI v2.0 products only */ #define MPI2_DIAG_BUF_FLAG_IMMEDIATE_RELEASE (0x00000001) /**************************************************************************** * Diagnostic Buffer Post reply ****************************************************************************/ typedef struct _MPI2_DIAG_BUFFER_POST_REPLY { U8 ExtendedType; /* 0x00 */ U8 BufferType; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U16 Reserved5; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ U32 TransferLength; /* 0x14 */ } MPI2_DIAG_BUFFER_POST_REPLY, MPI2_POINTER PTR_MPI2_DIAG_BUFFER_POST_REPLY, Mpi2DiagBufferPostReply_t, MPI2_POINTER pMpi2DiagBufferPostReply_t; /**************************************************************************** * Diagnostic Release request ****************************************************************************/ typedef struct _MPI2_DIAG_RELEASE_REQUEST { U8 Reserved1; /* 0x00 */ U8 BufferType; /* 0x01 */ U8 ChainOffset; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8 MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ } MPI2_DIAG_RELEASE_REQUEST, MPI2_POINTER PTR_MPI2_DIAG_RELEASE_REQUEST, Mpi2DiagReleaseRequest_t, MPI2_POINTER pMpi2DiagReleaseRequest_t; /**************************************************************************** * Diagnostic Release reply ****************************************************************************/ typedef struct _MPI2_DIAG_RELEASE_REPLY { U8 Reserved1; /* 0x00 */ U8 BufferType; /* 0x01 */ U8 MsgLength; /* 0x02 */ U8 Function; /* 0x03 */ U16 Reserved2; /* 0x04 */ U8 Reserved3; /* 0x06 */ U8
MsgFlags; /* 0x07 */ U8 VP_ID; /* 0x08 */ U8 VF_ID; /* 0x09 */ U16 Reserved4; /* 0x0A */ U16 Reserved5; /* 0x0C */ U16 IOCStatus; /* 0x0E */ U32 IOCLogInfo; /* 0x10 */ } MPI2_DIAG_RELEASE_REPLY, MPI2_POINTER PTR_MPI2_DIAG_RELEASE_REPLY, Mpi2DiagReleaseReply_t, MPI2_POINTER pMpi2DiagReleaseReply_t; #endif Index: releng/12.1/sys/dev/mpr/mpi/mpi2_type.h =================================================================== --- releng/12.1/sys/dev/mpr/mpi/mpi2_type.h (revision 352760) +++ releng/12.1/sys/dev/mpr/mpi/mpi2_type.h (revision 352761) @@ -1,135 +1,131 @@ /*- - * Copyright (c) 2012-2015 LSI Corp. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ /* - * Copyright (c) 2000-2015 LSI Corporation. - * Copyright (c) 2013-2016 Avago Technologies - * All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * * Name: mpi2_type.h * Title: MPI basic type definitions * Creation Date: August 16, 2006 * * mpi2_type.h Version: 02.00.01 * * Version History * --------------- * * Date Version Description * -------- -------- ------------------------------------------------------ * 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A. * 11-18-14 02.00.01 Updated copyright information. * -------------------------------------------------------------------------- */ #ifndef MPI2_TYPE_H #define MPI2_TYPE_H /******************************************************************************* * Define MPI2_POINTER if it hasn't already been defined. By default * MPI2_POINTER is defined to be a near pointer. MPI2_POINTER can be defined as * a far pointer by defining MPI2_POINTER as "far *" before this header file is * included. 
*/ #ifndef MPI2_POINTER #define MPI2_POINTER * #endif /* the basic types may have already been included by mpi_type.h */ #ifndef MPI_TYPE_H /***************************************************************************** * * Basic Types * *****************************************************************************/ typedef signed char S8; typedef unsigned char U8; typedef signed short S16; typedef unsigned short U16; #ifdef __FreeBSD__ typedef int32_t S32; typedef uint32_t U32; #else #if defined(unix) || defined(__arm) || defined(ALPHA) || defined(__PPC__) || defined(__ppc) typedef signed int S32; typedef unsigned int U32; #else typedef signed long S32; typedef unsigned long U32; #endif #endif typedef struct _S64 { U32 Low; S32 High; } S64; typedef struct _U64 { U32 Low; U32 High; } U64; /***************************************************************************** * * Pointer Types * *****************************************************************************/ typedef S8 *PS8; typedef U8 *PU8; typedef S16 *PS16; typedef U16 *PU16; typedef S32 *PS32; typedef U32 *PU32; typedef S64 *PS64; typedef U64 *PU64; #endif #endif Index: releng/12.1/sys/dev/mpr/mpr.c =================================================================== --- releng/12.1/sys/dev/mpr/mpr.c (revision 352760) +++ releng/12.1/sys/dev/mpr/mpr.c (revision 352761) @@ -1,4012 +1,4031 @@ /*- * Copyright (c) 2009 Yahoo! Inc. * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2016 Avago Technologies + * Copyright 2000-2020 Broadcom Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. 
(LSI) MPT-Fusion Host Adapter FreeBSD * */ #include __FBSDID("$FreeBSD$"); /* Communications core for Avago Technologies (LSI) MPT3 */ /* TODO Move headers to mprvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include static int mpr_diag_reset(struct mpr_softc *sc, int sleep_flag); static int mpr_init_queues(struct mpr_softc *sc); static void mpr_resize_queues(struct mpr_softc *sc); static int mpr_message_unit_reset(struct mpr_softc *sc, int sleep_flag); static int mpr_transition_operational(struct mpr_softc *sc); static int mpr_iocfacts_allocate(struct mpr_softc *sc, uint8_t attaching); static void mpr_iocfacts_free(struct mpr_softc *sc); static void mpr_startup(void *arg); static int mpr_send_iocinit(struct mpr_softc *sc); static int mpr_alloc_queues(struct mpr_softc *sc); static int mpr_alloc_hw_queues(struct mpr_softc *sc); static int mpr_alloc_replies(struct mpr_softc *sc); static int mpr_alloc_requests(struct mpr_softc *sc); static int mpr_alloc_nvme_prp_pages(struct mpr_softc *sc); static int mpr_attach_log(struct mpr_softc *sc); static __inline void mpr_complete_command(struct mpr_softc *sc, struct mpr_command *cm); static void mpr_dispatch_event(struct mpr_softc *sc, uintptr_t data, MPI2_EVENT_NOTIFICATION_REPLY *reply); static void mpr_config_complete(struct mpr_softc *sc, struct mpr_command *cm); static void mpr_periodic(void *); static int mpr_reregister_events(struct mpr_softc *sc); static void mpr_enqueue_request(struct mpr_softc *sc, struct mpr_command *cm); static int mpr_get_iocfacts(struct mpr_softc *sc, MPI2_IOC_FACTS_REPLY *facts); static int mpr_wait_db_ack(struct mpr_softc *sc, int timeout, int sleep_flag); static int 
mpr_debug_sysctl(SYSCTL_HANDLER_ARGS); static int mpr_dump_reqs(SYSCTL_HANDLER_ARGS); static void mpr_parse_debug(struct mpr_softc *sc, char *list); SYSCTL_NODE(_hw, OID_AUTO, mpr, CTLFLAG_RD, 0, "MPR Driver Parameters"); MALLOC_DEFINE(M_MPR, "mpr", "mpr driver memory"); /* * Do a "Diagnostic Reset" aka a hard reset. This should get the chip out of * any state and back to its initialization state machine. */ static char mpt2_reset_magic[] = { 0x00, 0x0f, 0x04, 0x0b, 0x02, 0x07, 0x0d }; /* * Added this union to smoothly convert le64toh cm->cm_desc.Words. * The compiler only supports uint64_t to be passed as an argument. * Otherwise it will throw this error: * "aggregate value used where an integer was expected" */ typedef union _reply_descriptor { u64 word; struct { u32 low; u32 high; } u; } reply_descriptor, request_descriptor; /* Rate limit chain-fail messages to 1 per minute */ static struct timeval mpr_chainfail_interval = { 60, 0 }; /* * sleep_flag can be either CAN_SLEEP or NO_SLEEP. * If this function is called from process context, it can sleep and * there is no harm in sleeping, but if it is called from an interrupt * handler, it cannot sleep and needs the NO_SLEEP flag set. * Based on the sleep flag, the driver will call either msleep, pause or DELAY. * msleep and pause are similar, but pause is used when mpr_mtx * is not held by the driver. */ static int mpr_diag_reset(struct mpr_softc *sc, int sleep_flag) { uint32_t reg; int i, error, tries = 0; uint8_t first_wait_done = FALSE; mpr_dprint(sc, MPR_INIT, "%s entered\n", __func__); /* Clear any pending interrupts */ mpr_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); /* * Force NO_SLEEP for threads that are prohibited from sleeping, * e.g. threads running in an interrupt handler must not sleep.
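The reply_descriptor union above exists so a 64-bit descriptor can be handed to integer-only helpers such as le64toh() while its two 32-bit halves stay addressable. A standalone sketch of the same trick, using stdint types in place of the driver's u64/u32 (and assuming a little-endian host for the sample values):

```c
#include <assert.h>
#include <stdint.h>

/* Same shape as the driver's reply_descriptor: one 64-bit word
 * overlaid with its two 32-bit halves. */
typedef union {
    uint64_t word;
    struct {
        uint32_t low;
        uint32_t high;
    } u;
} descriptor;

/* Build the halves separately, then read the whole word back out.
 * Because `word` is a plain integer member, it can be passed to a
 * function expecting uint64_t without the "aggregate value used
 * where an integer was expected" error the comment above mentions. */
static uint64_t descriptor_word(uint32_t low, uint32_t high)
{
    descriptor d;
    d.u.low = low;    /* occupies the low 4 bytes on little-endian */
    d.u.high = high;  /* occupies the high 4 bytes */
    return d.word;
}
```

On a little-endian machine, descriptor_word(0xdeadbeef, 0x1) yields 0x00000001deadbeef; on a big-endian host the halves would land in the opposite order, which is why the driver byte-swaps with le64toh().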
*/ #if __FreeBSD_version >= 1000029 if (curthread->td_no_sleeping) #else //__FreeBSD_version < 1000029 if (curthread->td_pflags & TDP_NOSLEEPING) #endif //__FreeBSD_version >= 1000029 sleep_flag = NO_SLEEP; mpr_dprint(sc, MPR_INIT, "sequence start, sleep_flag=%d\n", sleep_flag); /* Push the magic sequence */ error = ETIMEDOUT; while (tries++ < 20) { for (i = 0; i < sizeof(mpt2_reset_magic); i++) mpr_regwrite(sc, MPI2_WRITE_SEQUENCE_OFFSET, mpt2_reset_magic[i]); /* wait 100 msec */ if (mtx_owned(&sc->mpr_mtx) && sleep_flag == CAN_SLEEP) msleep(&sc->msleep_fake_chan, &sc->mpr_mtx, 0, "mprdiag", hz/10); else if (sleep_flag == CAN_SLEEP) pause("mprdiag", hz/10); else DELAY(100 * 1000); reg = mpr_regread(sc, MPI2_HOST_DIAGNOSTIC_OFFSET); if (reg & MPI2_DIAG_DIAG_WRITE_ENABLE) { error = 0; break; } } if (error) { mpr_dprint(sc, MPR_INIT, "sequence failed, error=%d, exit\n", error); return (error); } /* Send the actual reset. XXX need to refresh the reg? */ reg |= MPI2_DIAG_RESET_ADAPTER; mpr_dprint(sc, MPR_INIT, "sequence success, sending reset, reg= 0x%x\n", reg); mpr_regwrite(sc, MPI2_HOST_DIAGNOSTIC_OFFSET, reg); /* Wait up to 300 seconds in 50ms intervals */ error = ETIMEDOUT; for (i = 0; i < 6000; i++) { /* * Wait 50 msec. If this is the first time through, wait 256 * msec to satisfy Diag Reset timing requirements. */ if (first_wait_done) { if (mtx_owned(&sc->mpr_mtx) && sleep_flag == CAN_SLEEP) msleep(&sc->msleep_fake_chan, &sc->mpr_mtx, 0, "mprdiag", hz/20); else if (sleep_flag == CAN_SLEEP) pause("mprdiag", hz/20); else DELAY(50 * 1000); } else { DELAY(256 * 1000); first_wait_done = TRUE; } /* * Check for the RESET_ADAPTER bit to be cleared first, then * wait for the RESET state to be cleared, which takes a little * longer. 
*/ reg = mpr_regread(sc, MPI2_HOST_DIAGNOSTIC_OFFSET); if (reg & MPI2_DIAG_RESET_ADAPTER) { continue; } reg = mpr_regread(sc, MPI2_DOORBELL_OFFSET); if ((reg & MPI2_IOC_STATE_MASK) != MPI2_IOC_STATE_RESET) { error = 0; break; } } if (error) { mpr_dprint(sc, MPR_INIT, "reset failed, error= %d, exit\n", error); return (error); } mpr_regwrite(sc, MPI2_WRITE_SEQUENCE_OFFSET, 0x0); mpr_dprint(sc, MPR_INIT, "diag reset success, exit\n"); return (0); } static int mpr_message_unit_reset(struct mpr_softc *sc, int sleep_flag) { int error; MPR_FUNCTRACE(sc); mpr_dprint(sc, MPR_INIT, "%s entered\n", __func__); error = 0; mpr_regwrite(sc, MPI2_DOORBELL_OFFSET, MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET << MPI2_DOORBELL_FUNCTION_SHIFT); if (mpr_wait_db_ack(sc, 5, sleep_flag) != 0) { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "Doorbell handshake failed\n"); error = ETIMEDOUT; } mpr_dprint(sc, MPR_INIT, "%s exit\n", __func__); return (error); } static int mpr_transition_ready(struct mpr_softc *sc) { uint32_t reg, state; int error, tries = 0; int sleep_flags; MPR_FUNCTRACE(sc); /* If we are in attach call, do not sleep */ sleep_flags = (sc->mpr_flags & MPR_FLAGS_ATTACH_DONE) ? CAN_SLEEP : NO_SLEEP; error = 0; mpr_dprint(sc, MPR_INIT, "%s entered, sleep_flags= %d\n", __func__, sleep_flags); while (tries++ < 1200) { reg = mpr_regread(sc, MPI2_DOORBELL_OFFSET); mpr_dprint(sc, MPR_INIT, " Doorbell= 0x%x\n", reg); /* * Ensure the IOC is ready to talk. If it's not, try * resetting it. */ if (reg & MPI2_DOORBELL_USED) { mpr_dprint(sc, MPR_INIT, " Not ready, sending diag " "reset\n"); mpr_diag_reset(sc, sleep_flags); DELAY(50000); continue; } /* Is the adapter owned by another peer? 
*/ if ((reg & MPI2_DOORBELL_WHO_INIT_MASK) == (MPI2_WHOINIT_PCI_PEER << MPI2_DOORBELL_WHO_INIT_SHIFT)) { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "IOC is under the " "control of another peer host, aborting " "initialization.\n"); error = ENXIO; break; } state = reg & MPI2_IOC_STATE_MASK; if (state == MPI2_IOC_STATE_READY) { /* Ready to go! */ error = 0; break; } else if (state == MPI2_IOC_STATE_FAULT) { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "IOC in fault " "state 0x%x, resetting\n", state & MPI2_DOORBELL_FAULT_CODE_MASK); mpr_diag_reset(sc, sleep_flags); } else if (state == MPI2_IOC_STATE_OPERATIONAL) { /* Need to take ownership */ mpr_message_unit_reset(sc, sleep_flags); } else if (state == MPI2_IOC_STATE_RESET) { /* Wait a bit, IOC might be in transition */ mpr_dprint(sc, MPR_INIT|MPR_FAULT, "IOC in unexpected reset state\n"); } else { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "IOC in unknown state 0x%x\n", state); error = EINVAL; break; } /* Wait 50ms for things to settle down. */ DELAY(50000); } if (error) mpr_dprint(sc, MPR_INIT|MPR_FAULT, "Cannot transition IOC to ready\n"); mpr_dprint(sc, MPR_INIT, "%s exit\n", __func__); return (error); } static int mpr_transition_operational(struct mpr_softc *sc) { uint32_t reg, state; int error; MPR_FUNCTRACE(sc); error = 0; reg = mpr_regread(sc, MPI2_DOORBELL_OFFSET); mpr_dprint(sc, MPR_INIT, "%s entered, Doorbell= 0x%x\n", __func__, reg); state = reg & MPI2_IOC_STATE_MASK; if (state != MPI2_IOC_STATE_READY) { mpr_dprint(sc, MPR_INIT, "IOC not ready\n"); if ((error = mpr_transition_ready(sc)) != 0) { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "failed to transition ready, exit\n"); return (error); } } error = mpr_send_iocinit(sc); mpr_dprint(sc, MPR_INIT, "%s exit\n", __func__); return (error); } static void mpr_resize_queues(struct mpr_softc *sc) { u_int reqcr, prireqcr, maxio, sges_per_frame, chain_seg_size; /* * Size the queues. Since the reply queues always need one free * entry, we'll deduct one reply message here. 
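mpr_transition_ready() above is a polling loop over the doorbell's IOC-state field: each observed state maps to one recovery action. That decision table can be summarized as a pure function (the enum names here are illustrative stand-ins, not driver symbols):

```c
#include <assert.h>

/* Illustrative stand-ins for the MPI2_IOC_STATE_* values polled in
 * mpr_transition_ready(). */
enum ioc_state { STATE_RESET, STATE_READY, STATE_FAULT, STATE_OPERATIONAL };
enum action    { ACT_DONE, ACT_DIAG_RESET, ACT_UNIT_RESET, ACT_WAIT };

/* Ready -> done; fault -> full diagnostic reset; operational ->
 * message unit reset to take ownership; reset -> just wait, since
 * the IOC may be in transition. */
static enum action next_action(enum ioc_state s)
{
    switch (s) {
    case STATE_READY:       return ACT_DONE;
    case STATE_FAULT:       return ACT_DIAG_RESET;
    case STATE_OPERATIONAL: return ACT_UNIT_RESET;
    default:                return ACT_WAIT;
    }
}
```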
The LSI documents * suggest instead to add a count to the request queue, but I think * that it's better to deduct from the reply queue. */ prireqcr = MAX(1, sc->max_prireqframes); prireqcr = MIN(prireqcr, sc->facts->HighPriorityCredit); reqcr = MAX(2, sc->max_reqframes); reqcr = MIN(reqcr, sc->facts->RequestCredit); sc->num_reqs = prireqcr + reqcr; sc->num_prireqs = prireqcr; sc->num_replies = MIN(sc->max_replyframes + sc->max_evtframes, sc->facts->MaxReplyDescriptorPostQueueDepth) - 1; /* Store the request frame size in bytes rather than as 32bit words */ sc->reqframesz = sc->facts->IOCRequestFrameSize * 4; /* * Gen3 and beyond use the IOCMaxChainSegmentSize from IOC Facts to * get the size of a Chain Frame. Previous versions use the Request * Frame size as the Chain Frame size. If IOCMaxChainSegmentSize * is 0, use the default value. The IOCMaxChainSegmentSize is the * number of 16-byte elements (the size of an IEEE Simple SGE) that * can fit in a Chain Frame. */ if (sc->facts->MsgVersion >= MPI2_VERSION_02_05) { chain_seg_size = htole16(sc->facts->IOCMaxChainSegmentSize); if (chain_seg_size == 0) chain_seg_size = MPR_DEFAULT_CHAIN_SEG_SIZE; sc->chain_frame_size = chain_seg_size * MPR_MAX_CHAIN_ELEMENT_SIZE; } else { sc->chain_frame_size = sc->reqframesz; } /* * Max IO Size is Page Size * the following: * ((SGEs per frame - 1 for chain element) * Max Chain Depth) * + 1 for no chain needed in last frame * * If the user suggests a Max IO size to use, use the smaller of the * user's value and the calculated value as long as the user's * value is larger than 0. The user's value is in pages. */ sges_per_frame = sc->chain_frame_size/sizeof(MPI2_IEEE_SGE_SIMPLE64)-1; maxio = (sges_per_frame * sc->facts->MaxChainDepth + 1) * PAGE_SIZE; /* * If an I/O size limitation is requested then use it and pass it up to CAM. * If not, use MAXPHYS as an optimization hint, but report the HW limit. */
if (sc->max_io_pages > 0) { maxio = min(maxio, sc->max_io_pages * PAGE_SIZE); sc->maxio = maxio; } else { sc->maxio = maxio; maxio = min(maxio, MAXPHYS); } sc->num_chains = (maxio / PAGE_SIZE + sges_per_frame - 2) / sges_per_frame * reqcr; if (sc->max_chains > 0 && sc->max_chains < sc->num_chains) sc->num_chains = sc->max_chains; /* * Figure out the number of MSIx-based queues. If the firmware or * user has done something crazy and not allowed enough credit for * the queues to be useful then don't enable multi-queue. */ if (sc->facts->MaxMSIxVectors < 2) sc->msi_msgs = 1; if (sc->msi_msgs > 1) { sc->msi_msgs = MIN(sc->msi_msgs, mp_ncpus); sc->msi_msgs = MIN(sc->msi_msgs, sc->facts->MaxMSIxVectors); if (sc->num_reqs / sc->msi_msgs < 2) sc->msi_msgs = 1; } mpr_dprint(sc, MPR_INIT, "Sized queues to q=%d reqs=%d replies=%d\n", sc->msi_msgs, sc->num_reqs, sc->num_replies); } /* * This is called during attach and when re-initializing due to a Diag Reset. * IOC Facts is used to allocate many of the structures needed by the driver. * If called from attach, de-allocation is not required because the driver has * not allocated any structures yet, but if called from a Diag Reset, previously * allocated structures based on IOC Facts will need to be freed and re- * allocated based on the latest IOC Facts. */ static int mpr_iocfacts_allocate(struct mpr_softc *sc, uint8_t attaching) { int error; Mpi2IOCFactsReply_t saved_facts; uint8_t saved_mode, reallocating; mpr_dprint(sc, MPR_INIT|MPR_TRACE, "%s entered\n", __func__); /* Save old IOC Facts and then only reallocate if Facts have changed */ if (!attaching) { bcopy(sc->facts, &saved_facts, sizeof(MPI2_IOC_FACTS_REPLY)); } /* * Get IOC Facts. In all cases throughout this function, panic if doing * a re-initialization and only return the error if attaching so the OS * can handle it.
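The maximum I/O size computed in mpr_resize_queues() above follows directly from the chain-frame geometry. A worked example with assumed sample values (a 128-byte chain frame, 16-byte IEEE simple SGEs, chain depth 128, 4 KiB pages; none of these are taken from a real adapter):

```c
#include <assert.h>

#define SAMPLE_PAGE_SIZE        4096
#define SAMPLE_CHAIN_FRAME_SIZE 128   /* assumed chain frame size */
#define SAMPLE_SGE_SIZE         16    /* sizeof(MPI2_IEEE_SGE_SIMPLE64) */
#define SAMPLE_MAX_CHAIN_DEPTH  128   /* assumed MaxChainDepth */

/* One SGE per frame is reserved for the chain element, and the last
 * frame needs no chain element, hence the "+ 1" page. */
static int max_io_bytes(void)
{
    int sges_per_frame = SAMPLE_CHAIN_FRAME_SIZE / SAMPLE_SGE_SIZE - 1;
    return (sges_per_frame * SAMPLE_MAX_CHAIN_DEPTH + 1) * SAMPLE_PAGE_SIZE;
}
```

With these samples, sges_per_frame is 7 and the limit works out to (7 * 128 + 1) * 4096 bytes, a little over 3.5 MiB.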
*/ if ((error = mpr_get_iocfacts(sc, sc->facts)) != 0) { if (attaching) { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "Failed to get " "IOC Facts with error %d, exit\n", error); return (error); } else { panic("%s failed to get IOC Facts with error %d\n", __func__, error); } } MPR_DPRINT_PAGE(sc, MPR_XINFO, iocfacts, sc->facts); snprintf(sc->fw_version, sizeof(sc->fw_version), "%02d.%02d.%02d.%02d", sc->facts->FWVersion.Struct.Major, sc->facts->FWVersion.Struct.Minor, sc->facts->FWVersion.Struct.Unit, sc->facts->FWVersion.Struct.Dev); mpr_dprint(sc, MPR_INFO, "Firmware: %s, Driver: %s\n", sc->fw_version, MPR_DRIVER_VERSION); mpr_dprint(sc, MPR_INFO, "IOCCapabilities: %b\n", sc->facts->IOCCapabilities, "\20" "\3ScsiTaskFull" "\4DiagTrace" "\5SnapBuf" "\6ExtBuf" "\7EEDP" "\10BiDirTarg" "\11Multicast" "\14TransRetry" "\15IR" "\16EventReplay" "\17RaidAccel" "\20MSIXIndex" "\21HostDisc" "\22FastPath" "\23RDPQArray" "\24AtomicReqDesc" "\25PCIeSRIOV"); /* * If the chip doesn't support event replay then a hard reset will be * required to trigger a full discovery. Do the reset here then * retransition to Ready. A hard reset might have already been done, * but it doesn't hurt to do it again. Only do this if attaching, not * for a Diag Reset. */ if (attaching && ((sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_EVENT_REPLAY) == 0)) { mpr_dprint(sc, MPR_INIT, "No event replay, resetting\n"); mpr_diag_reset(sc, NO_SLEEP); if ((error = mpr_transition_ready(sc)) != 0) { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "Failed to " "transition to ready with error %d, exit\n", error); return (error); } } /* * Set flag if IR Firmware is loaded. If the RAID Capability has * changed from the previous IOC Facts, log a warning, but only if * checking this after a Diag Reset and not during attach. 
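The IOCCapabilities word logged above uses FreeBSD's %b printf format: the leading "\20" selects hexadecimal output, and each later escape names a 1-based bit position (the escapes are octal, so "\21" means bit 17). A few of the masks spelled out explicitly (flag names shortened; positions match the format string above):

```c
#include <assert.h>
#include <stdint.h>

/* Bit positions in the "%b" string are 1-based octal escapes:
 * "\3" is bit 3 (mask 1 << 2), "\7" is bit 7, "\21" is bit 17. */
#define CAP_SCSI_TASK_FULL (1u << (3 - 1))    /* "\3ScsiTaskFull" */
#define CAP_DIAG_TRACE     (1u << (4 - 1))    /* "\4DiagTrace" */
#define CAP_EEDP           (1u << (7 - 1))    /* "\7EEDP" */
#define CAP_HOST_DISC      (1u << (021 - 1))  /* "\21HostDisc" = bit 17 */

static int has_cap(uint32_t caps, uint32_t mask)
{
    return (caps & mask) != 0;
}
```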
*/ saved_mode = sc->ir_firmware; if (sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID) sc->ir_firmware = 1; if (!attaching) { if (sc->ir_firmware != saved_mode) { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "new IR/IT mode " "in IOC Facts does not match previous mode\n"); } } /* Only deallocate and reallocate if relevant IOC Facts have changed */ reallocating = FALSE; sc->mpr_flags &= ~MPR_FLAGS_REALLOCATED; if ((!attaching) && ((saved_facts.MsgVersion != sc->facts->MsgVersion) || (saved_facts.HeaderVersion != sc->facts->HeaderVersion) || (saved_facts.MaxChainDepth != sc->facts->MaxChainDepth) || (saved_facts.RequestCredit != sc->facts->RequestCredit) || (saved_facts.ProductID != sc->facts->ProductID) || (saved_facts.IOCCapabilities != sc->facts->IOCCapabilities) || (saved_facts.IOCRequestFrameSize != sc->facts->IOCRequestFrameSize) || (saved_facts.IOCMaxChainSegmentSize != sc->facts->IOCMaxChainSegmentSize) || (saved_facts.MaxTargets != sc->facts->MaxTargets) || (saved_facts.MaxSasExpanders != sc->facts->MaxSasExpanders) || (saved_facts.MaxEnclosures != sc->facts->MaxEnclosures) || (saved_facts.HighPriorityCredit != sc->facts->HighPriorityCredit) || (saved_facts.MaxReplyDescriptorPostQueueDepth != sc->facts->MaxReplyDescriptorPostQueueDepth) || (saved_facts.ReplyFrameSize != sc->facts->ReplyFrameSize) || (saved_facts.MaxVolumes != sc->facts->MaxVolumes) || (saved_facts.MaxPersistentEntries != sc->facts->MaxPersistentEntries))) { reallocating = TRUE; /* Record that we reallocated everything */ sc->mpr_flags |= MPR_FLAGS_REALLOCATED; } /* * Some things should be done if attaching or re-allocating after a Diag * Reset, but are not needed after a Diag Reset if the FW has not * changed. */ if (attaching || reallocating) { /* * Check if controller supports FW diag buffers and set flag to * enable each type. */ if (sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_DIAG_TRACE_BUFFER) sc->fw_diag_buffer_list[MPI2_DIAG_BUF_TYPE_TRACE]. 
enabled = TRUE; if (sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_SNAPSHOT_BUFFER) sc->fw_diag_buffer_list[MPI2_DIAG_BUF_TYPE_SNAPSHOT]. enabled = TRUE; if (sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_EXTENDED_BUFFER) sc->fw_diag_buffer_list[MPI2_DIAG_BUF_TYPE_EXTENDED]. enabled = TRUE; /* * Set flags for some supported items. */ if (sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_EEDP) sc->eedp_enabled = TRUE; if (sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_TLR) sc->control_TLR = TRUE; - if (sc->facts->IOCCapabilities & - MPI26_IOCFACTS_CAPABILITY_ATOMIC_REQ) + if ((sc->facts->IOCCapabilities & + MPI26_IOCFACTS_CAPABILITY_ATOMIC_REQ) && + (sc->mpr_flags & MPR_FLAGS_SEA_IOC)) sc->atomic_desc_capable = TRUE; mpr_resize_queues(sc); /* * Initialize all Tail Queues */ TAILQ_INIT(&sc->req_list); TAILQ_INIT(&sc->high_priority_req_list); TAILQ_INIT(&sc->chain_list); TAILQ_INIT(&sc->prp_page_list); TAILQ_INIT(&sc->tm_list); } /* * If doing a Diag Reset and the FW is significantly different * (reallocating will be set above in IOC Facts comparison), then all * buffers based on the IOC Facts will need to be freed before they are * reallocated. */ if (reallocating) { mpr_iocfacts_free(sc); mprsas_realloc_targets(sc, saved_facts.MaxTargets + saved_facts.MaxVolumes); } /* * Any deallocation has been completed. Now start reallocating * if needed. Will only need to reallocate if attaching or if the new * IOC Facts are different from the previous IOC Facts after a Diag * Reset. Targets have already been allocated above if needed. 
*/ error = 0; while (attaching || reallocating) { if ((error = mpr_alloc_hw_queues(sc)) != 0) break; if ((error = mpr_alloc_replies(sc)) != 0) break; if ((error = mpr_alloc_requests(sc)) != 0) break; if ((error = mpr_alloc_queues(sc)) != 0) break; break; } if (error) { mpr_dprint(sc, MPR_INIT|MPR_ERROR, "Failed to alloc queues with error %d\n", error); mpr_free(sc); return (error); } /* Always initialize the queues */ bzero(sc->free_queue, sc->fqdepth * 4); mpr_init_queues(sc); /* * Always get the chip out of the reset state, but only panic if not * attaching. If attaching and there is an error, that is handled by * the OS. */ error = mpr_transition_operational(sc); if (error != 0) { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "Failed to " "transition to operational with error %d\n", error); mpr_free(sc); return (error); } /* * Finish the queue initialization. * These are set here instead of in mpr_init_queues() because the * IOC resets these values during the state transition in * mpr_transition_operational(). The free index is set to 1 * because the corresponding index in the IOC is set to 0, and the * IOC treats the queues as full if both are set to the same value. * Hence the reason that the queue can't hold all of the possible * replies. */ sc->replypostindex = 0; mpr_regwrite(sc, MPI2_REPLY_FREE_HOST_INDEX_OFFSET, sc->replyfreeindex); mpr_regwrite(sc, MPI2_REPLY_POST_HOST_INDEX_OFFSET, 0); /* * Attach the subsystems so they can prepare their event masks. 
* XXX Should be dynamic so that IM/IR and user modules can attach */ error = 0; while (attaching) { mpr_dprint(sc, MPR_INIT, "Attaching subsystems\n"); if ((error = mpr_attach_log(sc)) != 0) break; if ((error = mpr_attach_sas(sc)) != 0) break; if ((error = mpr_attach_user(sc)) != 0) break; break; } if (error) { mpr_dprint(sc, MPR_INIT|MPR_ERROR, "Failed to attach all subsystems: error %d\n", error); mpr_free(sc); return (error); } /* * XXX If the number of MSI-X vectors changes during re-init, this * won't see it and adjust. */ if (attaching && (error = mpr_pci_setup_interrupts(sc)) != 0) { mpr_dprint(sc, MPR_INIT|MPR_ERROR, "Failed to setup interrupts\n"); mpr_free(sc); return (error); } return (error); } /* * This is called when memory is being freed (during detach, for example) and * when buffers need to be reallocated due to a Diag Reset. */ static void mpr_iocfacts_free(struct mpr_softc *sc) { struct mpr_command *cm; int i; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if (sc->free_busaddr != 0) bus_dmamap_unload(sc->queues_dmat, sc->queues_map); if (sc->free_queue != NULL) bus_dmamem_free(sc->queues_dmat, sc->free_queue, sc->queues_map); if (sc->queues_dmat != NULL) bus_dma_tag_destroy(sc->queues_dmat); if (sc->chain_frames != NULL) { bus_dmamap_unload(sc->chain_dmat, sc->chain_map); bus_dmamem_free(sc->chain_dmat, sc->chain_frames, sc->chain_map); } if (sc->chain_dmat != NULL) bus_dma_tag_destroy(sc->chain_dmat); if (sc->sense_busaddr != 0) bus_dmamap_unload(sc->sense_dmat, sc->sense_map); if (sc->sense_frames != NULL) bus_dmamem_free(sc->sense_dmat, sc->sense_frames, sc->sense_map); if (sc->sense_dmat != NULL) bus_dma_tag_destroy(sc->sense_dmat); if (sc->prp_page_busaddr != 0) bus_dmamap_unload(sc->prp_page_dmat, sc->prp_page_map); if (sc->prp_pages != NULL) bus_dmamem_free(sc->prp_page_dmat, sc->prp_pages, sc->prp_page_map); if (sc->prp_page_dmat != NULL) bus_dma_tag_destroy(sc->prp_page_dmat); if (sc->reply_busaddr != 0) bus_dmamap_unload(sc->reply_dmat,
sc->reply_map); if (sc->reply_frames != NULL) bus_dmamem_free(sc->reply_dmat, sc->reply_frames, sc->reply_map); if (sc->reply_dmat != NULL) bus_dma_tag_destroy(sc->reply_dmat); if (sc->req_busaddr != 0) bus_dmamap_unload(sc->req_dmat, sc->req_map); if (sc->req_frames != NULL) bus_dmamem_free(sc->req_dmat, sc->req_frames, sc->req_map); if (sc->req_dmat != NULL) bus_dma_tag_destroy(sc->req_dmat); if (sc->chains != NULL) free(sc->chains, M_MPR); if (sc->prps != NULL) free(sc->prps, M_MPR); if (sc->commands != NULL) { for (i = 1; i < sc->num_reqs; i++) { cm = &sc->commands[i]; bus_dmamap_destroy(sc->buffer_dmat, cm->cm_dmamap); } free(sc->commands, M_MPR); } if (sc->buffer_dmat != NULL) bus_dma_tag_destroy(sc->buffer_dmat); mpr_pci_free_interrupts(sc); free(sc->queues, M_MPR); sc->queues = NULL; } /* * The terms diag reset and hard reset are used interchangeably in the MPI * docs to mean resetting the controller chip. In this code diag reset * cleans everything up, and the hard reset function just sends the reset * sequence to the chip. This should probably be refactored so that every * subsystem gets a reset notification of some sort, and can clean up * appropriately. */ int mpr_reinit(struct mpr_softc *sc) { int error; struct mprsas_softc *sassc; sassc = sc->sassc; MPR_FUNCTRACE(sc); mtx_assert(&sc->mpr_mtx, MA_OWNED); mpr_dprint(sc, MPR_INIT|MPR_INFO, "Reinitializing controller\n"); if (sc->mpr_flags & MPR_FLAGS_DIAGRESET) { mpr_dprint(sc, MPR_INIT, "Reset already in progress\n"); return 0; } /* * Make sure the completion callbacks can recognize they're getting * a NULL cm_reply due to a reset. */ sc->mpr_flags |= MPR_FLAGS_DIAGRESET; /* * Mask interrupts here. 
*/ mpr_dprint(sc, MPR_INIT, "Masking interrupts and resetting\n"); mpr_mask_intr(sc); error = mpr_diag_reset(sc, CAN_SLEEP); if (error != 0) { panic("%s hard reset failed with error %d\n", __func__, error); } /* Restore the PCI state, including the MSI-X registers */ mpr_pci_restore(sc); /* Give the I/O subsystem special priority to get itself prepared */ mprsas_handle_reinit(sc); /* * Get IOC Facts and allocate all structures based on this information. * The attach function will also call mpr_iocfacts_allocate at startup. * If relevant values have changed in IOC Facts, this function will free * all of the memory based on IOC Facts and reallocate that memory. */ if ((error = mpr_iocfacts_allocate(sc, FALSE)) != 0) { panic("%s IOC Facts based allocation failed with error %d\n", __func__, error); } /* * Mapping structures will be re-allocated after getting IOC Page8, so * free these structures here. */ mpr_mapping_exit(sc); /* * The static page function currently read is IOC Page8. Others can be * added in the future. It's possible that the values in IOC Page8 have * changed after a Diag Reset due to user modification, so always read * these. Interrupts are masked, so unmask them before getting config * pages. */ mpr_unmask_intr(sc); sc->mpr_flags &= ~MPR_FLAGS_DIAGRESET; mpr_base_static_config_pages(sc); /* * Some mapping info is based on IOC Page8 data, so re-initialize the * mapping tables. */ mpr_mapping_initialize(sc); /* * Restart will reload the event masks clobbered by the reset, and * then enable the port. */ mpr_reregister_events(sc); /* the end of discovery will release the simq, so we're done. */ mpr_dprint(sc, MPR_INIT|MPR_XINFO, "Finished sc %p post %u free %u\n", sc, sc->replypostindex, sc->replyfreeindex); mprsas_release_simq_reinit(sassc); mpr_dprint(sc, MPR_INIT, "%s exit error= %d\n", __func__, error); return 0; } /* Wait for the chip to ACK a word that we've put into its FIFO * Wait for <timeout> seconds.
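The timeout arithmetic described in the comment for mpr_wait_db_ack() can be made concrete: the loop runs 1000*timeout iterations of roughly 1 ms when it may sleep, or 2000*timeout iterations of 0.5 ms when it must busy-poll, so both variants cover the same total. A sketch of that arithmetic (constants are illustrative, mirroring the driver's cntdn computation):

```c
#include <assert.h>

#define CAN_SLEEP 1
#define NO_SLEEP  0

/* Loop bound as computed for cntdn: 1 ms sleeps, or 0.5 ms delays. */
static unsigned loop_count(int sleep_flag, unsigned timeout_sec)
{
    return (sleep_flag == CAN_SLEEP) ? 1000u * timeout_sec
                                     : 2000u * timeout_sec;
}

/* Total wait in milliseconds: iterations times the per-iteration
 * wait, counted in half-milliseconds (2 when sleeping, 1 when
 * busy-polling). */
static unsigned total_wait_ms(int sleep_flag, unsigned timeout_sec)
{
    unsigned half_ms_per_iter = (sleep_flag == CAN_SLEEP) ? 2 : 1;
    return loop_count(sleep_flag, timeout_sec) * half_ms_per_iter / 2;
}
```

Either way a 5-second timeout yields 5000 ms of total wait, matching the [ 0.5 * (2000 * timeout) ] expression in the comment.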
In single loop wait for busy loop * for 500 microseconds. * Total is [ 0.5 * (2000 * ) ] in miliseconds. * */ static int mpr_wait_db_ack(struct mpr_softc *sc, int timeout, int sleep_flag) { u32 cntdn, count; u32 int_status; u32 doorbell; count = 0; cntdn = (sleep_flag == CAN_SLEEP) ? 1000*timeout : 2000*timeout; do { int_status = mpr_regread(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET); if (!(int_status & MPI2_HIS_SYS2IOC_DB_STATUS)) { mpr_dprint(sc, MPR_TRACE, "%s: successful count(%d), " "timeout(%d)\n", __func__, count, timeout); return 0; } else if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) { doorbell = mpr_regread(sc, MPI2_DOORBELL_OFFSET); if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) { mpr_dprint(sc, MPR_FAULT, "fault_state(0x%04x)!\n", doorbell); return (EFAULT); } } else if (int_status == 0xFFFFFFFF) goto out; /* * If it can sleep, sleep for 1 milisecond, else busy loop for * 0.5 milisecond */ if (mtx_owned(&sc->mpr_mtx) && sleep_flag == CAN_SLEEP) msleep(&sc->msleep_fake_chan, &sc->mpr_mtx, 0, "mprdba", hz/1000); else if (sleep_flag == CAN_SLEEP) pause("mprdba", hz/1000); else DELAY(500); count++; } while (--cntdn); out: mpr_dprint(sc, MPR_FAULT, "%s: failed due to timeout count(%d), " "int_status(%x)!\n", __func__, count, int_status); return (ETIMEDOUT); } /* Wait for the chip to signal that the next word in its FIFO can be fetched */ static int mpr_wait_db_int(struct mpr_softc *sc) { int retry; for (retry = 0; retry < MPR_DB_MAX_WAIT; retry++) { if ((mpr_regread(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET) & MPI2_HIS_IOC2SYS_DB_STATUS) != 0) return (0); DELAY(2000); } return (ETIMEDOUT); } /* Step through the synchronous command state machine, i.e. 
"Doorbell mode" */ static int mpr_request_sync(struct mpr_softc *sc, void *req, MPI2_DEFAULT_REPLY *reply, int req_sz, int reply_sz, int timeout) { uint32_t *data32; uint16_t *data16; int i, count, ioc_sz, residual; int sleep_flags = CAN_SLEEP; #if __FreeBSD_version >= 1000029 if (curthread->td_no_sleeping) #else //__FreeBSD_version < 1000029 if (curthread->td_pflags & TDP_NOSLEEPING) #endif //__FreeBSD_version >= 1000029 sleep_flags = NO_SLEEP; /* Step 1 */ mpr_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); /* Step 2 */ if (mpr_regread(sc, MPI2_DOORBELL_OFFSET) & MPI2_DOORBELL_USED) return (EBUSY); /* Step 3 * Announce that a message is coming through the doorbell. Messages * are pushed at 32bit words, so round up if needed. */ count = (req_sz + 3) / 4; mpr_regwrite(sc, MPI2_DOORBELL_OFFSET, (MPI2_FUNCTION_HANDSHAKE << MPI2_DOORBELL_FUNCTION_SHIFT) | (count << MPI2_DOORBELL_ADD_DWORDS_SHIFT)); /* Step 4 */ if (mpr_wait_db_int(sc) || (mpr_regread(sc, MPI2_DOORBELL_OFFSET) & MPI2_DOORBELL_USED) == 0) { mpr_dprint(sc, MPR_FAULT, "Doorbell failed to activate\n"); return (ENXIO); } mpr_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); if (mpr_wait_db_ack(sc, 5, sleep_flags) != 0) { mpr_dprint(sc, MPR_FAULT, "Doorbell handshake failed\n"); return (ENXIO); } /* Step 5 */ /* Clock out the message data synchronously in 32-bit dwords*/ data32 = (uint32_t *)req; for (i = 0; i < count; i++) { mpr_regwrite(sc, MPI2_DOORBELL_OFFSET, htole32(data32[i])); if (mpr_wait_db_ack(sc, 5, sleep_flags) != 0) { mpr_dprint(sc, MPR_FAULT, "Timeout while writing doorbell\n"); return (ENXIO); } } /* Step 6 */ /* Clock in the reply in 16-bit words. The total length of the * message is always in the 4th byte, so clock out the first 2 words * manually, then loop the rest. 
*/ data16 = (uint16_t *)reply; if (mpr_wait_db_int(sc) != 0) { mpr_dprint(sc, MPR_FAULT, "Timeout reading doorbell 0\n"); return (ENXIO); } data16[0] = mpr_regread(sc, MPI2_DOORBELL_OFFSET) & MPI2_DOORBELL_DATA_MASK; mpr_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); if (mpr_wait_db_int(sc) != 0) { mpr_dprint(sc, MPR_FAULT, "Timeout reading doorbell 1\n"); return (ENXIO); } data16[1] = mpr_regread(sc, MPI2_DOORBELL_OFFSET) & MPI2_DOORBELL_DATA_MASK; mpr_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); /* Number of 32bit words in the message */ ioc_sz = reply->MsgLength; /* * Figure out how many 16bit words to clock in without overrunning. * The precision loss with dividing reply_sz can safely be * ignored because the messages can only be multiples of 32bits. */ residual = 0; count = MIN((reply_sz / 4), ioc_sz) * 2; if (count < ioc_sz * 2) { residual = ioc_sz * 2 - count; mpr_dprint(sc, MPR_ERROR, "Driver error, throwing away %d " "residual message words\n", residual); } for (i = 2; i < count; i++) { if (mpr_wait_db_int(sc) != 0) { mpr_dprint(sc, MPR_FAULT, "Timeout reading doorbell %d\n", i); return (ENXIO); } data16[i] = mpr_regread(sc, MPI2_DOORBELL_OFFSET) & MPI2_DOORBELL_DATA_MASK; mpr_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); } /* * Pull out residual words that won't fit into the provided buffer. * This keeps the chip from hanging due to a driver programming * error. 
*/ while (residual--) { if (mpr_wait_db_int(sc) != 0) { mpr_dprint(sc, MPR_FAULT, "Timeout reading doorbell\n"); return (ENXIO); } (void)mpr_regread(sc, MPI2_DOORBELL_OFFSET); mpr_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); } /* Step 7 */ if (mpr_wait_db_int(sc) != 0) { mpr_dprint(sc, MPR_FAULT, "Timeout waiting to exit doorbell\n"); return (ENXIO); } if (mpr_regread(sc, MPI2_DOORBELL_OFFSET) & MPI2_DOORBELL_USED) mpr_dprint(sc, MPR_FAULT, "Warning, doorbell still active\n"); mpr_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); return (0); } static void mpr_enqueue_request(struct mpr_softc *sc, struct mpr_command *cm) { request_descriptor rd; MPR_FUNCTRACE(sc); mpr_dprint(sc, MPR_TRACE, "SMID %u cm %p ccb %p\n", cm->cm_desc.Default.SMID, cm, cm->cm_ccb); if (sc->mpr_flags & MPR_FLAGS_ATTACH_DONE && !(sc->mpr_flags & MPR_FLAGS_SHUTDOWN)) mtx_assert(&sc->mpr_mtx, MA_OWNED); if (++sc->io_cmds_active > sc->io_cmds_highwater) sc->io_cmds_highwater++; KASSERT(cm->cm_state == MPR_CM_STATE_BUSY, ("command not busy\n")); cm->cm_state = MPR_CM_STATE_INQUEUE; if (sc->atomic_desc_capable) { rd.u.low = cm->cm_desc.Words.Low; mpr_regwrite(sc, MPI26_ATOMIC_REQUEST_DESCRIPTOR_POST_OFFSET, rd.u.low); } else { rd.u.low = cm->cm_desc.Words.Low; rd.u.high = cm->cm_desc.Words.High; rd.word = htole64(rd.word); mpr_regwrite(sc, MPI2_REQUEST_DESCRIPTOR_POST_LOW_OFFSET, rd.u.low); mpr_regwrite(sc, MPI2_REQUEST_DESCRIPTOR_POST_HIGH_OFFSET, rd.u.high); } } /* * Just the FACTS, ma'am. 
 */
static int
mpr_get_iocfacts(struct mpr_softc *sc, MPI2_IOC_FACTS_REPLY *facts)
{
	MPI2_DEFAULT_REPLY *reply;
	MPI2_IOC_FACTS_REQUEST request;
	int error, req_sz, reply_sz;

	MPR_FUNCTRACE(sc);
	mpr_dprint(sc, MPR_INIT, "%s entered\n", __func__);

	req_sz = sizeof(MPI2_IOC_FACTS_REQUEST);
	reply_sz = sizeof(MPI2_IOC_FACTS_REPLY);
	reply = (MPI2_DEFAULT_REPLY *)facts;

	bzero(&request, req_sz);
	request.Function = MPI2_FUNCTION_IOC_FACTS;
	error = mpr_request_sync(sc, &request, reply, req_sz, reply_sz, 5);

	mpr_dprint(sc, MPR_INIT, "%s exit, error= %d\n", __func__, error);
	return (error);
}

static int
mpr_send_iocinit(struct mpr_softc *sc)
{
	MPI2_IOC_INIT_REQUEST init;
	MPI2_DEFAULT_REPLY reply;
	int req_sz, reply_sz, error;
	struct timeval now;
	uint64_t time_in_msec;

	MPR_FUNCTRACE(sc);
	mpr_dprint(sc, MPR_INIT, "%s entered\n", __func__);

	/* Do a quick sanity check on proper initialization */
	if ((sc->pqdepth == 0) || (sc->fqdepth == 0) || (sc->reqframesz == 0) ||
	    (sc->replyframesz == 0)) {
		mpr_dprint(sc, MPR_INIT|MPR_ERROR,
		    "Driver not fully initialized for IOCInit\n");
		return (EINVAL);
	}

	req_sz = sizeof(MPI2_IOC_INIT_REQUEST);
	reply_sz = sizeof(MPI2_IOC_INIT_REPLY);
	bzero(&init, req_sz);
	bzero(&reply, reply_sz);

	/*
	 * Fill in the init block.  Note that most addresses are
	 * deliberately in the lower 32bits of memory.  This is a micro-
	 * optimization for PCI/PCIX, though it's not clear if it helps PCIe.
*/ init.Function = MPI2_FUNCTION_IOC_INIT; init.WhoInit = MPI2_WHOINIT_HOST_DRIVER; init.MsgVersion = htole16(MPI2_VERSION); init.HeaderVersion = htole16(MPI2_HEADER_VERSION); init.SystemRequestFrameSize = htole16((uint16_t)(sc->reqframesz / 4)); init.ReplyDescriptorPostQueueDepth = htole16(sc->pqdepth); init.ReplyFreeQueueDepth = htole16(sc->fqdepth); init.SenseBufferAddressHigh = 0; init.SystemReplyAddressHigh = 0; init.SystemRequestFrameBaseAddress.High = 0; init.SystemRequestFrameBaseAddress.Low = htole32((uint32_t)sc->req_busaddr); init.ReplyDescriptorPostQueueAddress.High = 0; init.ReplyDescriptorPostQueueAddress.Low = htole32((uint32_t)sc->post_busaddr); init.ReplyFreeQueueAddress.High = 0; init.ReplyFreeQueueAddress.Low = htole32((uint32_t)sc->free_busaddr); getmicrotime(&now); time_in_msec = (now.tv_sec * 1000 + now.tv_usec/1000); init.TimeStamp.High = htole32((time_in_msec >> 32) & 0xFFFFFFFF); init.TimeStamp.Low = htole32(time_in_msec & 0xFFFFFFFF); init.HostPageSize = HOST_PAGE_SIZE_4K; error = mpr_request_sync(sc, &init, &reply, req_sz, reply_sz, 5); if ((reply.IOCStatus & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) error = ENXIO; mpr_dprint(sc, MPR_INIT, "IOCInit status= 0x%x\n", reply.IOCStatus); mpr_dprint(sc, MPR_INIT, "%s exit\n", __func__); return (error); } void mpr_memaddr_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error) { bus_addr_t *addr; addr = arg; *addr = segs[0].ds_addr; } void mpr_memaddr_wait_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error) { struct mpr_busdma_context *ctx; int need_unload, need_free; ctx = (struct mpr_busdma_context *)arg; need_unload = 0; need_free = 0; mpr_lock(ctx->softc); ctx->error = error; ctx->completed = 1; if ((error == 0) && (ctx->abandoned == 0)) { *ctx->addr = segs[0].ds_addr; } else { if (nsegs != 0) need_unload = 1; if (ctx->abandoned != 0) need_free = 1; } if (need_free == 0) wakeup(ctx); mpr_unlock(ctx->softc); if (need_unload != 0) { bus_dmamap_unload(ctx->buffer_dmat, 
ctx->buffer_dmamap); *ctx->addr = 0; } if (need_free != 0) free(ctx, M_MPR); } static int mpr_alloc_queues(struct mpr_softc *sc) { struct mpr_queue *q; int nq, i; nq = sc->msi_msgs; mpr_dprint(sc, MPR_INIT|MPR_XINFO, "Allocating %d I/O queues\n", nq); sc->queues = malloc(sizeof(struct mpr_queue) * nq, M_MPR, M_NOWAIT|M_ZERO); if (sc->queues == NULL) return (ENOMEM); for (i = 0; i < nq; i++) { q = &sc->queues[i]; mpr_dprint(sc, MPR_INIT, "Configuring queue %d %p\n", i, q); q->sc = sc; q->qnum = i; } return (0); } static int mpr_alloc_hw_queues(struct mpr_softc *sc) { bus_addr_t queues_busaddr; uint8_t *queues; int qsize, fqsize, pqsize; /* * The reply free queue contains 4 byte entries in multiples of 16 and * aligned on a 16 byte boundary. There must always be an unused entry. * This queue supplies fresh reply frames for the firmware to use. * * The reply descriptor post queue contains 8 byte entries in * multiples of 16 and aligned on a 16 byte boundary. This queue * contains filled-in reply frames sent from the firmware to the host. * * These two queues are allocated together for simplicity. 
*/ sc->fqdepth = roundup2(sc->num_replies + 1, 16); sc->pqdepth = roundup2(sc->num_replies + 1, 16); fqsize= sc->fqdepth * 4; pqsize = sc->pqdepth * 8; qsize = fqsize + pqsize; if (bus_dma_tag_create( sc->mpr_parent_dmat, /* parent */ 16, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ qsize, /* maxsize */ 1, /* nsegments */ qsize, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->queues_dmat)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate queues DMA tag\n"); return (ENOMEM); } if (bus_dmamem_alloc(sc->queues_dmat, (void **)&queues, BUS_DMA_NOWAIT, &sc->queues_map)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate queues memory\n"); return (ENOMEM); } bzero(queues, qsize); bus_dmamap_load(sc->queues_dmat, sc->queues_map, queues, qsize, mpr_memaddr_cb, &queues_busaddr, 0); sc->free_queue = (uint32_t *)queues; sc->free_busaddr = queues_busaddr; sc->post_queue = (MPI2_REPLY_DESCRIPTORS_UNION *)(queues + fqsize); sc->post_busaddr = queues_busaddr + fqsize; mpr_dprint(sc, MPR_INIT, "free queue busaddr= %#016jx size= %d\n", (uintmax_t)sc->free_busaddr, fqsize); mpr_dprint(sc, MPR_INIT, "reply queue busaddr= %#016jx size= %d\n", (uintmax_t)sc->post_busaddr, pqsize); return (0); } static int mpr_alloc_replies(struct mpr_softc *sc) { int rsize, num_replies; /* Store the reply frame size in bytes rather than as 32bit words */ sc->replyframesz = sc->facts->ReplyFrameSize * 4; /* * sc->num_replies should be one less than sc->fqdepth. We need to * allocate space for sc->fqdepth replies, but only sc->num_replies * replies can be used at once. 
*/ num_replies = max(sc->fqdepth, sc->num_replies); rsize = sc->replyframesz * num_replies; if (bus_dma_tag_create( sc->mpr_parent_dmat, /* parent */ 4, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ rsize, /* maxsize */ 1, /* nsegments */ rsize, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->reply_dmat)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate replies DMA tag\n"); return (ENOMEM); } if (bus_dmamem_alloc(sc->reply_dmat, (void **)&sc->reply_frames, BUS_DMA_NOWAIT, &sc->reply_map)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate replies memory\n"); return (ENOMEM); } bzero(sc->reply_frames, rsize); bus_dmamap_load(sc->reply_dmat, sc->reply_map, sc->reply_frames, rsize, mpr_memaddr_cb, &sc->reply_busaddr, 0); mpr_dprint(sc, MPR_INIT, "reply frames busaddr= %#016jx size= %d\n", (uintmax_t)sc->reply_busaddr, rsize); return (0); } static void mpr_load_chains_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error) { struct mpr_softc *sc = arg; struct mpr_chain *chain; bus_size_t bo; int i, o, s; if (error != 0) return; for (i = 0, o = 0, s = 0; s < nsegs; s++) { for (bo = 0; bo + sc->chain_frame_size <= segs[s].ds_len; bo += sc->chain_frame_size) { chain = &sc->chains[i++]; chain->chain =(MPI2_SGE_IO_UNION *)(sc->chain_frames+o); chain->chain_busaddr = segs[s].ds_addr + bo; o += sc->chain_frame_size; mpr_free_chain(sc, chain); } if (bo != segs[s].ds_len) o += segs[s].ds_len - bo; } sc->chain_free_lowwater = i; } static int mpr_alloc_requests(struct mpr_softc *sc) { struct mpr_command *cm; int i, rsize, nsegs; rsize = sc->reqframesz * sc->num_reqs; if (bus_dma_tag_create( sc->mpr_parent_dmat, /* parent */ 16, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ rsize, /* maxsize */ 1, /* nsegments */ rsize, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ 
&sc->req_dmat)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate request DMA tag\n"); return (ENOMEM); } if (bus_dmamem_alloc(sc->req_dmat, (void **)&sc->req_frames, BUS_DMA_NOWAIT, &sc->req_map)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate request memory\n"); return (ENOMEM); } bzero(sc->req_frames, rsize); bus_dmamap_load(sc->req_dmat, sc->req_map, sc->req_frames, rsize, mpr_memaddr_cb, &sc->req_busaddr, 0); mpr_dprint(sc, MPR_INIT, "request frames busaddr= %#016jx size= %d\n", (uintmax_t)sc->req_busaddr, rsize); sc->chains = malloc(sizeof(struct mpr_chain) * sc->num_chains, M_MPR, M_NOWAIT | M_ZERO); if (!sc->chains) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate chain memory\n"); return (ENOMEM); } rsize = sc->chain_frame_size * sc->num_chains; if (bus_dma_tag_create( sc->mpr_parent_dmat, /* parent */ 16, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ rsize, /* maxsize */ howmany(rsize, PAGE_SIZE), /* nsegments */ rsize, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->chain_dmat)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate chain DMA tag\n"); return (ENOMEM); } if (bus_dmamem_alloc(sc->chain_dmat, (void **)&sc->chain_frames, BUS_DMA_NOWAIT | BUS_DMA_ZERO, &sc->chain_map)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate chain memory\n"); return (ENOMEM); } if (bus_dmamap_load(sc->chain_dmat, sc->chain_map, sc->chain_frames, rsize, mpr_load_chains_cb, sc, BUS_DMA_NOWAIT)) { mpr_dprint(sc, MPR_ERROR, "Cannot load chain memory\n"); bus_dmamem_free(sc->chain_dmat, sc->chain_frames, sc->chain_map); return (ENOMEM); } rsize = MPR_SENSE_LEN * sc->num_reqs; if (bus_dma_tag_create( sc->mpr_parent_dmat, /* parent */ 1, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ rsize, /* maxsize */ 1, /* nsegments */ rsize, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ 
&sc->sense_dmat)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate sense DMA tag\n"); return (ENOMEM); } if (bus_dmamem_alloc(sc->sense_dmat, (void **)&sc->sense_frames, BUS_DMA_NOWAIT, &sc->sense_map)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate sense memory\n"); return (ENOMEM); } bzero(sc->sense_frames, rsize); bus_dmamap_load(sc->sense_dmat, sc->sense_map, sc->sense_frames, rsize, mpr_memaddr_cb, &sc->sense_busaddr, 0); mpr_dprint(sc, MPR_INIT, "sense frames busaddr= %#016jx size= %d\n", (uintmax_t)sc->sense_busaddr, rsize); /* * Allocate NVMe PRP Pages for NVMe SGL support only if the FW supports * these devices. */ if ((sc->facts->MsgVersion >= MPI2_VERSION_02_06) && (sc->facts->ProtocolFlags & MPI2_IOCFACTS_PROTOCOL_NVME_DEVICES)) { if (mpr_alloc_nvme_prp_pages(sc) == ENOMEM) return (ENOMEM); } nsegs = (sc->maxio / PAGE_SIZE) + 1; if (bus_dma_tag_create( sc->mpr_parent_dmat, /* parent */ 1, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ BUS_SPACE_MAXSIZE_32BIT,/* maxsize */ nsegs, /* nsegments */ BUS_SPACE_MAXSIZE_32BIT,/* maxsegsize */ BUS_DMA_ALLOCNOW, /* flags */ busdma_lock_mutex, /* lockfunc */ &sc->mpr_mtx, /* lockarg */ &sc->buffer_dmat)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate buffer DMA tag\n"); return (ENOMEM); } /* * SMID 0 cannot be used as a free command per the firmware spec. * Just drop that command instead of risking accounting bugs. 
*/ sc->commands = malloc(sizeof(struct mpr_command) * sc->num_reqs, M_MPR, M_WAITOK | M_ZERO); if (!sc->commands) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate command memory\n"); return (ENOMEM); } for (i = 1; i < sc->num_reqs; i++) { cm = &sc->commands[i]; cm->cm_req = sc->req_frames + i * sc->reqframesz; cm->cm_req_busaddr = sc->req_busaddr + i * sc->reqframesz; cm->cm_sense = &sc->sense_frames[i]; cm->cm_sense_busaddr = sc->sense_busaddr + i * MPR_SENSE_LEN; cm->cm_desc.Default.SMID = i; cm->cm_sc = sc; cm->cm_state = MPR_CM_STATE_BUSY; TAILQ_INIT(&cm->cm_chain_list); TAILQ_INIT(&cm->cm_prp_page_list); callout_init_mtx(&cm->cm_callout, &sc->mpr_mtx, 0); /* XXX Is a failure here a critical problem? */ if (bus_dmamap_create(sc->buffer_dmat, 0, &cm->cm_dmamap) == 0) { if (i <= sc->num_prireqs) mpr_free_high_priority_command(sc, cm); else mpr_free_command(sc, cm); } else { panic("failed to allocate command %d\n", i); sc->num_reqs = i; break; } } return (0); } /* * Allocate contiguous buffers for PCIe NVMe devices for building native PRPs, * which are scatter/gather lists for NVMe devices. * * This buffer must be contiguous due to the nature of how NVMe PRPs are built * and translated by FW. * * returns ENOMEM if memory could not be allocated, otherwise returns 0. */ static int mpr_alloc_nvme_prp_pages(struct mpr_softc *sc) { int PRPs_per_page, PRPs_required, pages_required; int rsize, i; struct mpr_prp_page *prp_page; /* * Assuming a MAX_IO_SIZE of 1MB and a PAGE_SIZE of 4k, the max number * of PRPs (NVMe's Scatter/Gather Element) needed per I/O is: * MAX_IO_SIZE / PAGE_SIZE = 256 * * 1 PRP entry in main frame for PRP list pointer still leaves 255 PRPs * required for the remainder of the 1MB I/O. 512 PRPs can fit into one * page (4096 / 8 = 512), so only one page is required for each I/O. * * Each of these buffers will need to be contiguous. For simplicity, * only one buffer is allocated here, which has all of the space * required for the NVMe Queue Depth. 
If there are problems allocating * this one buffer, this function will need to change to allocate * individual, contiguous NVME_QDEPTH buffers. * * The real calculation will use the real max io size. Above is just an * example. * */ PRPs_required = sc->maxio / PAGE_SIZE; PRPs_per_page = (PAGE_SIZE / PRP_ENTRY_SIZE) - 1; pages_required = (PRPs_required / PRPs_per_page) + 1; sc->prp_buffer_size = PAGE_SIZE * pages_required; rsize = sc->prp_buffer_size * NVME_QDEPTH; if (bus_dma_tag_create( sc->mpr_parent_dmat, /* parent */ 4, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ rsize, /* maxsize */ 1, /* nsegments */ rsize, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->prp_page_dmat)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate NVMe PRP DMA " "tag\n"); return (ENOMEM); } if (bus_dmamem_alloc(sc->prp_page_dmat, (void **)&sc->prp_pages, BUS_DMA_NOWAIT, &sc->prp_page_map)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate NVMe PRP memory\n"); return (ENOMEM); } bzero(sc->prp_pages, rsize); bus_dmamap_load(sc->prp_page_dmat, sc->prp_page_map, sc->prp_pages, rsize, mpr_memaddr_cb, &sc->prp_page_busaddr, 0); sc->prps = malloc(sizeof(struct mpr_prp_page) * NVME_QDEPTH, M_MPR, M_WAITOK | M_ZERO); for (i = 0; i < NVME_QDEPTH; i++) { prp_page = &sc->prps[i]; prp_page->prp_page = (uint64_t *)(sc->prp_pages + i * sc->prp_buffer_size); prp_page->prp_page_busaddr = (uint64_t)(sc->prp_page_busaddr + i * sc->prp_buffer_size); mpr_free_prp_page(sc, prp_page); sc->prp_pages_free_lowwater++; } return (0); } static int mpr_init_queues(struct mpr_softc *sc) { int i; memset((uint8_t *)sc->post_queue, 0xff, sc->pqdepth * 8); /* * According to the spec, we need to use one less reply than we * have space for on the queue. So sc->num_replies (the number we * use) should be less than sc->fqdepth (allocated size). 
*/ if (sc->num_replies >= sc->fqdepth) return (EINVAL); /* * Initialize all of the free queue entries. */ for (i = 0; i < sc->fqdepth; i++) { sc->free_queue[i] = sc->reply_busaddr + (i * sc->replyframesz); } sc->replyfreeindex = sc->num_replies; return (0); } /* Get the driver parameter tunables. Lowest priority are the driver defaults. * Next are the global settings, if they exist. Highest are the per-unit * settings, if they exist. */ void mpr_get_tunables(struct mpr_softc *sc) { char tmpstr[80], mpr_debug[80]; /* XXX default to some debugging for now */ sc->mpr_debug = MPR_INFO | MPR_FAULT; sc->disable_msix = 0; sc->disable_msi = 0; sc->max_msix = MPR_MSIX_MAX; sc->max_chains = MPR_CHAIN_FRAMES; sc->max_io_pages = MPR_MAXIO_PAGES; sc->enable_ssu = MPR_SSU_ENABLE_SSD_DISABLE_HDD; sc->spinup_wait_time = DEFAULT_SPINUP_WAIT; sc->use_phynum = 1; sc->max_reqframes = MPR_REQ_FRAMES; sc->max_prireqframes = MPR_PRI_REQ_FRAMES; sc->max_replyframes = MPR_REPLY_FRAMES; sc->max_evtframes = MPR_EVT_REPLY_FRAMES; /* * Grab the global variables. 
*/ bzero(mpr_debug, 80); if (TUNABLE_STR_FETCH("hw.mpr.debug_level", mpr_debug, 80) != 0) mpr_parse_debug(sc, mpr_debug); TUNABLE_INT_FETCH("hw.mpr.disable_msix", &sc->disable_msix); TUNABLE_INT_FETCH("hw.mpr.disable_msi", &sc->disable_msi); TUNABLE_INT_FETCH("hw.mpr.max_msix", &sc->max_msix); TUNABLE_INT_FETCH("hw.mpr.max_chains", &sc->max_chains); TUNABLE_INT_FETCH("hw.mpr.max_io_pages", &sc->max_io_pages); TUNABLE_INT_FETCH("hw.mpr.enable_ssu", &sc->enable_ssu); TUNABLE_INT_FETCH("hw.mpr.spinup_wait_time", &sc->spinup_wait_time); TUNABLE_INT_FETCH("hw.mpr.use_phy_num", &sc->use_phynum); TUNABLE_INT_FETCH("hw.mpr.max_reqframes", &sc->max_reqframes); TUNABLE_INT_FETCH("hw.mpr.max_prireqframes", &sc->max_prireqframes); TUNABLE_INT_FETCH("hw.mpr.max_replyframes", &sc->max_replyframes); TUNABLE_INT_FETCH("hw.mpr.max_evtframes", &sc->max_evtframes); /* Grab the unit-instance variables */ snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.debug_level", device_get_unit(sc->mpr_dev)); bzero(mpr_debug, 80); if (TUNABLE_STR_FETCH(tmpstr, mpr_debug, 80) != 0) mpr_parse_debug(sc, mpr_debug); snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.disable_msix", device_get_unit(sc->mpr_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->disable_msix); snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.disable_msi", device_get_unit(sc->mpr_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->disable_msi); snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.max_msix", device_get_unit(sc->mpr_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_msix); snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.max_chains", device_get_unit(sc->mpr_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_chains); snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.max_io_pages", device_get_unit(sc->mpr_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_io_pages); bzero(sc->exclude_ids, sizeof(sc->exclude_ids)); snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.exclude_ids", device_get_unit(sc->mpr_dev)); TUNABLE_STR_FETCH(tmpstr, sc->exclude_ids, sizeof(sc->exclude_ids)); snprintf(tmpstr, 
sizeof(tmpstr), "dev.mpr.%d.enable_ssu", device_get_unit(sc->mpr_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->enable_ssu); snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.spinup_wait_time", device_get_unit(sc->mpr_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->spinup_wait_time); snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.use_phy_num", device_get_unit(sc->mpr_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->use_phynum); snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.max_reqframes", device_get_unit(sc->mpr_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_reqframes); snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.max_prireqframes", device_get_unit(sc->mpr_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_prireqframes); snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.max_replyframes", device_get_unit(sc->mpr_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_replyframes); snprintf(tmpstr, sizeof(tmpstr), "dev.mpr.%d.max_evtframes", device_get_unit(sc->mpr_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_evtframes); } static void mpr_setup_sysctl(struct mpr_softc *sc) { struct sysctl_ctx_list *sysctl_ctx = NULL; struct sysctl_oid *sysctl_tree = NULL; char tmpstr[80], tmpstr2[80]; /* * Setup the sysctl variable so the user can change the debug level * on the fly. 
*/ snprintf(tmpstr, sizeof(tmpstr), "MPR controller %d", device_get_unit(sc->mpr_dev)); snprintf(tmpstr2, sizeof(tmpstr2), "%d", device_get_unit(sc->mpr_dev)); sysctl_ctx = device_get_sysctl_ctx(sc->mpr_dev); if (sysctl_ctx != NULL) sysctl_tree = device_get_sysctl_tree(sc->mpr_dev); if (sysctl_tree == NULL) { sysctl_ctx_init(&sc->sysctl_ctx); sc->sysctl_tree = SYSCTL_ADD_NODE(&sc->sysctl_ctx, SYSCTL_STATIC_CHILDREN(_hw_mpr), OID_AUTO, tmpstr2, CTLFLAG_RD, 0, tmpstr); if (sc->sysctl_tree == NULL) return; sysctl_ctx = &sc->sysctl_ctx; sysctl_tree = sc->sysctl_tree; } SYSCTL_ADD_PROC(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "debug_level", CTLTYPE_STRING | CTLFLAG_RW | CTLFLAG_MPSAFE, sc, 0, mpr_debug_sysctl, "A", "mpr debug level"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "disable_msix", CTLFLAG_RD, &sc->disable_msix, 0, "Disable the use of MSI-X interrupts"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_msix", CTLFLAG_RD, &sc->max_msix, 0, "User-defined maximum number of MSIX queues"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "msix_msgs", CTLFLAG_RD, &sc->msi_msgs, 0, "Negotiated number of MSIX queues"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_reqframes", CTLFLAG_RD, &sc->max_reqframes, 0, "Total number of allocated request frames"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_prireqframes", CTLFLAG_RD, &sc->max_prireqframes, 0, "Total number of allocated high priority request frames"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_replyframes", CTLFLAG_RD, &sc->max_replyframes, 0, "Total number of allocated reply frames"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_evtframes", CTLFLAG_RD, &sc->max_evtframes, 0, "Total number of event frames allocated"); SYSCTL_ADD_STRING(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "firmware_version", CTLFLAG_RW, 
sc->fw_version, strlen(sc->fw_version), "firmware version"); SYSCTL_ADD_STRING(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "driver_version", CTLFLAG_RW, MPR_DRIVER_VERSION, strlen(MPR_DRIVER_VERSION), "driver version"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "io_cmds_active", CTLFLAG_RD, &sc->io_cmds_active, 0, "number of currently active commands"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "io_cmds_highwater", CTLFLAG_RD, &sc->io_cmds_highwater, 0, "maximum active commands seen"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "chain_free", CTLFLAG_RD, &sc->chain_free, 0, "number of free chain elements"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "chain_free_lowwater", CTLFLAG_RD, &sc->chain_free_lowwater, 0,"lowest number of free chain elements"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_chains", CTLFLAG_RD, &sc->max_chains, 0,"maximum chain frames that will be allocated"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_io_pages", CTLFLAG_RD, &sc->max_io_pages, 0,"maximum pages to allow per I/O (if <1 use " "IOCFacts)"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "enable_ssu", CTLFLAG_RW, &sc->enable_ssu, 0, "enable SSU to SATA SSD/HDD at shutdown"); SYSCTL_ADD_UQUAD(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "chain_alloc_fail", CTLFLAG_RD, &sc->chain_alloc_fail, "chain allocation failures"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "spinup_wait_time", CTLFLAG_RD, &sc->spinup_wait_time, DEFAULT_SPINUP_WAIT, "seconds to wait for " "spinup after SATA ID error"); SYSCTL_ADD_PROC(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "dump_reqs", CTLTYPE_OPAQUE | CTLFLAG_RD | CTLFLAG_SKIP, sc, 0, mpr_dump_reqs, "I", "Dump Active Requests"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "use_phy_num", CTLFLAG_RD, &sc->use_phynum, 0, 
"Use the phy number for enumeration"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "prp_pages_free", CTLFLAG_RD, &sc->prp_pages_free, 0, "number of free PRP pages"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "prp_pages_free_lowwater", CTLFLAG_RD, &sc->prp_pages_free_lowwater, 0,"lowest number of free PRP pages"); SYSCTL_ADD_UQUAD(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "prp_page_alloc_fail", CTLFLAG_RD, &sc->prp_page_alloc_fail, "PRP page allocation failures"); } static struct mpr_debug_string { char *name; int flag; } mpr_debug_strings[] = { {"info", MPR_INFO}, {"fault", MPR_FAULT}, {"event", MPR_EVENT}, {"log", MPR_LOG}, {"recovery", MPR_RECOVERY}, {"error", MPR_ERROR}, {"init", MPR_INIT}, {"xinfo", MPR_XINFO}, {"user", MPR_USER}, {"mapping", MPR_MAPPING}, {"trace", MPR_TRACE} }; enum mpr_debug_level_combiner { COMB_NONE, COMB_ADD, COMB_SUB }; static int mpr_debug_sysctl(SYSCTL_HANDLER_ARGS) { struct mpr_softc *sc; struct mpr_debug_string *string; struct sbuf *sbuf; char *buffer; size_t sz; int i, len, debug, error; sc = (struct mpr_softc *)arg1; error = sysctl_wire_old_buffer(req, 0); if (error != 0) return (error); sbuf = sbuf_new_for_sysctl(NULL, NULL, 128, req); debug = sc->mpr_debug; sbuf_printf(sbuf, "%#x", debug); sz = sizeof(mpr_debug_strings) / sizeof(mpr_debug_strings[0]); for (i = 0; i < sz; i++) { string = &mpr_debug_strings[i]; if (debug & string->flag) sbuf_printf(sbuf, ",%s", string->name); } error = sbuf_finish(sbuf); sbuf_delete(sbuf); if (error || req->newptr == NULL) return (error); len = req->newlen - req->newidx; if (len == 0) return (0); buffer = malloc(len, M_MPR, M_ZERO|M_WAITOK); error = SYSCTL_IN(req, buffer, len); mpr_parse_debug(sc, buffer); free(buffer, M_MPR); return (error); } static void mpr_parse_debug(struct mpr_softc *sc, char *list) { struct mpr_debug_string *string; enum mpr_debug_level_combiner op; char *token, *endtoken; size_t sz; int flags, i; if (list == NULL || 
*list == '\0') return; if (*list == '+') { op = COMB_ADD; list++; } else if (*list == '-') { op = COMB_SUB; list++; } else op = COMB_NONE; if (*list == '\0') return; flags = 0; sz = sizeof(mpr_debug_strings) / sizeof(mpr_debug_strings[0]); while ((token = strsep(&list, ":,")) != NULL) { /* Handle integer flags */ flags |= strtol(token, &endtoken, 0); if (token != endtoken) continue; /* Handle text flags */ for (i = 0; i < sz; i++) { string = &mpr_debug_strings[i]; if (strcasecmp(token, string->name) == 0) { flags |= string->flag; break; } } } switch (op) { case COMB_NONE: sc->mpr_debug = flags; break; case COMB_ADD: sc->mpr_debug |= flags; break; case COMB_SUB: sc->mpr_debug &= (~flags); break; } return; } struct mpr_dumpreq_hdr { uint32_t smid; uint32_t state; uint32_t numframes; uint32_t deschi; uint32_t desclo; }; static int mpr_dump_reqs(SYSCTL_HANDLER_ARGS) { struct mpr_softc *sc; struct mpr_chain *chain, *chain1; struct mpr_command *cm; struct mpr_dumpreq_hdr hdr; struct sbuf *sb; uint32_t smid, state; int i, numreqs, error = 0; sc = (struct mpr_softc *)arg1; if ((error = priv_check(curthread, PRIV_DRIVER)) != 0) { printf("priv check error %d\n", error); return (error); } state = MPR_CM_STATE_INQUEUE; smid = 1; numreqs = sc->num_reqs; if (req->newptr != NULL) return (EINVAL); if (smid == 0 || smid > sc->num_reqs) return (EINVAL); if (numreqs <= 0 || (numreqs + smid > sc->num_reqs)) numreqs = sc->num_reqs; sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req); /* Best effort, no locking */ for (i = smid; i < numreqs; i++) { cm = &sc->commands[i]; if (cm->cm_state != state) continue; hdr.smid = i; hdr.state = cm->cm_state; hdr.numframes = 1; hdr.deschi = cm->cm_desc.Words.High; hdr.desclo = cm->cm_desc.Words.Low; TAILQ_FOREACH_SAFE(chain, &cm->cm_chain_list, chain_link, chain1) hdr.numframes++; sbuf_bcat(sb, &hdr, sizeof(hdr)); sbuf_bcat(sb, cm->cm_req, 128); TAILQ_FOREACH_SAFE(chain, &cm->cm_chain_list, chain_link, chain1) sbuf_bcat(sb, chain->chain, 128); } error = 
sbuf_finish(sb); sbuf_delete(sb); return (error); } int mpr_attach(struct mpr_softc *sc) { int error; MPR_FUNCTRACE(sc); mpr_dprint(sc, MPR_INIT, "%s entered\n", __func__); mtx_init(&sc->mpr_mtx, "MPR lock", NULL, MTX_DEF); callout_init_mtx(&sc->periodic, &sc->mpr_mtx, 0); callout_init_mtx(&sc->device_check_callout, &sc->mpr_mtx, 0); TAILQ_INIT(&sc->event_list); timevalclear(&sc->lastfail); if ((error = mpr_transition_ready(sc)) != 0) { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "Failed to transition ready\n"); return (error); } sc->facts = malloc(sizeof(MPI2_IOC_FACTS_REPLY), M_MPR, M_ZERO|M_NOWAIT); if (!sc->facts) { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "Cannot allocate memory, exit\n"); return (ENOMEM); } /* * Get IOC Facts and allocate all structures based on this information. * A Diag Reset will also call mpr_iocfacts_allocate and re-read the IOC * Facts. If relevant values have changed in IOC Facts, this function * will free all of the memory based on IOC Facts and reallocate that * memory. If this fails, any allocated memory should already be freed. */ if ((error = mpr_iocfacts_allocate(sc, TRUE)) != 0) { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "IOC Facts allocation " "failed with error %d\n", error); return (error); } /* Start the periodic watchdog check on the IOC Doorbell */ mpr_periodic(sc); /* * The portenable will kick off discovery events that will drive the * rest of the initialization process. The CAM/SAS module will * hold up the boot sequence until discovery is complete. */ sc->mpr_ich.ich_func = mpr_startup; sc->mpr_ich.ich_arg = sc; if (config_intrhook_establish(&sc->mpr_ich) != 0) { mpr_dprint(sc, MPR_INIT|MPR_ERROR, "Cannot establish MPR config hook\n"); error = EINVAL; } /* * Allow IR to shutdown gracefully when shutdown occurs. 
*/ sc->shutdown_eh = EVENTHANDLER_REGISTER(shutdown_final, mprsas_ir_shutdown, sc, SHUTDOWN_PRI_DEFAULT); if (sc->shutdown_eh == NULL) mpr_dprint(sc, MPR_INIT|MPR_ERROR, "shutdown event registration failed\n"); mpr_setup_sysctl(sc); sc->mpr_flags |= MPR_FLAGS_ATTACH_DONE; mpr_dprint(sc, MPR_INIT, "%s exit error= %d\n", __func__, error); return (error); } /* Run through any late-start handlers. */ static void mpr_startup(void *arg) { struct mpr_softc *sc; sc = (struct mpr_softc *)arg; mpr_dprint(sc, MPR_INIT, "%s entered\n", __func__); mpr_lock(sc); mpr_unmask_intr(sc); /* initialize device mapping tables */ mpr_base_static_config_pages(sc); mpr_mapping_initialize(sc); mprsas_startup(sc); mpr_unlock(sc); mpr_dprint(sc, MPR_INIT, "disestablish config intrhook\n"); config_intrhook_disestablish(&sc->mpr_ich); sc->mpr_ich.ich_arg = NULL; mpr_dprint(sc, MPR_INIT, "%s exit\n", __func__); } /* Periodic watchdog. Is called with the driver lock already held. */ static void mpr_periodic(void *arg) { struct mpr_softc *sc; uint32_t db; sc = (struct mpr_softc *)arg; if (sc->mpr_flags & MPR_FLAGS_SHUTDOWN) return; db = mpr_regread(sc, MPI2_DOORBELL_OFFSET); if ((db & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) { if ((db & MPI2_DOORBELL_FAULT_CODE_MASK) == IFAULT_IOP_OVER_TEMP_THRESHOLD_EXCEEDED) { panic("TEMPERATURE FAULT: STOPPING."); } mpr_dprint(sc, MPR_FAULT, "IOC Fault 0x%08x, Resetting\n", db); mpr_reinit(sc); } callout_reset(&sc->periodic, MPR_PERIODIC_DELAY * hz, mpr_periodic, sc); } static void mpr_log_evt_handler(struct mpr_softc *sc, uintptr_t data, MPI2_EVENT_NOTIFICATION_REPLY *event) { MPI2_EVENT_DATA_LOG_ENTRY_ADDED *entry; MPR_DPRINT_EVENT(sc, generic, event); switch (event->Event) { case MPI2_EVENT_LOG_DATA: mpr_dprint(sc, MPR_EVENT, "MPI2_EVENT_LOG_DATA:\n"); if (sc->mpr_debug & MPR_EVENT) hexdump(event->EventData, event->EventDataLength, NULL, 0); break; case MPI2_EVENT_LOG_ENTRY_ADDED: entry = (MPI2_EVENT_DATA_LOG_ENTRY_ADDED *)event->EventData; 
mpr_dprint(sc, MPR_EVENT, "MPI2_EVENT_LOG_ENTRY_ADDED event " "0x%x Sequence %d:\n", entry->LogEntryQualifier, entry->LogSequence); break; default: break; } return; } static int mpr_attach_log(struct mpr_softc *sc) { uint8_t events[16]; bzero(events, 16); setbit(events, MPI2_EVENT_LOG_DATA); setbit(events, MPI2_EVENT_LOG_ENTRY_ADDED); mpr_register_events(sc, events, mpr_log_evt_handler, NULL, &sc->mpr_log_eh); return (0); } static int mpr_detach_log(struct mpr_softc *sc) { if (sc->mpr_log_eh != NULL) mpr_deregister_events(sc, sc->mpr_log_eh); return (0); } /* * Free all of the driver resources and detach submodules. Should be called * without the lock held. */ int mpr_free(struct mpr_softc *sc) { int error; mpr_dprint(sc, MPR_INIT, "%s entered\n", __func__); /* Turn off the watchdog */ mpr_lock(sc); sc->mpr_flags |= MPR_FLAGS_SHUTDOWN; mpr_unlock(sc); /* Lock must not be held for this */ callout_drain(&sc->periodic); callout_drain(&sc->device_check_callout); if (((error = mpr_detach_log(sc)) != 0) || ((error = mpr_detach_sas(sc)) != 0)) { mpr_dprint(sc, MPR_INIT|MPR_FAULT, "failed to detach " "subsystems, error= %d, exit\n", error); return (error); } mpr_detach_user(sc); /* Put the IOC back in the READY state. */ mpr_lock(sc); if ((error = mpr_transition_ready(sc)) != 0) { mpr_unlock(sc); return (error); } mpr_unlock(sc); if (sc->facts != NULL) free(sc->facts, M_MPR); /* * Free all buffers that are based on IOC Facts. A Diag Reset may need * to free these buffers too. 
*/ mpr_iocfacts_free(sc); if (sc->sysctl_tree != NULL) sysctl_ctx_free(&sc->sysctl_ctx); /* Deregister the shutdown function */ if (sc->shutdown_eh != NULL) EVENTHANDLER_DEREGISTER(shutdown_final, sc->shutdown_eh); mtx_destroy(&sc->mpr_mtx); mpr_dprint(sc, MPR_INIT, "%s exit\n", __func__); return (0); } static __inline void mpr_complete_command(struct mpr_softc *sc, struct mpr_command *cm) { MPR_FUNCTRACE(sc); if (cm == NULL) { mpr_dprint(sc, MPR_ERROR, "Completing NULL command\n"); return; } + cm->cm_state = MPR_CM_STATE_BUSY; if (cm->cm_flags & MPR_CM_FLAGS_POLLED) cm->cm_flags |= MPR_CM_FLAGS_COMPLETE; if (cm->cm_complete != NULL) { mpr_dprint(sc, MPR_TRACE, "%s cm %p calling cm_complete %p data %p reply %p\n", __func__, cm, cm->cm_complete, cm->cm_complete_data, cm->cm_reply); cm->cm_complete(sc, cm); } if (cm->cm_flags & MPR_CM_FLAGS_WAKEUP) { mpr_dprint(sc, MPR_TRACE, "waking up %p\n", cm); wakeup(cm); } if (sc->io_cmds_active != 0) { sc->io_cmds_active--; } else { mpr_dprint(sc, MPR_ERROR, "Warning: io_cmds_active is " "out of sync - resynching to 0\n"); } } static void mpr_sas_log_info(struct mpr_softc *sc, u32 log_info) { union loginfo_type { u32 loginfo; struct { u32 subcode:16; u32 code:8; u32 originator:4; u32 bus_type:4; } dw; }; union loginfo_type sas_loginfo; char *originator_str = NULL; sas_loginfo.loginfo = log_info; if (sas_loginfo.dw.bus_type != 3 /*SAS*/) return; /* each nexus loss loginfo */ if (log_info == 0x31170000) return; /* eat the loginfos associated with task aborts */ if ((log_info == 0x30050000) || (log_info == 0x31140000) || (log_info == 0x31130000)) return; switch (sas_loginfo.dw.originator) { case 0: originator_str = "IOP"; break; case 1: originator_str = "PL"; break; case 2: originator_str = "IR"; break; } mpr_dprint(sc, MPR_LOG, "log_info(0x%08x): originator(%s), " "code(0x%02x), sub_code(0x%04x)\n", log_info, originator_str, sas_loginfo.dw.code, sas_loginfo.dw.subcode); } static void mpr_display_reply_info(struct mpr_softc *sc,
uint8_t *reply) { MPI2DefaultReply_t *mpi_reply; u16 sc_status; mpi_reply = (MPI2DefaultReply_t*)reply; sc_status = le16toh(mpi_reply->IOCStatus); if (sc_status & MPI2_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE) mpr_sas_log_info(sc, le32toh(mpi_reply->IOCLogInfo)); } void mpr_intr(void *data) { struct mpr_softc *sc; uint32_t status; sc = (struct mpr_softc *)data; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); /* * Check interrupt status register to flush the bus. This is * needed for both INTx interrupts and driver-driven polling */ status = mpr_regread(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET); if ((status & MPI2_HIS_REPLY_DESCRIPTOR_INTERRUPT) == 0) return; mpr_lock(sc); mpr_intr_locked(data); mpr_unlock(sc); return; } /* * In theory, MSI/MSIX interrupts shouldn't need to read any registers on the * chip. Hopefully this theory is correct. */ void mpr_intr_msi(void *data) { struct mpr_softc *sc; sc = (struct mpr_softc *)data; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); mpr_lock(sc); mpr_intr_locked(data); mpr_unlock(sc); return; } /* * The locking is overly broad and simplistic, but easy to deal with for now. */ void mpr_intr_locked(void *data) { MPI2_REPLY_DESCRIPTORS_UNION *desc; + MPI2_DIAG_RELEASE_REPLY *rel_rep; + mpr_fw_diagnostic_buffer_t *pBuffer; struct mpr_softc *sc; + uint64_t tdesc; struct mpr_command *cm = NULL; uint8_t flags; u_int pq; - MPI2_DIAG_RELEASE_REPLY *rel_rep; - mpr_fw_diagnostic_buffer_t *pBuffer; sc = (struct mpr_softc *)data; pq = sc->replypostindex; mpr_dprint(sc, MPR_TRACE, "%s sc %p starting with replypostindex %u\n", __func__, sc, sc->replypostindex); for ( ;; ) { cm = NULL; desc = &sc->post_queue[sc->replypostindex]; + + /* + * Copy and clear out the descriptor so that any reentry will + * immediately know that this descriptor has already been + * looked at. There is unfortunate casting magic because the + * MPI API doesn't have a cardinal 64bit type. 
+ */ + tdesc = 0xffffffffffffffff; + tdesc = atomic_swap_64((uint64_t *)desc, tdesc); + desc = (MPI2_REPLY_DESCRIPTORS_UNION *)&tdesc; + flags = desc->Default.ReplyFlags & MPI2_RPY_DESCRIPT_FLAGS_TYPE_MASK; if ((flags == MPI2_RPY_DESCRIPT_FLAGS_UNUSED) || (le32toh(desc->Words.High) == 0xffffffff)) break; /* increment the replypostindex now, so that event handlers * and cm completion handlers which decide to do a diag * reset can zero it without it getting incremented again * afterwards, and we break out of this loop on the next * iteration since the reply post queue has been cleared to * 0xFF and all descriptors look unused (which they are). */ if (++sc->replypostindex >= sc->pqdepth) sc->replypostindex = 0; switch (flags) { case MPI2_RPY_DESCRIPT_FLAGS_SCSI_IO_SUCCESS: case MPI25_RPY_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO_SUCCESS: case MPI26_RPY_DESCRIPT_FLAGS_PCIE_ENCAPSULATED_SUCCESS: cm = &sc->commands[le16toh(desc->SCSIIOSuccess.SMID)]; KASSERT(cm->cm_state == MPR_CM_STATE_INQUEUE, ("command not inqueue\n")); cm->cm_state = MPR_CM_STATE_BUSY; cm->cm_reply = NULL; break; case MPI2_RPY_DESCRIPT_FLAGS_ADDRESS_REPLY: { uint32_t baddr; uint8_t *reply; /* * Re-compose the reply address from the address * sent back from the chip. The ReplyFrameAddress * is the lower 32 bits of the physical address of * a particular reply frame. Convert that address to * host format, and then use that to provide the * offset against the virtual address base * (sc->reply_frames). */ baddr = le32toh(desc->AddressReply.ReplyFrameAddress); reply = sc->reply_frames + (baddr - ((uint32_t)sc->reply_busaddr)); /* * Make sure the reply we got back is in a valid * range. If not, go ahead and panic here, since * we'll probably panic as soon as we dereference the * reply pointer anyway.
*/ if ((reply < sc->reply_frames) || (reply > (sc->reply_frames + (sc->fqdepth * sc->replyframesz)))) { printf("%s: WARNING: reply %p out of range!\n", __func__, reply); printf("%s: reply_frames %p, fqdepth %d, " "frame size %d\n", __func__, sc->reply_frames, sc->fqdepth, sc->replyframesz); printf("%s: baddr %#x,\n", __func__, baddr); /* LSI-TODO. See Linux Code for Graceful exit */ panic("Reply address out of range"); } if (le16toh(desc->AddressReply.SMID) == 0) { if (((MPI2_DEFAULT_REPLY *)reply)->Function == MPI2_FUNCTION_DIAG_BUFFER_POST) { /* * If SMID is 0 for Diag Buffer Post, * this implies that the reply is due to * a release function with a status that * the buffer has been released. Set * the buffer flags accordingly. */ rel_rep = (MPI2_DIAG_RELEASE_REPLY *)reply; if ((le16toh(rel_rep->IOCStatus) & MPI2_IOCSTATUS_MASK) == MPI2_IOCSTATUS_DIAGNOSTIC_RELEASED) { pBuffer = &sc->fw_diag_buffer_list[ rel_rep->BufferType]; pBuffer->valid_data = TRUE; pBuffer->owned_by_firmware = FALSE; pBuffer->immediate = FALSE; } } else mpr_dispatch_event(sc, baddr, (MPI2_EVENT_NOTIFICATION_REPLY *) reply); } else { cm = &sc->commands[ le16toh(desc->AddressReply.SMID)]; - KASSERT(cm->cm_state == MPR_CM_STATE_INQUEUE, - ("command not inqueue\n")); - cm->cm_state = MPR_CM_STATE_BUSY; - cm->cm_reply = reply; - cm->cm_reply_data = - le32toh(desc->AddressReply. - ReplyFrameAddress); + if (cm->cm_state == MPR_CM_STATE_INQUEUE) { + cm->cm_reply = reply; + cm->cm_reply_data = + le32toh(desc->AddressReply. 
+ ReplyFrameAddress); + } else { + mpr_dprint(sc, MPR_RECOVERY, + "Bad state for ADDRESS_REPLY status," + " ignoring state %d cm %p\n", + cm->cm_state, cm); + } } break; } case MPI2_RPY_DESCRIPT_FLAGS_TARGETASSIST_SUCCESS: case MPI2_RPY_DESCRIPT_FLAGS_TARGET_COMMAND_BUFFER: case MPI2_RPY_DESCRIPT_FLAGS_RAID_ACCELERATOR_SUCCESS: default: /* Unhandled */ mpr_dprint(sc, MPR_ERROR, "Unhandled reply 0x%x\n", desc->Default.ReplyFlags); cm = NULL; break; } if (cm != NULL) { // Print Error reply frame if (cm->cm_reply) mpr_display_reply_info(sc,cm->cm_reply); mpr_complete_command(sc, cm); } - - desc->Words.Low = 0xffffffff; - desc->Words.High = 0xffffffff; } if (pq != sc->replypostindex) { mpr_dprint(sc, MPR_TRACE, "%s sc %p writing postindex %d\n", __func__, sc, sc->replypostindex); mpr_regwrite(sc, MPI2_REPLY_POST_HOST_INDEX_OFFSET, sc->replypostindex); } return; } static void mpr_dispatch_event(struct mpr_softc *sc, uintptr_t data, MPI2_EVENT_NOTIFICATION_REPLY *reply) { struct mpr_event_handle *eh; int event, handled = 0; event = le16toh(reply->Event); TAILQ_FOREACH(eh, &sc->event_list, eh_list) { if (isset(eh->mask, event)) { eh->callback(sc, data, reply); handled++; } } if (handled == 0) mpr_dprint(sc, MPR_EVENT, "Unhandled event 0x%x\n", le16toh(event)); /* * This is the only place that the event/reply should be freed. * Anything wanting to hold onto the event data should have * already copied it into their own storage. */ mpr_free_reply(sc, data); } static void mpr_reregister_events_complete(struct mpr_softc *sc, struct mpr_command *cm) { mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if (cm->cm_reply) MPR_DPRINT_EVENT(sc, generic, (MPI2_EVENT_NOTIFICATION_REPLY *)cm->cm_reply); mpr_free_command(sc, cm); /* next, send a port enable */ mprsas_startup(sc); } /* * For both register_events and update_events, the caller supplies a bitmap * of events that it _wants_. These functions then turn that into a bitmask * suitable for the controller. 
*/ int mpr_register_events(struct mpr_softc *sc, uint8_t *mask, mpr_evt_callback_t *cb, void *data, struct mpr_event_handle **handle) { struct mpr_event_handle *eh; int error = 0; eh = malloc(sizeof(struct mpr_event_handle), M_MPR, M_WAITOK|M_ZERO); if (!eh) { mpr_dprint(sc, MPR_EVENT|MPR_ERROR, "Cannot allocate event memory\n"); return (ENOMEM); } eh->callback = cb; eh->data = data; TAILQ_INSERT_TAIL(&sc->event_list, eh, eh_list); if (mask != NULL) error = mpr_update_events(sc, eh, mask); *handle = eh; return (error); } int mpr_update_events(struct mpr_softc *sc, struct mpr_event_handle *handle, uint8_t *mask) { MPI2_EVENT_NOTIFICATION_REQUEST *evtreq; MPI2_EVENT_NOTIFICATION_REPLY *reply = NULL; struct mpr_command *cm = NULL; struct mpr_event_handle *eh; int error, i; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if ((mask != NULL) && (handle != NULL)) bcopy(mask, &handle->mask[0], 16); memset(sc->event_mask, 0xff, 16); TAILQ_FOREACH(eh, &sc->event_list, eh_list) { for (i = 0; i < 16; i++) sc->event_mask[i] &= ~eh->mask[i]; } if ((cm = mpr_alloc_command(sc)) == NULL) return (EBUSY); evtreq = (MPI2_EVENT_NOTIFICATION_REQUEST *)cm->cm_req; evtreq->Function = MPI2_FUNCTION_EVENT_NOTIFICATION; evtreq->MsgFlags = 0; evtreq->SASBroadcastPrimitiveMasks = 0; #ifdef MPR_DEBUG_ALL_EVENTS { u_char fullmask[16]; memset(fullmask, 0x00, 16); bcopy(fullmask, (uint8_t *)&evtreq->EventMasks, 16); } #else bcopy(sc->event_mask, (uint8_t *)&evtreq->EventMasks, 16); #endif cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; error = mpr_request_polled(sc, &cm); if (cm != NULL) reply = (MPI2_EVENT_NOTIFICATION_REPLY *)cm->cm_reply; if ((reply == NULL) || (reply->IOCStatus & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) error = ENXIO; if (reply) MPR_DPRINT_EVENT(sc, generic, reply); mpr_dprint(sc, MPR_TRACE, "%s finished error %d\n", __func__, error); if (cm != NULL) mpr_free_command(sc, cm); return (error); } static int 
mpr_reregister_events(struct mpr_softc *sc) { MPI2_EVENT_NOTIFICATION_REQUEST *evtreq; struct mpr_command *cm; struct mpr_event_handle *eh; int error, i; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); /* first, reregister events */ memset(sc->event_mask, 0xff, 16); TAILQ_FOREACH(eh, &sc->event_list, eh_list) { for (i = 0; i < 16; i++) sc->event_mask[i] &= ~eh->mask[i]; } if ((cm = mpr_alloc_command(sc)) == NULL) return (EBUSY); evtreq = (MPI2_EVENT_NOTIFICATION_REQUEST *)cm->cm_req; evtreq->Function = MPI2_FUNCTION_EVENT_NOTIFICATION; evtreq->MsgFlags = 0; evtreq->SASBroadcastPrimitiveMasks = 0; #ifdef MPR_DEBUG_ALL_EVENTS { u_char fullmask[16]; memset(fullmask, 0x00, 16); bcopy(fullmask, (uint8_t *)&evtreq->EventMasks, 16); } #else bcopy(sc->event_mask, (uint8_t *)&evtreq->EventMasks, 16); #endif cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; cm->cm_complete = mpr_reregister_events_complete; error = mpr_map_command(sc, cm); mpr_dprint(sc, MPR_TRACE, "%s finished with error %d\n", __func__, error); return (error); } int mpr_deregister_events(struct mpr_softc *sc, struct mpr_event_handle *handle) { TAILQ_REMOVE(&sc->event_list, handle, eh_list); free(handle, M_MPR); return (mpr_update_events(sc, NULL, NULL)); } /** * mpr_build_nvme_prp - This function is called for NVMe end devices to build a * native SGL (NVMe PRP). The native SGL is built starting in the first PRP entry * of the NVMe message (PRP1). If the data buffer is small enough to be described * entirely using PRP1, then PRP2 is not used. If needed, PRP2 is used to * describe a larger data buffer. If the data buffer is too large to describe * using the two PRP entries inside the NVMe message, then PRP1 describes the * first data memory segment, and PRP2 contains a pointer to a PRP list located * elsewhere in memory to describe the remaining data memory segments. The PRP * list will be contiguous. * The native SGL for NVMe devices is a Physical Region Page (PRP).
A PRP * consists of a list of PRP entries to describe a number of noncontiguous * physical memory segments as a single memory buffer, just as an SGL does. Note * however, that this function is only used by the IOCTL call, so the memory * given will be guaranteed to be contiguous. There is no need to translate * non-contiguous SGL into a PRP in this case. All PRPs will describe contiguous * space that is one page size each. * * Each NVMe message contains two PRP entries. The first (PRP1) either contains * a PRP list pointer or a PRP element, depending upon the command. PRP2 contains * the second PRP element if the memory being described fits within 2 PRP * entries, or a PRP list pointer if the PRP spans more than two entries. * * A PRP list pointer contains the address of a PRP list, structured as a linear * array of PRP entries. Each PRP entry in this list describes a segment of * physical memory. * * Each 64-bit PRP entry comprises an address and an offset field. The address * always points to the beginning of a PAGE_SIZE physical memory page, and the * offset describes where within that page the memory segment begins. Only the * first element in a PRP list may contain a non-zero offset, implying that all * memory segments following the first begin at the start of a PAGE_SIZE page. * * Each PRP element normally describes a chunk of PAGE_SIZE physical memory, * with exceptions for the first and last elements in the list. If the memory * being described by the list begins at a non-zero offset within the first page, * then the first PRP element will contain a non-zero offset indicating where the * region begins within the page. The last memory segment may end before the end * of the PAGE_SIZE segment, depending upon the overall size of the memory being * described by the PRP list.
* * Since PRP entries lack any indication of size, the overall data buffer length * is used to determine where the end of the data memory buffer is located, and * how many PRP entries are required to describe it. * * Returns nothing. */ void mpr_build_nvme_prp(struct mpr_softc *sc, struct mpr_command *cm, Mpi26NVMeEncapsulatedRequest_t *nvme_encap_request, void *data, uint32_t data_in_sz, uint32_t data_out_sz) { int prp_size = PRP_ENTRY_SIZE; uint64_t *prp_entry, *prp1_entry, *prp2_entry; uint64_t *prp_entry_phys, *prp_page, *prp_page_phys; uint32_t offset, entry_len, page_mask_result, page_mask; bus_addr_t paddr; size_t length; struct mpr_prp_page *prp_page_info = NULL; /* * Not all commands require a data transfer. If no data, just return * without constructing any PRP. */ if (!data_in_sz && !data_out_sz) return; /* * Set pointers to PRP1 and PRP2, which are in the NVMe command. PRP1 is * located at a 24 byte offset from the start of the NVMe command. Then * set the current PRP entry pointer to PRP1. */ prp1_entry = (uint64_t *)(nvme_encap_request->NVMe_Command + NVME_CMD_PRP1_OFFSET); prp2_entry = (uint64_t *)(nvme_encap_request->NVMe_Command + NVME_CMD_PRP2_OFFSET); prp_entry = prp1_entry; /* * For the PRP entries, use the specially allocated buffer of * contiguous memory. PRP Page allocation failures should not happen * because there should be enough PRP page buffers to account for the * possible NVMe QDepth. */ prp_page_info = mpr_alloc_prp_page(sc); KASSERT(prp_page_info != NULL, ("%s: There are no PRP Pages left to be " "used for building a native NVMe SGL.\n", __func__)); prp_page = (uint64_t *)prp_page_info->prp_page; prp_page_phys = (uint64_t *)(uintptr_t)prp_page_info->prp_page_busaddr; /* * Insert the allocated PRP page into the command's PRP page list. This * will be freed when the command is freed. 
TAILQ_INSERT_TAIL(&cm->cm_prp_page_list, prp_page_info, prp_page_link); /* * Check if we are within 1 entry of a page boundary; we don't want our * first entry to be a PRP List entry. */ page_mask = PAGE_SIZE - 1; page_mask_result = (uintptr_t)((uint8_t *)prp_page + prp_size) & page_mask; if (!page_mask_result) { /* Bump up to next page boundary. */ prp_page = (uint64_t *)((uint8_t *)prp_page + prp_size); prp_page_phys = (uint64_t *)((uint8_t *)prp_page_phys + prp_size); } /* * Set PRP physical pointer, which initially points to the current PRP * DMA memory page. */ prp_entry_phys = prp_page_phys; /* Get physical address and length of the data buffer. */ paddr = (bus_addr_t)(uintptr_t)data; if (data_in_sz) length = data_in_sz; else length = data_out_sz; /* Loop while the length is not zero. */ while (length) { /* * Check if we need to put a list pointer here if we are at page * boundary - prp_size (8 bytes). */ page_mask_result = (uintptr_t)((uint8_t *)prp_entry_phys + prp_size) & page_mask; if (!page_mask_result) { /* * This is the last entry in a PRP List, so we need to * put a PRP list pointer here. What this does is: * - bump the current memory pointer to the next * address, which will be the next full page. * - set the PRP Entry to point to that page. This is * now the PRP List pointer. * - bump the PRP Entry pointer to the start of the next * page. Since all of this PRP memory is contiguous, * no need to get a new page - it's just the next * address. */ prp_entry_phys++; *prp_entry = htole64((uint64_t)(uintptr_t)prp_entry_phys); prp_entry++; } /* Need to handle if entry will be part of a page. */ offset = (uint32_t)paddr & page_mask; entry_len = PAGE_SIZE - offset; if (prp_entry == prp1_entry) { /* * Must fill in the first PRP pointer (PRP1) before * moving on. */ *prp1_entry = htole64((uint64_t)paddr); /* * Now point to the second PRP entry within the * command (PRP2).
*/ prp_entry = prp2_entry; } else if (prp_entry == prp2_entry) { /* * Should the PRP2 entry be a PRP List pointer or just a * regular PRP pointer? If there is more than one more * page of data, must use a PRP List pointer. */ if (length > PAGE_SIZE) { /* * PRP2 will contain a PRP List pointer because * more PRP's are needed with this command. The * list will start at the beginning of the * contiguous buffer. */ *prp2_entry = htole64( (uint64_t)(uintptr_t)prp_entry_phys); /* * The next PRP Entry will be the start of the * first PRP List. */ prp_entry = prp_page; } else { /* * After this, the PRP Entries are complete. * This command uses 2 PRP's and no PRP list. */ *prp2_entry = htole64((uint64_t)paddr); } } else { /* * Put entry in list and bump the addresses. * * After PRP1 and PRP2 are filled in, this will fill in * all remaining PRP entries in a PRP List, one per each * time through the loop. */ *prp_entry = htole64((uint64_t)paddr); prp_entry++; prp_entry_phys++; } /* * Bump the phys address of the command's data buffer by the * entry_len. */ paddr += entry_len; /* Decrement length accounting for last partial page. */ if (entry_len > length) length = 0; else length -= entry_len; } } /* * mpr_check_pcie_native_sgl - This function is called for PCIe end devices to * determine if the driver needs to build a native SGL. If so, that native SGL * is built in the contiguous buffers allocated especially for PCIe SGL * creation. If the driver will not build a native SGL, return TRUE and a * normal IEEE SGL will be built. Currently this routine supports NVMe devices * only. * * Returns FALSE (0) if native SGL was built, TRUE (1) if no SGL was built. 
*/ static int mpr_check_pcie_native_sgl(struct mpr_softc *sc, struct mpr_command *cm, bus_dma_segment_t *segs, int segs_left) { uint32_t i, sge_dwords, length, offset, entry_len; uint32_t num_entries, buff_len = 0, sges_in_segment; uint32_t page_mask, page_mask_result, *curr_buff; uint32_t *ptr_sgl, *ptr_first_sgl, first_page_offset; uint32_t first_page_data_size, end_residual; uint64_t *msg_phys; bus_addr_t paddr; int build_native_sgl = 0, first_prp_entry; int prp_size = PRP_ENTRY_SIZE; Mpi25IeeeSgeChain64_t *main_chain_element = NULL; struct mpr_prp_page *prp_page_info = NULL; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); /* * Add up the sizes of each segment length to get the total transfer * size, which will be checked against the Maximum Data Transfer Size. * If the data transfer length exceeds the MDTS for this device, just * return 1 so a normal IEEE SGL will be built. F/W will break the I/O * up into multiple I/O's. [nvme_mdts = 0 means unlimited] */ for (i = 0; i < segs_left; i++) buff_len += htole32(segs[i].ds_len); if ((cm->cm_targ->MDTS > 0) && (buff_len > cm->cm_targ->MDTS)) return 1; /* Create page_mask (to get offset within page) */ page_mask = PAGE_SIZE - 1; /* * Check if the number of elements exceeds the max number that can be * put in the main message frame (H/W can only translate an SGL that * is contained entirely in the main message frame). */ sges_in_segment = (sc->reqframesz - offsetof(Mpi25SCSIIORequest_t, SGL)) / sizeof(MPI25_SGE_IO_UNION); if (segs_left > sges_in_segment) build_native_sgl = 1; else { /* * NVMe uses one PRP for each physical page (or part of physical * page). 
* if 4 pages or less then IEEE is OK * if > 5 pages then we need to build a native SGL * if > 4 and <= 5 pages, then check the physical address of * the first SG entry, then if this first size in the page * is >= the residual beyond 4 pages then use IEEE, * otherwise use native SGL */ if (buff_len > (PAGE_SIZE * 5)) build_native_sgl = 1; else if ((buff_len > (PAGE_SIZE * 4)) && (buff_len <= (PAGE_SIZE * 5)) ) { msg_phys = (uint64_t *)(uintptr_t)segs[0].ds_addr; first_page_offset = ((uint32_t)(uint64_t)(uintptr_t)msg_phys & page_mask); first_page_data_size = PAGE_SIZE - first_page_offset; end_residual = buff_len % PAGE_SIZE; /* * If offset into first page pushes the end of the data * beyond end of the 5th page, we need the extra PRP * list. */ if (first_page_data_size < end_residual) build_native_sgl = 1; /* * Check if first SG entry size is < residual beyond 4 * pages. */ if (htole32(segs[0].ds_len) < (buff_len - (PAGE_SIZE * 4))) build_native_sgl = 1; } } /* check if native SGL is needed */ if (!build_native_sgl) return 1; /* * Native SGL is needed. * Put a chain element in main message frame that points to the first * chain buffer. * * NOTE: The ChainOffset field must be 0 when using a chain pointer to * a native SGL. */ /* Set main message chain element pointer */ main_chain_element = (pMpi25IeeeSgeChain64_t)cm->cm_sge; /* * For NVMe the chain element needs to be the 2nd SGL entry in the main * message. */ main_chain_element = (Mpi25IeeeSgeChain64_t *) ((uint8_t *)main_chain_element + sizeof(MPI25_IEEE_SGE_CHAIN64)); /* * For the PRP entries, use the specially allocated buffer of * contiguous memory. PRP Page allocation failures should not happen * because there should be enough PRP page buffers to account for the * possible NVMe QDepth. 
*/ prp_page_info = mpr_alloc_prp_page(sc); KASSERT(prp_page_info != NULL, ("%s: There are no PRP Pages left to be " "used for building a native NVMe SGL.\n", __func__)); curr_buff = (uint32_t *)prp_page_info->prp_page; msg_phys = (uint64_t *)(uintptr_t)prp_page_info->prp_page_busaddr; /* * Insert the allocated PRP page into the command's PRP page list. This * will be freed when the command is freed. */ TAILQ_INSERT_TAIL(&cm->cm_prp_page_list, prp_page_info, prp_page_link); /* * Check if we are within 1 entry of a page boundary; we don't want our * first entry to be a PRP List entry. */ page_mask_result = (uintptr_t)((uint8_t *)curr_buff + prp_size) & page_mask; if (!page_mask_result) { /* Bump up to next page boundary. */ curr_buff = (uint32_t *)((uint8_t *)curr_buff + prp_size); msg_phys = (uint64_t *)((uint8_t *)msg_phys + prp_size); } /* Fill in the chain element and make it an NVMe segment type. */ main_chain_element->Address.High = htole32((uint32_t)((uint64_t)(uintptr_t)msg_phys >> 32)); main_chain_element->Address.Low = htole32((uint32_t)(uintptr_t)msg_phys); main_chain_element->NextChainOffset = 0; main_chain_element->Flags = MPI2_IEEE_SGE_FLAGS_CHAIN_ELEMENT | MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR | MPI26_IEEE_SGE_FLAGS_NSF_NVME_PRP; /* Set SGL pointer to start of contiguous PCIe buffer. */ ptr_sgl = curr_buff; sge_dwords = 2; num_entries = 0; /* * NVMe has a very convoluted PRP format. One PRP is required for each * page or partial page. We need to split up OS SG entries if they are * longer than one page or cross a page boundary. We also have to insert * a PRP list pointer entry as the last entry in each physical page of * the PRP list. * * NOTE: The first PRP "entry" is actually placed in the first SGL entry * in the main message in IEEE 64 format. The 2nd entry in the main * message is the chain element, and the rest of the PRP entries are * built in the contiguous PCIe buffer.
*/ first_prp_entry = 1; ptr_first_sgl = (uint32_t *)cm->cm_sge; for (i = 0; i < segs_left; i++) { /* Get physical address and length of this SG entry. */ paddr = segs[i].ds_addr; length = segs[i].ds_len; /* * Check whether a given SGE buffer lies on a non-PAGED * boundary if this is not the first page. If so, this is not * expected, so have the FW build the SGL. */ if ((i != 0) && (((uint32_t)paddr & page_mask) != 0)) { mpr_dprint(sc, MPR_ERROR, "Unaligned SGE while " "building NVMe PRPs, low address is 0x%x\n", (uint32_t)paddr); return 1; } /* Apart from the last SGE, if any other SGE boundary is not page * aligned then a hole exists. The existence of a hole leads to * data corruption, so fall back to IEEE SGEs. */ if (i != (segs_left - 1)) { if (((uint32_t)paddr + length) & page_mask) { mpr_dprint(sc, MPR_ERROR, "Unaligned SGE " "boundary while building NVMe PRPs, low " "address: 0x%x and length: %u\n", (uint32_t)paddr, length); return 1; } } /* Loop while the length is not zero. */ while (length) { /* * Check if we need to put a list pointer here if we are * at page boundary - prp_size. */ page_mask_result = (uintptr_t)((uint8_t *)ptr_sgl + prp_size) & page_mask; if (!page_mask_result) { /* * Need to put a PRP list pointer here. */ msg_phys = (uint64_t *)((uint8_t *)msg_phys + prp_size); *ptr_sgl = htole32((uintptr_t)msg_phys); *(ptr_sgl+1) = htole32((uint64_t)(uintptr_t) msg_phys >> 32); ptr_sgl += sge_dwords; num_entries++; } /* Need to handle if entry will be part of a page. */ offset = (uint32_t)paddr & page_mask; entry_len = PAGE_SIZE - offset; if (first_prp_entry) { /* * Put IEEE entry in first SGE in main message. * (Simple element, System addr, not end of * list.) */ *ptr_first_sgl = htole32((uint32_t)paddr); *(ptr_first_sgl + 1) = htole32((uint32_t)((uint64_t)paddr >> 32)); *(ptr_first_sgl + 2) = htole32(entry_len); *(ptr_first_sgl + 3) = 0; /* No longer the first PRP entry. */ first_prp_entry = 0; } else { /* Put entry in list.
*/ *ptr_sgl = htole32((uint32_t)paddr); *(ptr_sgl + 1) = htole32((uint32_t)((uint64_t)paddr >> 32)); /* Bump ptr_sgl, msg_phys, and num_entries. */ ptr_sgl += sge_dwords; msg_phys = (uint64_t *)((uint8_t *)msg_phys + prp_size); num_entries++; } /* Bump the phys address by the entry_len. */ paddr += entry_len; /* Decrement length accounting for last partial page. */ if (entry_len > length) length = 0; else length -= entry_len; } } /* Set chain element Length. */ main_chain_element->Length = htole32(num_entries * prp_size); /* Return 0, indicating we built a native SGL. */ return 0; } /* * Add a chain element as the next SGE for the specified command. * Reset cm_sge and cm_sglsize to indicate all the available space. Chains are * only required for IEEE commands. Therefore there is no code for commands * that have the MPR_CM_FLAGS_SGE_SIMPLE flag set (and those commands * shouldn't be requesting chains). */ static int mpr_add_chain(struct mpr_command *cm, int segsleft) { struct mpr_softc *sc = cm->cm_sc; MPI2_REQUEST_HEADER *req; MPI25_IEEE_SGE_CHAIN64 *ieee_sgc; struct mpr_chain *chain; int sgc_size, current_segs, rem_segs, segs_per_frame; uint8_t next_chain_offset = 0; /* * Fail if a command is requesting a chain for SIMPLE SGE's. For SAS3 * only IEEE commands should be requesting chains. Return some error * code other than 0. */ if (cm->cm_flags & MPR_CM_FLAGS_SGE_SIMPLE) { mpr_dprint(sc, MPR_ERROR, "A chain element cannot be added to " "an MPI SGL.\n"); return(ENOBUFS); } sgc_size = sizeof(MPI25_IEEE_SGE_CHAIN64); if (cm->cm_sglsize < sgc_size) panic("MPR: Need SGE Error Code\n"); chain = mpr_alloc_chain(cm->cm_sc); if (chain == NULL) return (ENOBUFS); /* * Note: a double-linked list is used to make it easier to walk for * debugging. */ TAILQ_INSERT_TAIL(&cm->cm_chain_list, chain, chain_link); /* * Need to know if the number of frames left is more than 1 or not.
If * more than 1 frame is required, NextChainOffset will need to be set, * which will just be the last segment of the frame. */ rem_segs = 0; if (cm->cm_sglsize < (sgc_size * segsleft)) { /* * rem_segs is the number of segments remaining after the * segments that will go into the current frame. Since it is * known that at least one more frame is required, account for * the chain element. To know if more than one more frame is * required, just check if there will be a remainder after using * the current frame (with this chain) and the next frame. If * so the NextChainOffset must be the last element of the next * frame. */ current_segs = (cm->cm_sglsize / sgc_size) - 1; rem_segs = segsleft - current_segs; segs_per_frame = sc->chain_frame_size / sgc_size; if (rem_segs > segs_per_frame) { next_chain_offset = segs_per_frame - 1; } } ieee_sgc = &((MPI25_SGE_IO_UNION *)cm->cm_sge)->IeeeChain; ieee_sgc->Length = next_chain_offset ? htole32((uint32_t)sc->chain_frame_size) : htole32((uint32_t)rem_segs * (uint32_t)sgc_size); ieee_sgc->NextChainOffset = next_chain_offset; ieee_sgc->Flags = (MPI2_IEEE_SGE_FLAGS_CHAIN_ELEMENT | MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR); ieee_sgc->Address.Low = htole32(chain->chain_busaddr); ieee_sgc->Address.High = htole32(chain->chain_busaddr >> 32); cm->cm_sge = &((MPI25_SGE_IO_UNION *)chain->chain)->IeeeSimple; req = (MPI2_REQUEST_HEADER *)cm->cm_req; req->ChainOffset = (sc->chain_frame_size - sgc_size) >> 4; cm->cm_sglsize = sc->chain_frame_size; return (0); } /* * Add one scatter-gather element to the scatter-gather list for a command. * Maintain cm_sglsize and cm_sge as the remaining size and pointer to the * next SGE to fill in, respectively. In Gen3, the MPI SGL does not have a * chain, so don't consider any chain additions.
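The frame arithmetic in mpr_add_chain above can be sketched standalone. The 128-byte chain frame used below is a hypothetical value for illustration; the 16-byte element size matches sizeof(MPI25_IEEE_SGE_CHAIN64), and ChainOffset is expressed in 16-byte units as in the driver:

```c
#include <assert.h>
#include <stdint.h>

/* How many 16-byte IEEE SGE elements fit in one chain frame. */
static int
segs_per_chain_frame(int chain_frame_size, int sgc_size)
{
	return (chain_frame_size / sgc_size);
}

/*
 * ChainOffset in 16-byte units, pointing at a chain element placed as
 * the last element of the frame; same expression as req->ChainOffset
 * in mpr_add_chain.
 */
static uint8_t
chain_offset_units(int chain_frame_size, int sgc_size)
{
	return ((chain_frame_size - sgc_size) >> 4);
}
```

With a hypothetical 128-byte frame, eight 16-byte elements fit and the chain element sits at offset 7 (i.e. 112 bytes into the frame).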
*/ int mpr_push_sge(struct mpr_command *cm, MPI2_SGE_SIMPLE64 *sge, size_t len, int segsleft) { uint32_t saved_buf_len, saved_address_low, saved_address_high; u32 sge_flags; /* * case 1: >=1 more segment, no room for anything (error) * case 2: 1 more segment and enough room for it */ if (cm->cm_sglsize < (segsleft * sizeof(MPI2_SGE_SIMPLE64))) { mpr_dprint(cm->cm_sc, MPR_ERROR, "%s: warning: Not enough room for MPI SGL in frame.\n", __func__); return(ENOBUFS); } KASSERT(segsleft == 1, ("segsleft cannot be more than 1 for an MPI SGL; segsleft = %d\n", segsleft)); /* * There is one more segment left to add for the MPI SGL and there is * enough room in the frame to add it. This is the normal case because * MPI SGL's don't have chains, otherwise something is wrong. * * If this is a bi-directional request, need to account for that * here. Save the pre-filled sge values. These will be used * either for the 2nd SGL or for a single direction SGL. If * cm_out_len is non-zero, this is a bi-directional request, so * fill in the OUT SGL first, then the IN SGL, otherwise just * fill in the IN SGL. Note that at this time, when filling in * 2 SGL's for a bi-directional request, they both use the same * DMA buffer (same cm command). 
*/ saved_buf_len = sge->FlagsLength & 0x00FFFFFF; saved_address_low = sge->Address.Low; saved_address_high = sge->Address.High; if (cm->cm_out_len) { sge->FlagsLength = cm->cm_out_len | ((uint32_t)(MPI2_SGE_FLAGS_SIMPLE_ELEMENT | MPI2_SGE_FLAGS_END_OF_BUFFER | MPI2_SGE_FLAGS_HOST_TO_IOC | MPI2_SGE_FLAGS_64_BIT_ADDRESSING) << MPI2_SGE_FLAGS_SHIFT); cm->cm_sglsize -= len; /* Endian Safe code */ sge_flags = sge->FlagsLength; sge->FlagsLength = htole32(sge_flags); sge->Address.High = htole32(sge->Address.High); sge->Address.Low = htole32(sge->Address.Low); bcopy(sge, cm->cm_sge, len); cm->cm_sge = (MPI2_SGE_IO_UNION *)((uintptr_t)cm->cm_sge + len); } sge->FlagsLength = saved_buf_len | ((uint32_t)(MPI2_SGE_FLAGS_SIMPLE_ELEMENT | MPI2_SGE_FLAGS_END_OF_BUFFER | MPI2_SGE_FLAGS_LAST_ELEMENT | MPI2_SGE_FLAGS_END_OF_LIST | MPI2_SGE_FLAGS_64_BIT_ADDRESSING) << MPI2_SGE_FLAGS_SHIFT); if (cm->cm_flags & MPR_CM_FLAGS_DATAIN) { sge->FlagsLength |= ((uint32_t)(MPI2_SGE_FLAGS_IOC_TO_HOST) << MPI2_SGE_FLAGS_SHIFT); } else { sge->FlagsLength |= ((uint32_t)(MPI2_SGE_FLAGS_HOST_TO_IOC) << MPI2_SGE_FLAGS_SHIFT); } sge->Address.Low = saved_address_low; sge->Address.High = saved_address_high; cm->cm_sglsize -= len; /* Endian Safe code */ sge_flags = sge->FlagsLength; sge->FlagsLength = htole32(sge_flags); sge->Address.High = htole32(sge->Address.High); sge->Address.Low = htole32(sge->Address.Low); bcopy(sge, cm->cm_sge, len); cm->cm_sge = (MPI2_SGE_IO_UNION *)((uintptr_t)cm->cm_sge + len); return (0); } /* * Add one IEEE scatter-gather element (chain or simple) to the IEEE scatter- * gather list for a command. Maintain cm_sglsize and cm_sge as the * remaining size and pointer to the next SGE to fill in, respectively. 
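mpr_push_sge above masks FlagsLength with 0x00FFFFFF and shifts the flag bits by MPI2_SGE_FLAGS_SHIFT: the 32-bit word carries the flags in the top byte and the byte count in the low 24 bits. A minimal sketch of that packing, assuming the usual shift value of 24 (the helper names are mine, not driver or MPI names):

```c
#include <assert.h>
#include <stdint.h>

#define MPI2_SGE_FLAGS_SHIFT	24		/* flags occupy bits 31:24 */
#define MPI2_SGE_LENGTH_MASK	0x00FFFFFFu	/* length occupies bits 23:0 */

/* Pack an 8-bit flags field and a 24-bit byte count into FlagsLength. */
static uint32_t
sge_pack_flagslength(uint8_t flags, uint32_t len)
{
	return (((uint32_t)flags << MPI2_SGE_FLAGS_SHIFT) |
	    (len & MPI2_SGE_LENGTH_MASK));
}

/* Recover the byte count, as done with the 0x00FFFFFF mask above. */
static uint32_t
sge_length(uint32_t flagslength)
{
	return (flagslength & MPI2_SGE_LENGTH_MASK);
}

/* Recover the flags byte. */
static uint8_t
sge_flags(uint32_t flagslength)
{
	return ((uint8_t)(flagslength >> MPI2_SGE_FLAGS_SHIFT));
}
```

This is why the code saves saved_buf_len with the 0x00FFFFFF mask before rewriting the flag byte for the OUT and IN directions.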
*/ int mpr_push_ieee_sge(struct mpr_command *cm, void *sgep, int segsleft) { MPI2_IEEE_SGE_SIMPLE64 *sge = sgep; int error, ieee_sge_size = sizeof(MPI25_SGE_IO_UNION); uint32_t saved_buf_len, saved_address_low, saved_address_high; uint32_t sge_length; /* * case 1: No room for chain or segment (error). * case 2: Two or more segments left but only room for chain. * case 3: Last segment and room for it, so set flags. */ /* * There should be room for at least one element, or there is a big * problem. */ if (cm->cm_sglsize < ieee_sge_size) panic("MPR: Need SGE Error Code\n"); if ((segsleft >= 2) && (cm->cm_sglsize < (ieee_sge_size * 2))) { if ((error = mpr_add_chain(cm, segsleft)) != 0) return (error); } if (segsleft == 1) { /* * If this is a bi-directional request, need to account for that * here. Save the pre-filled sge values. These will be used * either for the 2nd SGL or for a single direction SGL. If * cm_out_len is non-zero, this is a bi-directional request, so * fill in the OUT SGL first, then the IN SGL, otherwise just * fill in the IN SGL. Note that at this time, when filling in * 2 SGL's for a bi-directional request, they both use the same * DMA buffer (same cm command). 
*/ saved_buf_len = sge->Length; saved_address_low = sge->Address.Low; saved_address_high = sge->Address.High; if (cm->cm_out_len) { sge->Length = cm->cm_out_len; sge->Flags = (MPI2_IEEE_SGE_FLAGS_SIMPLE_ELEMENT | MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR); cm->cm_sglsize -= ieee_sge_size; /* Endian Safe code */ sge_length = sge->Length; sge->Length = htole32(sge_length); sge->Address.High = htole32(sge->Address.High); sge->Address.Low = htole32(sge->Address.Low); bcopy(sgep, cm->cm_sge, ieee_sge_size); cm->cm_sge = (MPI25_SGE_IO_UNION *)((uintptr_t)cm->cm_sge + ieee_sge_size); } sge->Length = saved_buf_len; sge->Flags = (MPI2_IEEE_SGE_FLAGS_SIMPLE_ELEMENT | MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR | MPI25_IEEE_SGE_FLAGS_END_OF_LIST); sge->Address.Low = saved_address_low; sge->Address.High = saved_address_high; } cm->cm_sglsize -= ieee_sge_size; /* Endian Safe code */ sge_length = sge->Length; sge->Length = htole32(sge_length); sge->Address.High = htole32(sge->Address.High); sge->Address.Low = htole32(sge->Address.Low); bcopy(sgep, cm->cm_sge, ieee_sge_size); cm->cm_sge = (MPI25_SGE_IO_UNION *)((uintptr_t)cm->cm_sge + ieee_sge_size); return (0); } /* * Add one dma segment to the scatter-gather list for a command. */ int mpr_add_dmaseg(struct mpr_command *cm, vm_paddr_t pa, size_t len, u_int flags, int segsleft) { MPI2_SGE_SIMPLE64 sge; MPI2_IEEE_SGE_SIMPLE64 ieee_sge; if (!(cm->cm_flags & MPR_CM_FLAGS_SGE_SIMPLE)) { ieee_sge.Flags = (MPI2_IEEE_SGE_FLAGS_SIMPLE_ELEMENT | MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR); ieee_sge.Length = len; mpr_from_u64(pa, &ieee_sge.Address); return (mpr_push_ieee_sge(cm, &ieee_sge, segsleft)); } else { /* * This driver always uses 64-bit address elements for * simplicity. 
*/ flags |= MPI2_SGE_FLAGS_SIMPLE_ELEMENT | MPI2_SGE_FLAGS_64_BIT_ADDRESSING; /* Set Endian safe macro in mpr_push_sge */ sge.FlagsLength = len | (flags << MPI2_SGE_FLAGS_SHIFT); mpr_from_u64(pa, &sge.Address); return (mpr_push_sge(cm, &sge, sizeof sge, segsleft)); } } static void mpr_data_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error) { struct mpr_softc *sc; struct mpr_command *cm; u_int i, dir, sflags; cm = (struct mpr_command *)arg; sc = cm->cm_sc; /* * In this case, just print out a warning and let the chip tell the * user they did the wrong thing. */ if ((cm->cm_max_segs != 0) && (nsegs > cm->cm_max_segs)) { mpr_dprint(sc, MPR_ERROR, "%s: warning: busdma returned %d " "segments, more than the %d allowed\n", __func__, nsegs, cm->cm_max_segs); } /* * Set up DMA direction flags. Bi-directional requests are also handled * here. In that case, both direction flags will be set. */ sflags = 0; if (cm->cm_flags & MPR_CM_FLAGS_SMP_PASS) { /* * We have to add a special case for SMP passthrough, there * is no easy way to generically handle it. The first * S/G element is used for the command (therefore the * direction bit needs to be set). The second one is used * for the reply. We'll leave it to the caller to make * sure we only have two buffers. */ /* * Even though the busdma man page says it doesn't make * sense to have both direction flags, it does in this case. * We have one s/g element being accessed in each direction. */ dir = BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD; /* * Set the direction flag on the first buffer in the SMP * passthrough request. We'll clear it for the second one. */ sflags |= MPI2_SGE_FLAGS_DIRECTION | MPI2_SGE_FLAGS_END_OF_BUFFER; } else if (cm->cm_flags & MPR_CM_FLAGS_DATAOUT) { sflags |= MPI2_SGE_FLAGS_HOST_TO_IOC; dir = BUS_DMASYNC_PREWRITE; } else dir = BUS_DMASYNC_PREREAD; /* Check if a native SG list is needed for an NVMe PCIe device. 
*/ if (cm->cm_targ && cm->cm_targ->is_nvme && mpr_check_pcie_native_sgl(sc, cm, segs, nsegs) == 0) { /* A native SG list was built, skip to end. */ goto out; } for (i = 0; i < nsegs; i++) { if ((cm->cm_flags & MPR_CM_FLAGS_SMP_PASS) && (i != 0)) { sflags &= ~MPI2_SGE_FLAGS_DIRECTION; } error = mpr_add_dmaseg(cm, segs[i].ds_addr, segs[i].ds_len, sflags, nsegs - i); if (error != 0) { /* Resource shortage, roll back! */ if (ratecheck(&sc->lastfail, &mpr_chainfail_interval)) mpr_dprint(sc, MPR_INFO, "Out of chain frames, " "consider increasing hw.mpr.max_chains.\n"); cm->cm_flags |= MPR_CM_FLAGS_CHAIN_FAILED; mpr_complete_command(sc, cm); return; } } out: bus_dmamap_sync(sc->buffer_dmat, cm->cm_dmamap, dir); mpr_enqueue_request(sc, cm); return; } static void mpr_data_cb2(void *arg, bus_dma_segment_t *segs, int nsegs, bus_size_t mapsize, int error) { mpr_data_cb(arg, segs, nsegs, error); } /* * This is the routine to enqueue commands asynchronously. * Note that the only error path here is from bus_dmamap_load(), which can * return EINPROGRESS if it is waiting for resources. Other than this, it's * assumed that if you have a command in-hand, then you have enough credits * to use it. */ int mpr_map_command(struct mpr_softc *sc, struct mpr_command *cm) { int error = 0; if (cm->cm_flags & MPR_CM_FLAGS_USE_UIO) { error = bus_dmamap_load_uio(sc->buffer_dmat, cm->cm_dmamap, &cm->cm_uio, mpr_data_cb2, cm, 0); } else if (cm->cm_flags & MPR_CM_FLAGS_USE_CCB) { error = bus_dmamap_load_ccb(sc->buffer_dmat, cm->cm_dmamap, cm->cm_data, mpr_data_cb, cm, 0); } else if ((cm->cm_data != NULL) && (cm->cm_length != 0)) { error = bus_dmamap_load(sc->buffer_dmat, cm->cm_dmamap, cm->cm_data, cm->cm_length, mpr_data_cb, cm, 0); } else { /* Add a zero-length element as needed */ if (cm->cm_sge != NULL) mpr_add_dmaseg(cm, 0, 0, 0, 1); mpr_enqueue_request(sc, cm); } return (error); } /* * This is the routine to enqueue commands synchronously.
An error of * EINPROGRESS from mpr_map_command() is ignored since the command will * be executed and enqueued automatically. Other errors come from msleep(). */ int mpr_wait_command(struct mpr_softc *sc, struct mpr_command **cmp, int timeout, int sleep_flag) { int error, rc; struct timeval cur_time, start_time; struct mpr_command *cm = *cmp; if (sc->mpr_flags & MPR_FLAGS_DIAGRESET) return EBUSY; cm->cm_complete = NULL; cm->cm_flags |= (MPR_CM_FLAGS_WAKEUP + MPR_CM_FLAGS_POLLED); error = mpr_map_command(sc, cm); if ((error != 0) && (error != EINPROGRESS)) return (error); // Check for context and wait for 50 mSec at a time until time has // expired or the command has finished. If msleep can't be used, need // to poll. #if __FreeBSD_version >= 1000029 if (curthread->td_no_sleeping) #else //__FreeBSD_version < 1000029 if (curthread->td_pflags & TDP_NOSLEEPING) #endif //__FreeBSD_version >= 1000029 sleep_flag = NO_SLEEP; getmicrouptime(&start_time); if (mtx_owned(&sc->mpr_mtx) && sleep_flag == CAN_SLEEP) { error = msleep(cm, &sc->mpr_mtx, 0, "mprwait", timeout*hz); if (error == EWOULDBLOCK) { /* * Record the actual elapsed time in the case of a * timeout for the message below. */ getmicrouptime(&cur_time); timevalsub(&cur_time, &start_time); } } else { while ((cm->cm_flags & MPR_CM_FLAGS_COMPLETE) == 0) { mpr_intr_locked(sc); if (sleep_flag == CAN_SLEEP) pause("mprwait", hz/20); else DELAY(50000); getmicrouptime(&cur_time); timevalsub(&cur_time, &start_time); if (cur_time.tv_sec > timeout) { error = EWOULDBLOCK; break; } } } if (error == EWOULDBLOCK) { - mpr_dprint(sc, MPR_FAULT, "Calling Reinit from %s, timeout=%d," - " elapsed=%jd\n", __func__, timeout, - (intmax_t)cur_time.tv_sec); - rc = mpr_reinit(sc); - mpr_dprint(sc, MPR_FAULT, "Reinit %s\n", (rc == 0) ? 
"success" : - "failed"); + if (cm->cm_timeout_handler == NULL) { + mpr_dprint(sc, MPR_FAULT, "Calling Reinit from %s, timeout=%d," + " elapsed=%jd\n", __func__, timeout, + (intmax_t)cur_time.tv_sec); + rc = mpr_reinit(sc); + mpr_dprint(sc, MPR_FAULT, "Reinit %s\n", (rc == 0) ? "success" : + "failed"); + } else + cm->cm_timeout_handler(sc, cm); if (sc->mpr_flags & MPR_FLAGS_REALLOCATED) { /* * Tell the caller that we freed the command in a * reinit. */ *cmp = NULL; } error = ETIMEDOUT; } return (error); } /* * This is the routine to enqueue a command synchonously and poll for * completion. Its use should be rare. */ int mpr_request_polled(struct mpr_softc *sc, struct mpr_command **cmp) { int error, rc; struct timeval cur_time, start_time; struct mpr_command *cm = *cmp; error = 0; cm->cm_flags |= MPR_CM_FLAGS_POLLED; cm->cm_complete = NULL; mpr_map_command(sc, cm); getmicrouptime(&start_time); while ((cm->cm_flags & MPR_CM_FLAGS_COMPLETE) == 0) { mpr_intr_locked(sc); if (mtx_owned(&sc->mpr_mtx)) msleep(&sc->msleep_fake_chan, &sc->mpr_mtx, 0, "mprpoll", hz/20); else pause("mprpoll", hz/20); /* * Check for real-time timeout and fail if more than 60 seconds. */ getmicrouptime(&cur_time); timevalsub(&cur_time, &start_time); if (cur_time.tv_sec > 60) { mpr_dprint(sc, MPR_FAULT, "polling failed\n"); error = ETIMEDOUT; break; } } - + cm->cm_state = MPR_CM_STATE_BUSY; if (error) { mpr_dprint(sc, MPR_FAULT, "Calling Reinit from %s\n", __func__); rc = mpr_reinit(sc); mpr_dprint(sc, MPR_FAULT, "Reinit %s\n", (rc == 0) ? "success" : "failed"); if (sc->mpr_flags & MPR_FLAGS_REALLOCATED) { /* * Tell the caller that we freed the command in a * reinit. */ *cmp = NULL; } } return (error); } /* * The MPT driver had a verbose interface for config pages. In this driver, * reduce it to much simpler terms, similar to the Linux driver. 
*/ int mpr_read_config_page(struct mpr_softc *sc, struct mpr_config_params *params) { MPI2_CONFIG_REQUEST *req; struct mpr_command *cm; int error; if (sc->mpr_flags & MPR_FLAGS_BUSY) { return (EBUSY); } cm = mpr_alloc_command(sc); if (cm == NULL) { return (EBUSY); } req = (MPI2_CONFIG_REQUEST *)cm->cm_req; req->Function = MPI2_FUNCTION_CONFIG; req->Action = params->action; req->SGLFlags = 0; req->ChainOffset = 0; req->PageAddress = params->page_address; if (params->hdr.Struct.PageType == MPI2_CONFIG_PAGETYPE_EXTENDED) { MPI2_CONFIG_EXTENDED_PAGE_HEADER *hdr; hdr = ¶ms->hdr.Ext; req->ExtPageType = hdr->ExtPageType; req->ExtPageLength = hdr->ExtPageLength; req->Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; req->Header.PageLength = 0; /* Must be set to zero */ req->Header.PageNumber = hdr->PageNumber; req->Header.PageVersion = hdr->PageVersion; } else { MPI2_CONFIG_PAGE_HEADER *hdr; hdr = ¶ms->hdr.Struct; req->Header.PageType = hdr->PageType; req->Header.PageNumber = hdr->PageNumber; req->Header.PageLength = hdr->PageLength; req->Header.PageVersion = hdr->PageVersion; } cm->cm_data = params->buffer; cm->cm_length = params->length; if (cm->cm_data != NULL) { cm->cm_sge = &req->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAIN; } else cm->cm_sge = NULL; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_complete_data = params; if (params->callback != NULL) { cm->cm_complete = mpr_config_complete; return (mpr_map_command(sc, cm)); } else { error = mpr_wait_command(sc, &cm, 0, CAN_SLEEP); if (error) { mpr_dprint(sc, MPR_FAULT, "Error %d reading config page\n", error); if (cm != NULL) mpr_free_command(sc, cm); return (error); } mpr_config_complete(sc, cm); } return (0); } int mpr_write_config_page(struct mpr_softc *sc, struct mpr_config_params *params) { return (EINVAL); } static void mpr_config_complete(struct mpr_softc *sc, struct mpr_command *cm) { MPI2_CONFIG_REPLY 
*reply; struct mpr_config_params *params; MPR_FUNCTRACE(sc); params = cm->cm_complete_data; if (cm->cm_data != NULL) { bus_dmamap_sync(sc->buffer_dmat, cm->cm_dmamap, BUS_DMASYNC_POSTREAD); bus_dmamap_unload(sc->buffer_dmat, cm->cm_dmamap); } /* * XXX KDM need to do more error recovery? This results in the * device in question not getting probed. */ if ((cm->cm_flags & MPR_CM_FLAGS_ERROR_MASK) != 0) { params->status = MPI2_IOCSTATUS_BUSY; goto done; } reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (reply == NULL) { params->status = MPI2_IOCSTATUS_BUSY; goto done; } params->status = reply->IOCStatus; if (params->hdr.Struct.PageType == MPI2_CONFIG_PAGETYPE_EXTENDED) { params->hdr.Ext.ExtPageType = reply->ExtPageType; params->hdr.Ext.ExtPageLength = reply->ExtPageLength; params->hdr.Ext.PageType = reply->Header.PageType; params->hdr.Ext.PageNumber = reply->Header.PageNumber; params->hdr.Ext.PageVersion = reply->Header.PageVersion; } else { params->hdr.Struct.PageType = reply->Header.PageType; params->hdr.Struct.PageNumber = reply->Header.PageNumber; params->hdr.Struct.PageLength = reply->Header.PageLength; params->hdr.Struct.PageVersion = reply->Header.PageVersion; } done: mpr_free_command(sc, cm); if (params->callback != NULL) params->callback(sc, params); return; } Index: releng/12.1/sys/dev/mpr/mpr_config.c =================================================================== --- releng/12.1/sys/dev/mpr/mpr_config.c (revision 352760) +++ releng/12.1/sys/dev/mpr/mpr_config.c (revision 352761) @@ -1,1596 +1,1752 @@ /*- * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2016 Avago Technologies + * Copyright 2000-2020 Broadcom Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD */ #include __FBSDID("$FreeBSD$"); /* TODO Move headers to mprvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /** * mpr_config_get_ioc_pg8 - obtain ioc page 8 * @sc: per adapter object * @mpi_reply: reply mf payload returned from firmware * @config_page: contents of the config page * Context: sleep. * * Returns 0 for success, non-zero for failure. 
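The read routines below size their data buffers as PageLength * 4, because MPI config page headers report PageLength in 4-byte dwords, and then clip the final bcopy to the caller's structure size with MIN(). A tiny standalone sketch of that sizing (helper names are mine, for illustration only):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Convert a config page header's PageLength (reported in 4-byte
 * dwords) to a byte count, as in cm->cm_length = PageLength * 4.
 */
static size_t
config_page_bytes(uint16_t page_length_dwords)
{
	return ((size_t)page_length_dwords * 4);
}

/*
 * How many bytes to copy into the caller's structure; equivalent to
 * the MIN(cm->cm_length, sizeof(...)) expression in the driver.
 */
static size_t
config_copy_len(size_t page_bytes, size_t struct_bytes)
{
	return (page_bytes < struct_bytes ? page_bytes : struct_bytes);
}
```

So a page whose header reports a length of 6 dwords occupies 24 bytes, and a copy into a 32-byte structure moves only those 24 bytes.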
*/ int mpr_config_get_ioc_pg8(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2IOCPage8_t *config_page) { MPI2_CONFIG_REQUEST *request; MPI2_CONFIG_REPLY *reply; struct mpr_command *cm; MPI2_CONFIG_PAGE_IOC_8 *page = NULL; int error = 0; u16 ioc_status; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_HEADER; request->Header.PageType = MPI2_CONFIG_PAGETYPE_IOC; request->Header.PageNumber = 8; request->Header.PageLength = request->Header.PageVersion = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for header completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: header read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } /* We have to do free and alloc for the reply-free and reply-post * counters to match - Need to review the reply FIFO handling. 
*/ mpr_free_command(sc, cm); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; request->Header.PageType = MPI2_CONFIG_PAGETYPE_IOC; request->Header.PageNumber = 8; request->Header.PageVersion = mpi_reply->Header.PageVersion; request->Header.PageLength = mpi_reply->Header.PageLength; cm->cm_length = le16toh(mpi_reply->Header.PageLength) * 4; cm->cm_sge = &request->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAIN; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; page = malloc((cm->cm_length), M_MPR, M_ZERO | M_NOWAIT); if (!page) { printf("%s: page alloc failed\n", __func__); error = ENOMEM; goto out; } cm->cm_data = page; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for page completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: page read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } bcopy(page, config_page, MIN(cm->cm_length, (sizeof(Mpi2IOCPage8_t)))); out: free(page, M_MPR); if (cm) mpr_free_command(sc, cm); return (error); } /** * mpr_config_get_iounit_pg8 - obtain iounit page 8 * @sc: per adapter object * @mpi_reply: reply mf payload returned from firmware * @config_page: contents of the config 
page * Context: sleep. * * Returns 0 for success, non-zero for failure. */ int mpr_config_get_iounit_pg8(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2IOUnitPage8_t *config_page) { MPI2_CONFIG_REQUEST *request; MPI2_CONFIG_REPLY *reply; struct mpr_command *cm; MPI2_CONFIG_PAGE_IO_UNIT_8 *page = NULL; int error = 0; u16 ioc_status; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_HEADER; request->Header.PageType = MPI2_CONFIG_PAGETYPE_IO_UNIT; request->Header.PageNumber = 8; request->Header.PageLength = request->Header.PageVersion = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for header completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: header read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } /* We have to do free and alloc for the reply-free and reply-post * counters to match - Need to review the reply FIFO handling. 
*/ mpr_free_command(sc, cm); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; request->Header.PageType = MPI2_CONFIG_PAGETYPE_IO_UNIT; request->Header.PageNumber = 8; request->Header.PageVersion = mpi_reply->Header.PageVersion; request->Header.PageLength = mpi_reply->Header.PageLength; cm->cm_length = le16toh(mpi_reply->Header.PageLength) * 4; cm->cm_sge = &request->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAIN; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; page = malloc((cm->cm_length), M_MPR, M_ZERO | M_NOWAIT); if (!page) { printf("%s: page alloc failed\n", __func__); error = ENOMEM; goto out; } cm->cm_data = page; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for page completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: page read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } bcopy(page, config_page, MIN(cm->cm_length, (sizeof(Mpi2IOUnitPage8_t)))); out: free(page, M_MPR); if (cm) mpr_free_command(sc, cm); return (error); } /** + * mpr_config_get_man_pg11 - obtain manufacturing page 11 + * @sc: per adapter object + * @mpi_reply: reply mf payload returned from firmware + * @config_page: 
contents of the config page
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpr_config_get_man_pg11(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply,
+    Mpi2ManufacturingPage11_t *config_page)
+{
+	MPI2_CONFIG_REQUEST *request;
+	MPI2_CONFIG_REPLY *reply;
+	struct mpr_command *cm;
+	MPI2_CONFIG_PAGE_MAN_11 *page = NULL;
+	int error = 0;
+	u16 ioc_status;
+
+	mpr_dprint(sc, MPR_TRACE, "%s\n", __func__);
+
+	if ((cm = mpr_alloc_command(sc)) == NULL) {
+		printf("%s: command alloc failed @ line %d\n", __func__,
+		    __LINE__);
+		error = EBUSY;
+		goto out;
+	}
+	request = (MPI2_CONFIG_REQUEST *)cm->cm_req;
+	bzero(request, sizeof(MPI2_CONFIG_REQUEST));
+	request->Function = MPI2_FUNCTION_CONFIG;
+	request->Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	request->Header.PageType = MPI2_CONFIG_PAGETYPE_MANUFACTURING;
+	request->Header.PageNumber = 11;
+	request->Header.PageLength = request->Header.PageVersion = 0;
+	cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
+	cm->cm_data = NULL;
+	error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP);
+	reply = (MPI2_CONFIG_REPLY *)cm->cm_reply;
+	if (error || (reply == NULL)) {
+		/* FIXME */
+		/*
+		 * If the request returns an error then we need to do a diag
+		 * reset
+		 */
+		printf("%s: request for header completed with error %d",
+		    __func__, error);
+		error = ENXIO;
+		goto out;
+	}
+	ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK;
+	bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY));
+	if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
+		/* FIXME */
+		/*
+		 * If the request returns an error then we need to do a diag
+		 * reset
+		 */
+		printf("%s: header read with error; iocstatus = 0x%x\n",
+		    __func__, ioc_status);
+		error = ENXIO;
+		goto out;
+	}
+	/* We have to do free and alloc for the reply-free and reply-post
+	 * counters to match - Need to review the reply FIFO handling.
+	 */
+	mpr_free_command(sc, cm);
+
+	if ((cm = mpr_alloc_command(sc)) == NULL) {
+		printf("%s: command alloc failed @ line %d\n", __func__,
+		    __LINE__);
+		error = EBUSY;
+		goto out;
+	}
+	request = (MPI2_CONFIG_REQUEST *)cm->cm_req;
+	bzero(request, sizeof(MPI2_CONFIG_REQUEST));
+	request->Function = MPI2_FUNCTION_CONFIG;
+	request->Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	request->Header.PageType = MPI2_CONFIG_PAGETYPE_MANUFACTURING;
+	request->Header.PageNumber = 11;
+	request->Header.PageVersion = mpi_reply->Header.PageVersion;
+	request->Header.PageLength = mpi_reply->Header.PageLength;
+	cm->cm_length = le16toh(mpi_reply->Header.PageLength) * 4;
+	cm->cm_sge = &request->PageBufferSGE;
+	cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION);
+	cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAIN;
+	cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
+	page = malloc((cm->cm_length), M_MPR, M_ZERO | M_NOWAIT);
+	if (!page) {
+		printf("%s: page alloc failed\n", __func__);
+		error = ENOMEM;
+		goto out;
+	}
+	cm->cm_data = page;
+
+	error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP);
+	reply = (MPI2_CONFIG_REPLY *)cm->cm_reply;
+	if (error || (reply == NULL)) {
+		/* FIXME */
+		/*
+		 * If the request returns an error then we need to do a diag
+		 * reset
+		 */
+		printf("%s: request for page completed with error %d",
+		    __func__, error);
+		error = ENXIO;
+		goto out;
+	}
+	ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK;
+	bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY));
+	if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
+		/* FIXME */
+		/*
+		 * If the request returns an error then we need to do a diag
+		 * reset
+		 */
+		printf("%s: page read with error; iocstatus = 0x%x\n",
+		    __func__, ioc_status);
+		error = ENXIO;
+		goto out;
+	}
+	bcopy(page, config_page, MIN(cm->cm_length,
+	    (sizeof(Mpi2ManufacturingPage11_t))));
+
+out:
+	free(page, M_MPR);
+	if (cm)
+		mpr_free_command(sc, cm);
+	return (error);
+}
+
+/**
+ * mpr_base_static_config_pages -
static start of day config pages. * @sc: per adapter object * * Return nothing. */ void mpr_base_static_config_pages(struct mpr_softc *sc) { - Mpi2ConfigReply_t mpi_reply; - int retry; + Mpi2ConfigReply_t mpi_reply; + Mpi2ManufacturingPage11_t man_pg11; + int retry, rc; retry = 0; while (mpr_config_get_ioc_pg8(sc, &mpi_reply, &sc->ioc_pg8)) { retry++; if (retry > 5) { /* We need to Handle this situation */ /*FIXME*/ break; } } retry = 0; while (mpr_config_get_iounit_pg8(sc, &mpi_reply, &sc->iounit_pg8)) { retry++; if (retry > 5) { /* We need to Handle this situation */ /*FIXME*/ break; } + } + retry = 0; + while ((rc = mpr_config_get_man_pg11(sc, &mpi_reply, &man_pg11))) { + retry++; + if (retry > 5) { + /* We need to Handle this situation */ + /*FIXME*/ + break; + } + } + + if (!rc) { + sc->custom_nvme_tm_handling = (le16toh(man_pg11.AddlFlags2) & + MPI2_MAN_PG11_ADDLFLAGS2_CUSTOM_TM_HANDLING_MASK); + sc->nvme_abort_timeout = man_pg11.NVMeAbortTO; + + /* Minimum NVMe Abort timeout value should be 6 seconds & + * maximum value should be 60 seconds. + */ + if (sc->nvme_abort_timeout < 6) + sc->nvme_abort_timeout = 6; + if (sc->nvme_abort_timeout > 60) + sc->nvme_abort_timeout = 60; } } /** * mpr_config_get_dpm_pg0 - obtain driver persistent mapping page0 * @sc: per adapter object * @mpi_reply: reply mf payload returned from firmware * @config_page: contents of the config page * @sz: size of buffer passed in config_page * Context: sleep. * * Returns 0 for success, non-zero for failure. 
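The new code in mpr_base_static_config_pages copies NVMeAbortTO from Manufacturing Page 11 into sc->nvme_abort_timeout and then clamps it to the 6-to-60-second range the comment describes. That bound, isolated into a standalone helper (the function name is ours, not part of the driver):

```c
#include <stdint.h>

/* Bound a firmware-supplied NVMe abort timeout, mirroring the
 * 6..60 second clamp added to mpr_base_static_config_pages(). */
static uint8_t clamp_nvme_abort_timeout(uint8_t nvme_abort_to)
{
	if (nvme_abort_to < 6)
		return 6;
	if (nvme_abort_to > 60)
		return 60;
	return nvme_abort_to;
}
```

A firmware value of 0 (field unset) thus becomes 6 seconds, and anything above a minute is capped at 60.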
*/ int mpr_config_get_dpm_pg0(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2DriverMappingPage0_t *config_page, u16 sz) { MPI2_CONFIG_REQUEST *request; MPI2_CONFIG_REPLY *reply; struct mpr_command *cm; Mpi2DriverMappingPage0_t *page = NULL; int error = 0; u16 ioc_status; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); memset(config_page, 0, sz); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_HEADER; request->Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; request->ExtPageType = MPI2_CONFIG_EXTPAGETYPE_DRIVER_MAPPING; request->Header.PageNumber = 0; request->ExtPageLength = request->Header.PageVersion = 0; request->PageAddress = sc->max_dpm_entries << MPI2_DPM_PGAD_ENTRY_COUNT_SHIFT; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for header completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: header read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } /* We have to do free and alloc for the reply-free and reply-post * counters to match - Need to review the reply FIFO handling. 
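The DPM functions pack two values into PageAddress: an entry count shifted up by MPI2_DPM_PGAD_ENTRY_COUNT_SHIFT (here `sc->max_dpm_entries` for a full read, `1` for a single-entry write) OR'd with a starting entry index in the low bits. A sketch of that encoding, assuming the MPI-defined shift of 16; the helper name is hypothetical:

```c
#include <stdint.h>

/* Assumed value of MPI2_DPM_PGAD_ENTRY_COUNT_SHIFT from the MPI 2.x
 * headers; the low half of PageAddress holds the starting entry. */
#define DPM_PGAD_ENTRY_COUNT_SHIFT 16

/* Build a DPM Page 0 PageAddress from an entry count and start index,
 * as mpr_config_get_dpm_pg0()/mpr_config_set_dpm_pg0() do. */
static uint32_t dpm_page_address(uint16_t entry_count, uint16_t start_idx)
{
	return ((uint32_t)entry_count << DPM_PGAD_ENTRY_COUNT_SHIFT) |
	    start_idx;
}
```

With this layout, writing entry 5 (count 1) produces 0x00010005, matching the `1 << MPI2_DPM_PGAD_ENTRY_COUNT_SHIFT | entry_idx` pattern in mpr_config_set_dpm_pg0.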
*/ mpr_free_command(sc, cm); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_READ_NVRAM; request->Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; request->ExtPageType = MPI2_CONFIG_EXTPAGETYPE_DRIVER_MAPPING; request->Header.PageNumber = 0; request->Header.PageVersion = mpi_reply->Header.PageVersion; request->PageAddress = sc->max_dpm_entries << MPI2_DPM_PGAD_ENTRY_COUNT_SHIFT; request->ExtPageLength = mpi_reply->ExtPageLength; cm->cm_length = le16toh(request->ExtPageLength) * 4; cm->cm_sge = &request->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAIN; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; page = malloc(cm->cm_length, M_MPR, M_ZERO|M_NOWAIT); if (!page) { printf("%s: page alloc failed\n", __func__); error = ENOMEM; goto out; } cm->cm_data = page; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for page completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: page read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } bcopy(page, config_page, MIN(cm->cm_length, sz)); out: free(page, M_MPR); if (cm) mpr_free_command(sc, cm); return (error); } /** * mpr_config_set_dpm_pg0 - write an entry in driver persistent 
mapping page0 * @sc: per adapter object * @mpi_reply: reply mf payload returned from firmware * @config_page: contents of the config page * @entry_idx: entry index in DPM Page0 to be modified * Context: sleep. * * Returns 0 for success, non-zero for failure. */ int mpr_config_set_dpm_pg0(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2DriverMappingPage0_t *config_page, u16 entry_idx) { MPI2_CONFIG_REQUEST *request; MPI2_CONFIG_REPLY *reply; struct mpr_command *cm; MPI2_CONFIG_PAGE_DRIVER_MAPPING_0 *page = NULL; int error = 0; u16 ioc_status; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_HEADER; request->Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; request->ExtPageType = MPI2_CONFIG_EXTPAGETYPE_DRIVER_MAPPING; request->Header.PageNumber = 0; request->ExtPageLength = request->Header.PageVersion = 0; /* We can remove below two lines ????*/ request->PageAddress = 1 << MPI2_DPM_PGAD_ENTRY_COUNT_SHIFT; request->PageAddress |= htole16(entry_idx); cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for header completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: header read with error; iocstatus = 
0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } /* We have to do free and alloc for the reply-free and reply-post * counters to match - Need to review the reply FIFO handling. */ mpr_free_command(sc, cm); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_WRITE_NVRAM; request->Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; request->ExtPageType = MPI2_CONFIG_EXTPAGETYPE_DRIVER_MAPPING; request->Header.PageNumber = 0; request->Header.PageVersion = mpi_reply->Header.PageVersion; request->ExtPageLength = mpi_reply->ExtPageLength; request->PageAddress = 1 << MPI2_DPM_PGAD_ENTRY_COUNT_SHIFT; request->PageAddress |= htole16(entry_idx); cm->cm_length = le16toh(mpi_reply->ExtPageLength) * 4; cm->cm_sge = &request->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAOUT; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; page = malloc(cm->cm_length, M_MPR, M_ZERO | M_NOWAIT); if (!page) { printf("%s: page alloc failed\n", __func__); error = ENOMEM; goto out; } bcopy(config_page, page, MIN(cm->cm_length, (sizeof(Mpi2DriverMappingPage0_t)))); cm->cm_data = page; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request to write page completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * 
reset */ printf("%s: page written with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } out: free(page, M_MPR); if (cm) mpr_free_command(sc, cm); return (error); } /** * mpr_config_get_sas_device_pg0 - obtain sas device page 0 * @sc: per adapter object * @mpi_reply: reply mf payload returned from firmware * @config_page: contents of the config page * @form: GET_NEXT_HANDLE or HANDLE * @handle: device handle * Context: sleep. * * Returns 0 for success, non-zero for failure. */ int mpr_config_get_sas_device_pg0(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2SasDevicePage0_t *config_page, u32 form, u16 handle) { MPI2_CONFIG_REQUEST *request; MPI2_CONFIG_REPLY *reply; struct mpr_command *cm; Mpi2SasDevicePage0_t *page = NULL; int error = 0; u16 ioc_status; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_HEADER; request->Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; request->ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_DEVICE; request->Header.PageNumber = 0; request->ExtPageLength = request->Header.PageVersion = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for header completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an 
error then we need to do a diag * reset */ printf("%s: header read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } /* We have to do free and alloc for the reply-free and reply-post * counters to match - Need to review the reply FIFO handling. */ mpr_free_command(sc, cm); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; request->Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; request->ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_DEVICE; request->Header.PageNumber = 0; request->Header.PageVersion = mpi_reply->Header.PageVersion; request->ExtPageLength = mpi_reply->ExtPageLength; request->PageAddress = htole32(form | handle); cm->cm_length = le16toh(mpi_reply->ExtPageLength) * 4; cm->cm_sge = &request->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAIN; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; page = malloc(cm->cm_length, M_MPR, M_ZERO | M_NOWAIT); if (!page) { printf("%s: page alloc failed\n", __func__); error = ENOMEM; goto out; } cm->cm_data = page; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for page completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: page read with error; iocstatus = 
0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } bcopy(page, config_page, MIN(cm->cm_length, sizeof(Mpi2SasDevicePage0_t))); out: free(page, M_MPR); if (cm) mpr_free_command(sc, cm); return (error); } /** * mpr_config_get_pcie_device_pg0 - obtain PCIe device page 0 * @sc: per adapter object * @mpi_reply: reply mf payload returned from firmware * @config_page: contents of the config page * @form: GET_NEXT_HANDLE or HANDLE * @handle: device handle * Context: sleep. * * Returns 0 for success, non-zero for failure. */ int mpr_config_get_pcie_device_pg0(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi26PCIeDevicePage0_t *config_page, u32 form, u16 handle) { MPI2_CONFIG_REQUEST *request; MPI2_CONFIG_REPLY *reply; struct mpr_command *cm; Mpi26PCIeDevicePage0_t *page = NULL; int error = 0; u16 ioc_status; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_HEADER; request->Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; request->ExtPageType = MPI2_CONFIG_EXTPAGETYPE_PCIE_DEVICE; request->Header.PageNumber = 0; request->ExtPageLength = request->Header.PageVersion = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for header completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If 
the request returns an error then we need to do a diag * reset */ printf("%s: header read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } /* We have to do free and alloc for the reply-free and reply-post * counters to match - Need to review the reply FIFO handling. */ mpr_free_command(sc, cm); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; request->Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; request->ExtPageType = MPI2_CONFIG_EXTPAGETYPE_PCIE_DEVICE; request->Header.PageNumber = 0; request->Header.PageVersion = mpi_reply->Header.PageVersion; request->ExtPageLength = mpi_reply->ExtPageLength; request->PageAddress = htole32(form | handle); cm->cm_length = le16toh(mpi_reply->ExtPageLength) * 4; cm->cm_sge = &request->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAIN; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; page = malloc(cm->cm_length, M_MPR, M_ZERO | M_NOWAIT); if (!page) { printf("%s: page alloc failed\n", __func__); error = ENOMEM; goto out; } cm->cm_data = page; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for page completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: page read with 
error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } bcopy(page, config_page, MIN(cm->cm_length, sizeof(Mpi26PCIeDevicePage0_t))); out: free(page, M_MPR); if (cm) mpr_free_command(sc, cm); return (error); } /** * mpr_config_get_pcie_device_pg2 - obtain PCIe device page 2 * @sc: per adapter object * @mpi_reply: reply mf payload returned from firmware * @config_page: contents of the config page * @form: GET_NEXT_HANDLE or HANDLE * @handle: device handle * Context: sleep. * * Returns 0 for success, non-zero for failure. */ int mpr_config_get_pcie_device_pg2(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi26PCIeDevicePage2_t *config_page, u32 form, u16 handle) { MPI2_CONFIG_REQUEST *request; MPI2_CONFIG_REPLY *reply; struct mpr_command *cm; Mpi26PCIeDevicePage2_t *page = NULL; int error = 0; u16 ioc_status; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_HEADER; request->Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; request->ExtPageType = MPI2_CONFIG_EXTPAGETYPE_PCIE_DEVICE; request->Header.PageNumber = 2; request->ExtPageLength = request->Header.PageVersion = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for header completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) 
{ /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: header read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } /* We have to do free and alloc for the reply-free and reply-post * counters to match - Need to review the reply FIFO handling. */ mpr_free_command(sc, cm); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; request->Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; request->ExtPageType = MPI2_CONFIG_EXTPAGETYPE_PCIE_DEVICE; request->Header.PageNumber = 2; request->Header.PageVersion = mpi_reply->Header.PageVersion; request->ExtPageLength = mpi_reply->ExtPageLength; request->PageAddress = htole32(form | handle); cm->cm_length = le16toh(mpi_reply->ExtPageLength) * 4; cm->cm_sge = &request->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAIN; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; page = malloc(cm->cm_length, M_MPR, M_ZERO | M_NOWAIT); if (!page) { printf("%s: page alloc failed\n", __func__); error = ENOMEM; goto out; } cm->cm_data = page; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for page completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ 
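Every completion path here evaluates `le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK` before comparing against MPI2_IOCSTATUS_SUCCESS: the mask strips the top "log info available" flag bit so it cannot turn a successful status into a mismatch. A sketch of that check, with the constants written out as we understand them from the MPI 2.x headers (0x7FFF mask, 0x8000 flag):

```c
#include <stdint.h>
#include <stdbool.h>

#define IOCSTATUS_MASK                0x7FFF  /* MPI2_IOCSTATUS_MASK */
#define IOCSTATUS_FLAG_LOG_INFO_AVAIL 0x8000  /* log-info valid bit */
#define IOCSTATUS_SUCCESS             0x0000

/* Decode a little-endian IOCStatus word and report success while
 * ignoring the log-info flag, as the config helpers above do. */
static bool ioc_status_ok(const uint8_t raw[2])
{
	uint16_t v = (uint16_t)(raw[0] | ((uint16_t)raw[1] << 8));

	return (v & IOCSTATUS_MASK) == IOCSTATUS_SUCCESS;
}
```

Note that a reply of 0x8000 (success plus log info) still passes, while any nonzero masked status is treated as an error.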
printf("%s: page read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } bcopy(page, config_page, MIN(cm->cm_length, sizeof(Mpi26PCIeDevicePage2_t))); out: free(page, M_MPR); if (cm) mpr_free_command(sc, cm); return (error); } /** * mpr_config_get_bios_pg3 - obtain BIOS page 3 * @sc: per adapter object * @mpi_reply: reply mf payload returned from firmware * @config_page: contents of the config page * Context: sleep. * * Returns 0 for success, non-zero for failure. */ int mpr_config_get_bios_pg3(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2BiosPage3_t *config_page) { MPI2_CONFIG_REQUEST *request; MPI2_CONFIG_REPLY *reply; struct mpr_command *cm; Mpi2BiosPage3_t *page = NULL; int error = 0; u16 ioc_status; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_HEADER; request->Header.PageType = MPI2_CONFIG_PAGETYPE_BIOS; request->Header.PageNumber = 3; request->Header.PageLength = request->Header.PageVersion = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for header completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: header read with error; iocstatus = 0x%x\n", 
__func__, ioc_status); error = ENXIO; goto out; } /* We have to do free and alloc for the reply-free and reply-post * counters to match - Need to review the reply FIFO handling. */ mpr_free_command(sc, cm); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; request->Header.PageType = MPI2_CONFIG_PAGETYPE_BIOS; request->Header.PageNumber = 3; request->Header.PageVersion = mpi_reply->Header.PageVersion; request->Header.PageLength = mpi_reply->Header.PageLength; cm->cm_length = le16toh(mpi_reply->Header.PageLength) * 4; cm->cm_sge = &request->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAIN; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; page = malloc(cm->cm_length, M_MPR, M_ZERO | M_NOWAIT); if (!page) { printf("%s: page alloc failed\n", __func__); error = ENOMEM; goto out; } cm->cm_data = page; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for page completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: page read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } bcopy(page, config_page, MIN(cm->cm_length, sizeof(Mpi2BiosPage3_t))); out: free(page, M_MPR); if (cm) mpr_free_command(sc, cm); return (error); 
} /** * mpr_config_get_raid_volume_pg0 - obtain raid volume page 0 * @sc: per adapter object * @mpi_reply: reply mf payload returned from firmware * @config_page: contents of the config page * @page_address: form and handle value used to get page * Context: sleep. * * Returns 0 for success, non-zero for failure. */ int mpr_config_get_raid_volume_pg0(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2RaidVolPage0_t *config_page, u32 page_address) { MPI2_CONFIG_REQUEST *request; MPI2_CONFIG_REPLY *reply = NULL; struct mpr_command *cm; Mpi2RaidVolPage0_t *page = NULL; int error = 0; u16 ioc_status; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_HEADER; request->Header.PageType = MPI2_CONFIG_PAGETYPE_RAID_VOLUME; request->Header.PageNumber = 0; request->Header.PageLength = request->Header.PageVersion = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; /* * This page must be polled because the IOC isn't ready yet when this * page is needed. 
*/ error = mpr_request_polled(sc, &cm); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* If the poll returns error then we need to do diag reset */ printf("%s: poll for header completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* If the poll returns error then we need to do diag reset */ printf("%s: header read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } /* We have to do free and alloc for the reply-free and reply-post * counters to match - Need to review the reply FIFO handling. */ mpr_free_command(sc, cm); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; request->Header.PageType = MPI2_CONFIG_PAGETYPE_RAID_VOLUME; request->Header.PageNumber = 0; request->Header.PageLength = mpi_reply->Header.PageLength; request->Header.PageVersion = mpi_reply->Header.PageVersion; request->PageAddress = page_address; cm->cm_length = le16toh(mpi_reply->Header.PageLength) * 4; cm->cm_sge = &request->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAIN; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; page = malloc(cm->cm_length, M_MPR, M_ZERO | M_NOWAIT); if (!page) { printf("%s: page alloc failed\n", __func__); error = ENOMEM; goto out; } cm->cm_data = page; /* * This page must be polled because the IOC isn't ready yet when this * page is needed. 
*/ error = mpr_request_polled(sc, &cm); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* If the poll returns error then we need to do diag reset */ printf("%s: poll for page completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* If the poll returns error then we need to do diag reset */ printf("%s: page read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } bcopy(page, config_page, cm->cm_length); out: free(page, M_MPR); if (cm) mpr_free_command(sc, cm); return (error); } /** * mpr_config_get_raid_volume_pg1 - obtain raid volume page 1 * @sc: per adapter object * @mpi_reply: reply mf payload returned from firmware * @config_page: contents of the config page * @form: GET_NEXT_HANDLE or HANDLE * @handle: volume handle * Context: sleep. * * Returns 0 for success, non-zero for failure. 
*/ int mpr_config_get_raid_volume_pg1(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2RaidVolPage1_t *config_page, u32 form, u16 handle) { MPI2_CONFIG_REQUEST *request; MPI2_CONFIG_REPLY *reply; struct mpr_command *cm; Mpi2RaidVolPage1_t *page = NULL; int error = 0; u16 ioc_status; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_HEADER; request->Header.PageType = MPI2_CONFIG_PAGETYPE_RAID_VOLUME; request->Header.PageNumber = 1; request->Header.PageLength = request->Header.PageVersion = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for header completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: header read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } /* We have to do free and alloc for the reply-free and reply-post * counters to match - Need to review the reply FIFO handling. 
*/ mpr_free_command(sc, cm); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; request->Header.PageType = MPI2_CONFIG_PAGETYPE_RAID_VOLUME; request->Header.PageNumber = 1; request->Header.PageLength = mpi_reply->Header.PageLength; request->Header.PageVersion = mpi_reply->Header.PageVersion; request->PageAddress = htole32(form | handle); cm->cm_length = le16toh(mpi_reply->Header.PageLength) * 4; cm->cm_sge = &request->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAIN; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; page = malloc(cm->cm_length, M_MPR, M_ZERO | M_NOWAIT); if (!page) { printf("%s: page alloc failed\n", __func__); error = ENOMEM; goto out; } cm->cm_data = page; error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: request for page completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ printf("%s: page read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } bcopy(page, config_page, MIN(cm->cm_length, sizeof(Mpi2RaidVolPage1_t))); out: free(page, M_MPR); if (cm) mpr_free_command(sc, cm); return (error); } /** * mpr_config_get_volume_wwid - returns wwid given the volume handle * @sc: per adapter object * @volume_handle: 
volume handle * @wwid: volume wwid * Context: sleep. * * Returns 0 for success, non-zero for failure. */ int mpr_config_get_volume_wwid(struct mpr_softc *sc, u16 volume_handle, u64 *wwid) { Mpi2ConfigReply_t mpi_reply; Mpi2RaidVolPage1_t raid_vol_pg1; *wwid = 0; if (!(mpr_config_get_raid_volume_pg1(sc, &mpi_reply, &raid_vol_pg1, MPI2_RAID_VOLUME_PGAD_FORM_HANDLE, volume_handle))) { *wwid = le64toh((u64)raid_vol_pg1.WWID.High << 32 | raid_vol_pg1.WWID.Low); return 0; } else return -1; } /** * mpr_config_get_pd_pg0 - obtain raid phys disk page 0 * @sc: per adapter object * @mpi_reply: reply mf payload returned from firmware * @config_page: contents of the config page * @page_address: form and handle value used to get page * Context: sleep. * * Returns 0 for success, non-zero for failure. */ int mpr_config_get_raid_pd_pg0(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2RaidPhysDiskPage0_t *config_page, u32 page_address) { MPI2_CONFIG_REQUEST *request; MPI2_CONFIG_REPLY *reply = NULL; struct mpr_command *cm; Mpi2RaidPhysDiskPage0_t *page = NULL; int error = 0; u16 ioc_status; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_HEADER; request->Header.PageType = MPI2_CONFIG_PAGETYPE_RAID_PHYSDISK; request->Header.PageNumber = 0; request->Header.PageLength = request->Header.PageVersion = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; /* * This page must be polled because the IOC isn't ready yet when this * page is needed. 
*/ error = mpr_request_polled(sc, &cm); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* If the poll returns error then we need to do diag reset */ printf("%s: poll for header completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* If the poll returns error then we need to do diag reset */ printf("%s: header read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } /* We have to do free and alloc for the reply-free and reply-post * counters to match - Need to review the reply FIFO handling. */ mpr_free_command(sc, cm); if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed @ line %d\n", __func__, __LINE__); error = EBUSY; goto out; } request = (MPI2_CONFIG_REQUEST *)cm->cm_req; bzero(request, sizeof(MPI2_CONFIG_REQUEST)); request->Function = MPI2_FUNCTION_CONFIG; request->Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; request->Header.PageType = MPI2_CONFIG_PAGETYPE_RAID_PHYSDISK; request->Header.PageNumber = 0; request->Header.PageLength = mpi_reply->Header.PageLength; request->Header.PageVersion = mpi_reply->Header.PageVersion; request->PageAddress = page_address; cm->cm_length = le16toh(mpi_reply->Header.PageLength) * 4; cm->cm_sge = &request->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE | MPR_CM_FLAGS_DATAIN; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; page = malloc(cm->cm_length, M_MPR, M_ZERO | M_NOWAIT); if (!page) { printf("%s: page alloc failed\n", __func__); error = ENOMEM; goto out; } cm->cm_data = page; /* * This page must be polled because the IOC isn't ready yet when this * page is needed. 
*/ error = mpr_request_polled(sc, &cm); if (cm != NULL) reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* If the poll returns error then we need to do diag reset */ printf("%s: poll for page completed with error %d", __func__, error); error = ENXIO; goto out; } ioc_status = le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK; bcopy(reply, mpi_reply, sizeof(MPI2_CONFIG_REPLY)); if (ioc_status != MPI2_IOCSTATUS_SUCCESS) { /* FIXME */ /* If the poll returns error then we need to do diag reset */ printf("%s: page read with error; iocstatus = 0x%x\n", __func__, ioc_status); error = ENXIO; goto out; } bcopy(page, config_page, MIN(cm->cm_length, sizeof(Mpi2RaidPhysDiskPage0_t))); out: free(page, M_MPR); if (cm) mpr_free_command(sc, cm); return (error); } Index: releng/12.1/sys/dev/mpr/mpr_ioctl.h =================================================================== --- releng/12.1/sys/dev/mpr/mpr_ioctl.h (revision 352760) +++ releng/12.1/sys/dev/mpr/mpr_ioctl.h (revision 352761) @@ -1,388 +1,389 @@ /*- * Copyright (c) 2008 Yahoo!, Inc. * All rights reserved. * Written by: John Baldwin * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD userland interface + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD userland interface * * $FreeBSD$ */ /*- * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2016 Avago Technologies + * Copyright 2000-2020 Broadcom Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ #ifndef _MPR_IOCTL_H_ #define _MPR_IOCTL_H_ #include #include #include #include /* * For the read header requests, the header should include the page * type or extended page type, page number, and page version. The * buffer and length are unused. The completed header is returned in * the 'header' member. * * For the read page and write page requests, 'buf' should point to a * buffer of 'len' bytes which holds the entire page (including the * header). * * All requests specify the page address in 'page_address'. 
*/ struct mpr_cfg_page_req { MPI2_CONFIG_PAGE_HEADER header; uint32_t page_address; void *buf; int len; uint16_t ioc_status; }; struct mpr_ext_cfg_page_req { MPI2_CONFIG_EXTENDED_PAGE_HEADER header; uint32_t page_address; void *buf; int len; uint16_t ioc_status; }; struct mpr_raid_action { uint8_t action; uint8_t volume_bus; uint8_t volume_id; uint8_t phys_disk_num; uint32_t action_data_word; void *buf; int len; uint32_t volume_status; uint32_t action_data[4]; uint16_t action_status; uint16_t ioc_status; uint8_t write; }; struct mpr_usr_command { void *req; uint32_t req_len; void *rpl; uint32_t rpl_len; void *buf; int len; uint32_t flags; }; typedef struct mpr_pci_bits { union { struct { uint32_t DeviceNumber :5; uint32_t FunctionNumber :3; uint32_t BusNumber :24; } bits; uint32_t AsDWORD; } u; uint32_t PciSegmentId; } mpr_pci_bits_t; /* * The following is the MPRIOCTL_GET_ADAPTER_DATA data structure. This data * structure is setup so that we hopefully are properly aligned for both * 32-bit and 64-bit mode applications. 
* * Adapter Type - Value = 6 = SCSI Protocol through SAS-3 adapter * * MPI Port Number - The PCI Function number for this device * * PCI Device HW Id - The PCI device number for this device * */ #define MPRIOCTL_ADAPTER_TYPE_SAS3 6 #define MPRIOCTL_ADAPTER_TYPE_SAS35 7 typedef struct mpr_adapter_data { uint32_t StructureLength; uint32_t AdapterType; uint32_t MpiPortNumber; uint32_t PCIDeviceHwId; uint32_t PCIDeviceHwRev; uint32_t SubSystemId; uint32_t SubsystemVendorId; uint32_t Reserved1; uint32_t MpiFirmwareVersion; uint32_t BiosVersion; uint8_t DriverVersion[32]; uint8_t Reserved2; uint8_t ScsiId; uint16_t Reserved3; mpr_pci_bits_t PciInformation; } mpr_adapter_data_t; typedef struct mpr_update_flash { uint64_t PtrBuffer; uint32_t ImageChecksum; uint32_t ImageOffset; uint32_t ImageSize; uint32_t ImageType; } mpr_update_flash_t; #define MPR_PASS_THRU_DIRECTION_NONE 0 #define MPR_PASS_THRU_DIRECTION_READ 1 #define MPR_PASS_THRU_DIRECTION_WRITE 2 #define MPR_PASS_THRU_DIRECTION_BOTH 3 typedef struct mpr_pass_thru { uint64_t PtrRequest; uint64_t PtrReply; uint64_t PtrData; uint32_t RequestSize; uint32_t ReplySize; uint32_t DataSize; uint32_t DataDirection; uint64_t PtrDataOut; uint32_t DataOutSize; uint32_t Timeout; } mpr_pass_thru_t; /* * Event queue defines */ #define MPR_EVENT_QUEUE_SIZE (200) /* Max Events stored in driver */ #define MPR_MAX_EVENT_DATA_LENGTH (48) /* Size of each event in Dwords */ typedef struct mpr_event_query { uint16_t Entries; uint16_t Reserved; uint32_t Types[4]; } mpr_event_query_t; typedef struct mpr_event_enable { uint32_t Types[4]; } mpr_event_enable_t; /* * Event record entry for ioctl. 
*/ typedef struct mpr_event_entry { uint32_t Type; uint32_t Number; uint32_t Data[MPR_MAX_EVENT_DATA_LENGTH]; } mpr_event_entry_t; typedef struct mpr_event_report { uint32_t Size; uint64_t PtrEvents; } mpr_event_report_t; typedef struct mpr_pci_info { uint32_t BusNumber; uint8_t DeviceNumber; uint8_t FunctionNumber; uint16_t InterruptVector; uint8_t PciHeader[256]; } mpr_pci_info_t; typedef struct mpr_diag_action { uint32_t Action; uint32_t Length; uint64_t PtrDiagAction; uint32_t ReturnCode; } mpr_diag_action_t; #define MPR_FW_DIAGNOSTIC_UID_NOT_FOUND (0xFF) #define MPR_FW_DIAG_NEW (0x806E6577) #define MPR_FW_DIAG_TYPE_REGISTER (0x00000001) #define MPR_FW_DIAG_TYPE_UNREGISTER (0x00000002) #define MPR_FW_DIAG_TYPE_QUERY (0x00000003) #define MPR_FW_DIAG_TYPE_READ_BUFFER (0x00000004) #define MPR_FW_DIAG_TYPE_RELEASE (0x00000005) #define MPR_FW_DIAG_INVALID_UID (0x00000000) #define MPR_DIAG_SUCCESS 0 #define MPR_DIAG_FAILURE 1 #define MPR_FW_DIAG_ERROR_SUCCESS (0x00000000) #define MPR_FW_DIAG_ERROR_FAILURE (0x00000001) #define MPR_FW_DIAG_ERROR_INVALID_PARAMETER (0x00000002) #define MPR_FW_DIAG_ERROR_POST_FAILED (0x00000010) #define MPR_FW_DIAG_ERROR_INVALID_UID (0x00000011) #define MPR_FW_DIAG_ERROR_RELEASE_FAILED (0x00000012) #define MPR_FW_DIAG_ERROR_NO_BUFFER (0x00000013) #define MPR_FW_DIAG_ERROR_ALREADY_RELEASED (0x00000014) typedef struct mpr_fw_diag_register { uint8_t ExtendedType; uint8_t BufferType; uint16_t ApplicationFlags; uint32_t DiagnosticFlags; uint32_t ProductSpecific[23]; uint32_t RequestedBufferSize; uint32_t UniqueId; } mpr_fw_diag_register_t; typedef struct mpr_fw_diag_unregister { uint32_t UniqueId; } mpr_fw_diag_unregister_t; #define MPR_FW_DIAG_FLAG_APP_OWNED (0x0001) #define MPR_FW_DIAG_FLAG_BUFFER_VALID (0x0002) #define MPR_FW_DIAG_FLAG_FW_BUFFER_ACCESS (0x0004) typedef struct mpr_fw_diag_query { uint8_t ExtendedType; uint8_t BufferType; uint16_t ApplicationFlags; uint32_t DiagnosticFlags; uint32_t ProductSpecific[23]; uint32_t 
TotalBufferSize; uint32_t DriverAddedBufferSize; uint32_t UniqueId; } mpr_fw_diag_query_t; typedef struct mpr_fw_diag_release { uint32_t UniqueId; } mpr_fw_diag_release_t; #define MPR_FW_DIAG_FLAG_REREGISTER (0x0001) #define MPR_FW_DIAG_FLAG_FORCE_RELEASE (0x0002) typedef struct mpr_diag_read_buffer { uint8_t Status; uint8_t Reserved; uint16_t Flags; uint32_t StartingOffset; uint32_t BytesToRead; uint32_t UniqueId; uint64_t PtrDataBuffer; } mpr_diag_read_buffer_t; /* * Register Access */ #define REG_IO_READ 1 #define REG_IO_WRITE 2 #define REG_MEM_READ 3 #define REG_MEM_WRITE 4 typedef struct mpr_reg_access { uint32_t Command; uint32_t RegOffset; uint32_t RegData; } mpr_reg_access_t; typedef struct mpr_btdh_mapping { uint16_t TargetID; uint16_t Bus; uint16_t DevHandle; uint16_t Reserved; } mpr_btdh_mapping_t; #define MPRIO_MPR_COMMAND_FLAG_VERBOSE 0x01 #define MPRIO_MPR_COMMAND_FLAG_DEBUG 0x02 #define MPRIO_READ_CFG_HEADER _IOWR('M', 200, struct mpr_cfg_page_req) #define MPRIO_READ_CFG_PAGE _IOWR('M', 201, struct mpr_cfg_page_req) #define MPRIO_READ_EXT_CFG_HEADER _IOWR('M', 202, struct mpr_ext_cfg_page_req) #define MPRIO_READ_EXT_CFG_PAGE _IOWR('M', 203, struct mpr_ext_cfg_page_req) #define MPRIO_WRITE_CFG_PAGE _IOWR('M', 204, struct mpr_cfg_page_req) #define MPRIO_RAID_ACTION _IOWR('M', 205, struct mpr_raid_action) #define MPRIO_MPR_COMMAND _IOWR('M', 210, struct mpr_usr_command) #define MPTIOCTL ('I') #define MPTIOCTL_GET_ADAPTER_DATA _IOWR(MPTIOCTL, 1,\ struct mpr_adapter_data) #define MPTIOCTL_UPDATE_FLASH _IOWR(MPTIOCTL, 2,\ struct mpr_update_flash) #define MPTIOCTL_RESET_ADAPTER _IO(MPTIOCTL, 3) #define MPTIOCTL_PASS_THRU _IOWR(MPTIOCTL, 4,\ struct mpr_pass_thru) #define MPTIOCTL_EVENT_QUERY _IOWR(MPTIOCTL, 5,\ struct mpr_event_query) #define MPTIOCTL_EVENT_ENABLE _IOWR(MPTIOCTL, 6,\ struct mpr_event_enable) #define MPTIOCTL_EVENT_REPORT _IOWR(MPTIOCTL, 7,\ struct mpr_event_report) #define MPTIOCTL_GET_PCI_INFO _IOWR(MPTIOCTL, 8,\ struct mpr_pci_info) 
#define MPTIOCTL_DIAG_ACTION _IOWR(MPTIOCTL, 9,\ struct mpr_diag_action) #define MPTIOCTL_REG_ACCESS _IOWR(MPTIOCTL, 10,\ struct mpr_reg_access) #define MPTIOCTL_BTDH_MAPPING _IOWR(MPTIOCTL, 11,\ struct mpr_btdh_mapping) #endif /* !_MPR_IOCTL_H_ */ Index: releng/12.1/sys/dev/mpr/mpr_mapping.c =================================================================== --- releng/12.1/sys/dev/mpr/mpr_mapping.c (revision 352760) +++ releng/12.1/sys/dev/mpr/mpr_mapping.c (revision 352761) @@ -1,3133 +1,3134 @@ /*- * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2016 Avago Technologies + * Copyright 2000-2020 Broadcom Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. 
(LSI) MPT-Fusion Host Adapter FreeBSD */ #include __FBSDID("$FreeBSD$"); /* TODO Move headers to mprvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /** * _mapping_clear_map_entry - Clear a particular mapping entry. * @map_entry: map table entry * * Returns nothing. */ static inline void _mapping_clear_map_entry(struct dev_mapping_table *map_entry) { map_entry->physical_id = 0; map_entry->device_info = 0; map_entry->phy_bits = 0; map_entry->dpm_entry_num = MPR_DPM_BAD_IDX; map_entry->dev_handle = 0; map_entry->id = -1; map_entry->missing_count = 0; map_entry->init_complete = 0; map_entry->TLR_bits = (u8)MPI2_SCSIIO_CONTROL_NO_TLR; } /** * _mapping_clear_enc_entry - Clear a particular enclosure table entry. * @enc_entry: enclosure table entry * * Returns nothing. */ static inline void _mapping_clear_enc_entry(struct enc_mapping_table *enc_entry) { enc_entry->enclosure_id = 0; enc_entry->start_index = MPR_MAPTABLE_BAD_IDX; enc_entry->phy_bits = 0; enc_entry->dpm_entry_num = MPR_DPM_BAD_IDX; enc_entry->enc_handle = 0; enc_entry->num_slots = 0; enc_entry->start_slot = 0; enc_entry->missing_count = 0; enc_entry->removal_flag = 0; enc_entry->skip_search = 0; enc_entry->init_complete = 0; } /** * _mapping_commit_enc_entry - write a particular enc entry in DPM page0. * @sc: per adapter object * @enc_entry: enclosure table entry * * Returns 0 for success, non-zero for failure. 
*/ static int _mapping_commit_enc_entry(struct mpr_softc *sc, struct enc_mapping_table *et_entry) { Mpi2DriverMap0Entry_t *dpm_entry; struct dev_mapping_table *mt_entry; Mpi2ConfigReply_t mpi_reply; Mpi2DriverMappingPage0_t config_page; if (!sc->is_dpm_enable) return 0; memset(&config_page, 0, sizeof(Mpi2DriverMappingPage0_t)); memcpy(&config_page.Header, (u8 *) sc->dpm_pg0, sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); dpm_entry = (Mpi2DriverMap0Entry_t *)((u8 *)sc->dpm_pg0 + sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); dpm_entry += et_entry->dpm_entry_num; dpm_entry->PhysicalIdentifier.Low = ( 0xFFFFFFFF & et_entry->enclosure_id); dpm_entry->PhysicalIdentifier.High = ( et_entry->enclosure_id >> 32); mt_entry = &sc->mapping_table[et_entry->start_index]; dpm_entry->DeviceIndex = htole16(mt_entry->id); dpm_entry->MappingInformation = et_entry->num_slots; dpm_entry->MappingInformation <<= MPI2_DRVMAP0_MAPINFO_SLOT_SHIFT; dpm_entry->MappingInformation |= et_entry->missing_count; dpm_entry->MappingInformation = htole16(dpm_entry->MappingInformation); dpm_entry->PhysicalBitsMapping = htole32(et_entry->phy_bits); dpm_entry->Reserved1 = 0; mpr_dprint(sc, MPR_MAPPING, "%s: Writing DPM entry %d for enclosure.\n", __func__, et_entry->dpm_entry_num); memcpy(&config_page.Entry, (u8 *)dpm_entry, sizeof(Mpi2DriverMap0Entry_t)); if (mpr_config_set_dpm_pg0(sc, &mpi_reply, &config_page, et_entry->dpm_entry_num)) { mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: Write of DPM " "entry %d for enclosure failed.\n", __func__, et_entry->dpm_entry_num); dpm_entry->MappingInformation = le16toh(dpm_entry-> MappingInformation); dpm_entry->DeviceIndex = le16toh(dpm_entry->DeviceIndex); dpm_entry->PhysicalBitsMapping = le32toh(dpm_entry->PhysicalBitsMapping); return -1; } dpm_entry->MappingInformation = le16toh(dpm_entry-> MappingInformation); dpm_entry->DeviceIndex = le16toh(dpm_entry->DeviceIndex); dpm_entry->PhysicalBitsMapping = le32toh(dpm_entry->PhysicalBitsMapping); return 0; } /** * 
_mapping_commit_map_entry - write a particular map table entry in DPM page0. * @sc: per adapter object * @mt_entry: mapping table entry * * Returns 0 for success, non-zero for failure. */ static int _mapping_commit_map_entry(struct mpr_softc *sc, struct dev_mapping_table *mt_entry) { Mpi2DriverMap0Entry_t *dpm_entry; Mpi2ConfigReply_t mpi_reply; Mpi2DriverMappingPage0_t config_page; if (!sc->is_dpm_enable) return 0; /* * It's possible that this Map Entry points to a BAD DPM index. This * can happen if the Map Entry is a for a missing device and the DPM * entry that was being used by this device is now being used by some * new device. So, check for a BAD DPM index and just return if so. */ if (mt_entry->dpm_entry_num == MPR_DPM_BAD_IDX) { mpr_dprint(sc, MPR_MAPPING, "%s: DPM entry location for target " "%d is invalid. DPM will not be written.\n", __func__, mt_entry->id); return 0; } memset(&config_page, 0, sizeof(Mpi2DriverMappingPage0_t)); memcpy(&config_page.Header, (u8 *)sc->dpm_pg0, sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); dpm_entry = (Mpi2DriverMap0Entry_t *)((u8 *) sc->dpm_pg0 + sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); dpm_entry = dpm_entry + mt_entry->dpm_entry_num; dpm_entry->PhysicalIdentifier.Low = (0xFFFFFFFF & mt_entry->physical_id); dpm_entry->PhysicalIdentifier.High = (mt_entry->physical_id >> 32); dpm_entry->DeviceIndex = htole16(mt_entry->id); dpm_entry->MappingInformation = htole16(mt_entry->missing_count); dpm_entry->PhysicalBitsMapping = 0; dpm_entry->Reserved1 = 0; memcpy(&config_page.Entry, (u8 *)dpm_entry, sizeof(Mpi2DriverMap0Entry_t)); mpr_dprint(sc, MPR_MAPPING, "%s: Writing DPM entry %d for target %d.\n", __func__, mt_entry->dpm_entry_num, mt_entry->id); if (mpr_config_set_dpm_pg0(sc, &mpi_reply, &config_page, mt_entry->dpm_entry_num)) { mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: Write of DPM " "entry %d for target %d failed.\n", __func__, mt_entry->dpm_entry_num, mt_entry->id); dpm_entry->MappingInformation = le16toh(dpm_entry-> 
MappingInformation);
		dpm_entry->DeviceIndex = le16toh(dpm_entry->DeviceIndex);
		return -1;
	}
	dpm_entry->MappingInformation = le16toh(dpm_entry->MappingInformation);
	dpm_entry->DeviceIndex = le16toh(dpm_entry->DeviceIndex);
	return 0;
}

/**
 * _mapping_get_ir_maprange - get start and end index for IR map range.
 * @sc: per adapter object
 * @start_idx: place holder for start index
 * @end_idx: place holder for end index
 *
 * The IR volumes can be mapped either at the start or at the end of the
 * mapping table. This function gets the detail of where IR volume mapping
 * starts and ends in the device mapping table.
 *
 * Returns nothing.
 */
static void
_mapping_get_ir_maprange(struct mpr_softc *sc, u32 *start_idx, u32 *end_idx)
{
	u16 volume_mapping_flags;
	u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags);

	volume_mapping_flags = le16toh(sc->ioc_pg8.IRVolumeMappingFlags) &
	    MPI2_IOCPAGE8_IRFLAGS_MASK_VOLUME_MAPPING_MODE;
	if (volume_mapping_flags == MPI2_IOCPAGE8_IRFLAGS_LOW_VOLUME_MAPPING) {
		*start_idx = 0;
		if (ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_RESERVED_TARGETID_0)
			*start_idx = 1;
	} else
		*start_idx = sc->max_devices - sc->max_volumes;
	*end_idx = *start_idx + sc->max_volumes - 1;
}

/**
 * _mapping_get_enc_idx_from_id - get enclosure index from enclosure ID
 * @sc: per adapter object
 * @enc_id: enclosure logical identifier
 *
 * Returns the index of enclosure entry on success or bad index.
 */
static u8
_mapping_get_enc_idx_from_id(struct mpr_softc *sc, u64 enc_id, u64 phy_bits)
{
	struct enc_mapping_table *et_entry;
	u8 enc_idx = 0;

	for (enc_idx = 0; enc_idx < sc->num_enc_table_entries; enc_idx++) {
		et_entry = &sc->enclosure_table[enc_idx];
		if ((et_entry->enclosure_id == le64toh(enc_id)) &&
		    (!et_entry->phy_bits || (et_entry->phy_bits &
		    le32toh(phy_bits))))
			return enc_idx;
	}
	return MPR_ENCTABLE_BAD_IDX;
}

/**
 * _mapping_get_enc_idx_from_handle - get enclosure index from handle
 * @sc: per adapter object
 * @handle: enclosure handle
 *
 * Returns the index of enclosure entry on success or bad index.
 */
static u8
_mapping_get_enc_idx_from_handle(struct mpr_softc *sc, u16 handle)
{
	struct enc_mapping_table *et_entry;
	u8 enc_idx = 0;

	for (enc_idx = 0; enc_idx < sc->num_enc_table_entries; enc_idx++) {
		et_entry = &sc->enclosure_table[enc_idx];
		if (et_entry->missing_count)
			continue;
		if (et_entry->enc_handle == handle)
			return enc_idx;
	}
	return MPR_ENCTABLE_BAD_IDX;
}

/**
 * _mapping_get_high_missing_et_idx - get missing enclosure index
 * @sc: per adapter object
 *
 * Searches through the enclosure table, identifies the enclosure entry
 * with the highest missing count, and returns its index.
 *
 * Returns the index of enclosure entry on success or bad index.
 */
static u8
_mapping_get_high_missing_et_idx(struct mpr_softc *sc)
{
	struct enc_mapping_table *et_entry;
	u8 high_missing_count = 0;
	u8 enc_idx, high_idx = MPR_ENCTABLE_BAD_IDX;

	for (enc_idx = 0; enc_idx < sc->num_enc_table_entries; enc_idx++) {
		et_entry = &sc->enclosure_table[enc_idx];
		if ((et_entry->missing_count > high_missing_count) &&
		    !et_entry->skip_search) {
			high_missing_count = et_entry->missing_count;
			high_idx = enc_idx;
		}
	}
	return high_idx;
}

/**
 * _mapping_get_high_missing_mt_idx - get missing map table index
 * @sc: per adapter object
 *
 * Searches through the map table, identifies the device entry with the
 * highest missing count, and returns its index.
 *
 * Returns the index of map table entry on success or bad index.
 */
static u32
_mapping_get_high_missing_mt_idx(struct mpr_softc *sc)
{
	u32 map_idx, high_idx = MPR_MAPTABLE_BAD_IDX;
	u8 high_missing_count = 0;
	u32 start_idx, end_idx, start_idx_ir, end_idx_ir;
	struct dev_mapping_table *mt_entry;
	u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags);

	start_idx = 0;
	start_idx_ir = 0;
	end_idx_ir = 0;
	end_idx = sc->max_devices;
	if (ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_RESERVED_TARGETID_0)
		start_idx = 1;
	if (sc->ir_firmware) {
		_mapping_get_ir_maprange(sc, &start_idx_ir, &end_idx_ir);
		if (start_idx == start_idx_ir)
			start_idx = end_idx_ir + 1;
		else
			end_idx = start_idx_ir;
	}
	mt_entry = &sc->mapping_table[start_idx];
	for (map_idx = start_idx; map_idx < end_idx; map_idx++, mt_entry++) {
		if (mt_entry->missing_count > high_missing_count) {
			high_missing_count = mt_entry->missing_count;
			high_idx = map_idx;
		}
	}
	return high_idx;
}

/**
 * _mapping_get_ir_mt_idx_from_wwid - get map table index from volume WWID
 * @sc: per adapter object
 * @wwid: world wide unique ID of the volume
 *
 * Returns the index of map table entry on success or bad index.
 */
static u32
_mapping_get_ir_mt_idx_from_wwid(struct mpr_softc *sc, u64 wwid)
{
	u32 start_idx, end_idx, map_idx;
	struct dev_mapping_table *mt_entry;

	_mapping_get_ir_maprange(sc, &start_idx, &end_idx);
	mt_entry = &sc->mapping_table[start_idx];
	for (map_idx = start_idx; map_idx <= end_idx; map_idx++, mt_entry++)
		if (mt_entry->physical_id == wwid)
			return map_idx;
	return MPR_MAPTABLE_BAD_IDX;
}

/**
 * _mapping_get_mt_idx_from_id - get map table index from a device ID
 * @sc: per adapter object
 * @dev_id: device identifier (SAS Address)
 *
 * Returns the index of map table entry on success or bad index.
 */
static u32
_mapping_get_mt_idx_from_id(struct mpr_softc *sc, u64 dev_id)
{
	u32 map_idx;
	struct dev_mapping_table *mt_entry;

	for (map_idx = 0; map_idx < sc->max_devices; map_idx++) {
		mt_entry = &sc->mapping_table[map_idx];
		if (mt_entry->physical_id == dev_id)
			return map_idx;
	}
	return MPR_MAPTABLE_BAD_IDX;
}

/**
 * _mapping_get_ir_mt_idx_from_handle - get map table index from volume handle
 * @sc: per adapter object
 * @volHandle: volume device handle
 *
 * Returns the index of map table entry on success or bad index.
 */
static u32
_mapping_get_ir_mt_idx_from_handle(struct mpr_softc *sc, u16 volHandle)
{
	u32 start_idx, end_idx, map_idx;
	struct dev_mapping_table *mt_entry;

	_mapping_get_ir_maprange(sc, &start_idx, &end_idx);
	mt_entry = &sc->mapping_table[start_idx];
	for (map_idx = start_idx; map_idx <= end_idx; map_idx++, mt_entry++)
		if (mt_entry->dev_handle == volHandle)
			return map_idx;
	return MPR_MAPTABLE_BAD_IDX;
}

/**
 * _mapping_get_mt_idx_from_handle - get map table index from handle
 * @sc: per adapter object
 * @handle: device handle
 *
 * Returns the index of map table entry on success or bad index.
 */
static u32
_mapping_get_mt_idx_from_handle(struct mpr_softc *sc, u16 handle)
{
	u32 map_idx;
	struct dev_mapping_table *mt_entry;

	for (map_idx = 0; map_idx < sc->max_devices; map_idx++) {
		mt_entry = &sc->mapping_table[map_idx];
		if (mt_entry->dev_handle == handle)
			return map_idx;
	}
	return MPR_MAPTABLE_BAD_IDX;
}

/**
 * _mapping_get_free_ir_mt_idx - get first free index for a volume
 * @sc: per adapter object
 *
 * Searches through the mapping table for a free index for a volume; if no
 * free index is found, falls back to the volume entry with the highest
 * missing count.
 *
 * Returns the index of map table entry on success or bad index.
 */
static u32
_mapping_get_free_ir_mt_idx(struct mpr_softc *sc)
{
	u8 high_missing_count = 0;
	u32 start_idx, end_idx, map_idx;
	u32 high_idx = MPR_MAPTABLE_BAD_IDX;
	struct dev_mapping_table *mt_entry;

	/*
	 * The IN_USE flag should be clear if the entry is available to use.
	 * This flag is cleared on initialization and when a volume is
	 * deleted. All other times this flag should be set. If, for some
	 * reason, a free entry cannot be found, look for the entry with the
	 * highest missing count just in case there is one.
	 */
	_mapping_get_ir_maprange(sc, &start_idx, &end_idx);
	mt_entry = &sc->mapping_table[start_idx];
	for (map_idx = start_idx; map_idx <= end_idx; map_idx++, mt_entry++) {
		if (!(mt_entry->device_info & MPR_MAP_IN_USE))
			return map_idx;
		if (mt_entry->missing_count > high_missing_count) {
			high_missing_count = mt_entry->missing_count;
			high_idx = map_idx;
		}
	}
	if (high_idx == MPR_MAPTABLE_BAD_IDX) {
		mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: Could not find a "
		    "free entry in the mapping table for a Volume. The mapping "
		    "table is probably corrupt.\n", __func__);
	}
	return high_idx;
}

/**
 * _mapping_get_free_mt_idx - get first free index for a device
 * @sc: per adapter object
 * @start_idx: offset in the table to start search
 *
 * Returns the index of map table entry on success or bad index.
 */
static u32
_mapping_get_free_mt_idx(struct mpr_softc *sc, u32 start_idx)
{
	u32 map_idx, max_idx = sc->max_devices;
	struct dev_mapping_table *mt_entry = &sc->mapping_table[start_idx];
	u16 volume_mapping_flags;

	volume_mapping_flags = le16toh(sc->ioc_pg8.IRVolumeMappingFlags) &
	    MPI2_IOCPAGE8_IRFLAGS_MASK_VOLUME_MAPPING_MODE;
	if (sc->ir_firmware && (volume_mapping_flags ==
	    MPI2_IOCPAGE8_IRFLAGS_HIGH_VOLUME_MAPPING))
		max_idx -= sc->max_volumes;

	for (map_idx = start_idx; map_idx < max_idx; map_idx++, mt_entry++)
		if (!(mt_entry->device_info &
		    (MPR_MAP_IN_USE | MPR_DEV_RESERVED)))
			return map_idx;
	return MPR_MAPTABLE_BAD_IDX;
}

/**
 * _mapping_get_dpm_idx_from_id - get DPM index from ID
 * @sc: per adapter object
 * @id: volume WWID or enclosure ID or device ID
 * @phy_bits: PHY bitmask to match against the DPM entry, or 0 to ignore
 *
 * Returns the index of DPM entry on success or bad index.
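Two details of this lookup are easy to miss: the 64-bit identifier is stored as two 32-bit words that must be reassembled, and a PHY bitmask of zero on either side acts as a wildcard. A user-space sketch of both rules, with a hypothetical `id_halves` type standing in for the MPI `PhysicalIdentifier` layout:

```c
#include <stdint.h>

// Hypothetical mirror of the split 64-bit identifier layout used by
// Mpi2DriverMap0Entry_t.PhysicalIdentifier (two 32-bit words).
struct id_halves {
	uint32_t Low;
	uint32_t High;
};

// Reassemble the 64-bit WWID from its halves, as the lookup does.
static uint64_t
combine_id(struct id_halves h)
{
	return ((uint64_t)h.High << 32) | h.Low;
}

// PHY-bits match rule used while scanning DPM entries: a zero mask on
// either side acts as a wildcard; otherwise one common bit is enough.
static int
phy_bits_match(uint32_t wanted, uint32_t entry_bits)
{
	return (!wanted || !entry_bits || (wanted & entry_bits) != 0);
}
```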
 */
static u16
_mapping_get_dpm_idx_from_id(struct mpr_softc *sc, u64 id, u32 phy_bits)
{
	u16 entry_num;
	uint64_t PhysicalIdentifier;
	Mpi2DriverMap0Entry_t *dpm_entry;

	dpm_entry = (Mpi2DriverMap0Entry_t *)((u8 *)sc->dpm_pg0 +
	    sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER));
	for (entry_num = 0; entry_num < sc->max_dpm_entries; entry_num++,
	    dpm_entry++) {
		/*
		 * Reassemble the 64-bit identifier for each entry, not just
		 * the first one, so that every DPM entry is actually
		 * compared against the requested ID.
		 */
		PhysicalIdentifier = dpm_entry->PhysicalIdentifier.High;
		PhysicalIdentifier = (PhysicalIdentifier << 32) |
		    dpm_entry->PhysicalIdentifier.Low;
		if ((id == PhysicalIdentifier) && (!phy_bits ||
		    !dpm_entry->PhysicalBitsMapping ||
		    (phy_bits & dpm_entry->PhysicalBitsMapping)))
			return entry_num;
	}
	return MPR_DPM_BAD_IDX;
}

/**
 * _mapping_get_free_dpm_idx - get first available DPM index
 * @sc: per adapter object
 *
 * Returns the index of DPM entry on success or bad index.
 */
static u32
_mapping_get_free_dpm_idx(struct mpr_softc *sc)
{
	u16 entry_num;
	Mpi2DriverMap0Entry_t *dpm_entry;
	u16 current_entry = MPR_DPM_BAD_IDX, missing_cnt, high_missing_cnt = 0;
	u64 physical_id;
	struct dev_mapping_table *mt_entry;
	u32 map_idx;

	for (entry_num = 0; entry_num < sc->max_dpm_entries; entry_num++) {
		dpm_entry = (Mpi2DriverMap0Entry_t *) ((u8 *)sc->dpm_pg0 +
		    sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER));
		dpm_entry += entry_num;
		missing_cnt = dpm_entry->MappingInformation &
		    MPI2_DRVMAP0_MAPINFO_MISSING_MASK;

		/*
		 * If entry is used and not missing, then this entry can't be
		 * used. Look at next one.
		 */
		if (sc->dpm_entry_used[entry_num] && !missing_cnt)
			continue;

		/*
		 * If this entry is not used at all, then the missing count
		 * doesn't matter. Just use this one. Otherwise, keep looking
		 * and make sure the entry with the highest missing count is
		 * used.
*/ if (!sc->dpm_entry_used[entry_num]) { current_entry = entry_num; break; } if ((current_entry == MPR_DPM_BAD_IDX) || (missing_cnt > high_missing_cnt)) { current_entry = entry_num; high_missing_cnt = missing_cnt; } } /* * If an entry has been found to use and it's already marked as used * it means that some device was already using this entry but it's * missing, and that means that the connection between the missing * device's DPM entry and the mapping table needs to be cleared. To do * this, use the Physical ID of the old device still in the DPM entry * to find its mapping table entry, then mark its DPM entry as BAD. */ if ((current_entry != MPR_DPM_BAD_IDX) && sc->dpm_entry_used[current_entry]) { dpm_entry = (Mpi2DriverMap0Entry_t *) ((u8 *)sc->dpm_pg0 + sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); dpm_entry += current_entry; physical_id = dpm_entry->PhysicalIdentifier.High; physical_id = (physical_id << 32) | dpm_entry->PhysicalIdentifier.Low; map_idx = _mapping_get_mt_idx_from_id(sc, physical_id); if (map_idx != MPR_MAPTABLE_BAD_IDX) { mt_entry = &sc->mapping_table[map_idx]; mt_entry->dpm_entry_num = MPR_DPM_BAD_IDX; } } return current_entry; } /** * _mapping_update_ir_missing_cnt - Updates missing count for a volume * @sc: per adapter object * @map_idx: map table index of the volume * @element: IR configuration change element * @wwid: IR volume ID. * * Updates the missing count in the map table and in the DPM entry for a volume * * Returns nothing. */ static void _mapping_update_ir_missing_cnt(struct mpr_softc *sc, u32 map_idx, Mpi2EventIrConfigElement_t *element, u64 wwid) { struct dev_mapping_table *mt_entry; u8 missing_cnt, reason = element->ReasonCode, update_dpm = 1; u16 dpm_idx; Mpi2DriverMap0Entry_t *dpm_entry; /* * Depending on the reason code, update the missing count. Always set * the init_complete flag when here, so just do it first. That flag is * used for volumes to make sure that the DPM entry has been updated. 
* When a volume is deleted, clear the map entry's IN_USE flag so that * the entry can be used again if another volume is created. Also clear * its dev_handle entry so that other functions can't find this volume * by the handle, since it's not defined any longer. */ mt_entry = &sc->mapping_table[map_idx]; mt_entry->init_complete = 1; if ((reason == MPI2_EVENT_IR_CHANGE_RC_ADDED) || (reason == MPI2_EVENT_IR_CHANGE_RC_VOLUME_CREATED)) { mt_entry->missing_count = 0; } else if (reason == MPI2_EVENT_IR_CHANGE_RC_VOLUME_DELETED) { if (mt_entry->missing_count < MPR_MAX_MISSING_COUNT) mt_entry->missing_count++; mt_entry->device_info &= ~MPR_MAP_IN_USE; mt_entry->dev_handle = 0; } /* * If persistent mapping is enabled, update the DPM with the new missing * count for the volume. If the DPM index is bad, get a free one. If * it's bad for a volume that's being deleted do nothing because that * volume doesn't have a DPM entry. */ if (!sc->is_dpm_enable) return; dpm_idx = mt_entry->dpm_entry_num; if (dpm_idx == MPR_DPM_BAD_IDX) { if (reason == MPI2_EVENT_IR_CHANGE_RC_VOLUME_DELETED) { mpr_dprint(sc, MPR_MAPPING, "%s: Volume being deleted " "is not in DPM so DPM missing count will not be " "updated.\n", __func__); return; } } if (dpm_idx == MPR_DPM_BAD_IDX) dpm_idx = _mapping_get_free_dpm_idx(sc); /* * Got the DPM entry for the volume or found a free DPM entry if this is * a new volume. Check if the current information is outdated. 
*/ if (dpm_idx != MPR_DPM_BAD_IDX) { dpm_entry = (Mpi2DriverMap0Entry_t *)((u8 *)sc->dpm_pg0 + sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); dpm_entry += dpm_idx; missing_cnt = dpm_entry->MappingInformation & MPI2_DRVMAP0_MAPINFO_MISSING_MASK; if ((mt_entry->physical_id == le64toh(((u64)dpm_entry->PhysicalIdentifier.High << 32) | (u64)dpm_entry->PhysicalIdentifier.Low)) && (missing_cnt == mt_entry->missing_count)) { mpr_dprint(sc, MPR_MAPPING, "%s: DPM entry for volume " "with target ID %d does not require an update.\n", __func__, mt_entry->id); update_dpm = 0; } } /* * Update the volume's persistent info if it's new or the ID or missing * count has changed. If a good DPM index has not been found by now, * there is no space left in the DPM table. */ if ((dpm_idx != MPR_DPM_BAD_IDX) && update_dpm) { mpr_dprint(sc, MPR_MAPPING, "%s: Update DPM entry for volume " "with target ID %d.\n", __func__, mt_entry->id); mt_entry->dpm_entry_num = dpm_idx; dpm_entry = (Mpi2DriverMap0Entry_t *)((u8 *)sc->dpm_pg0 + sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); dpm_entry += dpm_idx; dpm_entry->PhysicalIdentifier.Low = (0xFFFFFFFF & mt_entry->physical_id); dpm_entry->PhysicalIdentifier.High = (mt_entry->physical_id >> 32); dpm_entry->DeviceIndex = map_idx; dpm_entry->MappingInformation = mt_entry->missing_count; dpm_entry->PhysicalBitsMapping = 0; dpm_entry->Reserved1 = 0; sc->dpm_flush_entry[dpm_idx] = 1; sc->dpm_entry_used[dpm_idx] = 1; } else if (dpm_idx == MPR_DPM_BAD_IDX) { mpr_dprint(sc, MPR_INFO | MPR_MAPPING, "%s: No space to add an " "entry in the DPM table for volume with target ID %d.\n", __func__, mt_entry->id); } } /** * _mapping_add_to_removal_table - add DPM index to the removal table * @sc: per adapter object * @dpm_idx: Index of DPM entry to remove * * Adds a DPM entry number to the removal table. * * Returns nothing. 
 */
static void
_mapping_add_to_removal_table(struct mpr_softc *sc, u16 dpm_idx)
{
	struct map_removal_table *remove_entry;
	u32 i;

	/*
	 * This is only used to remove entries from the DPM in the controller.
	 * If DPM is not enabled, just return.
	 */
	if (!sc->is_dpm_enable)
		return;

	/*
	 * Find the first available removal_table entry and add the new entry
	 * there.
	 */
	remove_entry = sc->removal_table;
	for (i = 0; i < sc->max_devices; i++, remove_entry++) {
		if (remove_entry->dpm_entry_num != MPR_DPM_BAD_IDX)
			continue;

		mpr_dprint(sc, MPR_MAPPING, "%s: Adding DPM entry %d to table "
		    "for removal.\n", __func__, dpm_idx);
		remove_entry->dpm_entry_num = dpm_idx;
		break;
	}
}

/**
 * _mapping_inc_missing_count
 * @sc: per adapter object
 * @map_idx: index into the mapping table for the device that is missing
 *
 * Increment the missing count in the mapping table for a SAS, SATA, or PCIe
 * device that is not responding. If Persistent Mapping is used, increment the
 * DPM entry as well. Currently, this function is only called if the target
 * goes missing, so after initialization has completed. This means that the
 * missing count can only go from 0 to 1 here. The missing count is incremented
 * during initialization as well, so that's where a target's missing count can
 * go past 1.
 *
 * Returns nothing.
 */
static void
_mapping_inc_missing_count(struct mpr_softc *sc, u32 map_idx)
{
	u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags);
	struct dev_mapping_table *mt_entry;
	Mpi2DriverMap0Entry_t *dpm_entry;

	if (map_idx == MPR_MAPTABLE_BAD_IDX) {
		mpr_dprint(sc, MPR_INFO | MPR_MAPPING, "%s: device is already "
		    "removed from mapping table\n", __func__);
		return;
	}
	mt_entry = &sc->mapping_table[map_idx];
	if (mt_entry->missing_count < MPR_MAX_MISSING_COUNT)
		mt_entry->missing_count++;

	/*
	 * When using Enc/Slot mapping, when a device is removed, its mapping
	 * table information should be cleared. Otherwise, the target ID will
	 * be incorrect if this same device is re-added to a different slot.
	 */
	if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) ==
	    MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING) {
		_mapping_clear_map_entry(mt_entry);
	}

	/*
	 * When using device mapping, update the missing count in the DPM
	 * entry, but only if the missing count has changed.
	 */
	if (((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) ==
	    MPI2_IOCPAGE8_FLAGS_DEVICE_PERSISTENCE_MAPPING) &&
	    sc->is_dpm_enable &&
	    mt_entry->dpm_entry_num != MPR_DPM_BAD_IDX) {
		dpm_entry = (Mpi2DriverMap0Entry_t *) ((u8 *)sc->dpm_pg0 +
		    sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER));
		dpm_entry += mt_entry->dpm_entry_num;
		if (dpm_entry->MappingInformation != mt_entry->missing_count) {
			dpm_entry->MappingInformation =
			    mt_entry->missing_count;
			sc->dpm_flush_entry[mt_entry->dpm_entry_num] = 1;
		}
	}
}

/**
 * _mapping_update_missing_count - Update missing count for a device
 * @sc: per adapter object
 * @topo_change: Topology change event entry
 *
 * Searches through the topology change list; for any device found to be not
 * responding, its associated map table entry and DPM entry are updated.
 *
 * Returns nothing.
 */
static void
_mapping_update_missing_count(struct mpr_softc *sc,
    struct _map_topology_change *topo_change)
{
	u8 entry;
	struct _map_phy_change *phy_change;
	u32 map_idx;

	for (entry = 0; entry < topo_change->num_entries; entry++) {
		phy_change = &topo_change->phy_details[entry];
		if (!phy_change->dev_handle || (phy_change->reason !=
		    MPI2_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING))
			continue;
		map_idx = _mapping_get_mt_idx_from_handle(sc, phy_change->
		    dev_handle);
		phy_change->is_processed = 1;
		_mapping_inc_missing_count(sc, map_idx);
	}
}

/**
 * _mapping_update_pcie_missing_count - Update missing count for a PCIe device
 * @sc: per adapter object
 * @topo_change: Topology change event entry
 *
 * Searches through the PCIe topology change list; for any device found to be
 * not responding, its associated map table entry and DPM entry are updated.
 *
 * Returns nothing.
 */
static void
_mapping_update_pcie_missing_count(struct mpr_softc *sc,
    struct _map_pcie_topology_change *topo_change)
{
	u8 entry;
	struct _map_port_change *port_change;
	u32 map_idx;

	for (entry = 0; entry < topo_change->num_entries; entry++) {
		port_change = &topo_change->port_details[entry];
		if (!port_change->dev_handle || (port_change->reason !=
		    MPI26_EVENT_PCIE_TOPO_PS_NOT_RESPONDING))
			continue;
		map_idx = _mapping_get_mt_idx_from_handle(sc, port_change->
		    dev_handle);
		port_change->is_processed = 1;
		_mapping_inc_missing_count(sc, map_idx);
	}
}

/**
 * _mapping_find_enc_map_space - find map table entries for an enclosure
 * @sc: per adapter object
 * @et_entry: enclosure entry
 *
 * Searches through the mapping table, defragments it, and provides contiguous
 * space in the map table for a particular enclosure entry.
 *
 * Returns start index in map table or bad index.
 */
static u32
_mapping_find_enc_map_space(struct mpr_softc *sc,
    struct enc_mapping_table *et_entry)
{
	u16 vol_mapping_flags;
	u32 skip_count, end_of_table, map_idx, enc_idx;
	u16 num_found;
	u32 start_idx = MPR_MAPTABLE_BAD_IDX;
	struct dev_mapping_table *mt_entry;
	struct enc_mapping_table *enc_entry;
	unsigned char done_flag = 0, found_space;
	u16 max_num_phy_ids = le16toh(sc->ioc_pg8.MaxNumPhysicalMappedIDs);

	skip_count = sc->num_rsvd_entries;
	num_found = 0;

	vol_mapping_flags = le16toh(sc->ioc_pg8.IRVolumeMappingFlags) &
	    MPI2_IOCPAGE8_IRFLAGS_MASK_VOLUME_MAPPING_MODE;

	/*
	 * The end of the mapping table depends on where volumes are kept, if
	 * IR is enabled.
	 */
	if (!sc->ir_firmware)
		end_of_table = sc->max_devices;
	else if (vol_mapping_flags == MPI2_IOCPAGE8_IRFLAGS_LOW_VOLUME_MAPPING)
		end_of_table = sc->max_devices;
	else
		end_of_table = sc->max_devices - sc->max_volumes;

	/*
	 * The skip_count is the number of entries that are reserved at the
	 * beginning of the mapping table. But, it does not include the number
	 * of Physical IDs that are reserved for direct attached devices.
Look * through the mapping table after these reserved entries to see if * the devices for this enclosure are already mapped. The PHY bit check * is used to make sure that at least one PHY bit is common between the * enclosure and the device that is already mapped. */ mpr_dprint(sc, MPR_MAPPING, "%s: Looking for space in the mapping " "table for added enclosure.\n", __func__); for (map_idx = (max_num_phy_ids + skip_count); map_idx < end_of_table; map_idx++) { mt_entry = &sc->mapping_table[map_idx]; if ((et_entry->enclosure_id == mt_entry->physical_id) && (!mt_entry->phy_bits || (mt_entry->phy_bits & et_entry->phy_bits))) { num_found += 1; if (num_found == et_entry->num_slots) { start_idx = (map_idx - num_found) + 1; mpr_dprint(sc, MPR_MAPPING, "%s: Found space " "in the mapping for enclosure at map index " "%d.\n", __func__, start_idx); return start_idx; } } else num_found = 0; } /* * If the enclosure's devices are not mapped already, look for * contiguous entries in the mapping table that are not reserved. If * enough entries are found, return the starting index for that space. */ num_found = 0; for (map_idx = (max_num_phy_ids + skip_count); map_idx < end_of_table; map_idx++) { mt_entry = &sc->mapping_table[map_idx]; if (!(mt_entry->device_info & MPR_DEV_RESERVED)) { num_found += 1; if (num_found == et_entry->num_slots) { start_idx = (map_idx - num_found) + 1; mpr_dprint(sc, MPR_MAPPING, "%s: Found space " "in the mapping for enclosure at map index " "%d.\n", __func__, start_idx); return start_idx; } } else num_found = 0; } /* * If here, it means that not enough space in the mapping table was * found to support this enclosure, so go through the enclosure table to * see if any enclosure entries have a missing count. If so, get the * enclosure with the highest missing count and check it to see if there * is enough space for the new enclosure. 
*/ while (!done_flag) { enc_idx = _mapping_get_high_missing_et_idx(sc); if (enc_idx == MPR_ENCTABLE_BAD_IDX) { mpr_dprint(sc, MPR_MAPPING, "%s: Not enough space was " "found in the mapping for the added enclosure.\n", __func__); return MPR_MAPTABLE_BAD_IDX; } /* * Found a missing enclosure. Set the skip_search flag so this * enclosure is not checked again for a high missing count if * the loop continues. This way, all missing enclosures can * have their space added together to find enough space in the * mapping table for the added enclosure. The space must be * contiguous. */ mpr_dprint(sc, MPR_MAPPING, "%s: Space from a missing " "enclosure was found.\n", __func__); enc_entry = &sc->enclosure_table[enc_idx]; enc_entry->skip_search = 1; /* * Unmark all of the missing enclosure's device's reserved * space. These will be remarked as reserved if this missing * enclosure's space is not used. */ mpr_dprint(sc, MPR_MAPPING, "%s: Clear the reserved flag for " "all of the map entries for the enclosure.\n", __func__); mt_entry = &sc->mapping_table[enc_entry->start_index]; for (map_idx = enc_entry->start_index; map_idx < (enc_entry->start_index + enc_entry->num_slots); map_idx++, mt_entry++) mt_entry->device_info &= ~MPR_DEV_RESERVED; /* * Now that space has been unreserved, check again to see if * enough space is available for the new enclosure. */ mpr_dprint(sc, MPR_MAPPING, "%s: Check if new mapping space is " "enough for the new enclosure.\n", __func__); found_space = 0; num_found = 0; for (map_idx = (max_num_phy_ids + skip_count); map_idx < end_of_table; map_idx++) { mt_entry = &sc->mapping_table[map_idx]; if (!(mt_entry->device_info & MPR_DEV_RESERVED)) { num_found += 1; if (num_found == et_entry->num_slots) { start_idx = (map_idx - num_found) + 1; found_space = 1; break; } } else num_found = 0; } if (!found_space) continue; /* * If enough space was found, all of the missing enclosures that * will be used for the new enclosure must be added to the * removal table. 
Then all mappings for the enclosure's devices * and for the enclosure itself need to be cleared. There may be * more than one enclosure to add to the removal table and * clear. */ mpr_dprint(sc, MPR_MAPPING, "%s: Found space in the mapping " "for enclosure at map index %d.\n", __func__, start_idx); for (map_idx = start_idx; map_idx < (start_idx + num_found); map_idx++) { enc_entry = sc->enclosure_table; for (enc_idx = 0; enc_idx < sc->num_enc_table_entries; enc_idx++, enc_entry++) { if (map_idx < enc_entry->start_index || map_idx > (enc_entry->start_index + enc_entry->num_slots)) continue; if (!enc_entry->removal_flag) { mpr_dprint(sc, MPR_MAPPING, "%s: " "Enclosure %d will be removed from " "the mapping table.\n", __func__, enc_idx); enc_entry->removal_flag = 1; _mapping_add_to_removal_table(sc, enc_entry->dpm_entry_num); } mt_entry = &sc->mapping_table[map_idx]; _mapping_clear_map_entry(mt_entry); if (map_idx == (enc_entry->start_index + enc_entry->num_slots - 1)) _mapping_clear_enc_entry(et_entry); } } /* * During the search for space for this enclosure, some entries * in the mapping table may have been unreserved. Go back and * change all of these to reserved again. Only the enclosures * with the removal_flag set should be left as unreserved. The * skip_search flag needs to be cleared as well so that the * enclosure's space will be looked at the next time space is * needed. 
	 */
		enc_entry = sc->enclosure_table;
		for (enc_idx = 0; enc_idx < sc->num_enc_table_entries;
		    enc_idx++, enc_entry++) {
			if (!enc_entry->removal_flag) {
				mpr_dprint(sc, MPR_MAPPING, "%s: Reset the "
				    "reserved flag for all of the map entries "
				    "for enclosure %d.\n", __func__, enc_idx);
				mt_entry = &sc->mapping_table[enc_entry->
				    start_index];
				for (map_idx = enc_entry->start_index;
				    map_idx < (enc_entry->start_index +
				    enc_entry->num_slots); map_idx++,
				    mt_entry++)
					mt_entry->device_info |=
					    MPR_DEV_RESERVED;
				enc_entry->skip_search = 0;
			}
		}
		done_flag = 1;
	}
	return start_idx;
}

/**
 * _mapping_get_dev_info - get information about newly added devices
 * @sc: per adapter object
 * @topo_change: Topology change event entry
 *
 * Searches through the topology change event list, issues SAS Device Page 0
 * requests for the newly added devices, and reserves entries in the tables.
 *
 * Returns nothing.
 */
static void
_mapping_get_dev_info(struct mpr_softc *sc,
    struct _map_topology_change *topo_change)
{
	u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags);
	Mpi2ConfigReply_t mpi_reply;
	Mpi2SasDevicePage0_t sas_device_pg0;
	u8 entry, enc_idx, phy_idx;
	u32 map_idx, index, device_info;
	struct _map_phy_change *phy_change, *tmp_phy_change;
	uint64_t sas_address;
	struct enc_mapping_table *et_entry;
	struct dev_mapping_table *mt_entry;
	u8 add_code = MPI2_EVENT_SAS_TOPO_RC_TARG_ADDED;
	int rc = 1;

	for (entry = 0; entry < topo_change->num_entries; entry++) {
		phy_change = &topo_change->phy_details[entry];
		if (phy_change->is_processed || !phy_change->dev_handle ||
		    phy_change->reason != MPI2_EVENT_SAS_TOPO_RC_TARG_ADDED)
			continue;

		if (mpr_config_get_sas_device_pg0(sc, &mpi_reply,
		    &sas_device_pg0, MPI2_SAS_DEVICE_PGAD_FORM_HANDLE,
		    phy_change->dev_handle)) {
			phy_change->is_processed = 1;
			continue;
		}

		/*
		 * Always get SATA Identify information because this is used
		 * to determine if Start/Stop Unit should be sent to the drive
		 * when the system is shutdown.
*/ device_info = le32toh(sas_device_pg0.DeviceInfo); sas_address = le32toh(sas_device_pg0.SASAddress.High); sas_address = (sas_address << 32) | le32toh(sas_device_pg0.SASAddress.Low); if ((device_info & MPI2_SAS_DEVICE_INFO_END_DEVICE) && (device_info & MPI2_SAS_DEVICE_INFO_SATA_DEVICE)) { rc = mprsas_get_sas_address_for_sata_disk(sc, &sas_address, phy_change->dev_handle, device_info, &phy_change->is_SATA_SSD); if (rc) { mpr_dprint(sc, MPR_ERROR, "%s: failed to get " "disk type (SSD or HDD) and SAS Address " "for SATA device with handle 0x%04x\n", __func__, phy_change->dev_handle); } } phy_change->physical_id = sas_address; phy_change->slot = le16toh(sas_device_pg0.Slot); phy_change->device_info = device_info; /* * When using Enc/Slot mapping, if this device is an enclosure * make sure that all of its slots can fit into the mapping * table. */ if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING) { /* * The enclosure should already be in the enclosure * table due to the Enclosure Add event. If not, just * continue, nothing can be done. */ enc_idx = _mapping_get_enc_idx_from_handle(sc, topo_change->enc_handle); if (enc_idx == MPR_ENCTABLE_BAD_IDX) { phy_change->is_processed = 1; mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: " "failed to add the device with handle " "0x%04x because the enclosure is not in " "the mapping table\n", __func__, phy_change->dev_handle); continue; } if (!((phy_change->device_info & MPI2_SAS_DEVICE_INFO_END_DEVICE) && (phy_change->device_info & (MPI2_SAS_DEVICE_INFO_SSP_TARGET | MPI2_SAS_DEVICE_INFO_STP_TARGET | MPI2_SAS_DEVICE_INFO_SATA_DEVICE)))) { phy_change->is_processed = 1; continue; } et_entry = &sc->enclosure_table[enc_idx]; /* * If the enclosure already has a start_index, it's been * mapped, so go to the next Topo change. */ if (et_entry->start_index != MPR_MAPTABLE_BAD_IDX) continue; /* * If the Expander Handle is 0, the devices are direct * attached. 
In that case, the start_index must be just * after the reserved entries. Otherwise, find space in * the mapping table for the enclosure's devices. */ if (!topo_change->exp_handle) { map_idx = sc->num_rsvd_entries; et_entry->start_index = map_idx; } else { map_idx = _mapping_find_enc_map_space(sc, et_entry); et_entry->start_index = map_idx; /* * If space cannot be found to hold all of the * enclosure's devices in the mapping table, * there's no need to continue checking the * other devices in this event. Set all of the * phy_details for this event (if the change is * for an add) as already processed because none * of these devices can be added to the mapping * table. */ if (et_entry->start_index == MPR_MAPTABLE_BAD_IDX) { mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: failed to add the enclosure " "with ID 0x%016jx because there is " "no free space available in the " "mapping table for all of the " "enclosure's devices.\n", __func__, (uintmax_t)et_entry->enclosure_id); phy_change->is_processed = 1; for (phy_idx = 0; phy_idx < topo_change->num_entries; phy_idx++) { tmp_phy_change = &topo_change->phy_details [phy_idx]; if (tmp_phy_change->reason == add_code) tmp_phy_change-> is_processed = 1; } break; } } /* * Found space in the mapping table for this enclosure. * Initialize each mapping table entry for the * enclosure. 
			 */
			mpr_dprint(sc, MPR_MAPPING, "%s: Initialize %d map "
			    "entries for the enclosure, starting at map index "
			    "%d.\n", __func__, et_entry->num_slots, map_idx);
			mt_entry = &sc->mapping_table[map_idx];
			for (index = map_idx; index < (et_entry->num_slots +
			    map_idx); index++, mt_entry++) {
				mt_entry->device_info = MPR_DEV_RESERVED;
				mt_entry->physical_id = et_entry->enclosure_id;
				mt_entry->phy_bits = et_entry->phy_bits;
				mt_entry->missing_count = 0;
			}
		}
	}
}

/**
 * _mapping_get_pcie_dev_info - get information about newly added PCIe devices
 * @sc: per adapter object
 * @topo_change: Topology change event entry
 *
 * Searches through the PCIe topology change event list and issues PCIe device
 * pg0 requests for the newly added PCIe device. If the device is in an
 * enclosure, search for available space in the enclosure mapping table for the
 * device and reserve that space.
 *
 * Returns nothing.
 */
static void
_mapping_get_pcie_dev_info(struct mpr_softc *sc,
    struct _map_pcie_topology_change *topo_change)
{
	u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags);
	Mpi2ConfigReply_t mpi_reply;
	Mpi26PCIeDevicePage0_t pcie_device_pg0;
	u8 entry, enc_idx, port_idx;
	u32 map_idx, index;
	struct _map_port_change *port_change, *tmp_port_change;
	uint64_t pcie_wwid;
	struct enc_mapping_table *et_entry;
	struct dev_mapping_table *mt_entry;
	u8 add_code = MPI26_EVENT_PCIE_TOPO_PS_DEV_ADDED;

	for (entry = 0; entry < topo_change->num_entries; entry++) {
		port_change = &topo_change->port_details[entry];
		if (port_change->is_processed || !port_change->dev_handle ||
		    port_change->reason != MPI26_EVENT_PCIE_TOPO_PS_DEV_ADDED)
			continue;
		if (mpr_config_get_pcie_device_pg0(sc, &mpi_reply,
		    &pcie_device_pg0, MPI26_PCIE_DEVICE_PGAD_FORM_HANDLE,
		    port_change->dev_handle)) {
			port_change->is_processed = 1;
			continue;
		}

		pcie_wwid = pcie_device_pg0.WWID.High;
		pcie_wwid = (pcie_wwid << 32) | pcie_device_pg0.WWID.Low;
		port_change->physical_id = pcie_wwid;
		port_change->slot = le16toh(pcie_device_pg0.Slot);
		port_change->device_info =
le32toh(pcie_device_pg0.DeviceInfo); /* * When using Enc/Slot mapping, if this device is an enclosure * make sure that all of its slots can fit into the mapping * table. */ if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING) { /* * The enclosure should already be in the enclosure * table due to the Enclosure Add event. If not, just * continue, nothing can be done. */ enc_idx = _mapping_get_enc_idx_from_handle(sc, topo_change->enc_handle); if (enc_idx == MPR_ENCTABLE_BAD_IDX) { port_change->is_processed = 1; mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: " "failed to add the device with handle " "0x%04x because the enclosure is not in " "the mapping table\n", __func__, port_change->dev_handle); continue; } if (!(port_change->device_info & MPI26_PCIE_DEVINFO_NVME)) { port_change->is_processed = 1; continue; } et_entry = &sc->enclosure_table[enc_idx]; /* * If the enclosure already has a start_index, it's been * mapped, so go to the next Topo change. */ if (et_entry->start_index != MPR_MAPTABLE_BAD_IDX) continue; /* * If the Switch Handle is 0, the devices are direct * attached. In that case, the start_index must be just * after the reserved entries. Otherwise, find space in * the mapping table for the enclosure's devices. */ if (!topo_change->switch_dev_handle) { map_idx = sc->num_rsvd_entries; et_entry->start_index = map_idx; } else { map_idx = _mapping_find_enc_map_space(sc, et_entry); et_entry->start_index = map_idx; /* * If space cannot be found to hold all of the * enclosure's devices in the mapping table, * there's no need to continue checking the * other devices in this event. Set all of the * port_details for this event (if the change is * for an add) as already processed because none * of these devices can be added to the mapping * table. 
				 */
				if (et_entry->start_index ==
				    MPR_MAPTABLE_BAD_IDX) {
					mpr_dprint(sc, MPR_ERROR | MPR_MAPPING,
					    "%s: failed to add the enclosure "
					    "with ID 0x%016jx because there is "
					    "no free space available in the "
					    "mapping table for all of the "
					    "enclosure's devices.\n", __func__,
					    (uintmax_t)et_entry->enclosure_id);
					port_change->is_processed = 1;
					for (port_idx = 0; port_idx <
					    topo_change->num_entries;
					    port_idx++) {
						tmp_port_change =
						    &topo_change->port_details
						    [port_idx];
						if (tmp_port_change->reason ==
						    add_code)
							tmp_port_change->
							    is_processed = 1;
					}
					break;
				}
			}

			/*
			 * Found space in the mapping table for this enclosure.
			 * Initialize each mapping table entry for the
			 * enclosure.
			 */
			mpr_dprint(sc, MPR_MAPPING, "%s: Initialize %d map "
			    "entries for the enclosure, starting at map index "
			    "%d.\n", __func__, et_entry->num_slots, map_idx);
			mt_entry = &sc->mapping_table[map_idx];
			for (index = map_idx; index < (et_entry->num_slots +
			    map_idx); index++, mt_entry++) {
				mt_entry->device_info = MPR_DEV_RESERVED;
				mt_entry->physical_id = et_entry->enclosure_id;
				mt_entry->phy_bits = et_entry->phy_bits;
				mt_entry->missing_count = 0;
			}
		}
	}
}

/**
 * _mapping_set_mid_to_eid - set map table data from enclosure table
 * @sc: per adapter object
 * @et_entry: enclosure entry
 *
 * Returns nothing.
 */
static inline void
_mapping_set_mid_to_eid(struct mpr_softc *sc,
    struct enc_mapping_table *et_entry)
{
	struct dev_mapping_table *mt_entry;
	u16 slots = et_entry->num_slots, map_idx;
	u32 start_idx = et_entry->start_index;

	if (start_idx != MPR_MAPTABLE_BAD_IDX) {
		mt_entry = &sc->mapping_table[start_idx];
		for (map_idx = 0; map_idx < slots; map_idx++, mt_entry++)
			mt_entry->physical_id = et_entry->enclosure_id;
	}
}

/**
 * _mapping_clear_removed_entries - mark the entries to be cleared
 * @sc: per adapter object
 *
 * Searches through the removal table and marks the entries that need to be
 * flushed to DPM, and also updates the map table and enclosure table by
 * clearing the corresponding entries.
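The enclosure-table cleanup below closes gaps by shifting the tail of the table left one slot at a time. Logically that is an array compaction; a user-space sketch of the equivalent two-pointer form, with a hypothetical `enc` record where a zero `enc_handle` marks a removed enclosure:

```c
#include <stdint.h>

// Hypothetical enclosure record: enc_handle == 0 marks a removed
// enclosure whose slot must be compacted away.
struct enc {
	uint16_t enc_handle;
};

// Close the gaps so live entries stay contiguous; returns the new
// count. The driver shifts entries one gap at a time, which ends up
// equivalent to this two-pointer compaction.
static uint32_t
compact_entries(struct enc *tbl, uint32_t n)
{
	uint32_t src, dst = 0;

	for (src = 0; src < n; src++) {
		if (tbl[src].enc_handle == 0)
			continue;
		tbl[dst++] = tbl[src];
	}
	return dst;
}
```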
* * Returns nothing */ static void _mapping_clear_removed_entries(struct mpr_softc *sc) { u32 remove_idx; struct map_removal_table *remove_entry; Mpi2DriverMap0Entry_t *dpm_entry; u8 done_flag = 0, num_entries, m, i; struct enc_mapping_table *et_entry, *from, *to; u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags); if (sc->is_dpm_enable) { remove_entry = sc->removal_table; for (remove_idx = 0; remove_idx < sc->max_devices; remove_idx++, remove_entry++) { if (remove_entry->dpm_entry_num != MPR_DPM_BAD_IDX) { dpm_entry = (Mpi2DriverMap0Entry_t *) ((u8 *) sc->dpm_pg0 + sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); dpm_entry += remove_entry->dpm_entry_num; dpm_entry->PhysicalIdentifier.Low = 0; dpm_entry->PhysicalIdentifier.High = 0; dpm_entry->DeviceIndex = 0; dpm_entry->MappingInformation = 0; dpm_entry->PhysicalBitsMapping = 0; sc->dpm_flush_entry[remove_entry-> dpm_entry_num] = 1; sc->dpm_entry_used[remove_entry->dpm_entry_num] = 0; remove_entry->dpm_entry_num = MPR_DPM_BAD_IDX; } } } /* * When using Enc/Slot mapping, if a new enclosure was added and old * enclosure space was needed, the enclosure table may now have gaps * that need to be closed. All enclosure mappings need to be contiguous * so that space can be reused correctly if available. 
*/ if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING) { num_entries = sc->num_enc_table_entries; while (!done_flag) { done_flag = 1; et_entry = sc->enclosure_table; for (i = 0; i < num_entries; i++, et_entry++) { if (!et_entry->enc_handle && et_entry-> init_complete) { done_flag = 0; if (i != (num_entries - 1)) { from = &sc->enclosure_table [i+1]; to = &sc->enclosure_table[i]; for (m = i; m < (num_entries - 1); m++, from++, to++) { _mapping_set_mid_to_eid (sc, to); *to = *from; } _mapping_clear_enc_entry(to); sc->num_enc_table_entries--; num_entries = sc->num_enc_table_entries; } else { _mapping_clear_enc_entry (et_entry); sc->num_enc_table_entries--; num_entries = sc->num_enc_table_entries; } } } } } } /** * _mapping_add_new_device - Add the new device into mapping table * @sc: per adapter object * @topo_change: Topology change event entry * * Search through the topology change event list and update map table, * enclosure table and DPM pages for the newly added devices.
* * Returns nothing */ static void _mapping_add_new_device(struct mpr_softc *sc, struct _map_topology_change *topo_change) { u8 enc_idx, missing_cnt, is_removed = 0; u16 dpm_idx; u32 search_idx, map_idx; u32 entry; struct dev_mapping_table *mt_entry; struct enc_mapping_table *et_entry; struct _map_phy_change *phy_change; u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags); Mpi2DriverMap0Entry_t *dpm_entry; uint64_t temp64_var; u8 map_shift = MPI2_DRVMAP0_MAPINFO_SLOT_SHIFT; u8 hdr_sz = sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER); u16 max_num_phy_ids = le16toh(sc->ioc_pg8.MaxNumPhysicalMappedIDs); for (entry = 0; entry < topo_change->num_entries; entry++) { phy_change = &topo_change->phy_details[entry]; if (phy_change->is_processed) continue; if (phy_change->reason != MPI2_EVENT_SAS_TOPO_RC_TARG_ADDED || !phy_change->dev_handle) { phy_change->is_processed = 1; continue; } if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING) { enc_idx = _mapping_get_enc_idx_from_handle (sc, topo_change->enc_handle); if (enc_idx == MPR_ENCTABLE_BAD_IDX) { phy_change->is_processed = 1; mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: " "failed to add the device with handle " "0x%04x because the enclosure is not in " "the mapping table\n", __func__, phy_change->dev_handle); continue; } /* * If the enclosure's start_index is BAD here, it means * that there is no room in the mapping table to cover * all of the devices that could be in the enclosure. * There's no reason to process any of the devices for * this enclosure since they can't be mapped. 
*/ et_entry = &sc->enclosure_table[enc_idx]; if (et_entry->start_index == MPR_MAPTABLE_BAD_IDX) { phy_change->is_processed = 1; mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: " "failed to add the device with handle " "0x%04x because there is no free space " "available in the mapping table\n", __func__, phy_change->dev_handle); continue; } /* * Add this device to the mapping table at the correct * offset where space was found to map the enclosure. * Then setup the DPM entry information if being used. */ map_idx = et_entry->start_index + phy_change->slot - et_entry->start_slot; mt_entry = &sc->mapping_table[map_idx]; mt_entry->physical_id = phy_change->physical_id; mt_entry->id = map_idx; mt_entry->dev_handle = phy_change->dev_handle; mt_entry->missing_count = 0; mt_entry->dpm_entry_num = et_entry->dpm_entry_num; mt_entry->device_info = phy_change->device_info | (MPR_DEV_RESERVED | MPR_MAP_IN_USE); if (sc->is_dpm_enable) { dpm_idx = et_entry->dpm_entry_num; if (dpm_idx == MPR_DPM_BAD_IDX) dpm_idx = _mapping_get_dpm_idx_from_id (sc, et_entry->enclosure_id, et_entry->phy_bits); if (dpm_idx == MPR_DPM_BAD_IDX) { dpm_idx = _mapping_get_free_dpm_idx(sc); if (dpm_idx != MPR_DPM_BAD_IDX) { dpm_entry = (Mpi2DriverMap0Entry_t *) ((u8 *) sc->dpm_pg0 + hdr_sz); dpm_entry += dpm_idx; dpm_entry-> PhysicalIdentifier.Low = (0xFFFFFFFF & et_entry->enclosure_id); dpm_entry-> PhysicalIdentifier.High = (et_entry->enclosure_id >> 32); dpm_entry->DeviceIndex = (U16)et_entry->start_index; dpm_entry->MappingInformation = et_entry->num_slots; dpm_entry->MappingInformation <<= map_shift; dpm_entry->PhysicalBitsMapping = et_entry->phy_bits; et_entry->dpm_entry_num = dpm_idx; sc->dpm_entry_used[dpm_idx] = 1; sc->dpm_flush_entry[dpm_idx] = 1; phy_change->is_processed = 1; } else { phy_change->is_processed = 1; mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: failed " "to add the device with " "handle 0x%04x to " "persistent table because " "there is no free space " "available\n", __func__, 
phy_change->dev_handle); } } else { et_entry->dpm_entry_num = dpm_idx; mt_entry->dpm_entry_num = dpm_idx; } } et_entry->init_complete = 1; } else if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_DEVICE_PERSISTENCE_MAPPING) { /* * Get the mapping table index for this device. If it's * not in the mapping table yet, find a free entry if * one is available. If there are no free entries, look * for the entry that has the highest missing count. If * none of that works to find an entry in the mapping * table, there is a problem. Log a message and just * continue on. */ map_idx = _mapping_get_mt_idx_from_id (sc, phy_change->physical_id); if (map_idx == MPR_MAPTABLE_BAD_IDX) { search_idx = sc->num_rsvd_entries; if (topo_change->exp_handle) search_idx += max_num_phy_ids; map_idx = _mapping_get_free_mt_idx(sc, search_idx); } /* * If an entry will be used that has a missing device, * clear its entry from the DPM in the controller. */ if (map_idx == MPR_MAPTABLE_BAD_IDX) { map_idx = _mapping_get_high_missing_mt_idx(sc); if (map_idx != MPR_MAPTABLE_BAD_IDX) { mt_entry = &sc->mapping_table[map_idx]; _mapping_add_to_removal_table(sc, mt_entry->dpm_entry_num); is_removed = 1; mt_entry->init_complete = 0; } } if (map_idx != MPR_MAPTABLE_BAD_IDX) { mt_entry = &sc->mapping_table[map_idx]; mt_entry->physical_id = phy_change->physical_id; mt_entry->id = map_idx; mt_entry->dev_handle = phy_change->dev_handle; mt_entry->missing_count = 0; mt_entry->device_info = phy_change->device_info | (MPR_DEV_RESERVED | MPR_MAP_IN_USE); } else { phy_change->is_processed = 1; mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: " "failed to add the device with handle " "0x%04x because there is no free space " "available in the mapping table\n", __func__, phy_change->dev_handle); continue; } if (sc->is_dpm_enable) { if (mt_entry->dpm_entry_num != MPR_DPM_BAD_IDX) { dpm_idx = mt_entry->dpm_entry_num; dpm_entry = (Mpi2DriverMap0Entry_t *) ((u8 *)sc->dpm_pg0 + hdr_sz); dpm_entry 
+= dpm_idx; missing_cnt = dpm_entry-> MappingInformation & MPI2_DRVMAP0_MAPINFO_MISSING_MASK; temp64_var = dpm_entry-> PhysicalIdentifier.High; temp64_var = (temp64_var << 32) | dpm_entry->PhysicalIdentifier.Low; /* * If the Mapping Table's info is not * the same as the DPM entry, clear the * init_complete flag so that it's * updated. */ if ((mt_entry->physical_id == temp64_var) && !missing_cnt) mt_entry->init_complete = 1; else mt_entry->init_complete = 0; } else { dpm_idx = _mapping_get_free_dpm_idx(sc); mt_entry->init_complete = 0; } if (dpm_idx != MPR_DPM_BAD_IDX && !mt_entry->init_complete) { mt_entry->dpm_entry_num = dpm_idx; dpm_entry = (Mpi2DriverMap0Entry_t *) ((u8 *)sc->dpm_pg0 + hdr_sz); dpm_entry += dpm_idx; dpm_entry->PhysicalIdentifier.Low = (0xFFFFFFFF & mt_entry->physical_id); dpm_entry->PhysicalIdentifier.High = (mt_entry->physical_id >> 32); dpm_entry->DeviceIndex = (U16) map_idx; dpm_entry->MappingInformation = 0; dpm_entry->PhysicalBitsMapping = 0; sc->dpm_entry_used[dpm_idx] = 1; sc->dpm_flush_entry[dpm_idx] = 1; phy_change->is_processed = 1; } else if (dpm_idx == MPR_DPM_BAD_IDX) { phy_change->is_processed = 1; mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: failed to add the device with " "handle 0x%04x to persistent table " "because there is no free space " "available\n", __func__, phy_change->dev_handle); } } mt_entry->init_complete = 1; } phy_change->is_processed = 1; } if (is_removed) _mapping_clear_removed_entries(sc); } /** * _mapping_add_new_pcie_device -Add the new PCIe device into mapping table * @sc: per adapter object * @topo_change: Topology change event entry * * Search through the PCIe topology change event list and update map table, * enclosure table and DPM pages for the newly added devices. 
* * Returns nothing */ static void _mapping_add_new_pcie_device(struct mpr_softc *sc, struct _map_pcie_topology_change *topo_change) { u8 enc_idx, missing_cnt, is_removed = 0; u16 dpm_idx; u32 search_idx, map_idx; u32 entry; struct dev_mapping_table *mt_entry; struct enc_mapping_table *et_entry; struct _map_port_change *port_change; u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags); Mpi2DriverMap0Entry_t *dpm_entry; uint64_t temp64_var; u8 map_shift = MPI2_DRVMAP0_MAPINFO_SLOT_SHIFT; u8 hdr_sz = sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER); u16 max_num_phy_ids = le16toh(sc->ioc_pg8.MaxNumPhysicalMappedIDs); for (entry = 0; entry < topo_change->num_entries; entry++) { port_change = &topo_change->port_details[entry]; if (port_change->is_processed) continue; if (port_change->reason != MPI26_EVENT_PCIE_TOPO_PS_DEV_ADDED || !port_change->dev_handle) { port_change->is_processed = 1; continue; } if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING) { enc_idx = _mapping_get_enc_idx_from_handle (sc, topo_change->enc_handle); if (enc_idx == MPR_ENCTABLE_BAD_IDX) { port_change->is_processed = 1; mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: " "failed to add the device with handle " "0x%04x because the enclosure is not in " "the mapping table\n", __func__, port_change->dev_handle); continue; } /* * If the enclosure's start_index is BAD here, it means * that there is no room in the mapping table to cover * all of the devices that could be in the enclosure. * There's no reason to process any of the devices for * this enclosure since they can't be mapped. 
*/ et_entry = &sc->enclosure_table[enc_idx]; if (et_entry->start_index == MPR_MAPTABLE_BAD_IDX) { port_change->is_processed = 1; mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: " "failed to add the device with handle " "0x%04x because there is no free space " "available in the mapping table\n", __func__, port_change->dev_handle); continue; } /* * Add this device to the mapping table at the correct * offset where space was found to map the enclosure. * Then setup the DPM entry information if being used. */ map_idx = et_entry->start_index + port_change->slot - et_entry->start_slot; mt_entry = &sc->mapping_table[map_idx]; mt_entry->physical_id = port_change->physical_id; mt_entry->id = map_idx; mt_entry->dev_handle = port_change->dev_handle; mt_entry->missing_count = 0; mt_entry->dpm_entry_num = et_entry->dpm_entry_num; mt_entry->device_info = port_change->device_info | (MPR_DEV_RESERVED | MPR_MAP_IN_USE); if (sc->is_dpm_enable) { dpm_idx = et_entry->dpm_entry_num; if (dpm_idx == MPR_DPM_BAD_IDX) dpm_idx = _mapping_get_dpm_idx_from_id (sc, et_entry->enclosure_id, et_entry->phy_bits); if (dpm_idx == MPR_DPM_BAD_IDX) { dpm_idx = _mapping_get_free_dpm_idx(sc); if (dpm_idx != MPR_DPM_BAD_IDX) { dpm_entry = (Mpi2DriverMap0Entry_t *) ((u8 *) sc->dpm_pg0 + hdr_sz); dpm_entry += dpm_idx; dpm_entry-> PhysicalIdentifier.Low = (0xFFFFFFFF & et_entry->enclosure_id); dpm_entry-> PhysicalIdentifier.High = (et_entry->enclosure_id >> 32); dpm_entry->DeviceIndex = (U16)et_entry->start_index; dpm_entry->MappingInformation = et_entry->num_slots; dpm_entry->MappingInformation <<= map_shift; dpm_entry->PhysicalBitsMapping = et_entry->phy_bits; et_entry->dpm_entry_num = dpm_idx; sc->dpm_entry_used[dpm_idx] = 1; sc->dpm_flush_entry[dpm_idx] = 1; port_change->is_processed = 1; } else { port_change->is_processed = 1; mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: failed " "to add the device with " "handle 0x%04x to " "persistent table because " "there is no free space " "available\n", __func__, 
port_change->dev_handle); } } else { et_entry->dpm_entry_num = dpm_idx; mt_entry->dpm_entry_num = dpm_idx; } } et_entry->init_complete = 1; } else if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_DEVICE_PERSISTENCE_MAPPING) { /* * Get the mapping table index for this device. If it's * not in the mapping table yet, find a free entry if * one is available. If there are no free entries, look * for the entry that has the highest missing count. If * none of that works to find an entry in the mapping * table, there is a problem. Log a message and just * continue on. */ map_idx = _mapping_get_mt_idx_from_id (sc, port_change->physical_id); if (map_idx == MPR_MAPTABLE_BAD_IDX) { search_idx = sc->num_rsvd_entries; if (topo_change->switch_dev_handle) search_idx += max_num_phy_ids; map_idx = _mapping_get_free_mt_idx(sc, search_idx); } /* * If an entry will be used that has a missing device, * clear its entry from the DPM in the controller. */ if (map_idx == MPR_MAPTABLE_BAD_IDX) { map_idx = _mapping_get_high_missing_mt_idx(sc); if (map_idx != MPR_MAPTABLE_BAD_IDX) { mt_entry = &sc->mapping_table[map_idx]; _mapping_add_to_removal_table(sc, mt_entry->dpm_entry_num); is_removed = 1; mt_entry->init_complete = 0; } } if (map_idx != MPR_MAPTABLE_BAD_IDX) { mt_entry = &sc->mapping_table[map_idx]; mt_entry->physical_id = port_change->physical_id; mt_entry->id = map_idx; mt_entry->dev_handle = port_change->dev_handle; mt_entry->missing_count = 0; mt_entry->device_info = port_change->device_info | (MPR_DEV_RESERVED | MPR_MAP_IN_USE); } else { port_change->is_processed = 1; mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: " "failed to add the device with handle " "0x%04x because there is no free space " "available in the mapping table\n", __func__, port_change->dev_handle); continue; } if (sc->is_dpm_enable) { if (mt_entry->dpm_entry_num != MPR_DPM_BAD_IDX) { dpm_idx = mt_entry->dpm_entry_num; dpm_entry = (Mpi2DriverMap0Entry_t *) ((u8 *)sc->dpm_pg0 + 
hdr_sz); dpm_entry += dpm_idx; missing_cnt = dpm_entry-> MappingInformation & MPI2_DRVMAP0_MAPINFO_MISSING_MASK; temp64_var = dpm_entry-> PhysicalIdentifier.High; temp64_var = (temp64_var << 32) | dpm_entry->PhysicalIdentifier.Low; /* * If the Mapping Table's info is not * the same as the DPM entry, clear the * init_complete flag so that it's * updated. */ if ((mt_entry->physical_id == temp64_var) && !missing_cnt) mt_entry->init_complete = 1; else mt_entry->init_complete = 0; } else { dpm_idx = _mapping_get_free_dpm_idx(sc); mt_entry->init_complete = 0; } if (dpm_idx != MPR_DPM_BAD_IDX && !mt_entry->init_complete) { mt_entry->dpm_entry_num = dpm_idx; dpm_entry = (Mpi2DriverMap0Entry_t *) ((u8 *)sc->dpm_pg0 + hdr_sz); dpm_entry += dpm_idx; dpm_entry->PhysicalIdentifier.Low = (0xFFFFFFFF & mt_entry->physical_id); dpm_entry->PhysicalIdentifier.High = (mt_entry->physical_id >> 32); dpm_entry->DeviceIndex = (U16) map_idx; dpm_entry->MappingInformation = 0; dpm_entry->PhysicalBitsMapping = 0; sc->dpm_entry_used[dpm_idx] = 1; sc->dpm_flush_entry[dpm_idx] = 1; port_change->is_processed = 1; } else if (dpm_idx == MPR_DPM_BAD_IDX) { port_change->is_processed = 1; mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: failed to add the device with " "handle 0x%04x to persistent table " "because there is no free space " "available\n", __func__, port_change->dev_handle); } } mt_entry->init_complete = 1; } port_change->is_processed = 1; } if (is_removed) _mapping_clear_removed_entries(sc); } /** * _mapping_flush_dpm_pages -Flush the DPM pages to NVRAM * @sc: per adapter object * * Returns nothing */ static void _mapping_flush_dpm_pages(struct mpr_softc *sc) { Mpi2DriverMap0Entry_t *dpm_entry; Mpi2ConfigReply_t mpi_reply; Mpi2DriverMappingPage0_t config_page; u16 entry_num; for (entry_num = 0; entry_num < sc->max_dpm_entries; entry_num++) { if (!sc->dpm_flush_entry[entry_num]) continue; memset(&config_page, 0, sizeof(Mpi2DriverMappingPage0_t)); memcpy(&config_page.Header, (u8 
*)sc->dpm_pg0, sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); dpm_entry = (Mpi2DriverMap0Entry_t *) ((u8 *)sc->dpm_pg0 + sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); dpm_entry += entry_num; dpm_entry->MappingInformation = htole16(dpm_entry-> MappingInformation); dpm_entry->DeviceIndex = htole16(dpm_entry->DeviceIndex); dpm_entry->PhysicalBitsMapping = htole32(dpm_entry-> PhysicalBitsMapping); memcpy(&config_page.Entry, (u8 *)dpm_entry, sizeof(Mpi2DriverMap0Entry_t)); /* TODO: How to handle failed writes? */ mpr_dprint(sc, MPR_MAPPING, "%s: Flushing DPM entry %d.\n", __func__, entry_num); if (mpr_config_set_dpm_pg0(sc, &mpi_reply, &config_page, entry_num)) { mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: Flush of " "DPM entry %d for device failed\n", __func__, entry_num); } else sc->dpm_flush_entry[entry_num] = 0; dpm_entry->MappingInformation = le16toh(dpm_entry-> MappingInformation); dpm_entry->DeviceIndex = le16toh(dpm_entry->DeviceIndex); dpm_entry->PhysicalBitsMapping = le32toh(dpm_entry-> PhysicalBitsMapping); } } /** * mpr_mapping_allocate_memory - allocates the memory required for mapping tables * @sc: per adapter object * * Allocates the memory for all the tables required for host mapping * * Return 0 on success or non-zero on failure.
*/ int mpr_mapping_allocate_memory(struct mpr_softc *sc) { uint32_t dpm_pg0_sz; sc->mapping_table = malloc((sizeof(struct dev_mapping_table) * sc->max_devices), M_MPR, M_ZERO|M_NOWAIT); if (!sc->mapping_table) goto free_resources; sc->removal_table = malloc((sizeof(struct map_removal_table) * sc->max_devices), M_MPR, M_ZERO|M_NOWAIT); if (!sc->removal_table) goto free_resources; sc->enclosure_table = malloc((sizeof(struct enc_mapping_table) * sc->max_enclosures), M_MPR, M_ZERO|M_NOWAIT); if (!sc->enclosure_table) goto free_resources; sc->dpm_entry_used = malloc((sizeof(u8) * sc->max_dpm_entries), M_MPR, M_ZERO|M_NOWAIT); if (!sc->dpm_entry_used) goto free_resources; sc->dpm_flush_entry = malloc((sizeof(u8) * sc->max_dpm_entries), M_MPR, M_ZERO|M_NOWAIT); if (!sc->dpm_flush_entry) goto free_resources; dpm_pg0_sz = sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER) + (sc->max_dpm_entries * sizeof(MPI2_CONFIG_PAGE_DRIVER_MAP0_ENTRY)); sc->dpm_pg0 = malloc(dpm_pg0_sz, M_MPR, M_ZERO|M_NOWAIT); if (!sc->dpm_pg0) { printf("%s: memory alloc failed for dpm page; disabling dpm\n", __func__); sc->is_dpm_enable = 0; } return 0; free_resources: free(sc->mapping_table, M_MPR); free(sc->removal_table, M_MPR); free(sc->enclosure_table, M_MPR); free(sc->dpm_entry_used, M_MPR); free(sc->dpm_flush_entry, M_MPR); free(sc->dpm_pg0, M_MPR); printf("%s: device initialization failed due to failure in mapping " "table memory allocation\n", __func__); return -1; } /** * mpr_mapping_free_memory - frees the memory allocated for mapping tables * @sc: per adapter object * * Returns nothing.
*/ void mpr_mapping_free_memory(struct mpr_softc *sc) { free(sc->mapping_table, M_MPR); free(sc->removal_table, M_MPR); free(sc->enclosure_table, M_MPR); free(sc->dpm_entry_used, M_MPR); free(sc->dpm_flush_entry, M_MPR); free(sc->dpm_pg0, M_MPR); } static bool _mapping_process_dpm_pg0(struct mpr_softc *sc) { u8 missing_cnt, enc_idx; u16 slot_id, entry_num, num_slots; u32 map_idx, dev_idx, start_idx, end_idx; struct dev_mapping_table *mt_entry; Mpi2DriverMap0Entry_t *dpm_entry; u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags); u16 max_num_phy_ids = le16toh(sc->ioc_pg8.MaxNumPhysicalMappedIDs); struct enc_mapping_table *et_entry; u64 physical_id; u32 phy_bits = 0; /* * start_idx and end_idx are only used for IR. */ if (sc->ir_firmware) _mapping_get_ir_maprange(sc, &start_idx, &end_idx); /* * Look through all of the DPM entries that were read from the * controller and copy them over to the driver's internal table if they * have a non-zero ID. At this point, any ID with a value of 0 would be * invalid, so don't copy it. */ mpr_dprint(sc, MPR_MAPPING, "%s: Start copy of %d DPM entries into the " "mapping table.\n", __func__, sc->max_dpm_entries); dpm_entry = (Mpi2DriverMap0Entry_t *) ((uint8_t *) sc->dpm_pg0 + sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); for (entry_num = 0; entry_num < sc->max_dpm_entries; entry_num++, dpm_entry++) { physical_id = dpm_entry->PhysicalIdentifier.High; physical_id = (physical_id << 32) | dpm_entry->PhysicalIdentifier.Low; if (!physical_id) { sc->dpm_entry_used[entry_num] = 0; continue; } sc->dpm_entry_used[entry_num] = 1; dpm_entry->MappingInformation = le16toh(dpm_entry-> MappingInformation); missing_cnt = dpm_entry->MappingInformation & MPI2_DRVMAP0_MAPINFO_MISSING_MASK; dev_idx = le16toh(dpm_entry->DeviceIndex); phy_bits = le32toh(dpm_entry->PhysicalBitsMapping); /* * Volumes are at special locations in the mapping table so * account for that. 
Volume mapping table entries do not depend * on the type of mapping, so continue the loop after adding * volumes to the mapping table. */ if (sc->ir_firmware && (dev_idx >= start_idx) && (dev_idx <= end_idx)) { mt_entry = &sc->mapping_table[dev_idx]; mt_entry->physical_id = dpm_entry->PhysicalIdentifier.High; mt_entry->physical_id = (mt_entry->physical_id << 32) | dpm_entry->PhysicalIdentifier.Low; mt_entry->id = dev_idx; mt_entry->missing_count = missing_cnt; mt_entry->dpm_entry_num = entry_num; mt_entry->device_info = MPR_DEV_RESERVED; continue; } if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING) { /* * The dev_idx for an enclosure is the start index. If * the start index is within the controller's default * enclosure area, set the number of slots for this * enclosure to the max allowed. Otherwise, it should be * a normal enclosure and the number of slots is in the * DPM entry's Mapping Information. */ if (dev_idx < (sc->num_rsvd_entries + max_num_phy_ids)) { slot_id = 0; if (ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_DA_START_SLOT_1) slot_id = 1; num_slots = max_num_phy_ids; } else { slot_id = 0; num_slots = dpm_entry->MappingInformation & MPI2_DRVMAP0_MAPINFO_SLOT_MASK; num_slots >>= MPI2_DRVMAP0_MAPINFO_SLOT_SHIFT; } enc_idx = sc->num_enc_table_entries; if (enc_idx >= sc->max_enclosures) { mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: " "Number of enclosure entries in DPM exceeds " "the max allowed of %d.\n", __func__, sc->max_enclosures); break; } sc->num_enc_table_entries++; et_entry = &sc->enclosure_table[enc_idx]; physical_id = dpm_entry->PhysicalIdentifier.High; et_entry->enclosure_id = (physical_id << 32) | dpm_entry->PhysicalIdentifier.Low; et_entry->start_index = dev_idx; et_entry->dpm_entry_num = entry_num; et_entry->num_slots = num_slots; et_entry->start_slot = slot_id; et_entry->missing_count = missing_cnt; et_entry->phy_bits = phy_bits; /* * Initialize all entries for this enclosure in the * mapping
table and mark them as reserved. The actual * devices have not been processed yet but when they are * they will use these entries. If an entry is found * that already has a valid DPM index, the mapping table * is corrupt. This can happen if the mapping type is * changed without clearing all of the DPM entries in * the controller. */ mt_entry = &sc->mapping_table[dev_idx]; for (map_idx = dev_idx; map_idx < (dev_idx + num_slots); map_idx++, mt_entry++) { if (mt_entry->dpm_entry_num != MPR_DPM_BAD_IDX) { mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: Conflict in mapping table for " "enclosure %d\n", __func__, enc_idx); goto fail; } physical_id = dpm_entry->PhysicalIdentifier.High; mt_entry->physical_id = (physical_id << 32) | dpm_entry->PhysicalIdentifier.Low; mt_entry->phy_bits = phy_bits; mt_entry->id = dev_idx; mt_entry->dpm_entry_num = entry_num; mt_entry->missing_count = missing_cnt; mt_entry->device_info = MPR_DEV_RESERVED; } } else if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_DEVICE_PERSISTENCE_MAPPING) { /* * Device mapping, so simply copy the DPM entries to the * mapping table, but check for a corrupt mapping table * (as described above in Enc/Slot mapping).
*/ map_idx = dev_idx; mt_entry = &sc->mapping_table[map_idx]; if (mt_entry->dpm_entry_num != MPR_DPM_BAD_IDX) { mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: " "Conflict in mapping table for device %d\n", __func__, map_idx); goto fail; } physical_id = dpm_entry->PhysicalIdentifier.High; mt_entry->physical_id = (physical_id << 32) | dpm_entry->PhysicalIdentifier.Low; mt_entry->phy_bits = phy_bits; mt_entry->id = dev_idx; mt_entry->missing_count = missing_cnt; mt_entry->dpm_entry_num = entry_num; mt_entry->device_info = MPR_DEV_RESERVED; } } /* close the loop for DPM table */ return (true); fail: for (entry_num = 0; entry_num < sc->max_dpm_entries; entry_num++) { sc->dpm_entry_used[entry_num] = 0; /* * for IR firmware, it may be necessary to wipe out * sc->mapping_table volumes too */ } for (enc_idx = 0; enc_idx < sc->num_enc_table_entries; enc_idx++) _mapping_clear_enc_entry(sc->enclosure_table + enc_idx); sc->num_enc_table_entries = 0; return (false); } /** * mpr_mapping_check_devices - start-of-day check for device availability * @sc: per adapter object * * Returns nothing. */ void mpr_mapping_check_devices(void *data) { u32 i; struct dev_mapping_table *mt_entry; struct mpr_softc *sc = (struct mpr_softc *)data; u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags); struct enc_mapping_table *et_entry; u32 start_idx = 0, end_idx = 0; u8 stop_device_checks = 0; MPR_FUNCTRACE(sc); /* * Clear this flag so that this function is never called again except * within this function if the check needs to be done again. The * purpose is to check for missing devices that are currently in the * mapping table so do this only at driver init after discovery. */ sc->track_mapping_events = 0; /* * callout synchronization * This is used to prevent race conditions for the callout.
*/ mpr_dprint(sc, MPR_MAPPING, "%s: Start check for missing devices.\n", __func__); mtx_assert(&sc->mpr_mtx, MA_OWNED); if ((callout_pending(&sc->device_check_callout)) || (!callout_active(&sc->device_check_callout))) { mpr_dprint(sc, MPR_MAPPING, "%s: Device Check Callout is " "already pending or not active.\n", __func__); return; } callout_deactivate(&sc->device_check_callout); /* * Use callout to check if any devices in the mapping table have been * processed yet. If ALL devices are marked as not init_complete, no * devices have been processed and mapped. Until devices are mapped * there's no reason to mark them as missing. Continue resetting this * callout until devices have been mapped. */ if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING) { et_entry = sc->enclosure_table; for (i = 0; i < sc->num_enc_table_entries; i++, et_entry++) { if (et_entry->init_complete) { stop_device_checks = 1; break; } } } else if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_DEVICE_PERSISTENCE_MAPPING) { mt_entry = sc->mapping_table; for (i = 0; i < sc->max_devices; i++, mt_entry++) { if (mt_entry->init_complete) { stop_device_checks = 1; break; } } } /* * Setup another callout check after a delay. Keep doing this until * devices are mapped. */ if (!stop_device_checks) { mpr_dprint(sc, MPR_MAPPING, "%s: No devices have been mapped. " "Reset callout to check again after a %d second delay.\n", __func__, MPR_MISSING_CHECK_DELAY); callout_reset(&sc->device_check_callout, MPR_MISSING_CHECK_DELAY * hz, mpr_mapping_check_devices, sc); return; } mpr_dprint(sc, MPR_MAPPING, "%s: Device check complete.\n", __func__); /* * Depending on the mapping type, check if devices have been processed * and update their missing counts if not processed. 
*/ if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING) { et_entry = sc->enclosure_table; for (i = 0; i < sc->num_enc_table_entries; i++, et_entry++) { if (!et_entry->init_complete) { if (et_entry->missing_count < MPR_MAX_MISSING_COUNT) { mpr_dprint(sc, MPR_MAPPING, "%s: " "Enclosure %d is missing from the " "topology. Update its missing " "count.\n", __func__, i); et_entry->missing_count++; if (et_entry->dpm_entry_num != MPR_DPM_BAD_IDX) { _mapping_commit_enc_entry(sc, et_entry); } } et_entry->init_complete = 1; } } if (!sc->ir_firmware) return; _mapping_get_ir_maprange(sc, &start_idx, &end_idx); mt_entry = &sc->mapping_table[start_idx]; } else if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) == MPI2_IOCPAGE8_FLAGS_DEVICE_PERSISTENCE_MAPPING) { start_idx = 0; end_idx = sc->max_devices - 1; mt_entry = sc->mapping_table; } /* * The start and end indices have been set above according to the * mapping type. Go through these mappings and update any entries that * do not have the init_complete flag set, which means they are missing. */ if (end_idx == 0) return; for (i = start_idx; i < (end_idx + 1); i++, mt_entry++) { if (mt_entry->device_info & MPR_DEV_RESERVED && !mt_entry->physical_id) mt_entry->init_complete = 1; else if (mt_entry->device_info & MPR_DEV_RESERVED) { if (!mt_entry->init_complete) { mpr_dprint(sc, MPR_MAPPING, "%s: Device in " "mapping table at index %d is missing from " "topology. Update its missing count.\n", __func__, i); if (mt_entry->missing_count < MPR_MAX_MISSING_COUNT) { mt_entry->missing_count++; if (mt_entry->dpm_entry_num != MPR_DPM_BAD_IDX) { _mapping_commit_map_entry(sc, mt_entry); } } mt_entry->init_complete = 1; } } } } /** * mpr_mapping_initialize - initialize mapping tables * @sc: per adapter object * * Read controller persistent mapping tables into internal data area. * * Return 0 for success or non-zero for failure.
*/ int mpr_mapping_initialize(struct mpr_softc *sc) { uint16_t volume_mapping_flags, dpm_pg0_sz; uint32_t i; Mpi2ConfigReply_t mpi_reply; int error; uint8_t retry_count; uint16_t ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags); /* The additional 1 accounts for the virtual enclosure * created for the controller */ sc->max_enclosures = sc->facts->MaxEnclosures + 1; sc->max_expanders = sc->facts->MaxSasExpanders; sc->max_volumes = sc->facts->MaxVolumes; sc->max_devices = sc->facts->MaxTargets + sc->max_volumes; sc->pending_map_events = 0; sc->num_enc_table_entries = 0; sc->num_rsvd_entries = 0; sc->max_dpm_entries = sc->ioc_pg8.MaxPersistentEntries; sc->is_dpm_enable = (sc->max_dpm_entries) ? 1 : 0; sc->track_mapping_events = 0; mpr_dprint(sc, MPR_MAPPING, "%s: Mapping table has a max of %d entries " "and DPM has a max of %d entries.\n", __func__, sc->max_devices, sc->max_dpm_entries); if (ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_DISABLE_PERSISTENT_MAPPING) sc->is_dpm_enable = 0; if (ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_RESERVED_TARGETID_0) sc->num_rsvd_entries = 1; volume_mapping_flags = sc->ioc_pg8.IRVolumeMappingFlags & MPI2_IOCPAGE8_IRFLAGS_MASK_VOLUME_MAPPING_MODE; if (sc->ir_firmware && (volume_mapping_flags == MPI2_IOCPAGE8_IRFLAGS_LOW_VOLUME_MAPPING)) sc->num_rsvd_entries += sc->max_volumes; error = mpr_mapping_allocate_memory(sc); if (error) return (error); for (i = 0; i < sc->max_devices; i++) _mapping_clear_map_entry(sc->mapping_table + i); for (i = 0; i < sc->max_enclosures; i++) _mapping_clear_enc_entry(sc->enclosure_table + i); for (i = 0; i < sc->max_devices; i++) { sc->removal_table[i].dev_handle = 0; sc->removal_table[i].dpm_entry_num = MPR_DPM_BAD_IDX; } memset(sc->dpm_entry_used, 0, sc->max_dpm_entries); memset(sc->dpm_flush_entry, 0, sc->max_dpm_entries); if (sc->is_dpm_enable) { dpm_pg0_sz = sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER) + (sc->max_dpm_entries * sizeof(MPI2_CONFIG_PAGE_DRIVER_MAP0_ENTRY)); retry_count = 0; retry_read_dpm: if 
(mpr_config_get_dpm_pg0(sc, &mpi_reply, sc->dpm_pg0, dpm_pg0_sz)) { mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: DPM page " "read failed.\n", __func__); if (retry_count < 3) { retry_count++; goto retry_read_dpm; } sc->is_dpm_enable = 0; } } if (sc->is_dpm_enable) { if (!_mapping_process_dpm_pg0(sc)) sc->is_dpm_enable = 0; } if (! sc->is_dpm_enable) { mpr_dprint(sc, MPR_MAPPING, "%s: DPM processing is disabled. " "Device mappings will not persist across reboots or " "resets.\n", __func__); } sc->track_mapping_events = 1; return 0; } /** * mpr_mapping_exit - clear mapping table and associated memory * @sc: per adapter object * * Returns nothing. */ void mpr_mapping_exit(struct mpr_softc *sc) { _mapping_flush_dpm_pages(sc); mpr_mapping_free_memory(sc); } /** * mpr_mapping_get_tid - return the target id for sas device and handle * @sc: per adapter object * @sas_address: sas address of the device * @handle: device handle * * Returns valid target ID on success or BAD_ID. */ unsigned int mpr_mapping_get_tid(struct mpr_softc *sc, uint64_t sas_address, u16 handle) { u32 map_idx; struct dev_mapping_table *mt_entry; for (map_idx = 0; map_idx < sc->max_devices; map_idx++) { mt_entry = &sc->mapping_table[map_idx]; if (mt_entry->dev_handle == handle && mt_entry->physical_id == sas_address) return mt_entry->id; } return MPR_MAP_BAD_ID; } /** * mpr_mapping_get_tid_from_handle - find a target id in mapping table using * only the dev handle. This is just a wrapper function for the local function * _mapping_get_mt_idx_from_handle. * @sc: per adapter object * @handle: device handle * * Returns valid target ID on success or BAD_ID. 
*/ unsigned int mpr_mapping_get_tid_from_handle(struct mpr_softc *sc, u16 handle) { return (_mapping_get_mt_idx_from_handle(sc, handle)); } /** * mpr_mapping_get_raid_tid - return the target id for raid device * @sc: per adapter object * @wwid: world wide identifier for raid volume * @volHandle: volume device handle * * Returns valid target ID on success or BAD_ID. */ unsigned int mpr_mapping_get_raid_tid(struct mpr_softc *sc, u64 wwid, u16 volHandle) { u32 start_idx, end_idx, map_idx; struct dev_mapping_table *mt_entry; _mapping_get_ir_maprange(sc, &start_idx, &end_idx); mt_entry = &sc->mapping_table[start_idx]; for (map_idx = start_idx; map_idx <= end_idx; map_idx++, mt_entry++) { if (mt_entry->dev_handle == volHandle && mt_entry->physical_id == wwid) return mt_entry->id; } return MPR_MAP_BAD_ID; } /** * mpr_mapping_get_raid_tid_from_handle - find raid device in mapping table * using only the volume dev handle. This is just a wrapper function for the * local function _mapping_get_ir_mt_idx_from_handle. * @sc: per adapter object * @volHandle: volume device handle * * Returns valid target ID on success or BAD_ID. */ unsigned int mpr_mapping_get_raid_tid_from_handle(struct mpr_softc *sc, u16 volHandle) { return (_mapping_get_ir_mt_idx_from_handle(sc, volHandle)); } /** * mpr_mapping_enclosure_dev_status_change_event - handle enclosure events * @sc: per adapter object * @event_data: event data payload * * Return nothing. 
*/ void mpr_mapping_enclosure_dev_status_change_event(struct mpr_softc *sc, Mpi2EventDataSasEnclDevStatusChange_t *event_data) { u8 enc_idx, missing_count; struct enc_mapping_table *et_entry; Mpi2DriverMap0Entry_t *dpm_entry; u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags); u8 map_shift = MPI2_DRVMAP0_MAPINFO_SLOT_SHIFT; u8 update_phy_bits = 0; u32 saved_phy_bits; uint64_t temp64_var; if ((ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE) != MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING) goto out; dpm_entry = (Mpi2DriverMap0Entry_t *)((u8 *)sc->dpm_pg0 + sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); if (event_data->ReasonCode == MPI2_EVENT_SAS_ENCL_RC_ADDED) { if (!event_data->NumSlots) { mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: Enclosure " "with handle = 0x%x reported 0 slots.\n", __func__, le16toh(event_data->EnclosureHandle)); goto out; } temp64_var = event_data->EnclosureLogicalID.High; temp64_var = (temp64_var << 32) | event_data->EnclosureLogicalID.Low; enc_idx = _mapping_get_enc_idx_from_id(sc, temp64_var, event_data->PhyBits); /* * If the Added enclosure is already in the Enclosure Table, * make sure that all the enclosure info is up to date. If * the enclosure was missing and has just been added back, or if * the enclosure's Phy Bits have changed, clear the missing * count and update the Phy Bits in the mapping table and in the * DPM, if it's being used.
*/ if (enc_idx != MPR_ENCTABLE_BAD_IDX) { et_entry = &sc->enclosure_table[enc_idx]; if (et_entry->init_complete && !et_entry->missing_count) { mpr_dprint(sc, MPR_MAPPING, "%s: Enclosure %d " "is already present with handle = 0x%x\n", __func__, enc_idx, et_entry->enc_handle); goto out; } et_entry->enc_handle = le16toh(event_data-> EnclosureHandle); et_entry->start_slot = le16toh(event_data->StartSlot); saved_phy_bits = et_entry->phy_bits; et_entry->phy_bits |= le32toh(event_data->PhyBits); if (saved_phy_bits != et_entry->phy_bits) update_phy_bits = 1; if (et_entry->missing_count || update_phy_bits) { et_entry->missing_count = 0; if (sc->is_dpm_enable && et_entry->dpm_entry_num != MPR_DPM_BAD_IDX) { dpm_entry += et_entry->dpm_entry_num; missing_count = (u8)(dpm_entry->MappingInformation & MPI2_DRVMAP0_MAPINFO_MISSING_MASK); if (missing_count || update_phy_bits) { dpm_entry->MappingInformation = et_entry->num_slots; dpm_entry->MappingInformation <<= map_shift; dpm_entry->PhysicalBitsMapping = et_entry->phy_bits; sc->dpm_flush_entry[et_entry-> dpm_entry_num] = 1; } } } } else { /* * This is a new enclosure that is being added. * Initialize the Enclosure Table entry. It will be * finalized when a device is added for the enclosure * and the enclosure has enough space in the Mapping * Table to map its devices. 
*/ enc_idx = sc->num_enc_table_entries; if (enc_idx >= sc->max_enclosures) { mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: " "Enclosure cannot be added to mapping " "table because it's full.\n", __func__); goto out; } sc->num_enc_table_entries++; et_entry = &sc->enclosure_table[enc_idx]; et_entry->enc_handle = le16toh(event_data-> EnclosureHandle); et_entry->enclosure_id = le64toh(event_data-> EnclosureLogicalID.High); et_entry->enclosure_id = ((et_entry->enclosure_id << 32) | le64toh(event_data->EnclosureLogicalID.Low)); et_entry->start_index = MPR_MAPTABLE_BAD_IDX; et_entry->dpm_entry_num = MPR_DPM_BAD_IDX; et_entry->num_slots = le16toh(event_data->NumSlots); et_entry->start_slot = le16toh(event_data->StartSlot); et_entry->phy_bits = le32toh(event_data->PhyBits); } et_entry->init_complete = 1; } else if (event_data->ReasonCode == MPI2_EVENT_SAS_ENCL_RC_NOT_RESPONDING) { /* * An enclosure was removed. Update its missing count and then * update the DPM entry with the new missing count for the * enclosure. 
*/ enc_idx = _mapping_get_enc_idx_from_handle(sc, le16toh(event_data->EnclosureHandle)); if (enc_idx == MPR_ENCTABLE_BAD_IDX) { mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: Cannot " "unmap enclosure %d because it has already been " "deleted.\n", __func__, enc_idx); goto out; } et_entry = &sc->enclosure_table[enc_idx]; if (et_entry->missing_count < MPR_MAX_MISSING_COUNT) et_entry->missing_count++; if (sc->is_dpm_enable && et_entry->dpm_entry_num != MPR_DPM_BAD_IDX) { dpm_entry += et_entry->dpm_entry_num; dpm_entry->MappingInformation = et_entry->num_slots; dpm_entry->MappingInformation <<= map_shift; dpm_entry->MappingInformation |= et_entry->missing_count; sc->dpm_flush_entry[et_entry->dpm_entry_num] = 1; } et_entry->init_complete = 1; } out: _mapping_flush_dpm_pages(sc); if (sc->pending_map_events) sc->pending_map_events--; } /** * mpr_mapping_topology_change_event - handle topology change events * @sc: per adapter object * @event_data: event data payload * * Returns nothing. */ void mpr_mapping_topology_change_event(struct mpr_softc *sc, Mpi2EventDataSasTopologyChangeList_t *event_data) { struct _map_topology_change topo_change; struct _map_phy_change *phy_change; Mpi2EventSasTopoPhyEntry_t *event_phy_change; u8 i, num_entries; topo_change.enc_handle = le16toh(event_data->EnclosureHandle); topo_change.exp_handle = le16toh(event_data->ExpanderDevHandle); num_entries = event_data->NumEntries; topo_change.num_entries = num_entries; topo_change.start_phy_num = event_data->StartPhyNum; topo_change.num_phys = event_data->NumPhys; topo_change.exp_status = event_data->ExpStatus; event_phy_change = event_data->PHY; topo_change.phy_details = NULL; if (!num_entries) goto out; phy_change = malloc(sizeof(struct _map_phy_change) * num_entries, M_MPR, M_NOWAIT|M_ZERO); topo_change.phy_details = phy_change; if (!phy_change) goto out; for (i = 0; i < num_entries; i++, event_phy_change++, phy_change++) { phy_change->dev_handle = le16toh(event_phy_change-> AttachedDevHandle); 
phy_change->reason = event_phy_change->PhyStatus & MPI2_EVENT_SAS_TOPO_RC_MASK; } _mapping_update_missing_count(sc, &topo_change); _mapping_get_dev_info(sc, &topo_change); _mapping_clear_removed_entries(sc); _mapping_add_new_device(sc, &topo_change); out: free(topo_change.phy_details, M_MPR); _mapping_flush_dpm_pages(sc); if (sc->pending_map_events) sc->pending_map_events--; } /** * mpr_mapping_pcie_topology_change_event - handle PCIe topology change events * @sc: per adapter object * @event_data: event data payload * * Returns nothing. */ void mpr_mapping_pcie_topology_change_event(struct mpr_softc *sc, Mpi26EventDataPCIeTopologyChangeList_t *event_data) { struct _map_pcie_topology_change topo_change; struct _map_port_change *port_change; Mpi26EventPCIeTopoPortEntry_t *event_port_change; u8 i, num_entries; topo_change.switch_dev_handle = le16toh(event_data->SwitchDevHandle); topo_change.enc_handle = le16toh(event_data->EnclosureHandle); num_entries = event_data->NumEntries; topo_change.num_entries = num_entries; topo_change.start_port_num = event_data->StartPortNum; topo_change.num_ports = event_data->NumPorts; topo_change.switch_status = event_data->SwitchStatus; event_port_change = event_data->PortEntry; topo_change.port_details = NULL; if (!num_entries) goto out; port_change = malloc(sizeof(struct _map_port_change) * num_entries, M_MPR, M_NOWAIT|M_ZERO); topo_change.port_details = port_change; if (!port_change) goto out; for (i = 0; i < num_entries; i++, event_port_change++, port_change++) { port_change->dev_handle = le16toh(event_port_change-> AttachedDevHandle); port_change->reason = event_port_change->PortStatus; } _mapping_update_pcie_missing_count(sc, &topo_change); _mapping_get_pcie_dev_info(sc, &topo_change); _mapping_clear_removed_entries(sc); _mapping_add_new_pcie_device(sc, &topo_change); out: free(topo_change.port_details, M_MPR); _mapping_flush_dpm_pages(sc); if (sc->pending_map_events) sc->pending_map_events--; } /** * 
mpr_mapping_ir_config_change_event - handle IR config change list events * @sc: per adapter object * @event_data: event data payload * * Returns nothing. */ void mpr_mapping_ir_config_change_event(struct mpr_softc *sc, Mpi2EventDataIrConfigChangeList_t *event_data) { Mpi2EventIrConfigElement_t *element; int i; u64 *wwid_table; u32 map_idx, flags; struct dev_mapping_table *mt_entry; u16 element_flags; wwid_table = malloc(sizeof(u64) * event_data->NumElements, M_MPR, M_NOWAIT | M_ZERO); if (!wwid_table) goto out; element = (Mpi2EventIrConfigElement_t *)&event_data->ConfigElement[0]; flags = le32toh(event_data->Flags); /* * For volume changes, get the WWID for the volume and put it in a * table to be used in the processing of the IR change event. */ for (i = 0; i < event_data->NumElements; i++, element++) { element_flags = le16toh(element->ElementFlags); if ((element->ReasonCode != MPI2_EVENT_IR_CHANGE_RC_ADDED) && (element->ReasonCode != MPI2_EVENT_IR_CHANGE_RC_REMOVED) && (element->ReasonCode != MPI2_EVENT_IR_CHANGE_RC_NO_CHANGE) && (element->ReasonCode != MPI2_EVENT_IR_CHANGE_RC_VOLUME_CREATED)) continue; if ((element_flags & MPI2_EVENT_IR_CHANGE_EFLAGS_ELEMENT_TYPE_MASK) == MPI2_EVENT_IR_CHANGE_EFLAGS_VOLUME_ELEMENT) { mpr_config_get_volume_wwid(sc, le16toh(element->VolDevHandle), &wwid_table[i]); } } /* * Check the ReasonCode for each element in the IR event and Add/Remove * Volumes or Physical Disks of Volumes to/from the mapping table. Use * the WWIDs gotten above in wwid_table. */ if (flags == MPI2_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG) goto out; else { element = (Mpi2EventIrConfigElement_t *)&event_data-> ConfigElement[0]; for (i = 0; i < event_data->NumElements; i++, element++) { if (element->ReasonCode == MPI2_EVENT_IR_CHANGE_RC_ADDED || element->ReasonCode == MPI2_EVENT_IR_CHANGE_RC_VOLUME_CREATED) { map_idx = _mapping_get_ir_mt_idx_from_wwid (sc, wwid_table[i]); if (map_idx != MPR_MAPTABLE_BAD_IDX) { /* * The volume is already in the mapping * table. 
Just update its info. */ mt_entry = &sc->mapping_table[map_idx]; mt_entry->id = map_idx; mt_entry->dev_handle = le16toh (element->VolDevHandle); mt_entry->device_info = MPR_DEV_RESERVED | MPR_MAP_IN_USE; _mapping_update_ir_missing_cnt(sc, map_idx, element, wwid_table[i]); continue; } /* * Volume is not in mapping table yet. Find a * free entry in the mapping table at the * volume mapping locations. If no entries are * available, this is an error because it means * there are more volumes than can be mapped * and that should never happen for volumes. */ map_idx = _mapping_get_free_ir_mt_idx(sc); if (map_idx == MPR_MAPTABLE_BAD_IDX) { mpr_dprint(sc, MPR_ERROR | MPR_MAPPING, "%s: failed to add the volume with " "handle 0x%04x because there is no " "free space available in the " "mapping table\n", __func__, le16toh(element->VolDevHandle)); continue; } mt_entry = &sc->mapping_table[map_idx]; mt_entry->physical_id = wwid_table[i]; mt_entry->id = map_idx; mt_entry->dev_handle = le16toh(element-> VolDevHandle); mt_entry->device_info = MPR_DEV_RESERVED | MPR_MAP_IN_USE; _mapping_update_ir_missing_cnt(sc, map_idx, element, wwid_table[i]); } else if (element->ReasonCode == MPI2_EVENT_IR_CHANGE_RC_REMOVED) { map_idx = _mapping_get_ir_mt_idx_from_wwid(sc, wwid_table[i]); if (map_idx == MPR_MAPTABLE_BAD_IDX) { mpr_dprint(sc, MPR_MAPPING,"%s: Failed " "to remove a volume because it has " "already been removed.\n", __func__); continue; } _mapping_update_ir_missing_cnt(sc, map_idx, element, wwid_table[i]); } else if (element->ReasonCode == MPI2_EVENT_IR_CHANGE_RC_VOLUME_DELETED) { map_idx = _mapping_get_mt_idx_from_handle(sc, le16toh(element->VolDevHandle)); if (map_idx == MPR_MAPTABLE_BAD_IDX) { mpr_dprint(sc, MPR_MAPPING,"%s: Failed " "to remove volume with handle " "0x%04x because it has already " "been removed.\n", __func__, le16toh(element->VolDevHandle)); continue; } mt_entry = &sc->mapping_table[map_idx]; _mapping_update_ir_missing_cnt(sc, map_idx, element,
mt_entry->physical_id); } } } out: _mapping_flush_dpm_pages(sc); free(wwid_table, M_MPR); if (sc->pending_map_events) sc->pending_map_events--; } Index: releng/12.1/sys/dev/mpr/mpr_mapping.h =================================================================== --- releng/12.1/sys/dev/mpr/mpr_mapping.h (revision 352760) +++ releng/12.1/sys/dev/mpr/mpr_mapping.h (revision 352761) @@ -1,122 +1,123 @@ /*- * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2016 Avago Technologies + * Copyright 2000-2020 Broadcom Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. 
(LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ #ifndef _MPR_MAPPING_H #define _MPR_MAPPING_H /** * struct _map_phy_change - PHY entries received in Topology change list * @physical_id: SAS address of the device attached to the associated PHY * @device_info: bitfield provides detailed info about the device * @dev_handle: device handle for the device pointed by this entry * @slot: slot ID * @is_processed: Flag to indicate whether this entry is processed or not * @is_SATA_SSD: 1 if this is a SATA device AND an SSD, 0 otherwise */ struct _map_phy_change { uint64_t physical_id; uint32_t device_info; uint16_t dev_handle; uint16_t slot; uint8_t reason; uint8_t is_processed; uint8_t is_SATA_SSD; uint8_t reserved; }; /** * struct _map_port_change - PCIe Port entries received in PCIe Topology change * list event * @physical_id: WWID of the device attached to the associated port * @device_info: bitfield provides detailed info about the device * @MDTS: Maximum Data Transfer Size for the device * @dev_handle: device handle for the device pointed by this entry * @slot: slot ID * @is_processed: Flag to indicate whether this entry is processed or not */ struct _map_port_change { uint64_t physical_id; uint32_t device_info; uint32_t MDTS; uint16_t dev_handle; uint16_t slot; uint8_t reason; uint8_t is_processed; uint8_t reserved[2]; }; /** * struct _map_topology_change - SAS/SATA entries to be removed from mapping * table * @enc_handle: enclosure handle where this device is located * @exp_handle: expander handle where this device is located * @num_entries: number of entries in the SAS Topology Change List event * @start_phy_num: PHY number of the first PHY in the event data * @num_phys: number of PHYs in the expander where this device is located * @exp_status: status for the expander where this device is located * @phy_details: more details about each PHY in the event data */ struct _map_topology_change { uint16_t enc_handle; uint16_t exp_handle; uint8_t num_entries; uint8_t
start_phy_num; uint8_t num_phys; uint8_t exp_status; struct _map_phy_change *phy_details; }; /** * struct _map_pcie_topology_change - PCIe entries to be removed from mapping * table * @enc_handle: enclosure handle where this device is located * @switch_dev_handle: PCIe switch device handle where this device is located * @num_entries: number of entries in the PCIe Topology Change List event * @start_port_num: port number of the first port in the event data * @num_ports: number of ports in the PCIe switch device * @switch_status: status for the PCIe switch where this device is located * @port_details: more details about each Port in the event data */ struct _map_pcie_topology_change { uint16_t enc_handle; uint16_t switch_dev_handle; uint8_t num_entries; uint8_t start_port_num; uint8_t num_ports; uint8_t switch_status; struct _map_port_change *port_details; }; extern int mprsas_get_sas_address_for_sata_disk(struct mpr_softc *ioc, u64 *sas_address, u16 handle, u32 device_info, u8 *is_SATA_SSD); #endif Index: releng/12.1/sys/dev/mpr/mpr_pci.c =================================================================== --- releng/12.1/sys/dev/mpr/mpr_pci.c (revision 352760) +++ releng/12.1/sys/dev/mpr/mpr_pci.c (revision 352761) @@ -1,461 +1,500 @@ /*- * Copyright (c) 2009 Yahoo! Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* PCI/PCI-X/PCIe bus interface for the Avago Tech (LSI) MPT3 controllers */ /* TODO Move headers to mprvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include static int mpr_pci_probe(device_t); static int mpr_pci_attach(device_t); static int mpr_pci_detach(device_t); static int mpr_pci_suspend(device_t); static int mpr_pci_resume(device_t); static void mpr_pci_free(struct mpr_softc *); static int mpr_alloc_msix(struct mpr_softc *sc, int msgs); static int mpr_alloc_msi(struct mpr_softc *sc, int msgs); static int mpr_pci_alloc_interrupts(struct mpr_softc *sc); static device_method_t mpr_methods[] = { DEVMETHOD(device_probe, mpr_pci_probe), DEVMETHOD(device_attach, mpr_pci_attach), DEVMETHOD(device_detach, mpr_pci_detach), DEVMETHOD(device_suspend, mpr_pci_suspend), DEVMETHOD(device_resume, mpr_pci_resume), DEVMETHOD(bus_print_child, bus_generic_print_child), DEVMETHOD(bus_driver_added, bus_generic_driver_added), { 0, 0 } }; static driver_t mpr_pci_driver = { "mpr", mpr_methods, sizeof(struct mpr_softc) 
}; struct mpr_ident { uint16_t vendor; uint16_t device; uint16_t subvendor; uint16_t subdevice; u_int flags; const char *desc; } mpr_identifiers[] = { { MPI2_MFGPAGE_VENDORID_LSI, MPI25_MFGPAGE_DEVID_SAS3004, 0xffff, 0xffff, 0, "Avago Technologies (LSI) SAS3004" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI25_MFGPAGE_DEVID_SAS3008, 0xffff, 0xffff, 0, "Avago Technologies (LSI) SAS3008" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI25_MFGPAGE_DEVID_SAS3108_1, 0xffff, 0xffff, 0, "Avago Technologies (LSI) SAS3108_1" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI25_MFGPAGE_DEVID_SAS3108_2, 0xffff, 0xffff, 0, "Avago Technologies (LSI) SAS3108_2" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI25_MFGPAGE_DEVID_SAS3108_5, 0xffff, 0xffff, 0, "Avago Technologies (LSI) SAS3108_5" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI25_MFGPAGE_DEVID_SAS3108_6, 0xffff, 0xffff, 0, "Avago Technologies (LSI) SAS3108_6" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3216, 0xffff, 0xffff, 0, "Avago Technologies (LSI) SAS3216" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3224, 0xffff, 0xffff, 0, "Avago Technologies (LSI) SAS3224" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3316_1, 0xffff, 0xffff, 0, "Avago Technologies (LSI) SAS3316_1" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3316_2, 0xffff, 0xffff, 0, "Avago Technologies (LSI) SAS3316_2" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3324_1, 0xffff, 0xffff, 0, "Avago Technologies (LSI) SAS3324_1" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3324_2, 0xffff, 0xffff, 0, "Avago Technologies (LSI) SAS3324_2" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3408, 0xffff, 0xffff, MPR_FLAGS_GEN35_IOC, "Avago Technologies (LSI) SAS3408" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3416, 0xffff, 0xffff, MPR_FLAGS_GEN35_IOC, "Avago Technologies (LSI) SAS3416" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3508, 0xffff, 0xffff, MPR_FLAGS_GEN35_IOC, "Avago Technologies (LSI) SAS3508" }, { MPI2_MFGPAGE_VENDORID_LSI, 
MPI26_MFGPAGE_DEVID_SAS3508_1, 0xffff, 0xffff, MPR_FLAGS_GEN35_IOC, "Avago Technologies (LSI) SAS3508_1" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3516, 0xffff, 0xffff, MPR_FLAGS_GEN35_IOC, "Avago Technologies (LSI) SAS3516" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3516_1, 0xffff, 0xffff, MPR_FLAGS_GEN35_IOC, "Avago Technologies (LSI) SAS3516_1" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3616, 0xffff, 0xffff, MPR_FLAGS_GEN35_IOC, "Avago Technologies (LSI) SAS3616" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3708, 0xffff, 0xffff, MPR_FLAGS_GEN35_IOC, "Avago Technologies (LSI) SAS3708" }, { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3716, 0xffff, 0xffff, MPR_FLAGS_GEN35_IOC, "Avago Technologies (LSI) SAS3716" }, + { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_INVALID0_SAS3816, + 0xffff, 0xffff, (MPR_FLAGS_GEN35_IOC | MPR_FLAGS_SEA_IOC), + "Broadcom Inc. (LSI) INVALID0 SAS3816" }, + { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_CFG_SEC_SAS3816, + 0xffff, 0xffff, (MPR_FLAGS_GEN35_IOC | MPR_FLAGS_SEA_IOC), + "Broadcom Inc. (LSI) CFG SEC SAS3816" }, + { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_HARD_SEC_SAS3816, + 0xffff, 0xffff, (MPR_FLAGS_GEN35_IOC | MPR_FLAGS_SEA_IOC), + "Broadcom Inc. (LSI) HARD SEC SAS3816" }, + { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_INVALID1_SAS3816, + 0xffff, 0xffff, (MPR_FLAGS_GEN35_IOC | MPR_FLAGS_SEA_IOC), + "Broadcom Inc. (LSI) INVALID1 SAS3816" }, + { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_INVALID0_SAS3916, + 0xffff, 0xffff, (MPR_FLAGS_GEN35_IOC | MPR_FLAGS_SEA_IOC), + "Broadcom Inc. (LSI) INVALID0 SAS3916" }, + { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_CFG_SEC_SAS3916, + 0xffff, 0xffff, (MPR_FLAGS_GEN35_IOC | MPR_FLAGS_SEA_IOC), + "Broadcom Inc. (LSI) CFG SEC SAS3916" }, + { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_HARD_SEC_SAS3916, + 0xffff, 0xffff, (MPR_FLAGS_GEN35_IOC | MPR_FLAGS_SEA_IOC), + "Broadcom Inc. 
(LSI) HARD SEC SAS3916" }, + { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_INVALID1_SAS3916, + 0xffff, 0xffff, (MPR_FLAGS_GEN35_IOC | MPR_FLAGS_SEA_IOC), + "Broadcom Inc. (LSI) INVALID1 SAS3916" }, { 0, 0, 0, 0, 0, NULL } }; static devclass_t mpr_devclass; DRIVER_MODULE(mpr, pci, mpr_pci_driver, mpr_devclass, 0, 0); MODULE_PNP_INFO("U16:vendor;U16:device;U16:subvendor;U16:subdevice;D:#", pci, mpr, mpr_identifiers, nitems(mpr_identifiers) - 1); MODULE_DEPEND(mpr, cam, 1, 1, 1); static struct mpr_ident * mpr_find_ident(device_t dev) { struct mpr_ident *m; for (m = mpr_identifiers; m->vendor != 0; m++) { if (m->vendor != pci_get_vendor(dev)) continue; if (m->device != pci_get_device(dev)) continue; if ((m->subvendor != 0xffff) && (m->subvendor != pci_get_subvendor(dev))) continue; if ((m->subdevice != 0xffff) && (m->subdevice != pci_get_subdevice(dev))) continue; return (m); } return (NULL); } static int mpr_pci_probe(device_t dev) { struct mpr_ident *id; if ((id = mpr_find_ident(dev)) != NULL) { device_set_desc(dev, id->desc); return (BUS_PROBE_DEFAULT); } return (ENXIO); } static int mpr_pci_attach(device_t dev) { struct mpr_softc *sc; struct mpr_ident *m; int error, i; sc = device_get_softc(dev); bzero(sc, sizeof(*sc)); sc->mpr_dev = dev; m = mpr_find_ident(dev); sc->mpr_flags = m->flags; + + switch (m->device) { + case MPI26_MFGPAGE_DEVID_INVALID0_SAS3816: + case MPI26_MFGPAGE_DEVID_INVALID1_SAS3816: + case MPI26_MFGPAGE_DEVID_INVALID0_SAS3916: + case MPI26_MFGPAGE_DEVID_INVALID1_SAS3916: + mpr_printf(sc, "HBA is in Non Secure mode\n"); + return (ENXIO); + case MPI26_MFGPAGE_DEVID_CFG_SEC_SAS3816: + case MPI26_MFGPAGE_DEVID_CFG_SEC_SAS3916: + mpr_printf(sc, "HBA is in Configurable Secure mode\n"); + break; + default: + break; + } mpr_get_tunables(sc); /* Twiddle basic PCI config bits for a sanity check */ pci_enable_busmaster(dev); for (i = 0; i < PCI_MAXMAPS_0; i++) { sc->mpr_regs_rid = PCIR_BAR(i); if ((sc->mpr_regs_resource = bus_alloc_resource_any(dev, 
SYS_RES_MEMORY, &sc->mpr_regs_rid, RF_ACTIVE)) != NULL) break; } if (sc->mpr_regs_resource == NULL) { mpr_printf(sc, "Cannot allocate PCI registers\n"); return (ENXIO); } sc->mpr_btag = rman_get_bustag(sc->mpr_regs_resource); sc->mpr_bhandle = rman_get_bushandle(sc->mpr_regs_resource); /* Allocate the parent DMA tag */ if (bus_dma_tag_create( bus_get_dma_tag(dev), /* parent */ 1, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ BUS_SPACE_MAXSIZE_32BIT,/* maxsize */ BUS_SPACE_UNRESTRICTED, /* nsegments */ BUS_SPACE_MAXSIZE_32BIT,/* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->mpr_parent_dmat)) { mpr_printf(sc, "Cannot allocate parent DMA tag\n"); mpr_pci_free(sc); return (ENOMEM); } if (((error = mpr_pci_alloc_interrupts(sc)) != 0) || ((error = mpr_attach(sc)) != 0)) mpr_pci_free(sc); return (error); } /* * Allocate, but don't assign interrupts early. Doing it before requesting * the IOCFacts message informs the firmware that we want to do MSI-X * multiqueue. We might not use all of the available messages, but there's * no reason to re-alloc if we don't. 
*/ int mpr_pci_alloc_interrupts(struct mpr_softc *sc) { device_t dev; int error, msgs; dev = sc->mpr_dev; error = 0; msgs = 0; if (sc->disable_msix == 0) { msgs = pci_msix_count(dev); mpr_dprint(sc, MPR_INIT, "Counted %d MSI-X messages\n", msgs); msgs = min(msgs, sc->max_msix); msgs = min(msgs, MPR_MSIX_MAX); msgs = min(msgs, 1); /* XXX */ if (msgs != 0) { mpr_dprint(sc, MPR_INIT, "Attempting to allocate %d " "MSI-X messages\n", msgs); error = mpr_alloc_msix(sc, msgs); } } if (((error != 0) || (msgs == 0)) && (sc->disable_msi == 0)) { msgs = pci_msi_count(dev); mpr_dprint(sc, MPR_INIT, "Counted %d MSI messages\n", msgs); msgs = min(msgs, MPR_MSI_MAX); if (msgs != 0) { mpr_dprint(sc, MPR_INIT, "Attempting to allocate %d " "MSI messages\n", MPR_MSI_MAX); error = mpr_alloc_msi(sc, MPR_MSI_MAX); } } if ((error != 0) || (msgs == 0)) { /* * If neither MSI nor MSI-X is available, assume legacy INTx. * This also implies that there will be only 1 queue. */ mpr_dprint(sc, MPR_INIT, "Falling back to legacy INTx\n"); sc->mpr_flags |= MPR_FLAGS_INTX; msgs = 1; } else sc->mpr_flags |= MPR_FLAGS_MSI; sc->msi_msgs = msgs; mpr_dprint(sc, MPR_INIT, "Allocated %d interrupts\n", msgs); return (error); } int mpr_pci_setup_interrupts(struct mpr_softc *sc) { device_t dev; struct mpr_queue *q; void *ihandler; int i, error, rid, initial_rid; dev = sc->mpr_dev; error = ENXIO; if (sc->mpr_flags & MPR_FLAGS_INTX) { initial_rid = 0; ihandler = mpr_intr; } else if (sc->mpr_flags & MPR_FLAGS_MSI) { initial_rid = 1; ihandler = mpr_intr_msi; } else { mpr_dprint(sc, MPR_ERROR|MPR_INIT, "Unable to set up interrupts\n"); return (EINVAL); } for (i = 0; i < sc->msi_msgs; i++) { q = &sc->queues[i]; rid = i + initial_rid; q->irq_rid = rid; q->irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &q->irq_rid, RF_ACTIVE); if (q->irq == NULL) { mpr_dprint(sc, MPR_ERROR|MPR_INIT, "Cannot allocate interrupt RID %d\n", rid); sc->msi_msgs = i; break; } error = bus_setup_intr(dev, q->irq, INTR_TYPE_BIO | INTR_MPSAFE,
NULL, ihandler, sc, &q->intrhand); if (error) { mpr_dprint(sc, MPR_ERROR|MPR_INIT, "Cannot setup interrupt RID %d\n", rid); sc->msi_msgs = i; break; } } mpr_dprint(sc, MPR_INIT, "Set up %d interrupts\n", sc->msi_msgs); return (error); } static int mpr_pci_detach(device_t dev) { struct mpr_softc *sc; int error; sc = device_get_softc(dev); if ((error = mpr_free(sc)) != 0) return (error); mpr_pci_free(sc); return (0); } void mpr_pci_free_interrupts(struct mpr_softc *sc) { struct mpr_queue *q; int i; if (sc->queues == NULL) return; for (i = 0; i < sc->msi_msgs; i++) { q = &sc->queues[i]; if (q->irq != NULL) { bus_teardown_intr(sc->mpr_dev, q->irq, q->intrhand); bus_release_resource(sc->mpr_dev, SYS_RES_IRQ, q->irq_rid, q->irq); } } } static void mpr_pci_free(struct mpr_softc *sc) { if (sc->mpr_parent_dmat != NULL) { bus_dma_tag_destroy(sc->mpr_parent_dmat); } mpr_pci_free_interrupts(sc); if (sc->mpr_flags & MPR_FLAGS_MSI) pci_release_msi(sc->mpr_dev); if (sc->mpr_regs_resource != NULL) { bus_release_resource(sc->mpr_dev, SYS_RES_MEMORY, sc->mpr_regs_rid, sc->mpr_regs_resource); } return; } static int mpr_pci_suspend(device_t dev) { return (EINVAL); } static int mpr_pci_resume(device_t dev) { return (EINVAL); } static int mpr_alloc_msix(struct mpr_softc *sc, int msgs) { int error; error = pci_alloc_msix(sc->mpr_dev, &msgs); return (error); } static int mpr_alloc_msi(struct mpr_softc *sc, int msgs) { int error; error = pci_alloc_msi(sc->mpr_dev, &msgs); return (error); } int mpr_pci_restore(struct mpr_softc *sc) { struct pci_devinfo *dinfo; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); dinfo = device_get_ivars(sc->mpr_dev); if (dinfo == NULL) { mpr_dprint(sc, MPR_FAULT, "%s: NULL dinfo\n", __func__); return (EINVAL); } pci_cfg_restore(sc->mpr_dev, dinfo); return (0); } Index: releng/12.1/sys/dev/mpr/mpr_sas.c =================================================================== --- releng/12.1/sys/dev/mpr/mpr_sas.c (revision 352760) +++ releng/12.1/sys/dev/mpr/mpr_sas.c 
(revision 352761) @@ -1,3898 +1,3931 @@ /*- * Copyright (c) 2009 Yahoo! Inc. * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2016 Avago Technologies + * Copyright 2000-2020 Broadcom Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. 
(LSI) MPT-Fusion Host Adapter FreeBSD * */ #include __FBSDID("$FreeBSD$"); /* Communications core for Avago Technologies (LSI) MPT3 */ /* TODO Move headers to mprvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #if __FreeBSD_version >= 900026 #include #endif #include #include #include #include #include #include #include #include #include #include #include #include #include #define MPRSAS_DISCOVERY_TIMEOUT 20 #define MPRSAS_MAX_DISCOVERY_TIMEOUTS 10 /* 200 seconds */ /* * static array to check SCSI OpCode for EEDP protection bits */ #define PRO_R MPI2_SCSIIO_EEDPFLAGS_CHECK_REMOVE_OP #define PRO_W MPI2_SCSIIO_EEDPFLAGS_INSERT_OP #define PRO_V MPI2_SCSIIO_EEDPFLAGS_INSERT_OP static uint8_t op_code_prot[256] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, PRO_R, 0, PRO_W, 0, 0, 0, PRO_W, PRO_V, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, PRO_W, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, PRO_R, 0, PRO_W, 0, 0, 0, PRO_W, PRO_V, 0, 0, 0, PRO_W, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, PRO_R, 0, PRO_W, 0, 0, 0, PRO_W, PRO_V, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; MALLOC_DEFINE(M_MPRSAS, "MPRSAS", "MPR SAS memory"); static void mprsas_remove_device(struct mpr_softc *, struct mpr_command *); static void mprsas_remove_complete(struct mpr_softc *, struct mpr_command *); static void 
mprsas_action(struct cam_sim *sim, union ccb *ccb); static void mprsas_poll(struct cam_sim *sim); static void mprsas_scsiio_timeout(void *data); static void mprsas_abort_complete(struct mpr_softc *sc, struct mpr_command *cm); static void mprsas_action_scsiio(struct mprsas_softc *, union ccb *); static void mprsas_scsiio_complete(struct mpr_softc *, struct mpr_command *); static void mprsas_action_resetdev(struct mprsas_softc *, union ccb *); static void mprsas_resetdev_complete(struct mpr_softc *, struct mpr_command *); static int mprsas_send_abort(struct mpr_softc *sc, struct mpr_command *tm, struct mpr_command *cm); static void mprsas_async(void *callback_arg, uint32_t code, struct cam_path *path, void *arg); #if (__FreeBSD_version < 901503) || \ ((__FreeBSD_version >= 1000000) && (__FreeBSD_version < 1000006)) static void mprsas_check_eedp(struct mpr_softc *sc, struct cam_path *path, struct ccb_getdev *cgd); static void mprsas_read_cap_done(struct cam_periph *periph, union ccb *done_ccb); #endif static int mprsas_send_portenable(struct mpr_softc *sc); static void mprsas_portenable_complete(struct mpr_softc *sc, struct mpr_command *cm); #if __FreeBSD_version >= 900026 static void mprsas_smpio_complete(struct mpr_softc *sc, struct mpr_command *cm); static void mprsas_send_smpcmd(struct mprsas_softc *sassc, union ccb *ccb, uint64_t sasaddr); static void mprsas_action_smpio(struct mprsas_softc *sassc, union ccb *ccb); #endif //FreeBSD_version >= 900026 struct mprsas_target * mprsas_find_target_by_handle(struct mprsas_softc *sassc, int start, uint16_t handle) { struct mprsas_target *target; int i; for (i = start; i < sassc->maxtargets; i++) { target = &sassc->targets[i]; if (target->handle == handle) return (target); } return (NULL); } /* we need to freeze the simq during attach and diag reset, to avoid failing * commands before device handles have been found by discovery. 
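The refcounted freeze described here (and implemented by mprsas_startup_increment()/mprsas_startup_decrement() below) can be modeled in isolation: freeze on the 0 to 1 transition, release on the 1 to 0 transition, so overlapping discovery actions never unfreeze the queue early. This is a minimal userland sketch, not driver code; the names (discovery_ctx, discovery_increment, discovery_decrement) are invented, and a plain bool stands in for the simq frozen state.

```c
#include <assert.h>
#include <stdbool.h>

struct discovery_ctx {
	unsigned refcount;	/* outstanding discovery actions */
	bool frozen;		/* stands in for the frozen simq */
};

/* Analogue of mprsas_startup_increment(): freeze only on 0 -> 1. */
static void
discovery_increment(struct discovery_ctx *ctx)
{
	if (ctx->refcount++ == 0)
		ctx->frozen = true;	/* xpt_freeze_simq() analogue */
}

/* Analogue of mprsas_startup_decrement(): release only on 1 -> 0. */
static void
discovery_decrement(struct discovery_ctx *ctx)
{
	if (--ctx->refcount == 0)
		ctx->frozen = false;	/* xpt_release_simq() analogue */
}
```

As in the driver, a second discovery action that starts before the first one finishes keeps the queue frozen; only the final decrement releases it.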
Since * discovery involves reading config pages and possibly sending commands, * discovery actions may continue even after we receive the end of discovery * event, so refcount discovery actions instead of assuming we can unfreeze * the simq when we get the event. */ void mprsas_startup_increment(struct mprsas_softc *sassc) { MPR_FUNCTRACE(sassc->sc); if ((sassc->flags & MPRSAS_IN_STARTUP) != 0) { if (sassc->startup_refcount++ == 0) { /* just starting, freeze the simq */ mpr_dprint(sassc->sc, MPR_INIT, "%s freezing simq\n", __func__); #if (__FreeBSD_version >= 1000039) || \ ((__FreeBSD_version < 1000000) && (__FreeBSD_version >= 902502)) xpt_hold_boot(); #endif xpt_freeze_simq(sassc->sim, 1); } mpr_dprint(sassc->sc, MPR_INIT, "%s refcount %u\n", __func__, sassc->startup_refcount); } } void mprsas_release_simq_reinit(struct mprsas_softc *sassc) { if (sassc->flags & MPRSAS_QUEUE_FROZEN) { sassc->flags &= ~MPRSAS_QUEUE_FROZEN; xpt_release_simq(sassc->sim, 1); mpr_dprint(sassc->sc, MPR_INFO, "Unfreezing SIM queue\n"); } } void mprsas_startup_decrement(struct mprsas_softc *sassc) { MPR_FUNCTRACE(sassc->sc); if ((sassc->flags & MPRSAS_IN_STARTUP) != 0) { if (--sassc->startup_refcount == 0) { /* finished all discovery-related actions, release * the simq and rescan for the latest topology. */ mpr_dprint(sassc->sc, MPR_INIT, "%s releasing simq\n", __func__); sassc->flags &= ~MPRSAS_IN_STARTUP; xpt_release_simq(sassc->sim, 1); #if (__FreeBSD_version >= 1000039) || \ ((__FreeBSD_version < 1000000) && (__FreeBSD_version >= 902502)) xpt_release_boot(); #else mprsas_rescan_target(sassc->sc, NULL); #endif } mpr_dprint(sassc->sc, MPR_INIT, "%s refcount %u\n", __func__, sassc->startup_refcount); } } -/* The firmware requires us to stop sending commands when we're doing task - * management, so refcount the TMs and keep the simq frozen when any are in +/* + * The firmware requires us to stop sending commands when we're doing task + * management. * use. 
+ * XXX The logic for serializing the device has been made lazy and moved to + * mprsas_prepare_for_tm(). */ struct mpr_command * mprsas_alloc_tm(struct mpr_softc *sc) { + MPI2_SCSI_TASK_MANAGE_REQUEST *req; struct mpr_command *tm; MPR_FUNCTRACE(sc); tm = mpr_alloc_high_priority_command(sc); + if (tm == NULL) + return (NULL); + + req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; + req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT; return tm; } void mprsas_free_tm(struct mpr_softc *sc, struct mpr_command *tm) { int target_id = 0xFFFFFFFF; MPR_FUNCTRACE(sc); if (tm == NULL) return; /* * For TM's the devq is frozen for the device. Unfreeze it here and * free the resources used for freezing the devq. Must clear the * INRESET flag as well or scsi I/O will not work. */ if (tm->cm_targ != NULL) { tm->cm_targ->flags &= ~MPRSAS_TARGET_INRESET; target_id = tm->cm_targ->tid; } if (tm->cm_ccb) { mpr_dprint(sc, MPR_INFO, "Unfreezing devq for target ID %d\n", target_id); xpt_release_devq(tm->cm_ccb->ccb_h.path, 1, TRUE); xpt_free_path(tm->cm_ccb->ccb_h.path); xpt_free_ccb(tm->cm_ccb); } mpr_free_high_priority_command(sc, tm); } void mprsas_rescan_target(struct mpr_softc *sc, struct mprsas_target *targ) { struct mprsas_softc *sassc = sc->sassc; path_id_t pathid; target_id_t targetid; union ccb *ccb; MPR_FUNCTRACE(sc); pathid = cam_sim_path(sassc->sim); if (targ == NULL) targetid = CAM_TARGET_WILDCARD; else targetid = targ - sassc->targets; /* * Allocate a CCB and schedule a rescan. 
*/ ccb = xpt_alloc_ccb_nowait(); if (ccb == NULL) { mpr_dprint(sc, MPR_ERROR, "unable to alloc CCB for rescan\n"); return; } if (xpt_create_path(&ccb->ccb_h.path, NULL, pathid, targetid, CAM_LUN_WILDCARD) != CAM_REQ_CMP) { mpr_dprint(sc, MPR_ERROR, "unable to create path for rescan\n"); xpt_free_ccb(ccb); return; } if (targetid == CAM_TARGET_WILDCARD) ccb->ccb_h.func_code = XPT_SCAN_BUS; else ccb->ccb_h.func_code = XPT_SCAN_TGT; mpr_dprint(sc, MPR_TRACE, "%s targetid %u\n", __func__, targetid); xpt_rescan(ccb); } static void mprsas_log_command(struct mpr_command *cm, u_int level, const char *fmt, ...) { struct sbuf sb; va_list ap; char str[192]; char path_str[64]; if (cm == NULL) return; /* No need to be in here if debugging isn't enabled */ if ((cm->cm_sc->mpr_debug & level) == 0) return; sbuf_new(&sb, str, sizeof(str), 0); va_start(ap, fmt); if (cm->cm_ccb != NULL) { xpt_path_string(cm->cm_ccb->csio.ccb_h.path, path_str, sizeof(path_str)); sbuf_cat(&sb, path_str); if (cm->cm_ccb->ccb_h.func_code == XPT_SCSI_IO) { scsi_command_string(&cm->cm_ccb->csio, &sb); sbuf_printf(&sb, "length %d ", cm->cm_ccb->csio.dxfer_len); } } else { sbuf_printf(&sb, "(noperiph:%s%d:%u:%u:%u): ", cam_sim_name(cm->cm_sc->sassc->sim), cam_sim_unit(cm->cm_sc->sassc->sim), cam_sim_bus(cm->cm_sc->sassc->sim), cm->cm_targ ? cm->cm_targ->tid : 0xFFFFFFFF, cm->cm_lun); } sbuf_printf(&sb, "SMID %u ", cm->cm_desc.Default.SMID); sbuf_vprintf(&sb, fmt, ap); sbuf_finish(&sb); mpr_print_field(cm->cm_sc, "%s", sbuf_data(&sb)); va_end(ap); } static void mprsas_remove_volume(struct mpr_softc *sc, struct mpr_command *tm) { MPI2_SCSI_TASK_MANAGE_REPLY *reply; struct mprsas_target *targ; uint16_t handle; MPR_FUNCTRACE(sc); reply = (MPI2_SCSI_TASK_MANAGE_REPLY *)tm->cm_reply; handle = (uint16_t)(uintptr_t)tm->cm_complete_data; targ = tm->cm_targ; if (reply == NULL) { /* XXX retry the remove after the diag reset completes? 
*/ mpr_dprint(sc, MPR_FAULT, "%s NULL reply resetting device " "0x%04x\n", __func__, handle); mprsas_free_tm(sc, tm); return; } if ((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) { mpr_dprint(sc, MPR_ERROR, "IOCStatus = 0x%x while resetting " "device 0x%x\n", le16toh(reply->IOCStatus), handle); } mpr_dprint(sc, MPR_XINFO, "Reset aborted %u commands\n", le32toh(reply->TerminationCount)); mpr_free_reply(sc, tm->cm_reply_data); tm->cm_reply = NULL; /* Ensures the reply won't get re-freed */ mpr_dprint(sc, MPR_XINFO, "clearing target %u handle 0x%04x\n", targ->tid, handle); /* * Don't clear target if remove fails because things will get confusing. * Leave the devname and sasaddr intact so that we know to avoid reusing * this target id if possible, and so we can assign the same target id * to this device if it comes back in the future. */ if ((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) == MPI2_IOCSTATUS_SUCCESS) { targ = tm->cm_targ; targ->handle = 0x0; targ->encl_handle = 0x0; targ->encl_level_valid = 0x0; targ->encl_level = 0x0; targ->connector_name[0] = ' '; targ->connector_name[1] = ' '; targ->connector_name[2] = ' '; targ->connector_name[3] = ' '; targ->encl_slot = 0x0; targ->exp_dev_handle = 0x0; targ->phy_num = 0x0; targ->linkrate = 0x0; targ->devinfo = 0x0; targ->flags = 0x0; targ->scsi_req_desc_type = 0; } mprsas_free_tm(sc, tm); } /* * No Need to call "MPI2_SAS_OP_REMOVE_DEVICE" For Volume removal. * Otherwise Volume Delete is same as Bare Drive Removal. */ void mprsas_prepare_volume_remove(struct mprsas_softc *sassc, uint16_t handle) { MPI2_SCSI_TASK_MANAGE_REQUEST *req; struct mpr_softc *sc; struct mpr_command *cm; struct mprsas_target *targ = NULL; MPR_FUNCTRACE(sassc->sc); sc = sassc->sc; targ = mprsas_find_target_by_handle(sassc, 0, handle); if (targ == NULL) { /* FIXME: what is the action? */ /* We don't know about this device? 
*/ mpr_dprint(sc, MPR_ERROR, "%s %d : invalid handle 0x%x \n", __func__,__LINE__, handle); return; } targ->flags |= MPRSAS_TARGET_INREMOVAL; cm = mprsas_alloc_tm(sc); if (cm == NULL) { mpr_dprint(sc, MPR_ERROR, "%s: command alloc failure\n", __func__); return; } mprsas_rescan_target(sc, targ); req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)cm->cm_req; req->DevHandle = targ->handle; - req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT; req->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET; - /* SAS Hard Link Reset / SATA Link Reset */ - req->MsgFlags = MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET; + if (!targ->is_nvme || sc->custom_nvme_tm_handling) { + /* SAS Hard Link Reset / SATA Link Reset */ + req->MsgFlags = MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET; + } else { + /* PCIe Protocol Level Reset*/ + req->MsgFlags = + MPI26_SCSITASKMGMT_MSGFLAGS_PROTOCOL_LVL_RST_PCIE; + } cm->cm_targ = targ; cm->cm_data = NULL; - cm->cm_desc.HighPriority.RequestFlags = - MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY; cm->cm_complete = mprsas_remove_volume; cm->cm_complete_data = (void *)(uintptr_t)handle; mpr_dprint(sc, MPR_INFO, "%s: Sending reset for target ID %d\n", __func__, targ->tid); mprsas_prepare_for_tm(sc, cm, targ, CAM_LUN_WILDCARD); mpr_map_command(sc, cm); } /* * The firmware performs debounce on the link to avoid transient link errors * and false removals. When it does decide that link has been lost and a * device needs to go away, it expects that the host will perform a target reset * and then an op remove. The reset has the side-effect of aborting any * outstanding requests for the device, which is required for the op-remove to * succeed. It's not clear if the host should check for the device coming back * alive after the reset. 
*/ void mprsas_prepare_remove(struct mprsas_softc *sassc, uint16_t handle) { MPI2_SCSI_TASK_MANAGE_REQUEST *req; struct mpr_softc *sc; - struct mpr_command *cm; + struct mpr_command *tm; struct mprsas_target *targ = NULL; MPR_FUNCTRACE(sassc->sc); sc = sassc->sc; targ = mprsas_find_target_by_handle(sassc, 0, handle); if (targ == NULL) { /* FIXME: what is the action? */ /* We don't know about this device? */ mpr_dprint(sc, MPR_ERROR, "%s : invalid handle 0x%x \n", __func__, handle); return; } targ->flags |= MPRSAS_TARGET_INREMOVAL; - cm = mprsas_alloc_tm(sc); - if (cm == NULL) { + tm = mprsas_alloc_tm(sc); + if (tm == NULL) { mpr_dprint(sc, MPR_ERROR, "%s: command alloc failure\n", __func__); return; } mprsas_rescan_target(sc, targ); - req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)cm->cm_req; + req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; memset(req, 0, sizeof(*req)); req->DevHandle = htole16(targ->handle); - req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT; req->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET; /* SAS Hard Link Reset / SATA Link Reset */ req->MsgFlags = MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET; - cm->cm_targ = targ; - cm->cm_data = NULL; - cm->cm_desc.HighPriority.RequestFlags = - MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY; - cm->cm_complete = mprsas_remove_device; - cm->cm_complete_data = (void *)(uintptr_t)handle; + tm->cm_targ = targ; + tm->cm_data = NULL; + tm->cm_complete = mprsas_remove_device; + tm->cm_complete_data = (void *)(uintptr_t)handle; mpr_dprint(sc, MPR_INFO, "%s: Sending reset for target ID %d\n", __func__, targ->tid); - mprsas_prepare_for_tm(sc, cm, targ, CAM_LUN_WILDCARD); + mprsas_prepare_for_tm(sc, tm, targ, CAM_LUN_WILDCARD); - mpr_map_command(sc, cm); + mpr_map_command(sc, tm); } static void mprsas_remove_device(struct mpr_softc *sc, struct mpr_command *tm) { MPI2_SCSI_TASK_MANAGE_REPLY *reply; MPI2_SAS_IOUNIT_CONTROL_REQUEST *req; struct mprsas_target *targ; struct mpr_command *next_cm; uint16_t handle; MPR_FUNCTRACE(sc); reply = 
(MPI2_SCSI_TASK_MANAGE_REPLY *)tm->cm_reply; handle = (uint16_t)(uintptr_t)tm->cm_complete_data; targ = tm->cm_targ; /* * Currently there should be no way we can hit this case. It only * happens when we have a failure to allocate chain frames, and * task management commands don't have S/G lists. */ if ((tm->cm_flags & MPR_CM_FLAGS_ERROR_MASK) != 0) { mpr_dprint(sc, MPR_ERROR, "%s: cm_flags = %#x for remove of " "handle %#04x! This should not happen!\n", __func__, tm->cm_flags, handle); } if (reply == NULL) { /* XXX retry the remove after the diag reset completes? */ mpr_dprint(sc, MPR_FAULT, "%s NULL reply resetting device " "0x%04x\n", __func__, handle); mprsas_free_tm(sc, tm); return; } if ((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) { mpr_dprint(sc, MPR_ERROR, "IOCStatus = 0x%x while resetting " "device 0x%x\n", le16toh(reply->IOCStatus), handle); } mpr_dprint(sc, MPR_XINFO, "Reset aborted %u commands\n", le32toh(reply->TerminationCount)); mpr_free_reply(sc, tm->cm_reply_data); tm->cm_reply = NULL; /* Ensures the reply won't get re-freed */ /* Reuse the existing command */ req = (MPI2_SAS_IOUNIT_CONTROL_REQUEST *)tm->cm_req; memset(req, 0, sizeof(*req)); req->Function = MPI2_FUNCTION_SAS_IO_UNIT_CONTROL; req->Operation = MPI2_SAS_OP_REMOVE_DEVICE; req->DevHandle = htole16(handle); tm->cm_data = NULL; tm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; tm->cm_complete = mprsas_remove_complete; tm->cm_complete_data = (void *)(uintptr_t)handle; mpr_map_command(sc, tm); mpr_dprint(sc, MPR_INFO, "clearing target %u handle 0x%04x\n", targ->tid, handle); if (targ->encl_level_valid) { mpr_dprint(sc, MPR_INFO, "At enclosure level %d, slot %d, " "connector name (%4s)\n", targ->encl_level, targ->encl_slot, targ->connector_name); } TAILQ_FOREACH_SAFE(tm, &targ->commands, cm_link, next_cm) { union ccb *ccb; mpr_dprint(sc, MPR_XINFO, "Completing missed command %p\n", tm); ccb = tm->cm_complete_data; 
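The removal flow in mprsas_prepare_remove()/mprsas_remove_device() is a two-step sequence: a target reset whose side effect aborts all outstanding requests, then a SAS_IO_UNIT_CONTROL op-remove issued by reusing the same command, with any straggler commands completed as "device not there". A small state-machine sketch of that ordering, under invented names (rm_dev, rm_send_reset, rm_reset_complete, rm_remove_complete) and with no claim of matching the driver's real structures:

```c
#include <assert.h>

enum rm_state { RM_PRESENT, RM_RESET_SENT, RM_REMOVE_SENT, RM_GONE };

struct rm_dev {
	enum rm_state state;
	int outstanding;	/* in-flight commands on the device */
};

/* Step 1: issue the target reset; nothing is removed yet. */
static void
rm_send_reset(struct rm_dev *d)
{
	d->state = RM_RESET_SENT;
}

/*
 * Reset completion: the reset aborted every outstanding request, which
 * is what allows the op-remove to succeed.  Returns how many commands
 * must now be completed as "device not there".
 */
static int
rm_reset_complete(struct rm_dev *d)
{
	int missed = d->outstanding;

	d->outstanding = 0;
	d->state = RM_REMOVE_SENT;	/* command reused for op-remove */
	return (missed);
}

/* Step 2: op-remove completion finally retires the handle. */
static void
rm_remove_complete(struct rm_dev *d)
{
	d->state = RM_GONE;
}
```

The ordering matters: attempting the op-remove before the reset has drained outstanding I/O is exactly what the firmware's expected sequence forbids.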
mprsas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); mprsas_scsiio_complete(sc, tm); } } static void mprsas_remove_complete(struct mpr_softc *sc, struct mpr_command *tm) { MPI2_SAS_IOUNIT_CONTROL_REPLY *reply; uint16_t handle; struct mprsas_target *targ; struct mprsas_lun *lun; MPR_FUNCTRACE(sc); reply = (MPI2_SAS_IOUNIT_CONTROL_REPLY *)tm->cm_reply; handle = (uint16_t)(uintptr_t)tm->cm_complete_data; /* * Currently there should be no way we can hit this case. It only * happens when we have a failure to allocate chain frames, and * task management commands don't have S/G lists. */ if ((tm->cm_flags & MPR_CM_FLAGS_ERROR_MASK) != 0) { mpr_dprint(sc, MPR_XINFO, "%s: cm_flags = %#x for remove of " "handle %#04x! This should not happen!\n", __func__, tm->cm_flags, handle); mprsas_free_tm(sc, tm); return; } if (reply == NULL) { /* most likely a chip reset */ mpr_dprint(sc, MPR_FAULT, "%s NULL reply removing device " "0x%04x\n", __func__, handle); mprsas_free_tm(sc, tm); return; } mpr_dprint(sc, MPR_XINFO, "%s on handle 0x%04x, IOCStatus= 0x%x\n", __func__, handle, le16toh(reply->IOCStatus)); /* * Don't clear target if remove fails because things will get confusing. * Leave the devname and sasaddr intact so that we know to avoid reusing * this target id if possible, and so we can assign the same target id * to this device if it comes back in the future. 
*/ if ((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) == MPI2_IOCSTATUS_SUCCESS) { targ = tm->cm_targ; targ->handle = 0x0; targ->encl_handle = 0x0; targ->encl_level_valid = 0x0; targ->encl_level = 0x0; targ->connector_name[0] = ' '; targ->connector_name[1] = ' '; targ->connector_name[2] = ' '; targ->connector_name[3] = ' '; targ->encl_slot = 0x0; targ->exp_dev_handle = 0x0; targ->phy_num = 0x0; targ->linkrate = 0x0; targ->devinfo = 0x0; targ->flags = 0x0; targ->scsi_req_desc_type = 0; while (!SLIST_EMPTY(&targ->luns)) { lun = SLIST_FIRST(&targ->luns); SLIST_REMOVE_HEAD(&targ->luns, lun_link); free(lun, M_MPR); } } mprsas_free_tm(sc, tm); } static int mprsas_register_events(struct mpr_softc *sc) { uint8_t events[16]; bzero(events, 16); setbit(events, MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE); setbit(events, MPI2_EVENT_SAS_DISCOVERY); setbit(events, MPI2_EVENT_SAS_BROADCAST_PRIMITIVE); setbit(events, MPI2_EVENT_SAS_INIT_DEVICE_STATUS_CHANGE); setbit(events, MPI2_EVENT_SAS_INIT_TABLE_OVERFLOW); setbit(events, MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST); setbit(events, MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE); setbit(events, MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST); setbit(events, MPI2_EVENT_IR_VOLUME); setbit(events, MPI2_EVENT_IR_PHYSICAL_DISK); setbit(events, MPI2_EVENT_IR_OPERATION_STATUS); setbit(events, MPI2_EVENT_TEMP_THRESHOLD); setbit(events, MPI2_EVENT_SAS_DEVICE_DISCOVERY_ERROR); if (sc->facts->MsgVersion >= MPI2_VERSION_02_06) { setbit(events, MPI2_EVENT_ACTIVE_CABLE_EXCEPTION); if (sc->mpr_flags & MPR_FLAGS_GEN35_IOC) { setbit(events, MPI2_EVENT_PCIE_DEVICE_STATUS_CHANGE); setbit(events, MPI2_EVENT_PCIE_ENUMERATION); setbit(events, MPI2_EVENT_PCIE_TOPOLOGY_CHANGE_LIST); } } mpr_register_events(sc, events, mprsas_evt_handler, NULL, &sc->sassc->mprsas_eh); return (0); } int mpr_attach_sas(struct mpr_softc *sc) { struct mprsas_softc *sassc; cam_status status; int unit, error = 0, reqs; MPR_FUNCTRACE(sc); mpr_dprint(sc, MPR_INIT, "%s entered\n", __func__); sassc = 
malloc(sizeof(struct mprsas_softc), M_MPR, M_WAITOK|M_ZERO); if (!sassc) { mpr_dprint(sc, MPR_INIT|MPR_ERROR, "Cannot allocate SAS subsystem memory\n"); return (ENOMEM); } /* * XXX MaxTargets could change during a reinit. Since we don't * resize the targets[] array during such an event, cache the value * of MaxTargets here so that we don't get into trouble later. This * should move into the reinit logic. */ sassc->maxtargets = sc->facts->MaxTargets + sc->facts->MaxVolumes; sassc->targets = malloc(sizeof(struct mprsas_target) * sassc->maxtargets, M_MPR, M_WAITOK|M_ZERO); if (!sassc->targets) { mpr_dprint(sc, MPR_INIT|MPR_ERROR, "Cannot allocate SAS target memory\n"); free(sassc, M_MPR); return (ENOMEM); } sc->sassc = sassc; sassc->sc = sc; reqs = sc->num_reqs - sc->num_prireqs - 1; if ((sassc->devq = cam_simq_alloc(reqs)) == NULL) { mpr_dprint(sc, MPR_INIT|MPR_ERROR, "Cannot allocate SIMQ\n"); error = ENOMEM; goto out; } unit = device_get_unit(sc->mpr_dev); sassc->sim = cam_sim_alloc(mprsas_action, mprsas_poll, "mpr", sassc, unit, &sc->mpr_mtx, reqs, reqs, sassc->devq); if (sassc->sim == NULL) { mpr_dprint(sc, MPR_INIT|MPR_ERROR, "Cannot allocate SIM\n"); error = EINVAL; goto out; } TAILQ_INIT(&sassc->ev_queue); /* Initialize taskqueue for Event Handling */ TASK_INIT(&sassc->ev_task, 0, mprsas_firmware_event_work, sc); sassc->ev_tq = taskqueue_create("mpr_taskq", M_NOWAIT | M_ZERO, taskqueue_thread_enqueue, &sassc->ev_tq); taskqueue_start_threads(&sassc->ev_tq, 1, PRIBIO, "%s taskq", device_get_nameunit(sc->mpr_dev)); mpr_lock(sc); /* * XXX There should be a bus for every port on the adapter, but since * we're just going to fake the topology for now, we'll pretend that * everything is just a target on a single bus. */ if ((error = xpt_bus_register(sassc->sim, sc->mpr_dev, 0)) != 0) { mpr_dprint(sc, MPR_INIT|MPR_ERROR, "Error %d registering SCSI bus\n", error); mpr_unlock(sc); goto out; } /* * Assume that discovery events will start right away. 
* * Hold off boot until discovery is complete. */ sassc->flags |= MPRSAS_IN_STARTUP | MPRSAS_IN_DISCOVERY; sc->sassc->startup_refcount = 0; mprsas_startup_increment(sassc); callout_init(&sassc->discovery_callout, 1 /*mpsafe*/); /* * Register for async events so we can determine the EEDP * capabilities of devices. */ status = xpt_create_path(&sassc->path, /*periph*/NULL, cam_sim_path(sc->sassc->sim), CAM_TARGET_WILDCARD, CAM_LUN_WILDCARD); if (status != CAM_REQ_CMP) { mpr_dprint(sc, MPR_INIT|MPR_ERROR, "Error %#x creating sim path\n", status); sassc->path = NULL; } else { int event; #if (__FreeBSD_version >= 1000006) || \ ((__FreeBSD_version >= 901503) && (__FreeBSD_version < 1000000)) event = AC_ADVINFO_CHANGED | AC_FOUND_DEVICE; #else event = AC_FOUND_DEVICE; #endif /* * Prior to the CAM locking improvements, we can't call * xpt_register_async() with a particular path specified. * * If a path isn't specified, xpt_register_async() will * generate a wildcard path and acquire the XPT lock while * it calls xpt_action() to execute the XPT_SASYNC_CB CCB. * It will then drop the XPT lock once that is done. * * If a path is specified for xpt_register_async(), it will * not acquire and drop the XPT lock around the call to * xpt_action(). xpt_action() asserts that the caller * holds the SIM lock, so the SIM lock has to be held when * calling xpt_register_async() when the path is specified. * * But xpt_register_async calls xpt_for_all_devices(), * which calls xptbustraverse(), which will acquire each * SIM lock. When it traverses our particular bus, it will * necessarily acquire the SIM lock, which will lead to a * recursive lock acquisition. * * The CAM locking changes fix this problem by acquiring * the XPT topology lock around bus traversal in * xptbustraverse(), so the caller can hold the SIM lock * and it does not cause a recursive lock acquisition. 
* * These __FreeBSD_version values are approximate, especially * for stable/10, which is two months later than the actual * change. */ #if (__FreeBSD_version < 1000703) || \ ((__FreeBSD_version >= 1100000) && (__FreeBSD_version < 1100002)) mpr_unlock(sc); status = xpt_register_async(event, mprsas_async, sc, NULL); mpr_lock(sc); #else status = xpt_register_async(event, mprsas_async, sc, sassc->path); #endif if (status != CAM_REQ_CMP) { mpr_dprint(sc, MPR_ERROR, "Error %#x registering async handler for " "AC_ADVINFO_CHANGED events\n", status); xpt_free_path(sassc->path); sassc->path = NULL; } } if (status != CAM_REQ_CMP) { /* * EEDP use is the exception, not the rule. * Warn the user, but do not fail to attach. */ mpr_printf(sc, "EEDP capabilities disabled.\n"); } mpr_unlock(sc); mprsas_register_events(sc); out: if (error) mpr_detach_sas(sc); mpr_dprint(sc, MPR_INIT, "%s exit, error= %d\n", __func__, error); return (error); } int mpr_detach_sas(struct mpr_softc *sc) { struct mprsas_softc *sassc; struct mprsas_lun *lun, *lun_tmp; struct mprsas_target *targ; int i; MPR_FUNCTRACE(sc); if (sc->sassc == NULL) return (0); sassc = sc->sassc; mpr_deregister_events(sc, sassc->mprsas_eh); /* * Drain and free the event handling taskqueue with the lock * unheld so that any parallel processing tasks drain properly * without deadlocking. */ if (sassc->ev_tq != NULL) taskqueue_free(sassc->ev_tq); /* Make sure CAM doesn't wedge if we had to bail out early. 
*/ mpr_lock(sc); while (sassc->startup_refcount != 0) mprsas_startup_decrement(sassc); /* Deregister our async handler */ if (sassc->path != NULL) { xpt_register_async(0, mprsas_async, sc, sassc->path); xpt_free_path(sassc->path); sassc->path = NULL; } if (sassc->flags & MPRSAS_IN_STARTUP) xpt_release_simq(sassc->sim, 1); if (sassc->sim != NULL) { xpt_bus_deregister(cam_sim_path(sassc->sim)); cam_sim_free(sassc->sim, FALSE); } mpr_unlock(sc); if (sassc->devq != NULL) cam_simq_free(sassc->devq); for (i = 0; i < sassc->maxtargets; i++) { targ = &sassc->targets[i]; SLIST_FOREACH_SAFE(lun, &targ->luns, lun_link, lun_tmp) { free(lun, M_MPR); } } free(sassc->targets, M_MPR); free(sassc, M_MPR); sc->sassc = NULL; return (0); } void mprsas_discovery_end(struct mprsas_softc *sassc) { struct mpr_softc *sc = sassc->sc; MPR_FUNCTRACE(sc); if (sassc->flags & MPRSAS_DISCOVERY_TIMEOUT_PENDING) callout_stop(&sassc->discovery_callout); /* * After discovery has completed, check the mapping table for any * missing devices and update their missing counts. Only do this once * whenever the driver is initialized so that missing counts aren't * updated unnecessarily. Note that just because discovery has * completed doesn't mean that events have been processed yet. The * check_devices function is a callout timer that checks if ALL devices * are missing. If so, it will wait a little longer for events to * complete and keep resetting itself until some device in the mapping * table is not missing, meaning that event processing has started. */ if (sc->track_mapping_events) { mpr_dprint(sc, MPR_XINFO | MPR_MAPPING, "Discovery has " "completed. 
Check for missing devices in the mapping " "table.\n"); callout_reset(&sc->device_check_callout, MPR_MISSING_CHECK_DELAY * hz, mpr_mapping_check_devices, sc); } } static void mprsas_action(struct cam_sim *sim, union ccb *ccb) { struct mprsas_softc *sassc; sassc = cam_sim_softc(sim); MPR_FUNCTRACE(sassc->sc); mpr_dprint(sassc->sc, MPR_TRACE, "ccb func_code 0x%x\n", ccb->ccb_h.func_code); mtx_assert(&sassc->sc->mpr_mtx, MA_OWNED); switch (ccb->ccb_h.func_code) { case XPT_PATH_INQ: { struct ccb_pathinq *cpi = &ccb->cpi; struct mpr_softc *sc = sassc->sc; cpi->version_num = 1; cpi->hba_inquiry = PI_SDTR_ABLE|PI_TAG_ABLE|PI_WIDE_16; cpi->target_sprt = 0; #if (__FreeBSD_version >= 1000039) || \ ((__FreeBSD_version < 1000000) && (__FreeBSD_version >= 902502)) cpi->hba_misc = PIM_NOBUSRESET | PIM_UNMAPPED | PIM_NOSCAN; #else cpi->hba_misc = PIM_NOBUSRESET | PIM_UNMAPPED; #endif cpi->hba_eng_cnt = 0; cpi->max_target = sassc->maxtargets - 1; cpi->max_lun = 255; /* * initiator_id is set here to an ID outside the set of valid * target IDs (including volumes). */ cpi->initiator_id = sassc->maxtargets; strlcpy(cpi->sim_vid, "FreeBSD", SIM_IDLEN); strlcpy(cpi->hba_vid, "Avago Tech", HBA_IDLEN); strlcpy(cpi->dev_name, cam_sim_name(sim), DEV_IDLEN); cpi->unit_number = cam_sim_unit(sim); cpi->bus_id = cam_sim_bus(sim); /* * XXXSLM-I think this needs to change based on config page or * something instead of hardcoded to 150000. 
*/ cpi->base_transfer_speed = 150000; cpi->transport = XPORT_SAS; cpi->transport_version = 0; cpi->protocol = PROTO_SCSI; cpi->protocol_version = SCSI_REV_SPC; cpi->maxio = sc->maxio; mprsas_set_ccbstatus(ccb, CAM_REQ_CMP); break; } case XPT_GET_TRAN_SETTINGS: { struct ccb_trans_settings *cts; struct ccb_trans_settings_sas *sas; struct ccb_trans_settings_scsi *scsi; struct mprsas_target *targ; cts = &ccb->cts; sas = &cts->xport_specific.sas; scsi = &cts->proto_specific.scsi; KASSERT(cts->ccb_h.target_id < sassc->maxtargets, ("Target %d out of bounds in XPT_GET_TRAN_SETTINGS\n", cts->ccb_h.target_id)); targ = &sassc->targets[cts->ccb_h.target_id]; if (targ->handle == 0x0) { mprsas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); break; } cts->protocol_version = SCSI_REV_SPC2; cts->transport = XPORT_SAS; cts->transport_version = 0; sas->valid = CTS_SAS_VALID_SPEED; switch (targ->linkrate) { case 0x08: sas->bitrate = 150000; break; case 0x09: sas->bitrate = 300000; break; case 0x0a: sas->bitrate = 600000; break; case 0x0b: sas->bitrate = 1200000; break; default: sas->valid = 0; } cts->protocol = PROTO_SCSI; scsi->valid = CTS_SCSI_VALID_TQ; scsi->flags = CTS_SCSI_FLAGS_TAG_ENB; mprsas_set_ccbstatus(ccb, CAM_REQ_CMP); break; } case XPT_CALC_GEOMETRY: cam_calc_geometry(&ccb->ccg, /*extended*/1); mprsas_set_ccbstatus(ccb, CAM_REQ_CMP); break; case XPT_RESET_DEV: mpr_dprint(sassc->sc, MPR_XINFO, "mprsas_action " "XPT_RESET_DEV\n"); mprsas_action_resetdev(sassc, ccb); return; case XPT_RESET_BUS: case XPT_ABORT: case XPT_TERM_IO: mpr_dprint(sassc->sc, MPR_XINFO, "mprsas_action faking success " "for abort or reset\n"); mprsas_set_ccbstatus(ccb, CAM_REQ_CMP); break; case XPT_SCSI_IO: mprsas_action_scsiio(sassc, ccb); return; #if __FreeBSD_version >= 900026 case XPT_SMP_IO: mprsas_action_smpio(sassc, ccb); return; #endif default: mprsas_set_ccbstatus(ccb, CAM_FUNC_NOTAVAIL); break; } xpt_done(ccb); } static void mprsas_announce_reset(struct mpr_softc *sc, uint32_t ac_code, target_id_t 
target_id, lun_id_t lun_id) { path_id_t path_id = cam_sim_path(sc->sassc->sim); struct cam_path *path; mpr_dprint(sc, MPR_XINFO, "%s code %x target %d lun %jx\n", __func__, ac_code, target_id, (uintmax_t)lun_id); if (xpt_create_path(&path, NULL, path_id, target_id, lun_id) != CAM_REQ_CMP) { mpr_dprint(sc, MPR_ERROR, "unable to create path for reset " "notification\n"); return; } xpt_async(ac_code, path, NULL); xpt_free_path(path); } static void mprsas_complete_all_commands(struct mpr_softc *sc) { struct mpr_command *cm; int i; int completed; MPR_FUNCTRACE(sc); mtx_assert(&sc->mpr_mtx, MA_OWNED); /* complete all commands with a NULL reply */ for (i = 1; i < sc->num_reqs; i++) { cm = &sc->commands[i]; if (cm->cm_state == MPR_CM_STATE_FREE) continue; cm->cm_state = MPR_CM_STATE_BUSY; cm->cm_reply = NULL; completed = 0; + if (cm->cm_flags & MPR_CM_FLAGS_SATA_ID_TIMEOUT) { + MPASS(cm->cm_data); + free(cm->cm_data, M_MPR); + cm->cm_data = NULL; + } + if (cm->cm_flags & MPR_CM_FLAGS_POLLED) cm->cm_flags |= MPR_CM_FLAGS_COMPLETE; if (cm->cm_complete != NULL) { mprsas_log_command(cm, MPR_RECOVERY, "completing cm %p state %x ccb %p for diag reset\n", cm, cm->cm_state, cm->cm_ccb); cm->cm_complete(sc, cm); completed = 1; } else if (cm->cm_flags & MPR_CM_FLAGS_WAKEUP) { mprsas_log_command(cm, MPR_RECOVERY, "waking up cm %p state %x ccb %p for diag reset\n", cm, cm->cm_state, cm->cm_ccb); wakeup(cm); completed = 1; } if ((completed == 0) && (cm->cm_state != MPR_CM_STATE_FREE)) { /* this should never happen, but if it does, log */ mprsas_log_command(cm, MPR_RECOVERY, "cm %p state %x flags 0x%x ccb %p during diag " "reset\n", cm, cm->cm_state, cm->cm_flags, cm->cm_ccb); } } sc->io_cmds_active = 0; } void mprsas_handle_reinit(struct mpr_softc *sc) { int i; /* Go back into startup mode and freeze the simq, so that CAM * doesn't send any commands until after we've rediscovered all * targets and found the proper device handles for them. 
* * After the reset, portenable will trigger discovery, and after all * discovery-related activities have finished, the simq will be * released. */ mpr_dprint(sc, MPR_INIT, "%s startup\n", __func__); sc->sassc->flags |= MPRSAS_IN_STARTUP; sc->sassc->flags |= MPRSAS_IN_DISCOVERY; mprsas_startup_increment(sc->sassc); /* notify CAM of a bus reset */ mprsas_announce_reset(sc, AC_BUS_RESET, CAM_TARGET_WILDCARD, CAM_LUN_WILDCARD); /* complete and cleanup after all outstanding commands */ mprsas_complete_all_commands(sc); mpr_dprint(sc, MPR_INIT, "%s startup %u after command completion\n", __func__, sc->sassc->startup_refcount); /* zero all the target handles, since they may change after the * reset, and we have to rediscover all the targets and use the new * handles. */ for (i = 0; i < sc->sassc->maxtargets; i++) { if (sc->sassc->targets[i].outstanding != 0) mpr_dprint(sc, MPR_INIT, "target %u outstanding %u\n", i, sc->sassc->targets[i].outstanding); sc->sassc->targets[i].handle = 0x0; sc->sassc->targets[i].exp_dev_handle = 0x0; sc->sassc->targets[i].outstanding = 0; sc->sassc->targets[i].flags = MPRSAS_TARGET_INDIAGRESET; } } static void mprsas_tm_timeout(void *data) { struct mpr_command *tm = data; struct mpr_softc *sc = tm->cm_sc; mtx_assert(&sc->mpr_mtx, MA_OWNED); mprsas_log_command(tm, MPR_INFO|MPR_RECOVERY, "task mgmt %p timed " "out\n", tm); KASSERT(tm->cm_state == MPR_CM_STATE_INQUEUE, ("command not inqueue\n")); tm->cm_state = MPR_CM_STATE_BUSY; mpr_reinit(sc); } static void mprsas_logical_unit_reset_complete(struct mpr_softc *sc, struct mpr_command *tm) { MPI2_SCSI_TASK_MANAGE_REPLY *reply; MPI2_SCSI_TASK_MANAGE_REQUEST *req; unsigned int cm_count = 0; struct mpr_command *cm; struct mprsas_target *targ; callout_stop(&tm->cm_callout); req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; reply = (MPI2_SCSI_TASK_MANAGE_REPLY *)tm->cm_reply; targ = tm->cm_targ; /* * Currently there should be no way we can hit this case. 
It only * happens when we have a failure to allocate chain frames, and * task management commands don't have S/G lists. */ if ((tm->cm_flags & MPR_CM_FLAGS_ERROR_MASK) != 0) { mpr_dprint(sc, MPR_RECOVERY|MPR_ERROR, "%s: cm_flags = %#x for LUN reset! " "This should not happen!\n", __func__, tm->cm_flags); mprsas_free_tm(sc, tm); return; } if (reply == NULL) { mpr_dprint(sc, MPR_RECOVERY, "NULL reset reply for tm %p\n", tm); if ((sc->mpr_flags & MPR_FLAGS_DIAGRESET) != 0) { /* this completion was due to a reset, just cleanup */ mpr_dprint(sc, MPR_RECOVERY, "Hardware undergoing " "reset, ignoring NULL LUN reset reply\n"); targ->tm = NULL; mprsas_free_tm(sc, tm); } else { /* we should have gotten a reply. */ mpr_dprint(sc, MPR_INFO|MPR_RECOVERY, "NULL reply on " "LUN reset attempt, resetting controller\n"); mpr_reinit(sc); } return; } mpr_dprint(sc, MPR_RECOVERY, "logical unit reset status 0x%x code 0x%x count %u\n", le16toh(reply->IOCStatus), le32toh(reply->ResponseCode), le32toh(reply->TerminationCount)); /* * See if there are any outstanding commands for this LUN. * This could be made more efficient by using a per-LU data * structure of some sort. */ TAILQ_FOREACH(cm, &targ->commands, cm_link) { if (cm->cm_lun == tm->cm_lun) cm_count++; } if (cm_count == 0) { mpr_dprint(sc, MPR_RECOVERY|MPR_INFO, "Finished recovery after LUN reset for target %u\n", targ->tid); mprsas_announce_reset(sc, AC_SENT_BDR, targ->tid, tm->cm_lun); /* * We've finished recovery for this logical unit. check and * see if some other logical unit has a timedout command * that needs to be processed. */ cm = TAILQ_FIRST(&targ->timedout_commands); if (cm) { mpr_dprint(sc, MPR_INFO|MPR_RECOVERY, "More commands to abort for target %u\n", targ->tid); mprsas_send_abort(sc, tm, cm); } else { targ->tm = NULL; mprsas_free_tm(sc, tm); } } else { /* if we still have commands for this LUN, the reset * effectively failed, regardless of the status reported. * Escalate to a target reset. 
*/ mpr_dprint(sc, MPR_INFO|MPR_RECOVERY, "logical unit reset complete for target %u, but still " "have %u command(s), sending target reset\n", targ->tid, cm_count); - mprsas_send_reset(sc, tm, - MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET); + if (!targ->is_nvme || sc->custom_nvme_tm_handling) + mprsas_send_reset(sc, tm, + MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET); + else + mpr_reinit(sc); } } static void mprsas_target_reset_complete(struct mpr_softc *sc, struct mpr_command *tm) { MPI2_SCSI_TASK_MANAGE_REPLY *reply; MPI2_SCSI_TASK_MANAGE_REQUEST *req; struct mprsas_target *targ; callout_stop(&tm->cm_callout); req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; reply = (MPI2_SCSI_TASK_MANAGE_REPLY *)tm->cm_reply; targ = tm->cm_targ; /* * Currently there should be no way we can hit this case. It only * happens when we have a failure to allocate chain frames, and * task management commands don't have S/G lists. */ if ((tm->cm_flags & MPR_CM_FLAGS_ERROR_MASK) != 0) { mpr_dprint(sc, MPR_ERROR, "%s: cm_flags = %#x for target " "reset! This should not happen!\n", __func__, tm->cm_flags); mprsas_free_tm(sc, tm); return; } if (reply == NULL) { mpr_dprint(sc, MPR_RECOVERY, "NULL target reset reply for tm %p TaskMID %u\n", tm, le16toh(req->TaskMID)); if ((sc->mpr_flags & MPR_FLAGS_DIAGRESET) != 0) { /* this completion was due to a reset, just cleanup */ mpr_dprint(sc, MPR_RECOVERY, "Hardware undergoing " "reset, ignoring NULL target reset reply\n"); targ->tm = NULL; mprsas_free_tm(sc, tm); } else { /* we should have gotten a reply. */ mpr_dprint(sc, MPR_INFO|MPR_RECOVERY, "NULL reply on " "target reset attempt, resetting controller\n"); mpr_reinit(sc); } return; } mpr_dprint(sc, MPR_RECOVERY, "target reset status 0x%x code 0x%x count %u\n", le16toh(reply->IOCStatus), le32toh(reply->ResponseCode), le32toh(reply->TerminationCount)); if (targ->outstanding == 0) { /* * We've finished recovery for this target and all * of its logical units. 
*/ mpr_dprint(sc, MPR_RECOVERY|MPR_INFO, "Finished reset recovery for target %u\n", targ->tid); mprsas_announce_reset(sc, AC_SENT_BDR, tm->cm_targ->tid, CAM_LUN_WILDCARD); targ->tm = NULL; mprsas_free_tm(sc, tm); } else { /* * After a target reset, if this target still has * outstanding commands, the reset effectively failed, * regardless of the status reported. escalate. */ mpr_dprint(sc, MPR_INFO|MPR_RECOVERY, "Target reset complete for target %u, but still have %u " "command(s), resetting controller\n", targ->tid, targ->outstanding); mpr_reinit(sc); } } #define MPR_RESET_TIMEOUT 30 int mprsas_send_reset(struct mpr_softc *sc, struct mpr_command *tm, uint8_t type) { MPI2_SCSI_TASK_MANAGE_REQUEST *req; struct mprsas_target *target; - int err; + int err, timeout; target = tm->cm_targ; if (target->handle == 0) { mpr_dprint(sc, MPR_ERROR, "%s null devhandle for target_id " "%d\n", __func__, target->tid); return -1; } req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; req->DevHandle = htole16(target->handle); - req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT; req->TaskType = type; + if (!target->is_nvme || sc->custom_nvme_tm_handling) { + timeout = MPR_RESET_TIMEOUT; + /* + * Target reset method = + * SAS Hard Link Reset / SATA Link Reset + */ + req->MsgFlags = MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET; + } else { + timeout = (target->controller_reset_timeout) ? 
( + target->controller_reset_timeout) : (MPR_RESET_TIMEOUT); + /* PCIe Protocol Level Reset*/ + req->MsgFlags = + MPI26_SCSITASKMGMT_MSGFLAGS_PROTOCOL_LVL_RST_PCIE; + } + if (type == MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET) { /* XXX Need to handle invalid LUNs */ MPR_SET_LUN(req->LUN, tm->cm_lun); tm->cm_targ->logical_unit_resets++; mpr_dprint(sc, MPR_RECOVERY|MPR_INFO, "Sending logical unit reset to target %u lun %d\n", target->tid, tm->cm_lun); tm->cm_complete = mprsas_logical_unit_reset_complete; mprsas_prepare_for_tm(sc, tm, target, tm->cm_lun); } else if (type == MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET) { - /* - * Target reset method = - * SAS Hard Link Reset / SATA Link Reset - */ - req->MsgFlags = MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET; tm->cm_targ->target_resets++; mpr_dprint(sc, MPR_RECOVERY|MPR_INFO, "Sending target reset to target %u\n", target->tid); tm->cm_complete = mprsas_target_reset_complete; mprsas_prepare_for_tm(sc, tm, target, CAM_LUN_WILDCARD); } else { mpr_dprint(sc, MPR_ERROR, "unexpected reset type 0x%x\n", type); return -1; } if (target->encl_level_valid) { mpr_dprint(sc, MPR_RECOVERY|MPR_INFO, "At enclosure level %d, slot %d, connector name (%4s)\n", target->encl_level, target->encl_slot, target->connector_name); } tm->cm_data = NULL; - tm->cm_desc.HighPriority.RequestFlags = - MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY; tm->cm_complete_data = (void *)tm; - callout_reset(&tm->cm_callout, MPR_RESET_TIMEOUT * hz, + callout_reset(&tm->cm_callout, timeout * hz, mprsas_tm_timeout, tm); err = mpr_map_command(sc, tm); if (err) mpr_dprint(sc, MPR_ERROR|MPR_RECOVERY, "error %d sending reset type %u\n", err, type); return err; } static void mprsas_abort_complete(struct mpr_softc *sc, struct mpr_command *tm) { struct mpr_command *cm; MPI2_SCSI_TASK_MANAGE_REPLY *reply; MPI2_SCSI_TASK_MANAGE_REQUEST *req; struct mprsas_target *targ; callout_stop(&tm->cm_callout); req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; reply = (MPI2_SCSI_TASK_MANAGE_REPLY 
*)tm->cm_reply; targ = tm->cm_targ; /* * Currently there should be no way we can hit this case. It only * happens when we have a failure to allocate chain frames, and * task management commands don't have S/G lists. */ if ((tm->cm_flags & MPR_CM_FLAGS_ERROR_MASK) != 0) { mpr_dprint(sc, MPR_RECOVERY|MPR_ERROR, "cm_flags = %#x for abort %p TaskMID %u!\n", tm->cm_flags, tm, le16toh(req->TaskMID)); mprsas_free_tm(sc, tm); return; } if (reply == NULL) { mpr_dprint(sc, MPR_RECOVERY, "NULL abort reply for tm %p TaskMID %u\n", tm, le16toh(req->TaskMID)); if ((sc->mpr_flags & MPR_FLAGS_DIAGRESET) != 0) { /* this completion was due to a reset, just cleanup */ mpr_dprint(sc, MPR_RECOVERY, "Hardware undergoing " "reset, ignoring NULL abort reply\n"); targ->tm = NULL; mprsas_free_tm(sc, tm); } else { /* we should have gotten a reply. */ mpr_dprint(sc, MPR_INFO|MPR_RECOVERY, "NULL reply on " "abort attempt, resetting controller\n"); mpr_reinit(sc); } return; } mpr_dprint(sc, MPR_RECOVERY, "abort TaskMID %u status 0x%x code 0x%x count %u\n", le16toh(req->TaskMID), le16toh(reply->IOCStatus), le32toh(reply->ResponseCode), le32toh(reply->TerminationCount)); cm = TAILQ_FIRST(&tm->cm_targ->timedout_commands); if (cm == NULL) { /* * if there are no more timedout commands, we're done with * error recovery for this target. */ mpr_dprint(sc, MPR_INFO|MPR_RECOVERY, "Finished abort recovery for target %u\n", targ->tid); targ->tm = NULL; mprsas_free_tm(sc, tm); } else if (le16toh(req->TaskMID) != cm->cm_desc.Default.SMID) { /* abort success, but we have more timedout commands to abort */ mpr_dprint(sc, MPR_INFO|MPR_RECOVERY, "Continuing abort recovery for target %u\n", targ->tid); mprsas_send_abort(sc, tm, cm); } else { /* * we didn't get a command completion, so the abort * failed as far as we're concerned. escalate. 
*/ mpr_dprint(sc, MPR_INFO|MPR_RECOVERY, "Abort failed for target %u, sending logical unit reset\n", targ->tid); mprsas_send_reset(sc, tm, MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET); } } #define MPR_ABORT_TIMEOUT 5 static int mprsas_send_abort(struct mpr_softc *sc, struct mpr_command *tm, struct mpr_command *cm) { MPI2_SCSI_TASK_MANAGE_REQUEST *req; struct mprsas_target *targ; - int err; + int err, timeout; targ = cm->cm_targ; if (targ->handle == 0) { mpr_dprint(sc, MPR_ERROR|MPR_RECOVERY, "%s null devhandle for target_id %d\n", __func__, cm->cm_ccb->ccb_h.target_id); return -1; } mprsas_log_command(cm, MPR_RECOVERY|MPR_INFO, "Aborting command %p\n", cm); req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; req->DevHandle = htole16(targ->handle); - req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT; req->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK; /* XXX Need to handle invalid LUNs */ MPR_SET_LUN(req->LUN, cm->cm_ccb->ccb_h.target_lun); req->TaskMID = htole16(cm->cm_desc.Default.SMID); tm->cm_data = NULL; - tm->cm_desc.HighPriority.RequestFlags = - MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY; tm->cm_complete = mprsas_abort_complete; tm->cm_complete_data = (void *)tm; tm->cm_targ = cm->cm_targ; tm->cm_lun = cm->cm_lun; - callout_reset(&tm->cm_callout, MPR_ABORT_TIMEOUT * hz, + if (!targ->is_nvme || sc->custom_nvme_tm_handling) + timeout = MPR_ABORT_TIMEOUT; + else + timeout = sc->nvme_abort_timeout; + + callout_reset(&tm->cm_callout, timeout * hz, mprsas_tm_timeout, tm); targ->aborts++; mprsas_prepare_for_tm(sc, tm, targ, tm->cm_lun); err = mpr_map_command(sc, tm); if (err) mpr_dprint(sc, MPR_ERROR|MPR_RECOVERY, "error %d sending abort for cm %p SMID %u\n", err, cm, req->TaskMID); return err; } static void mprsas_scsiio_timeout(void *data) { sbintime_t elapsed, now; union ccb *ccb; struct mpr_softc *sc; struct mpr_command *cm; struct mprsas_target *targ; cm = (struct mpr_command *)data; sc = cm->cm_sc; ccb = cm->cm_ccb; now = sbinuptime(); MPR_FUNCTRACE(sc); 
mtx_assert(&sc->mpr_mtx, MA_OWNED); mpr_dprint(sc, MPR_XINFO|MPR_RECOVERY, "Timeout checking cm %p\n", cm); /* * Run the interrupt handler to make sure it's not pending. This * isn't perfect because the command could have already completed * and been re-used, though this is unlikely. */ mpr_intr_locked(sc); - if (cm->cm_state != MPR_CM_STATE_INQUEUE) { + if (cm->cm_flags & MPR_CM_FLAGS_ON_RECOVERY) { mprsas_log_command(cm, MPR_XINFO, "SCSI command %p almost timed out\n", cm); return; } if (cm->cm_ccb == NULL) { mpr_dprint(sc, MPR_ERROR, "command timeout with NULL ccb\n"); return; } targ = cm->cm_targ; targ->timeouts++; elapsed = now - ccb->ccb_h.qos.sim_data; mprsas_log_command(cm, MPR_INFO|MPR_RECOVERY, "Command timeout on target %u(0x%04x), %d set, %d.%d elapsed\n", targ->tid, targ->handle, ccb->ccb_h.timeout, sbintime_getsec(elapsed), elapsed & 0xffffffff); if (targ->encl_level_valid) { mpr_dprint(sc, MPR_INFO|MPR_RECOVERY, "At enclosure level %d, slot %d, connector name (%4s)\n", targ->encl_level, targ->encl_slot, targ->connector_name); } /* XXX first, check the firmware state, to see if it's still * operational. if not, do a diag reset. */ mprsas_set_ccbstatus(cm->cm_ccb, CAM_CMD_TIMEOUT); - cm->cm_state = MPR_CM_STATE_TIMEDOUT; + cm->cm_flags |= MPR_CM_FLAGS_ON_RECOVERY | MPR_CM_FLAGS_TIMEDOUT; TAILQ_INSERT_TAIL(&targ->timedout_commands, cm, cm_recovery); if (targ->tm != NULL) { /* target already in recovery, just queue up another * timedout command to be processed later. 
*/ mpr_dprint(sc, MPR_RECOVERY, "queued timedout cm %p for " "processing by tm %p\n", cm, targ->tm); } else if ((targ->tm = mprsas_alloc_tm(sc)) != NULL) { /* start recovery by aborting the first timedout command */ mpr_dprint(sc, MPR_RECOVERY|MPR_INFO, "Sending abort to target %u for SMID %d\n", targ->tid, cm->cm_desc.Default.SMID); mpr_dprint(sc, MPR_RECOVERY, "timedout cm %p allocated tm %p\n", cm, targ->tm); mprsas_send_abort(sc, targ->tm, cm); } else { /* XXX queue this target up for recovery once a TM becomes * available. The firmware only has a limited number of * HighPriority credits for the high priority requests used * for task management, and we ran out. * * Isilon: don't worry about this for now, since we have * more credits than disks in an enclosure, and limit * ourselves to one TM per target for recovery. */ mpr_dprint(sc, MPR_ERROR|MPR_RECOVERY, "timedout cm %p failed to allocate a tm\n", cm); } } /** * mprsas_build_nvme_unmap - Build Native NVMe DSM command equivalent * to SCSI Unmap. * Return 0 - for success, * 1 - to immediately return back the command with success status to CAM * negative value - to fallback to firmware path i.e. issue scsi unmap * to FW without any translation. 
*/ static int mprsas_build_nvme_unmap(struct mpr_softc *sc, struct mpr_command *cm, union ccb *ccb, struct mprsas_target *targ) { Mpi26NVMeEncapsulatedRequest_t *req = NULL; struct ccb_scsiio *csio; struct unmap_parm_list *plist; struct nvme_dsm_range *nvme_dsm_ranges = NULL; struct nvme_command *c; int i, res; uint16_t ndesc, list_len, data_length; struct mpr_prp_page *prp_page_info; uint64_t nvme_dsm_ranges_dma_handle; csio = &ccb->csio; #if __FreeBSD_version >= 1100103 list_len = (scsiio_cdb_ptr(csio)[7] << 8 | scsiio_cdb_ptr(csio)[8]); #else if (csio->ccb_h.flags & CAM_CDB_POINTER) { list_len = (ccb->csio.cdb_io.cdb_ptr[7] << 8 | ccb->csio.cdb_io.cdb_ptr[8]); } else { list_len = (ccb->csio.cdb_io.cdb_bytes[7] << 8 | ccb->csio.cdb_io.cdb_bytes[8]); } #endif if (!list_len) { mpr_dprint(sc, MPR_ERROR, "Parameter list length is Zero\n"); return -EINVAL; } plist = malloc(csio->dxfer_len, M_MPR, M_ZERO|M_NOWAIT); if (!plist) { mpr_dprint(sc, MPR_ERROR, "Unable to allocate memory to " "save UNMAP data\n"); return -ENOMEM; } /* Copy SCSI unmap data to a local buffer */ bcopy(csio->data_ptr, plist, csio->dxfer_len); /* Return the UNMAP command to CAM with success status * if the number of descriptors is zero. */ ndesc = be16toh(plist->unmap_blk_desc_data_len) >> 4; if (!ndesc) { mpr_dprint(sc, MPR_XINFO, "Number of descriptors in " "UNMAP cmd is Zero\n"); res = 1; goto out; } data_length = ndesc * sizeof(struct nvme_dsm_range); if (data_length > targ->MDTS) { mpr_dprint(sc, MPR_ERROR, "data length: %d is greater than " "Device's MDTS: %d\n", data_length, targ->MDTS); res = -EINVAL; goto out; } prp_page_info = mpr_alloc_prp_page(sc); KASSERT(prp_page_info != NULL, ("%s: There is no PRP Page for " "UNMAP command.\n", __func__)); /* * Insert the allocated PRP page into the command's PRP page list. This * will be freed when the command is freed.
TAILQ_INSERT_TAIL(&cm->cm_prp_page_list, prp_page_info, prp_page_link); nvme_dsm_ranges = (struct nvme_dsm_range *)prp_page_info->prp_page; nvme_dsm_ranges_dma_handle = prp_page_info->prp_page_busaddr; bzero(nvme_dsm_ranges, data_length); /* Convert the SCSI UNMAP descriptor data to NVMe DSM-specific range data * for each descriptor contained in the SCSI UNMAP data. */ for (i = 0; i < ndesc; i++) { nvme_dsm_ranges[i].length = htole32(be32toh(plist->desc[i].nlb)); nvme_dsm_ranges[i].starting_lba = htole64(be64toh(plist->desc[i].slba)); nvme_dsm_ranges[i].attributes = 0; } /* Build MPI2.6's NVMe Encapsulated Request Message */ req = (Mpi26NVMeEncapsulatedRequest_t *)cm->cm_req; bzero(req, sizeof(*req)); req->DevHandle = htole16(targ->handle); req->Function = MPI2_FUNCTION_NVME_ENCAPSULATED; req->Flags = MPI26_NVME_FLAGS_WRITE; req->ErrorResponseBaseAddress.High = htole32((uint32_t)((uint64_t)cm->cm_sense_busaddr >> 32)); req->ErrorResponseBaseAddress.Low = htole32(cm->cm_sense_busaddr); req->ErrorResponseAllocationLength = htole16(sizeof(struct nvme_completion)); req->EncapsulatedCommandLength = htole16(sizeof(struct nvme_command)); req->DataLength = htole32(data_length); /* Build NVMe DSM command */ c = (struct nvme_command *) req->NVMe_Command; c->opc = NVME_OPC_DATASET_MANAGEMENT; c->nsid = htole32(csio->ccb_h.target_lun + 1); c->cdw10 = htole32(ndesc - 1); c->cdw11 = htole32(NVME_DSM_ATTR_DEALLOCATE); cm->cm_length = data_length; cm->cm_data = NULL; cm->cm_complete = mprsas_scsiio_complete; cm->cm_complete_data = ccb; cm->cm_targ = targ; cm->cm_lun = csio->ccb_h.target_lun; cm->cm_ccb = ccb; cm->cm_desc.Default.RequestFlags = MPI26_REQ_DESCRIPT_FLAGS_PCIE_ENCAPSULATED; csio->ccb_h.qos.sim_data = sbinuptime(); #if __FreeBSD_version >= 1000029 callout_reset_sbt(&cm->cm_callout, SBT_1MS * ccb->ccb_h.timeout, 0, mprsas_scsiio_timeout, cm, 0); #else //__FreeBSD_version < 1000029 callout_reset(&cm->cm_callout, (ccb->ccb_h.timeout * hz) / 1000, mprsas_scsiio_timeout, cm);
#endif //__FreeBSD_version >= 1000029 targ->issued++; targ->outstanding++; TAILQ_INSERT_TAIL(&targ->commands, cm, cm_link); ccb->ccb_h.status |= CAM_SIM_QUEUED; mprsas_log_command(cm, MPR_XINFO, "%s cm %p ccb %p outstanding %u\n", __func__, cm, ccb, targ->outstanding); mpr_build_nvme_prp(sc, cm, req, (void *)(uintptr_t)nvme_dsm_ranges_dma_handle, 0, data_length); mpr_map_command(sc, cm); out: free(plist, M_MPR); return 0; } static void mprsas_action_scsiio(struct mprsas_softc *sassc, union ccb *ccb) { MPI2_SCSI_IO_REQUEST *req; struct ccb_scsiio *csio; struct mpr_softc *sc; struct mprsas_target *targ; struct mprsas_lun *lun; struct mpr_command *cm; uint8_t i, lba_byte, *ref_tag_addr, scsi_opcode; uint16_t eedp_flags; uint32_t mpi_control; int rc; sc = sassc->sc; MPR_FUNCTRACE(sc); mtx_assert(&sc->mpr_mtx, MA_OWNED); csio = &ccb->csio; KASSERT(csio->ccb_h.target_id < sassc->maxtargets, ("Target %d out of bounds in XPT_SCSI_IO\n", csio->ccb_h.target_id)); targ = &sassc->targets[csio->ccb_h.target_id]; mpr_dprint(sc, MPR_TRACE, "ccb %p target flag %x\n", ccb, targ->flags); if (targ->handle == 0x0) { mpr_dprint(sc, MPR_ERROR, "%s NULL handle for target %u\n", __func__, csio->ccb_h.target_id); mprsas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); xpt_done(ccb); return; } if (targ->flags & MPR_TARGET_FLAGS_RAID_COMPONENT) { mpr_dprint(sc, MPR_ERROR, "%s Raid component no SCSI IO " "supported %u\n", __func__, csio->ccb_h.target_id); mprsas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); xpt_done(ccb); return; } /* * Sometimes, it is possible to get a command that is not "In * Progress" and was actually aborted by the upper layer. Check for * this here and complete the command without error. */ if (mprsas_get_ccbstatus(ccb) != CAM_REQ_INPROG) { mpr_dprint(sc, MPR_TRACE, "%s Command is not in progress for " "target %u\n", __func__, csio->ccb_h.target_id); xpt_done(ccb); return; } /* * If devinfo is 0 this will be a volume. In that case don't tell CAM * that the volume has timed out. 
We want volumes to be enumerated * until they are deleted/removed, not just failed. */ if (targ->flags & MPRSAS_TARGET_INREMOVAL) { if (targ->devinfo == 0) mprsas_set_ccbstatus(ccb, CAM_REQ_CMP); else mprsas_set_ccbstatus(ccb, CAM_SEL_TIMEOUT); xpt_done(ccb); return; } if ((sc->mpr_flags & MPR_FLAGS_SHUTDOWN) != 0) { mpr_dprint(sc, MPR_INFO, "%s shutting down\n", __func__); mprsas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); xpt_done(ccb); return; } /* * If target has a reset in progress, freeze the devq and return. The * devq will be released when the TM reset is finished. */ if (targ->flags & MPRSAS_TARGET_INRESET) { ccb->ccb_h.status = CAM_BUSY | CAM_DEV_QFRZN; mpr_dprint(sc, MPR_INFO, "%s: Freezing devq for target ID %d\n", __func__, targ->tid); xpt_freeze_devq(ccb->ccb_h.path, 1); xpt_done(ccb); return; } cm = mpr_alloc_command(sc); if (cm == NULL || (sc->mpr_flags & MPR_FLAGS_DIAGRESET)) { if (cm != NULL) { mpr_free_command(sc, cm); } if ((sassc->flags & MPRSAS_QUEUE_FROZEN) == 0) { xpt_freeze_simq(sassc->sim, 1); sassc->flags |= MPRSAS_QUEUE_FROZEN; } ccb->ccb_h.status &= ~CAM_SIM_QUEUED; ccb->ccb_h.status |= CAM_REQUEUE_REQ; xpt_done(ccb); return; } /* For NVMe devices, issue the UNMAP command directly to the NVMe drive by * constructing an equivalent native NVMe DataSetManagement command.
*/ #if __FreeBSD_version >= 1100103 scsi_opcode = scsiio_cdb_ptr(csio)[0]; #else if (csio->ccb_h.flags & CAM_CDB_POINTER) scsi_opcode = csio->cdb_io.cdb_ptr[0]; else scsi_opcode = csio->cdb_io.cdb_bytes[0]; #endif if (scsi_opcode == UNMAP && targ->is_nvme && (csio->ccb_h.flags & CAM_DATA_MASK) == CAM_DATA_VADDR) { rc = mprsas_build_nvme_unmap(sc, cm, ccb, targ); if (rc == 1) { /* return command to CAM with success status */ mpr_free_command(sc, cm); mprsas_set_ccbstatus(ccb, CAM_REQ_CMP); xpt_done(ccb); return; } else if (!rc) /* Issued NVMe Encapsulated Request Message */ return; } req = (MPI2_SCSI_IO_REQUEST *)cm->cm_req; bzero(req, sizeof(*req)); req->DevHandle = htole16(targ->handle); req->Function = MPI2_FUNCTION_SCSI_IO_REQUEST; req->MsgFlags = 0; req->SenseBufferLowAddress = htole32(cm->cm_sense_busaddr); req->SenseBufferLength = MPR_SENSE_LEN; req->SGLFlags = 0; req->ChainOffset = 0; req->SGLOffset0 = 24; /* 32bit word offset to the SGL */ req->SGLOffset1= 0; req->SGLOffset2= 0; req->SGLOffset3= 0; req->SkipCount = 0; req->DataLength = htole32(csio->dxfer_len); req->BidirectionalDataLength = 0; req->IoFlags = htole16(csio->cdb_len); req->EEDPFlags = 0; /* Note: BiDirectional transfers are not supported */ switch (csio->ccb_h.flags & CAM_DIR_MASK) { case CAM_DIR_IN: mpi_control = MPI2_SCSIIO_CONTROL_READ; cm->cm_flags |= MPR_CM_FLAGS_DATAIN; break; case CAM_DIR_OUT: mpi_control = MPI2_SCSIIO_CONTROL_WRITE; cm->cm_flags |= MPR_CM_FLAGS_DATAOUT; break; case CAM_DIR_NONE: default: mpi_control = MPI2_SCSIIO_CONTROL_NODATATRANSFER; break; } if (csio->cdb_len == 32) mpi_control |= 4 << MPI2_SCSIIO_CONTROL_ADDCDBLEN_SHIFT; /* * It looks like the hardware doesn't require an explicit tag * number for each transaction. SAM Task Management not supported * at the moment. 
*/ switch (csio->tag_action) { case MSG_HEAD_OF_Q_TAG: mpi_control |= MPI2_SCSIIO_CONTROL_HEADOFQ; break; case MSG_ORDERED_Q_TAG: mpi_control |= MPI2_SCSIIO_CONTROL_ORDEREDQ; break; case MSG_ACA_TASK: mpi_control |= MPI2_SCSIIO_CONTROL_ACAQ; break; case CAM_TAG_ACTION_NONE: case MSG_SIMPLE_Q_TAG: default: mpi_control |= MPI2_SCSIIO_CONTROL_SIMPLEQ; break; } mpi_control |= sc->mapping_table[csio->ccb_h.target_id].TLR_bits; req->Control = htole32(mpi_control); if (MPR_SET_LUN(req->LUN, csio->ccb_h.target_lun) != 0) { mpr_free_command(sc, cm); mprsas_set_ccbstatus(ccb, CAM_LUN_INVALID); xpt_done(ccb); return; } if (csio->ccb_h.flags & CAM_CDB_POINTER) bcopy(csio->cdb_io.cdb_ptr, &req->CDB.CDB32[0], csio->cdb_len); else { KASSERT(csio->cdb_len <= IOCDBLEN, ("cdb_len %d is greater than IOCDBLEN but CAM_CDB_POINTER " "is not set", csio->cdb_len)); bcopy(csio->cdb_io.cdb_bytes, &req->CDB.CDB32[0],csio->cdb_len); } req->IoFlags = htole16(csio->cdb_len); /* * Check if EEDP is supported and enabled. If it is then check if the * SCSI opcode could be using EEDP. If so, make sure the LUN exists and * is formatted for EEDP support. If all of this is true, set CDB up * for EEDP transfer. */ eedp_flags = op_code_prot[req->CDB.CDB32[0]]; if (sc->eedp_enabled && eedp_flags) { SLIST_FOREACH(lun, &targ->luns, lun_link) { if (lun->lun_id == csio->ccb_h.target_lun) { break; } } if ((lun != NULL) && (lun->eedp_formatted)) { req->EEDPBlockSize = htole16(lun->eedp_block_size); eedp_flags |= (MPI2_SCSIIO_EEDPFLAGS_INC_PRI_REFTAG | MPI2_SCSIIO_EEDPFLAGS_CHECK_REFTAG | MPI2_SCSIIO_EEDPFLAGS_CHECK_GUARD); if (sc->mpr_flags & MPR_FLAGS_GEN35_IOC) { eedp_flags |= MPI25_SCSIIO_EEDPFLAGS_APPTAG_DISABLE_MODE; } req->EEDPFlags = htole16(eedp_flags); /* * If CDB less than 32, fill in Primary Ref Tag with * low 4 bytes of LBA. If CDB is 32, tag stuff is * already there. Also, set protection bit. 
FreeBSD * currently does not support CDBs bigger than 16, but * the code doesn't hurt, and will be here for the * future. */ if (csio->cdb_len != 32) { lba_byte = (csio->cdb_len == 16) ? 6 : 2; ref_tag_addr = (uint8_t *)&req->CDB.EEDP32. PrimaryReferenceTag; for (i = 0; i < 4; i++) { *ref_tag_addr = req->CDB.CDB32[lba_byte + i]; ref_tag_addr++; } req->CDB.EEDP32.PrimaryReferenceTag = htole32(req-> CDB.EEDP32.PrimaryReferenceTag); req->CDB.EEDP32.PrimaryApplicationTagMask = 0xFFFF; req->CDB.CDB32[1] = (req->CDB.CDB32[1] & 0x1F) | 0x20; } else { eedp_flags |= MPI2_SCSIIO_EEDPFLAGS_INC_PRI_APPTAG; req->EEDPFlags = htole16(eedp_flags); req->CDB.CDB32[10] = (req->CDB.CDB32[10] & 0x1F) | 0x20; } } } cm->cm_length = csio->dxfer_len; if (cm->cm_length != 0) { cm->cm_data = ccb; cm->cm_flags |= MPR_CM_FLAGS_USE_CCB; } else { cm->cm_data = NULL; } cm->cm_sge = &req->SGL; cm->cm_sglsize = (32 - 24) * 4; cm->cm_complete = mprsas_scsiio_complete; cm->cm_complete_data = ccb; cm->cm_targ = targ; cm->cm_lun = csio->ccb_h.target_lun; cm->cm_ccb = ccb; /* * If using FP desc type, need to set a bit in IoFlags (SCSI IO is 0) * and set descriptor type. 
*/ if (targ->scsi_req_desc_type == MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO) { req->IoFlags |= MPI25_SCSIIO_IOFLAGS_FAST_PATH; cm->cm_desc.FastPathSCSIIO.RequestFlags = MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO; if (!sc->atomic_desc_capable) { cm->cm_desc.FastPathSCSIIO.DevHandle = htole16(targ->handle); } } else { cm->cm_desc.SCSIIO.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO; if (!sc->atomic_desc_capable) cm->cm_desc.SCSIIO.DevHandle = htole16(targ->handle); } csio->ccb_h.qos.sim_data = sbinuptime(); #if __FreeBSD_version >= 1000029 callout_reset_sbt(&cm->cm_callout, SBT_1MS * ccb->ccb_h.timeout, 0, mprsas_scsiio_timeout, cm, 0); #else //__FreeBSD_version < 1000029 callout_reset(&cm->cm_callout, (ccb->ccb_h.timeout * hz) / 1000, mprsas_scsiio_timeout, cm); #endif //__FreeBSD_version >= 1000029 targ->issued++; targ->outstanding++; TAILQ_INSERT_TAIL(&targ->commands, cm, cm_link); ccb->ccb_h.status |= CAM_SIM_QUEUED; mprsas_log_command(cm, MPR_XINFO, "%s cm %p ccb %p outstanding %u\n", __func__, cm, ccb, targ->outstanding); mpr_map_command(sc, cm); return; } /** * mpr_sc_failed_io_info - translate a non-successful SCSI_IO request */ static void mpr_sc_failed_io_info(struct mpr_softc *sc, struct ccb_scsiio *csio, Mpi2SCSIIOReply_t *mpi_reply, struct mprsas_target *targ) { u32 response_info; u8 *response_bytes; u16 ioc_status = le16toh(mpi_reply->IOCStatus) & MPI2_IOCSTATUS_MASK; u8 scsi_state = mpi_reply->SCSIState; u8 scsi_status = mpi_reply->SCSIStatus; char *desc_ioc_state = NULL; char *desc_scsi_status = NULL; u32 log_info = le32toh(mpi_reply->IOCLogInfo); if (log_info == 0x31170000) return; desc_ioc_state = mpr_describe_table(mpr_iocstatus_string, ioc_status); desc_scsi_status = mpr_describe_table(mpr_scsi_status_string, scsi_status); mpr_dprint(sc, MPR_XINFO, "\thandle(0x%04x), ioc_status(%s)(0x%04x)\n", le16toh(mpi_reply->DevHandle), desc_ioc_state, ioc_status); if (targ->encl_level_valid) { mpr_dprint(sc, MPR_XINFO, "At enclosure level %d, slot %d, "
"connector name (%4s)\n", targ->encl_level, targ->encl_slot, targ->connector_name); } /* * We can add more detail about underflow data here * TO-DO */ mpr_dprint(sc, MPR_XINFO, "\tscsi_status(%s)(0x%02x), " "scsi_state %b\n", desc_scsi_status, scsi_status, scsi_state, "\20" "\1AutosenseValid" "\2AutosenseFailed" "\3NoScsiStatus" "\4Terminated" "\5Response InfoValid"); if (sc->mpr_debug & MPR_XINFO && scsi_state & MPI2_SCSI_STATE_AUTOSENSE_VALID) { mpr_dprint(sc, MPR_XINFO, "-> Sense Buffer Data : Start :\n"); scsi_sense_print(csio); mpr_dprint(sc, MPR_XINFO, "-> Sense Buffer Data : End :\n"); } if (scsi_state & MPI2_SCSI_STATE_RESPONSE_INFO_VALID) { response_info = le32toh(mpi_reply->ResponseInfo); response_bytes = (u8 *)&response_info; mpr_dprint(sc, MPR_XINFO, "response code(0x%01x): %s\n", response_bytes[0], mpr_describe_table(mpr_scsi_taskmgmt_string, response_bytes[0])); } } /** mprsas_nvme_trans_status_code * * Convert Native NVMe command error status to * equivalent SCSI error status. 
* * Returns appropriate scsi_status */ static u8 mprsas_nvme_trans_status_code(uint16_t nvme_status, struct mpr_command *cm) { u8 status = MPI2_SCSI_STATUS_GOOD; int skey, asc, ascq; union ccb *ccb = cm->cm_complete_data; int returned_sense_len; uint8_t sct, sc; sct = NVME_STATUS_GET_SCT(nvme_status); sc = NVME_STATUS_GET_SC(nvme_status); status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_ILLEGAL_REQUEST; asc = SCSI_ASC_NO_SENSE; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; switch (sct) { case NVME_SCT_GENERIC: switch (sc) { case NVME_SC_SUCCESS: status = MPI2_SCSI_STATUS_GOOD; skey = SSD_KEY_NO_SENSE; asc = SCSI_ASC_NO_SENSE; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; case NVME_SC_INVALID_OPCODE: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_ILLEGAL_REQUEST; asc = SCSI_ASC_ILLEGAL_COMMAND; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; case NVME_SC_INVALID_FIELD: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_ILLEGAL_REQUEST; asc = SCSI_ASC_INVALID_CDB; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; case NVME_SC_DATA_TRANSFER_ERROR: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_MEDIUM_ERROR; asc = SCSI_ASC_NO_SENSE; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; case NVME_SC_ABORTED_POWER_LOSS: status = MPI2_SCSI_STATUS_TASK_ABORTED; skey = SSD_KEY_ABORTED_COMMAND; asc = SCSI_ASC_WARNING; ascq = SCSI_ASCQ_POWER_LOSS_EXPECTED; break; case NVME_SC_INTERNAL_DEVICE_ERROR: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_HARDWARE_ERROR; asc = SCSI_ASC_INTERNAL_TARGET_FAILURE; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; case NVME_SC_ABORTED_BY_REQUEST: case NVME_SC_ABORTED_SQ_DELETION: case NVME_SC_ABORTED_FAILED_FUSED: case NVME_SC_ABORTED_MISSING_FUSED: status = MPI2_SCSI_STATUS_TASK_ABORTED; skey = SSD_KEY_ABORTED_COMMAND; asc = SCSI_ASC_NO_SENSE; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; case NVME_SC_INVALID_NAMESPACE_OR_FORMAT: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_ILLEGAL_REQUEST; asc = 
SCSI_ASC_ACCESS_DENIED_INVALID_LUN_ID; ascq = SCSI_ASCQ_INVALID_LUN_ID; break; case NVME_SC_LBA_OUT_OF_RANGE: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_ILLEGAL_REQUEST; asc = SCSI_ASC_ILLEGAL_BLOCK; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; case NVME_SC_CAPACITY_EXCEEDED: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_MEDIUM_ERROR; asc = SCSI_ASC_NO_SENSE; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; case NVME_SC_NAMESPACE_NOT_READY: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_NOT_READY; asc = SCSI_ASC_LUN_NOT_READY; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; } break; case NVME_SCT_COMMAND_SPECIFIC: switch (sc) { case NVME_SC_INVALID_FORMAT: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_ILLEGAL_REQUEST; asc = SCSI_ASC_FORMAT_COMMAND_FAILED; ascq = SCSI_ASCQ_FORMAT_COMMAND_FAILED; break; case NVME_SC_CONFLICTING_ATTRIBUTES: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_ILLEGAL_REQUEST; asc = SCSI_ASC_INVALID_CDB; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; } break; case NVME_SCT_MEDIA_ERROR: switch (sc) { case NVME_SC_WRITE_FAULTS: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_MEDIUM_ERROR; asc = SCSI_ASC_PERIPHERAL_DEV_WRITE_FAULT; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; case NVME_SC_UNRECOVERED_READ_ERROR: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_MEDIUM_ERROR; asc = SCSI_ASC_UNRECOVERED_READ_ERROR; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; case NVME_SC_GUARD_CHECK_ERROR: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_MEDIUM_ERROR; asc = SCSI_ASC_LOG_BLOCK_GUARD_CHECK_FAILED; ascq = SCSI_ASCQ_LOG_BLOCK_GUARD_CHECK_FAILED; break; case NVME_SC_APPLICATION_TAG_CHECK_ERROR: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_MEDIUM_ERROR; asc = SCSI_ASC_LOG_BLOCK_APPTAG_CHECK_FAILED; ascq = SCSI_ASCQ_LOG_BLOCK_APPTAG_CHECK_FAILED; break; case NVME_SC_REFERENCE_TAG_CHECK_ERROR: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = 
SSD_KEY_MEDIUM_ERROR; asc = SCSI_ASC_LOG_BLOCK_REFTAG_CHECK_FAILED; ascq = SCSI_ASCQ_LOG_BLOCK_REFTAG_CHECK_FAILED; break; case NVME_SC_COMPARE_FAILURE: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_MISCOMPARE; asc = SCSI_ASC_MISCOMPARE_DURING_VERIFY; ascq = SCSI_ASCQ_CAUSE_NOT_REPORTABLE; break; case NVME_SC_ACCESS_DENIED: status = MPI2_SCSI_STATUS_CHECK_CONDITION; skey = SSD_KEY_ILLEGAL_REQUEST; asc = SCSI_ASC_ACCESS_DENIED_INVALID_LUN_ID; ascq = SCSI_ASCQ_INVALID_LUN_ID; break; } break; } returned_sense_len = sizeof(struct scsi_sense_data); if (returned_sense_len < ccb->csio.sense_len) ccb->csio.sense_resid = ccb->csio.sense_len - returned_sense_len; else ccb->csio.sense_resid = 0; scsi_set_sense_data(&ccb->csio.sense_data, SSD_TYPE_FIXED, 1, skey, asc, ascq, SSD_ELEM_NONE); ccb->ccb_h.status |= CAM_AUTOSNS_VALID; return status; } /** mprsas_complete_nvme_unmap * * Complete native NVMe command issued using NVMe Encapsulated * Request Message. */ static u8 mprsas_complete_nvme_unmap(struct mpr_softc *sc, struct mpr_command *cm) { Mpi26NVMeEncapsulatedErrorReply_t *mpi_reply; struct nvme_completion *nvme_completion = NULL; u8 scsi_status = MPI2_SCSI_STATUS_GOOD; mpi_reply =(Mpi26NVMeEncapsulatedErrorReply_t *)cm->cm_reply; if (le16toh(mpi_reply->ErrorResponseCount)){ nvme_completion = (struct nvme_completion *)cm->cm_sense; scsi_status = mprsas_nvme_trans_status_code( nvme_completion->status, cm); } return scsi_status; } static void mprsas_scsiio_complete(struct mpr_softc *sc, struct mpr_command *cm) { MPI2_SCSI_IO_REPLY *rep; union ccb *ccb; struct ccb_scsiio *csio; struct mprsas_softc *sassc; struct scsi_vpd_supported_page_list *vpd_list = NULL; u8 *TLR_bits, TLR_on, *scsi_cdb; int dir = 0, i; u16 alloc_len; struct mprsas_target *target; target_id_t target_id; MPR_FUNCTRACE(sc); mpr_dprint(sc, MPR_TRACE, "cm %p SMID %u ccb %p reply %p outstanding %u\n", cm, cm->cm_desc.Default.SMID, cm->cm_ccb, cm->cm_reply, cm->cm_targ->outstanding); 
callout_stop(&cm->cm_callout); mtx_assert(&sc->mpr_mtx, MA_OWNED); sassc = sc->sassc; ccb = cm->cm_complete_data; csio = &ccb->csio; target_id = csio->ccb_h.target_id; rep = (MPI2_SCSI_IO_REPLY *)cm->cm_reply; /* * XXX KDM if the chain allocation fails, does it matter if we do * the sync and unload here? It is simpler to do it in every case, * assuming it doesn't cause problems. */ if (cm->cm_data != NULL) { if (cm->cm_flags & MPR_CM_FLAGS_DATAIN) dir = BUS_DMASYNC_POSTREAD; else if (cm->cm_flags & MPR_CM_FLAGS_DATAOUT) dir = BUS_DMASYNC_POSTWRITE; bus_dmamap_sync(sc->buffer_dmat, cm->cm_dmamap, dir); bus_dmamap_unload(sc->buffer_dmat, cm->cm_dmamap); } cm->cm_targ->completed++; cm->cm_targ->outstanding--; TAILQ_REMOVE(&cm->cm_targ->commands, cm, cm_link); ccb->ccb_h.status &= ~(CAM_STATUS_MASK | CAM_SIM_QUEUED); - if (cm->cm_state == MPR_CM_STATE_TIMEDOUT) { + if (cm->cm_flags & MPR_CM_FLAGS_ON_RECOVERY) { TAILQ_REMOVE(&cm->cm_targ->timedout_commands, cm, cm_recovery); - cm->cm_state = MPR_CM_STATE_BUSY; + KASSERT(cm->cm_state == MPR_CM_STATE_BUSY, + ("Not busy for CM_FLAGS_TIMEDOUT: %d\n", cm->cm_state)); + cm->cm_flags &= ~MPR_CM_FLAGS_ON_RECOVERY; if (cm->cm_reply != NULL) mprsas_log_command(cm, MPR_RECOVERY, "completed timedout cm %p ccb %p during recovery " "ioc %x scsi %x state %x xfer %u\n", cm, cm->cm_ccb, le16toh(rep->IOCStatus), rep->SCSIStatus, rep->SCSIState, le32toh(rep->TransferCount)); else mprsas_log_command(cm, MPR_RECOVERY, "completed timedout cm %p ccb %p during recovery\n", cm, cm->cm_ccb); } else if (cm->cm_targ->tm != NULL) { if (cm->cm_reply != NULL) mprsas_log_command(cm, MPR_RECOVERY, "completed cm %p ccb %p during recovery " "ioc %x scsi %x state %x xfer %u\n", cm, cm->cm_ccb, le16toh(rep->IOCStatus), rep->SCSIStatus, rep->SCSIState, le32toh(rep->TransferCount)); else mprsas_log_command(cm, MPR_RECOVERY, "completed cm %p ccb %p during recovery\n", cm, cm->cm_ccb); } else if ((sc->mpr_flags & MPR_FLAGS_DIAGRESET) != 0) { 
mprsas_log_command(cm, MPR_RECOVERY, "reset completed cm %p ccb %p\n", cm, cm->cm_ccb); } if ((cm->cm_flags & MPR_CM_FLAGS_ERROR_MASK) != 0) { /* * We ran into an error after we tried to map the command, * so we're getting a callback without queueing the command * to the hardware. So we set the status here, and it will * be retained below. We'll go through the "fast path", * because there can be no reply when we haven't actually * gone out to the hardware. */ mprsas_set_ccbstatus(ccb, CAM_REQUEUE_REQ); /* * Currently the only error included in the mask is * MPR_CM_FLAGS_CHAIN_FAILED, which means we're out of * chain frames. We need to freeze the queue until we get * a command that completed without this error, which will * hopefully have some chain frames attached that we can * use. If we wanted to get smarter about it, we would * only unfreeze the queue in this condition when we're * sure that we're getting some chain frames back. That's * probably unnecessary. */ if ((sassc->flags & MPRSAS_QUEUE_FROZEN) == 0) { xpt_freeze_simq(sassc->sim, 1); sassc->flags |= MPRSAS_QUEUE_FROZEN; mpr_dprint(sc, MPR_XINFO, "Error sending command, " "freezing SIM queue\n"); } } /* * Point to the SCSI CDB, which is dependent on the CAM_CDB_POINTER * flag, and use it in a few places in the rest of this function for * convenience. Use the macro if available. */ #if __FreeBSD_version >= 1100103 scsi_cdb = scsiio_cdb_ptr(csio); #else if (csio->ccb_h.flags & CAM_CDB_POINTER) scsi_cdb = csio->cdb_io.cdb_ptr; else scsi_cdb = csio->cdb_io.cdb_bytes; #endif /* * If this is a Start Stop Unit command and it was issued by the driver * during shutdown, decrement the refcount to account for all of the * commands that were sent. All SSU commands should be completed before * shutdown completes, meaning SSU_refcount will be 0 after SSU_started * is TRUE. 
*/ if (sc->SSU_started && (scsi_cdb[0] == START_STOP_UNIT)) { mpr_dprint(sc, MPR_INFO, "Decrementing SSU count.\n"); sc->SSU_refcount--; } /* Take the fast path to completion */ if (cm->cm_reply == NULL) { if (mprsas_get_ccbstatus(ccb) == CAM_REQ_INPROG) { if ((sc->mpr_flags & MPR_FLAGS_DIAGRESET) != 0) mprsas_set_ccbstatus(ccb, CAM_SCSI_BUS_RESET); else { mprsas_set_ccbstatus(ccb, CAM_REQ_CMP); csio->scsi_status = SCSI_STATUS_OK; } if (sassc->flags & MPRSAS_QUEUE_FROZEN) { ccb->ccb_h.status |= CAM_RELEASE_SIMQ; sassc->flags &= ~MPRSAS_QUEUE_FROZEN; mpr_dprint(sc, MPR_XINFO, "Unfreezing SIM queue\n"); } } /* * There are two scenarios where the status won't be * CAM_REQ_CMP. The first is if MPR_CM_FLAGS_ERROR_MASK is * set, the second is in the MPR_FLAGS_DIAGRESET above. */ if (mprsas_get_ccbstatus(ccb) != CAM_REQ_CMP) { /* * Freeze the dev queue so that commands are * executed in the correct order after error * recovery. */ ccb->ccb_h.status |= CAM_DEV_QFRZN; xpt_freeze_devq(ccb->ccb_h.path, /*count*/ 1); } mpr_free_command(sc, cm); xpt_done(ccb); return; } target = &sassc->targets[target_id]; if (scsi_cdb[0] == UNMAP && target->is_nvme && (csio->ccb_h.flags & CAM_DATA_MASK) == CAM_DATA_VADDR) { rep->SCSIStatus = mprsas_complete_nvme_unmap(sc, cm); csio->scsi_status = rep->SCSIStatus; } mprsas_log_command(cm, MPR_XINFO, "ioc %x scsi %x state %x xfer %u\n", le16toh(rep->IOCStatus), rep->SCSIStatus, rep->SCSIState, le32toh(rep->TransferCount)); switch (le16toh(rep->IOCStatus) & MPI2_IOCSTATUS_MASK) { case MPI2_IOCSTATUS_SCSI_DATA_UNDERRUN: csio->resid = cm->cm_length - le32toh(rep->TransferCount); /* FALLTHROUGH */ case MPI2_IOCSTATUS_SUCCESS: case MPI2_IOCSTATUS_SCSI_RECOVERED_ERROR: if ((le16toh(rep->IOCStatus) & MPI2_IOCSTATUS_MASK) == MPI2_IOCSTATUS_SCSI_RECOVERED_ERROR) mprsas_log_command(cm, MPR_XINFO, "recovered error\n"); /* Completion failed at the transport level. 
*/ if (rep->SCSIState & (MPI2_SCSI_STATE_NO_SCSI_STATUS | MPI2_SCSI_STATE_TERMINATED)) { mprsas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); break; } /* In a modern packetized environment, an autosense failure * implies that there's not much else that can be done to * recover the command. */ if (rep->SCSIState & MPI2_SCSI_STATE_AUTOSENSE_FAILED) { mprsas_set_ccbstatus(ccb, CAM_AUTOSENSE_FAIL); break; } /* * CAM doesn't care about SAS Response Info data, but if this is * the state check if TLR should be done. If not, clear the * TLR_bits for the target. */ if ((rep->SCSIState & MPI2_SCSI_STATE_RESPONSE_INFO_VALID) && ((le32toh(rep->ResponseInfo) & MPI2_SCSI_RI_MASK_REASONCODE) == MPR_SCSI_RI_INVALID_FRAME)) { sc->mapping_table[target_id].TLR_bits = (u8)MPI2_SCSIIO_CONTROL_NO_TLR; } /* * Intentionally override the normal SCSI status reporting * for these two cases. These are likely to happen in a * multi-initiator environment, and we want to make sure that * CAM retries these commands rather than fail them. */ if ((rep->SCSIStatus == MPI2_SCSI_STATUS_COMMAND_TERMINATED) || (rep->SCSIStatus == MPI2_SCSI_STATUS_TASK_ABORTED)) { mprsas_set_ccbstatus(ccb, CAM_REQ_ABORTED); break; } /* Handle normal status and sense */ csio->scsi_status = rep->SCSIStatus; if (rep->SCSIStatus == MPI2_SCSI_STATUS_GOOD) mprsas_set_ccbstatus(ccb, CAM_REQ_CMP); else mprsas_set_ccbstatus(ccb, CAM_SCSI_STATUS_ERROR); if (rep->SCSIState & MPI2_SCSI_STATE_AUTOSENSE_VALID) { int sense_len, returned_sense_len; returned_sense_len = min(le32toh(rep->SenseCount), sizeof(struct scsi_sense_data)); if (returned_sense_len < csio->sense_len) csio->sense_resid = csio->sense_len - returned_sense_len; else csio->sense_resid = 0; sense_len = min(returned_sense_len, csio->sense_len - csio->sense_resid); bzero(&csio->sense_data, sizeof(csio->sense_data)); bcopy(cm->cm_sense, &csio->sense_data, sense_len); ccb->ccb_h.status |= CAM_AUTOSNS_VALID; } /* * Check if this is an INQUIRY command. 
If it's a VPD inquiry, * and it's page code 0 (Supported Page List), and there is * inquiry data, and this is for a sequential access device, and * the device is an SSP target, and TLR is supported by the * controller, turn the TLR_bits value ON if page 0x90 is * supported. */ if ((scsi_cdb[0] == INQUIRY) && (scsi_cdb[1] & SI_EVPD) && (scsi_cdb[2] == SVPD_SUPPORTED_PAGE_LIST) && ((csio->ccb_h.flags & CAM_DATA_MASK) == CAM_DATA_VADDR) && (csio->data_ptr != NULL) && ((csio->data_ptr[0] & 0x1f) == T_SEQUENTIAL) && (sc->control_TLR) && (sc->mapping_table[target_id].device_info & MPI2_SAS_DEVICE_INFO_SSP_TARGET)) { vpd_list = (struct scsi_vpd_supported_page_list *) csio->data_ptr; TLR_bits = &sc->mapping_table[target_id].TLR_bits; *TLR_bits = (u8)MPI2_SCSIIO_CONTROL_NO_TLR; TLR_on = (u8)MPI2_SCSIIO_CONTROL_TLR_ON; alloc_len = ((u16)scsi_cdb[3] << 8) + scsi_cdb[4]; alloc_len -= csio->resid; for (i = 0; i < MIN(vpd_list->length, alloc_len); i++) { if (vpd_list->list[i] == 0x90) { *TLR_bits = TLR_on; break; } } } /* * If this is a SATA direct-access end device, mark it so that * a SCSI StartStopUnit command will be sent to it when the * driver is being shutdown. */ if ((scsi_cdb[0] == INQUIRY) && (csio->data_ptr != NULL) && ((csio->data_ptr[0] & 0x1f) == T_DIRECT) && (sc->mapping_table[target_id].device_info & MPI2_SAS_DEVICE_INFO_SATA_DEVICE) && ((sc->mapping_table[target_id].device_info & MPI2_SAS_DEVICE_INFO_MASK_DEVICE_TYPE) == MPI2_SAS_DEVICE_INFO_END_DEVICE)) { target = &sassc->targets[target_id]; target->supports_SSU = TRUE; mpr_dprint(sc, MPR_XINFO, "Target %d supports SSU\n", target_id); } break; case MPI2_IOCSTATUS_SCSI_INVALID_DEVHANDLE: case MPI2_IOCSTATUS_SCSI_DEVICE_NOT_THERE: /* * If devinfo is 0 this will be a volume. In that case don't * tell CAM that the volume is not there. We want volumes to * be enumerated until they are deleted/removed, not just * failed. 
*/ if (cm->cm_targ->devinfo == 0) mprsas_set_ccbstatus(ccb, CAM_REQ_CMP); else mprsas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); break; case MPI2_IOCSTATUS_INVALID_SGL: mpr_print_scsiio_cmd(sc, cm); mprsas_set_ccbstatus(ccb, CAM_UNREC_HBA_ERROR); break; case MPI2_IOCSTATUS_SCSI_TASK_TERMINATED: /* * This is one of the responses that comes back when an I/O * has been aborted. If it is because of a timeout that we * initiated, just set the status to CAM_CMD_TIMEOUT. * Otherwise set it to CAM_REQ_ABORTED. The effect on the * command is the same (it gets retried, subject to the * retry counter), the only difference is what gets printed * on the console. */ - if (cm->cm_state == MPR_CM_STATE_TIMEDOUT) + if (cm->cm_flags & MPR_CM_FLAGS_TIMEDOUT) mprsas_set_ccbstatus(ccb, CAM_CMD_TIMEOUT); else mprsas_set_ccbstatus(ccb, CAM_REQ_ABORTED); break; case MPI2_IOCSTATUS_SCSI_DATA_OVERRUN: /* resid is ignored for this condition */ csio->resid = 0; mprsas_set_ccbstatus(ccb, CAM_DATA_RUN_ERR); break; case MPI2_IOCSTATUS_SCSI_IOC_TERMINATED: case MPI2_IOCSTATUS_SCSI_EXT_TERMINATED: /* * These can sometimes be transient transport-related * errors, and sometimes persistent drive-related errors. * We used to retry these without decrementing the retry * count by returning CAM_REQUEUE_REQ. Unfortunately, if * we hit a persistent drive problem that returns one of * these error codes, we would retry indefinitely. So, * return CAM_REQ_CMP_ERROR so that we decrement the retry * count and avoid infinite retries. We're taking the * potential risk of flagging false failures in the event * of a topology-related error (e.g. a SAS expander problem * causes a command addressed to a drive to fail), but * avoiding getting into an infinite retry loop. 
*/ mprsas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); mpr_dprint(sc, MPR_INFO, "Controller reported %s tgt %u SMID %u loginfo %x\n", mpr_describe_table(mpr_iocstatus_string, le16toh(rep->IOCStatus) & MPI2_IOCSTATUS_MASK), target_id, cm->cm_desc.Default.SMID, le32toh(rep->IOCLogInfo)); mpr_dprint(sc, MPR_XINFO, "SCSIStatus %x SCSIState %x xfercount %u\n", rep->SCSIStatus, rep->SCSIState, le32toh(rep->TransferCount)); break; case MPI2_IOCSTATUS_INVALID_FUNCTION: case MPI2_IOCSTATUS_INTERNAL_ERROR: case MPI2_IOCSTATUS_INVALID_VPID: case MPI2_IOCSTATUS_INVALID_FIELD: case MPI2_IOCSTATUS_INVALID_STATE: case MPI2_IOCSTATUS_OP_STATE_NOT_SUPPORTED: case MPI2_IOCSTATUS_SCSI_IO_DATA_ERROR: case MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR: case MPI2_IOCSTATUS_SCSI_RESIDUAL_MISMATCH: case MPI2_IOCSTATUS_SCSI_TASK_MGMT_FAILED: default: mprsas_log_command(cm, MPR_XINFO, "completed ioc %x loginfo %x scsi %x state %x xfer %u\n", le16toh(rep->IOCStatus), le32toh(rep->IOCLogInfo), rep->SCSIStatus, rep->SCSIState, le32toh(rep->TransferCount)); csio->resid = cm->cm_length; if (scsi_cdb[0] == UNMAP && target->is_nvme && (csio->ccb_h.flags & CAM_DATA_MASK) == CAM_DATA_VADDR) mprsas_set_ccbstatus(ccb, CAM_REQ_CMP); else mprsas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); break; } mpr_sc_failed_io_info(sc, csio, rep, cm->cm_targ); if (sassc->flags & MPRSAS_QUEUE_FROZEN) { ccb->ccb_h.status |= CAM_RELEASE_SIMQ; sassc->flags &= ~MPRSAS_QUEUE_FROZEN; mpr_dprint(sc, MPR_XINFO, "Command completed, unfreezing SIM " "queue\n"); } if (mprsas_get_ccbstatus(ccb) != CAM_REQ_CMP) { ccb->ccb_h.status |= CAM_DEV_QFRZN; xpt_freeze_devq(ccb->ccb_h.path, /*count*/ 1); } mpr_free_command(sc, cm); xpt_done(ccb); } #if __FreeBSD_version >= 900026 static void mprsas_smpio_complete(struct mpr_softc *sc, struct mpr_command *cm) { MPI2_SMP_PASSTHROUGH_REPLY *rpl; MPI2_SMP_PASSTHROUGH_REQUEST *req; uint64_t sasaddr; union ccb *ccb; ccb = cm->cm_complete_data; /* * Currently there should be no way we can hit this case. 
It only * happens when we have a failure to allocate chain frames, and SMP * commands require two S/G elements only. That should be handled * in the standard request size. */ if ((cm->cm_flags & MPR_CM_FLAGS_ERROR_MASK) != 0) { mpr_dprint(sc, MPR_ERROR, "%s: cm_flags = %#x on SMP " "request!\n", __func__, cm->cm_flags); mprsas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); goto bailout; } rpl = (MPI2_SMP_PASSTHROUGH_REPLY *)cm->cm_reply; if (rpl == NULL) { mpr_dprint(sc, MPR_ERROR, "%s: NULL cm_reply!\n", __func__); mprsas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); goto bailout; } req = (MPI2_SMP_PASSTHROUGH_REQUEST *)cm->cm_req; sasaddr = le32toh(req->SASAddress.Low); sasaddr |= ((uint64_t)(le32toh(req->SASAddress.High))) << 32; if ((le16toh(rpl->IOCStatus) & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS || rpl->SASStatus != MPI2_SASSTATUS_SUCCESS) { mpr_dprint(sc, MPR_XINFO, "%s: IOCStatus %04x SASStatus %02x\n", __func__, le16toh(rpl->IOCStatus), rpl->SASStatus); mprsas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); goto bailout; } mpr_dprint(sc, MPR_XINFO, "%s: SMP request to SAS address %#jx " "completed successfully\n", __func__, (uintmax_t)sasaddr); if (ccb->smpio.smp_response[2] == SMP_FR_ACCEPTED) mprsas_set_ccbstatus(ccb, CAM_REQ_CMP); else mprsas_set_ccbstatus(ccb, CAM_SMP_STATUS_ERROR); bailout: /* * We sync in both directions because we had DMAs in the S/G list * in both directions. 
*/ bus_dmamap_sync(sc->buffer_dmat, cm->cm_dmamap, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(sc->buffer_dmat, cm->cm_dmamap); mpr_free_command(sc, cm); xpt_done(ccb); } static void mprsas_send_smpcmd(struct mprsas_softc *sassc, union ccb *ccb, uint64_t sasaddr) { struct mpr_command *cm; uint8_t *request, *response; MPI2_SMP_PASSTHROUGH_REQUEST *req; struct mpr_softc *sc; struct sglist *sg; int error; sc = sassc->sc; sg = NULL; error = 0; #if (__FreeBSD_version >= 1000028) || \ ((__FreeBSD_version >= 902001) && (__FreeBSD_version < 1000000)) switch (ccb->ccb_h.flags & CAM_DATA_MASK) { case CAM_DATA_PADDR: case CAM_DATA_SG_PADDR: /* * XXX We don't yet support physical addresses here. */ mpr_dprint(sc, MPR_ERROR, "%s: physical addresses not " "supported\n", __func__); mprsas_set_ccbstatus(ccb, CAM_REQ_INVALID); xpt_done(ccb); return; case CAM_DATA_SG: /* * The chip does not support more than one buffer for the * request or response. */ if ((ccb->smpio.smp_request_sglist_cnt > 1) || (ccb->smpio.smp_response_sglist_cnt > 1)) { mpr_dprint(sc, MPR_ERROR, "%s: multiple request or " "response buffer segments not supported for SMP\n", __func__); mprsas_set_ccbstatus(ccb, CAM_REQ_INVALID); xpt_done(ccb); return; } /* * The CAM_SCATTER_VALID flag was originally implemented * for the XPT_SCSI_IO CCB, which only has one data pointer. * We have two. So, just take that flag to mean that we * might have S/G lists, and look at the S/G segment count * to figure out whether that is the case for each individual * buffer. 
*/ if (ccb->smpio.smp_request_sglist_cnt != 0) { bus_dma_segment_t *req_sg; req_sg = (bus_dma_segment_t *)ccb->smpio.smp_request; request = (uint8_t *)(uintptr_t)req_sg[0].ds_addr; } else request = ccb->smpio.smp_request; if (ccb->smpio.smp_response_sglist_cnt != 0) { bus_dma_segment_t *rsp_sg; rsp_sg = (bus_dma_segment_t *)ccb->smpio.smp_response; response = (uint8_t *)(uintptr_t)rsp_sg[0].ds_addr; } else response = ccb->smpio.smp_response; break; case CAM_DATA_VADDR: request = ccb->smpio.smp_request; response = ccb->smpio.smp_response; break; default: mprsas_set_ccbstatus(ccb, CAM_REQ_INVALID); xpt_done(ccb); return; } #else /* __FreeBSD_version < 1000028 */ /* * XXX We don't yet support physical addresses here. */ if (ccb->ccb_h.flags & (CAM_DATA_PHYS|CAM_SG_LIST_PHYS)) { mpr_dprint(sc, MPR_ERROR, "%s: physical addresses not " "supported\n", __func__); mprsas_set_ccbstatus(ccb, CAM_REQ_INVALID); xpt_done(ccb); return; } /* * If the user wants to send an S/G list, check to make sure they * have single buffers. */ if (ccb->ccb_h.flags & CAM_SCATTER_VALID) { /* * The chip does not support more than one buffer for the * request or response. */ if ((ccb->smpio.smp_request_sglist_cnt > 1) || (ccb->smpio.smp_response_sglist_cnt > 1)) { mpr_dprint(sc, MPR_ERROR, "%s: multiple request or " "response buffer segments not supported for SMP\n", __func__); mprsas_set_ccbstatus(ccb, CAM_REQ_INVALID); xpt_done(ccb); return; } /* * The CAM_SCATTER_VALID flag was originally implemented * for the XPT_SCSI_IO CCB, which only has one data pointer. * We have two. So, just take that flag to mean that we * might have S/G lists, and look at the S/G segment count * to figure out whether that is the case for each individual * buffer. 
*/ if (ccb->smpio.smp_request_sglist_cnt != 0) { bus_dma_segment_t *req_sg; req_sg = (bus_dma_segment_t *)ccb->smpio.smp_request; request = (uint8_t *)(uintptr_t)req_sg[0].ds_addr; } else request = ccb->smpio.smp_request; if (ccb->smpio.smp_response_sglist_cnt != 0) { bus_dma_segment_t *rsp_sg; rsp_sg = (bus_dma_segment_t *)ccb->smpio.smp_response; response = (uint8_t *)(uintptr_t)rsp_sg[0].ds_addr; } else response = ccb->smpio.smp_response; } else { request = ccb->smpio.smp_request; response = ccb->smpio.smp_response; } #endif /* __FreeBSD_version < 1000028 */ cm = mpr_alloc_command(sc); if (cm == NULL) { mpr_dprint(sc, MPR_ERROR, "%s: cannot allocate command\n", __func__); mprsas_set_ccbstatus(ccb, CAM_RESRC_UNAVAIL); xpt_done(ccb); return; } req = (MPI2_SMP_PASSTHROUGH_REQUEST *)cm->cm_req; bzero(req, sizeof(*req)); req->Function = MPI2_FUNCTION_SMP_PASSTHROUGH; /* Allow the chip to use any route to this SAS address. */ req->PhysicalPort = 0xff; req->RequestDataLength = htole16(ccb->smpio.smp_request_len); req->SGLFlags = MPI2_SGLFLAGS_SYSTEM_ADDRESS_SPACE | MPI2_SGLFLAGS_SGL_TYPE_MPI; mpr_dprint(sc, MPR_XINFO, "%s: sending SMP request to SAS address " "%#jx\n", __func__, (uintmax_t)sasaddr); mpr_init_sge(cm, req, &req->SGL); /* * Set up a uio to pass into mpr_map_command(). This allows us to * do one map command, and one busdma call in there. */ cm->cm_uio.uio_iov = cm->cm_iovec; cm->cm_uio.uio_iovcnt = 2; cm->cm_uio.uio_segflg = UIO_SYSSPACE; /* * The read/write flag isn't used by busdma, but set it just in * case. This isn't exactly accurate, either, since we're going in * both directions. 
*/ cm->cm_uio.uio_rw = UIO_WRITE; cm->cm_iovec[0].iov_base = request; cm->cm_iovec[0].iov_len = le16toh(req->RequestDataLength); cm->cm_iovec[1].iov_base = response; cm->cm_iovec[1].iov_len = ccb->smpio.smp_response_len; cm->cm_uio.uio_resid = cm->cm_iovec[0].iov_len + cm->cm_iovec[1].iov_len; /* * Trigger a warning message in mpr_data_cb() for the user if we * wind up exceeding two S/G segments. The chip expects one * segment for the request and another for the response. */ cm->cm_max_segs = 2; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_complete = mprsas_smpio_complete; cm->cm_complete_data = ccb; /* * Tell the mapping code that we're using a uio, and that this is * an SMP passthrough request. There is a little special-case * logic there (in mpr_data_cb()) to handle the bidirectional * transfer. */ cm->cm_flags |= MPR_CM_FLAGS_USE_UIO | MPR_CM_FLAGS_SMP_PASS | MPR_CM_FLAGS_DATAIN | MPR_CM_FLAGS_DATAOUT; /* The chip data format is little endian. */ req->SASAddress.High = htole32(sasaddr >> 32); req->SASAddress.Low = htole32(sasaddr); /* * XXX Note that we don't have a timeout/abort mechanism here. * From the manual, it looks like task management requests only * work for SCSI IO and SATA passthrough requests. We may need to * have a mechanism to retry requests in the event of a chip reset * at least. Hopefully the chip will ensure that any errors short * of that are relayed back to the driver. */ error = mpr_map_command(sc, cm); if ((error != 0) && (error != EINPROGRESS)) { mpr_dprint(sc, MPR_ERROR, "%s: error %d returned from " "mpr_map_command()\n", __func__, error); goto bailout_error; } return; bailout_error: mpr_free_command(sc, cm); mprsas_set_ccbstatus(ccb, CAM_RESRC_UNAVAIL); xpt_done(ccb); return; } static void mprsas_action_smpio(struct mprsas_softc *sassc, union ccb *ccb) { struct mpr_softc *sc; struct mprsas_target *targ; uint64_t sasaddr = 0; sc = sassc->sc; /* * Make sure the target exists. 
*/ KASSERT(ccb->ccb_h.target_id < sassc->maxtargets, ("Target %d out of bounds in XPT_SMP_IO\n", ccb->ccb_h.target_id)); targ = &sassc->targets[ccb->ccb_h.target_id]; if (targ->handle == 0x0) { mpr_dprint(sc, MPR_ERROR, "%s: target %d does not exist!\n", __func__, ccb->ccb_h.target_id); mprsas_set_ccbstatus(ccb, CAM_SEL_TIMEOUT); xpt_done(ccb); return; } /* * If this device has an embedded SMP target, we'll talk to it * directly; otherwise, figure out what the expander's address is. */ if ((targ->devinfo & MPI2_SAS_DEVICE_INFO_SMP_TARGET) != 0) sasaddr = targ->sasaddr; /* * If we don't have a SAS address for the expander yet, try * grabbing it from the page 0x83 information cached in the * transport layer for this target. LSI expanders report the * expander SAS address as the port-associated SAS address in * Inquiry VPD page 0x83. Maxim expanders don't report it in page * 0x83. * * XXX KDM disable this for now, but leave it commented out so that * it is obvious that this is another possible way to get the SAS * address. * * The parent handle method below is a little more reliable, and * the other benefit is that it works for devices other than SES * devices. So you can send a SMP request to a da(4) device and it * will get routed to the expander that device is attached to. * (Assuming the da(4) device doesn't contain an SMP target...) */ #if 0 if (sasaddr == 0) sasaddr = xpt_path_sas_addr(ccb->ccb_h.path); #endif /* * If we still don't have a SAS address for the expander, look for * the parent device of this device, which is probably the expander. 
*/ if (sasaddr == 0) { #ifdef OLD_MPR_PROBE struct mprsas_target *parent_target; #endif if (targ->parent_handle == 0x0) { mpr_dprint(sc, MPR_ERROR, "%s: handle %d does not have " "a valid parent handle!\n", __func__, targ->handle); mprsas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); goto bailout; } #ifdef OLD_MPR_PROBE parent_target = mprsas_find_target_by_handle(sassc, 0, targ->parent_handle); if (parent_target == NULL) { mpr_dprint(sc, MPR_ERROR, "%s: handle %d does not have " "a valid parent target!\n", __func__, targ->handle); mprsas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); goto bailout; } if ((parent_target->devinfo & MPI2_SAS_DEVICE_INFO_SMP_TARGET) == 0) { mpr_dprint(sc, MPR_ERROR, "%s: handle %d parent %d " "does not have an SMP target!\n", __func__, targ->handle, parent_target->handle); mprsas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); goto bailout; } sasaddr = parent_target->sasaddr; #else /* OLD_MPR_PROBE */ if ((targ->parent_devinfo & MPI2_SAS_DEVICE_INFO_SMP_TARGET) == 0) { mpr_dprint(sc, MPR_ERROR, "%s: handle %d parent %d " "does not have an SMP target!\n", __func__, targ->handle, targ->parent_handle); mprsas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); goto bailout; } if (targ->parent_sasaddr == 0x0) { mpr_dprint(sc, MPR_ERROR, "%s: handle %d parent handle " "%d does not have a valid SAS address!\n", __func__, targ->handle, targ->parent_handle); mprsas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); goto bailout; } sasaddr = targ->parent_sasaddr; #endif /* OLD_MPR_PROBE */ } if (sasaddr == 0) { mpr_dprint(sc, MPR_INFO, "%s: unable to find SAS address for " "handle %d\n", __func__, targ->handle); mprsas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); goto bailout; } mprsas_send_smpcmd(sassc, ccb, sasaddr); return; bailout: xpt_done(ccb); } #endif //__FreeBSD_version >= 900026 static void mprsas_action_resetdev(struct mprsas_softc *sassc, union ccb *ccb) { MPI2_SCSI_TASK_MANAGE_REQUEST *req; struct mpr_softc *sc; struct mpr_command *tm; struct mprsas_target *targ; 
MPR_FUNCTRACE(sassc->sc); mtx_assert(&sassc->sc->mpr_mtx, MA_OWNED); KASSERT(ccb->ccb_h.target_id < sassc->maxtargets, ("Target %d out of " "bounds in XPT_RESET_DEV\n", ccb->ccb_h.target_id)); sc = sassc->sc; - tm = mpr_alloc_command(sc); + tm = mprsas_alloc_tm(sc); if (tm == NULL) { mpr_dprint(sc, MPR_ERROR, "command alloc failure in " "mprsas_action_resetdev\n"); mprsas_set_ccbstatus(ccb, CAM_RESRC_UNAVAIL); xpt_done(ccb); return; } targ = &sassc->targets[ccb->ccb_h.target_id]; req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; req->DevHandle = htole16(targ->handle); - req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT; req->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET; - /* SAS Hard Link Reset / SATA Link Reset */ - req->MsgFlags = MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET; + if (!targ->is_nvme || sc->custom_nvme_tm_handling) { + /* SAS Hard Link Reset / SATA Link Reset */ + req->MsgFlags = MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET; + } else { + /* PCIe Protocol Level Reset*/ + req->MsgFlags = + MPI26_SCSITASKMGMT_MSGFLAGS_PROTOCOL_LVL_RST_PCIE; + } tm->cm_data = NULL; - tm->cm_desc.HighPriority.RequestFlags = - MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY; tm->cm_complete = mprsas_resetdev_complete; tm->cm_complete_data = ccb; mpr_dprint(sc, MPR_INFO, "%s: Sending reset for target ID %d\n", __func__, targ->tid); tm->cm_targ = targ; - targ->flags |= MPRSAS_TARGET_INRESET; + mprsas_prepare_for_tm(sc, tm, targ, CAM_LUN_WILDCARD); mpr_map_command(sc, tm); } static void mprsas_resetdev_complete(struct mpr_softc *sc, struct mpr_command *tm) { MPI2_SCSI_TASK_MANAGE_REPLY *resp; union ccb *ccb; MPR_FUNCTRACE(sc); mtx_assert(&sc->mpr_mtx, MA_OWNED); resp = (MPI2_SCSI_TASK_MANAGE_REPLY *)tm->cm_reply; ccb = tm->cm_complete_data; /* * Currently there should be no way we can hit this case. It only * happens when we have a failure to allocate chain frames, and * task management commands don't have S/G lists. 
*/ if ((tm->cm_flags & MPR_CM_FLAGS_ERROR_MASK) != 0) { MPI2_SCSI_TASK_MANAGE_REQUEST *req; req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; mpr_dprint(sc, MPR_ERROR, "%s: cm_flags = %#x for reset of " "handle %#04x! This should not happen!\n", __func__, tm->cm_flags, req->DevHandle); mprsas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); goto bailout; } mpr_dprint(sc, MPR_XINFO, "%s: IOCStatus = 0x%x ResponseCode = 0x%x\n", __func__, le16toh(resp->IOCStatus), le32toh(resp->ResponseCode)); if (le32toh(resp->ResponseCode) == MPI2_SCSITASKMGMT_RSP_TM_COMPLETE) { mprsas_set_ccbstatus(ccb, CAM_REQ_CMP); mprsas_announce_reset(sc, AC_SENT_BDR, tm->cm_targ->tid, CAM_LUN_WILDCARD); } else mprsas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); bailout: mprsas_free_tm(sc, tm); xpt_done(ccb); } static void mprsas_poll(struct cam_sim *sim) { struct mprsas_softc *sassc; sassc = cam_sim_softc(sim); if (sassc->sc->mpr_debug & MPR_TRACE) { /* frequent debug messages during a panic just slow * everything down too much. */ mpr_dprint(sassc->sc, MPR_XINFO, "%s clearing MPR_TRACE\n", __func__); sassc->sc->mpr_debug &= ~MPR_TRACE; } mpr_intr_locked(sassc->sc); } static void mprsas_async(void *callback_arg, uint32_t code, struct cam_path *path, void *arg) { struct mpr_softc *sc; sc = (struct mpr_softc *)callback_arg; switch (code) { #if (__FreeBSD_version >= 1000006) || \ ((__FreeBSD_version >= 901503) && (__FreeBSD_version < 1000000)) case AC_ADVINFO_CHANGED: { struct mprsas_target *target; struct mprsas_softc *sassc; struct scsi_read_capacity_data_long rcap_buf; struct ccb_dev_advinfo cdai; struct mprsas_lun *lun; lun_id_t lunid; int found_lun; uintptr_t buftype; buftype = (uintptr_t)arg; found_lun = 0; sassc = sc->sassc; /* * We're only interested in read capacity data changes. */ if (buftype != CDAI_TYPE_RCAPLONG) break; /* * See the comment in mpr_attach_sas() for a detailed * explanation. In these versions of FreeBSD we register * for all events and filter out the events that don't * apply to us. 
*/ #if (__FreeBSD_version < 1000703) || \ ((__FreeBSD_version >= 1100000) && (__FreeBSD_version < 1100002)) if (xpt_path_path_id(path) != sassc->sim->path_id) break; #endif /* * We should have a handle for this, but check to make sure. */ KASSERT(xpt_path_target_id(path) < sassc->maxtargets, ("Target %d out of bounds in mprsas_async\n", xpt_path_target_id(path))); target = &sassc->targets[xpt_path_target_id(path)]; if (target->handle == 0) break; lunid = xpt_path_lun_id(path); SLIST_FOREACH(lun, &target->luns, lun_link) { if (lun->lun_id == lunid) { found_lun = 1; break; } } if (found_lun == 0) { lun = malloc(sizeof(struct mprsas_lun), M_MPR, M_NOWAIT | M_ZERO); if (lun == NULL) { mpr_dprint(sc, MPR_ERROR, "Unable to alloc " "LUN for EEDP support.\n"); break; } lun->lun_id = lunid; SLIST_INSERT_HEAD(&target->luns, lun, lun_link); } bzero(&rcap_buf, sizeof(rcap_buf)); xpt_setup_ccb(&cdai.ccb_h, path, CAM_PRIORITY_NORMAL); cdai.ccb_h.func_code = XPT_DEV_ADVINFO; cdai.ccb_h.flags = CAM_DIR_IN; cdai.buftype = CDAI_TYPE_RCAPLONG; #if (__FreeBSD_version >= 1100061) || \ ((__FreeBSD_version >= 1001510) && (__FreeBSD_version < 1100000)) cdai.flags = CDAI_FLAG_NONE; #else cdai.flags = 0; #endif cdai.bufsiz = sizeof(rcap_buf); cdai.buf = (uint8_t *)&rcap_buf; xpt_action((union ccb *)&cdai); if ((cdai.ccb_h.status & CAM_DEV_QFRZN) != 0) cam_release_devq(cdai.ccb_h.path, 0, 0, 0, FALSE); if ((mprsas_get_ccbstatus((union ccb *)&cdai) == CAM_REQ_CMP) && (rcap_buf.prot & SRC16_PROT_EN)) { switch (rcap_buf.prot & SRC16_P_TYPE) { case SRC16_PTYPE_1: case SRC16_PTYPE_3: lun->eedp_formatted = TRUE; lun->eedp_block_size = scsi_4btoul(rcap_buf.length); break; case SRC16_PTYPE_2: default: lun->eedp_formatted = FALSE; lun->eedp_block_size = 0; break; } } else { lun->eedp_formatted = FALSE; lun->eedp_block_size = 0; } break; } #endif case AC_FOUND_DEVICE: { struct ccb_getdev *cgd; /* * See the comment in mpr_attach_sas() for a detailed * explanation. 
In these versions of FreeBSD we register * for all events and filter out the events that don't * apply to us. */ #if (__FreeBSD_version < 1000703) || \ ((__FreeBSD_version >= 1100000) && (__FreeBSD_version < 1100002)) if (xpt_path_path_id(path) != sc->sassc->sim->path_id) break; #endif cgd = arg; #if (__FreeBSD_version < 901503) || \ ((__FreeBSD_version >= 1000000) && (__FreeBSD_version < 1000006)) mprsas_check_eedp(sc, path, cgd); #endif break; } default: break; } } #if (__FreeBSD_version < 901503) || \ ((__FreeBSD_version >= 1000000) && (__FreeBSD_version < 1000006)) static void mprsas_check_eedp(struct mpr_softc *sc, struct cam_path *path, struct ccb_getdev *cgd) { struct mprsas_softc *sassc = sc->sassc; struct ccb_scsiio *csio; struct scsi_read_capacity_16 *scsi_cmd; struct scsi_read_capacity_eedp *rcap_buf; path_id_t pathid; target_id_t targetid; lun_id_t lunid; union ccb *ccb; struct cam_path *local_path; struct mprsas_target *target; struct mprsas_lun *lun; uint8_t found_lun; char path_str[64]; pathid = cam_sim_path(sassc->sim); targetid = xpt_path_target_id(path); lunid = xpt_path_lun_id(path); KASSERT(targetid < sassc->maxtargets, ("Target %d out of bounds in " "mprsas_check_eedp\n", targetid)); target = &sassc->targets[targetid]; if (target->handle == 0x0) return; /* * Determine if the device is EEDP capable. * * If this flag is set in the inquiry data, the device supports * protection information, and must support the 16 byte read capacity * command, otherwise continue without sending read cap 16. */ if ((cgd->inq_data.spc3_flags & SPC3_SID_PROTECT) == 0) return; /* * Issue a READ CAPACITY 16 command. This info is used to determine if * the LUN is formatted for EEDP support. 
*/ ccb = xpt_alloc_ccb_nowait(); if (ccb == NULL) { mpr_dprint(sc, MPR_ERROR, "Unable to alloc CCB for EEDP " "support.\n"); return; } if (xpt_create_path(&local_path, xpt_periph, pathid, targetid, lunid) != CAM_REQ_CMP) { mpr_dprint(sc, MPR_ERROR, "Unable to create path for EEDP " "support.\n"); xpt_free_ccb(ccb); return; } /* * If LUN is already in list, don't create a new one. */ found_lun = FALSE; SLIST_FOREACH(lun, &target->luns, lun_link) { if (lun->lun_id == lunid) { found_lun = TRUE; break; } } if (!found_lun) { lun = malloc(sizeof(struct mprsas_lun), M_MPR, M_NOWAIT | M_ZERO); if (lun == NULL) { mpr_dprint(sc, MPR_ERROR, "Unable to alloc LUN for " "EEDP support.\n"); xpt_free_path(local_path); xpt_free_ccb(ccb); return; } lun->lun_id = lunid; SLIST_INSERT_HEAD(&target->luns, lun, lun_link); } xpt_path_string(local_path, path_str, sizeof(path_str)); mpr_dprint(sc, MPR_INFO, "Sending read cap: path %s handle %d\n", path_str, target->handle); /* * Issue a READ CAPACITY 16 command for the LUN. The * mprsas_read_cap_done function will load the read cap info into the * LUN struct. 
 */
	rcap_buf = malloc(sizeof(struct scsi_read_capacity_eedp), M_MPR,
	    M_NOWAIT | M_ZERO);
	if (rcap_buf == NULL) {
		mpr_dprint(sc, MPR_ERROR, "Unable to alloc read capacity "
		    "buffer for EEDP support.\n");
		xpt_free_path(ccb->ccb_h.path);
		xpt_free_ccb(ccb);
		return;
	}
	xpt_setup_ccb(&ccb->ccb_h, local_path, CAM_PRIORITY_XPT);
	csio = &ccb->csio;
	csio->ccb_h.func_code = XPT_SCSI_IO;
	csio->ccb_h.flags = CAM_DIR_IN;
	csio->ccb_h.retry_count = 4;
	csio->ccb_h.cbfcnp = mprsas_read_cap_done;
	csio->ccb_h.timeout = 60000;
	csio->data_ptr = (uint8_t *)rcap_buf;
	csio->dxfer_len = sizeof(struct scsi_read_capacity_eedp);
	csio->sense_len = MPR_SENSE_LEN;
	csio->cdb_len = sizeof(*scsi_cmd);
	csio->tag_action = MSG_SIMPLE_Q_TAG;

	scsi_cmd = (struct scsi_read_capacity_16 *)&csio->cdb_io.cdb_bytes;
	bzero(scsi_cmd, sizeof(*scsi_cmd));
	scsi_cmd->opcode = 0x9E;
	scsi_cmd->service_action = SRC16_SERVICE_ACTION;
	((uint8_t *)scsi_cmd)[13] = sizeof(struct scsi_read_capacity_eedp);

	ccb->ccb_h.ppriv_ptr1 = sassc;
	xpt_action(ccb);
}

static void
mprsas_read_cap_done(struct cam_periph *periph, union ccb *done_ccb)
{
	struct mprsas_softc *sassc;
	struct mprsas_target *target;
	struct mprsas_lun *lun;
	struct scsi_read_capacity_eedp *rcap_buf;

	if (done_ccb == NULL)
		return;

	/*
	 * The driver needs to release the devq itself when a SCSI command is
	 * generated internally, because such a command never goes back
	 * through cam_periph.  This is currently the only place where the
	 * driver issues a SCSI command internally; if more are added in the
	 * future, they will need to release the devq in the same way.
	 */
	if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
		done_ccb->ccb_h.status &= ~CAM_DEV_QFRZN;
		xpt_release_devq(done_ccb->ccb_h.path,
		    /*count*/ 1, /*run_queue*/TRUE);
	}

	rcap_buf = (struct scsi_read_capacity_eedp *)done_ccb->csio.data_ptr;

	/*
	 * Get the LUN ID for the path and look it up in the LUN list for the
	 * target.
 */
	sassc = (struct mprsas_softc *)done_ccb->ccb_h.ppriv_ptr1;
	KASSERT(done_ccb->ccb_h.target_id < sassc->maxtargets, ("Target %d out "
	    "of bounds in mprsas_read_cap_done\n", done_ccb->ccb_h.target_id));
	target = &sassc->targets[done_ccb->ccb_h.target_id];
	SLIST_FOREACH(lun, &target->luns, lun_link) {
		if (lun->lun_id != done_ccb->ccb_h.target_lun)
			continue;

		/*
		 * Got the LUN in the target's LUN list.  Fill it in with EEDP
		 * info.  If the READ CAP 16 command had some SCSI error
		 * (common if command is not supported), mark the lun as not
		 * supporting EEDP and set the block size to 0.
		 */
		if ((mprsas_get_ccbstatus(done_ccb) != CAM_REQ_CMP) ||
		    (done_ccb->csio.scsi_status != SCSI_STATUS_OK)) {
			lun->eedp_formatted = FALSE;
			lun->eedp_block_size = 0;
			break;
		}

		if (rcap_buf->protect & 0x01) {
			mpr_dprint(sassc->sc, MPR_INFO, "LUN %d for target ID "
			    "%d is formatted for EEDP support.\n",
			    done_ccb->ccb_h.target_lun,
			    done_ccb->ccb_h.target_id);
			lun->eedp_formatted = TRUE;
			lun->eedp_block_size = scsi_4btoul(rcap_buf->length);
		}
		break;
	}

	// Finished with this CCB and path.
	free(rcap_buf, M_MPR);
	xpt_free_path(done_ccb->ccb_h.path);
	xpt_free_ccb(done_ccb);
}
#endif /* (__FreeBSD_version < 901503) || \
	  ((__FreeBSD_version >= 1000000) && (__FreeBSD_version < 1000006)) */

+/*
+ * Set the INRESET flag for this target so that no I/O will be sent to
+ * the target until the reset has completed.  If an I/O request does
+ * happen, the devq will be frozen.  The CCB holds the path which is
+ * used to release the devq.  The devq is released and the CCB is freed
+ * when the TM completes.
+ */
void
mprsas_prepare_for_tm(struct mpr_softc *sc, struct mpr_command *tm,
    struct mprsas_target *target, lun_id_t lun_id)
{
	union ccb *ccb;
	path_id_t path_id;

-	/*
-	 * Set the INRESET flag for this target so that no I/O will be sent to
-	 * the target until the reset has completed.  If an I/O request does
-	 * happen, the devq will be frozen.  The CCB holds the path which is
-	 * used to release the devq.  The devq is released and the CCB is freed
-	 * when the TM completes.
-	 */
	ccb = xpt_alloc_ccb_nowait();
	if (ccb) {
		path_id = cam_sim_path(sc->sassc->sim);
		if (xpt_create_path(&ccb->ccb_h.path, xpt_periph, path_id,
		    target->tid, lun_id) != CAM_REQ_CMP) {
			xpt_free_ccb(ccb);
		} else {
			tm->cm_ccb = ccb;
			tm->cm_targ = target;
			target->flags |= MPRSAS_TARGET_INRESET;
		}
	}
}

int
mprsas_startup(struct mpr_softc *sc)
{
	/*
	 * Send the port enable message and set the wait_for_port_enable flag.
	 * This flag helps to keep the simq frozen until all discovery events
	 * are processed.
	 */
	sc->wait_for_port_enable = 1;
	mprsas_send_portenable(sc);
	return (0);
}

static int
mprsas_send_portenable(struct mpr_softc *sc)
{
	MPI2_PORT_ENABLE_REQUEST *request;
	struct mpr_command *cm;

	MPR_FUNCTRACE(sc);

	if ((cm = mpr_alloc_command(sc)) == NULL)
		return (EBUSY);
	request = (MPI2_PORT_ENABLE_REQUEST *)cm->cm_req;
	request->Function = MPI2_FUNCTION_PORT_ENABLE;
	request->MsgFlags = 0;
	request->VP_ID = 0;
	cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
	cm->cm_complete = mprsas_portenable_complete;
	cm->cm_data = NULL;
	cm->cm_sge = NULL;

	mpr_map_command(sc, cm);
	mpr_dprint(sc, MPR_XINFO,
	    "mpr_send_portenable finished cm %p req %p complete %p\n",
	    cm, cm->cm_req, cm->cm_complete);
	return (0);
}

static void
mprsas_portenable_complete(struct mpr_softc *sc, struct mpr_command *cm)
{
	MPI2_PORT_ENABLE_REPLY *reply;
	struct mprsas_softc *sassc;

	MPR_FUNCTRACE(sc);
	sassc = sc->sassc;

	/*
	 * Currently there should be no way we can hit this case.  It only
	 * happens when we have a failure to allocate chain frames, and
	 * port enable commands don't have S/G lists.
	 */
	if ((cm->cm_flags & MPR_CM_FLAGS_ERROR_MASK) != 0) {
		mpr_dprint(sc, MPR_ERROR, "%s: cm_flags = %#x for port enable!
" "This should not happen!\n", __func__, cm->cm_flags); } reply = (MPI2_PORT_ENABLE_REPLY *)cm->cm_reply; if (reply == NULL) mpr_dprint(sc, MPR_FAULT, "Portenable NULL reply\n"); else if (le16toh(reply->IOCStatus & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) mpr_dprint(sc, MPR_FAULT, "Portenable failed\n"); mpr_free_command(sc, cm); /* * Done waiting for port enable to complete. Decrement the refcount. * If refcount is 0, discovery is complete and a rescan of the bus can * take place. */ sc->wait_for_port_enable = 0; sc->port_enable_complete = 1; wakeup(&sc->port_enable_complete); mprsas_startup_decrement(sassc); } int mprsas_check_id(struct mprsas_softc *sassc, int id) { struct mpr_softc *sc = sassc->sc; char *ids; char *name; ids = &sc->exclude_ids[0]; while((name = strsep(&ids, ",")) != NULL) { if (name[0] == '\0') continue; if (strtol(name, NULL, 0) == (long)id) return (1); } return (0); } void mprsas_realloc_targets(struct mpr_softc *sc, int maxtargets) { struct mprsas_softc *sassc; struct mprsas_lun *lun, *lun_tmp; struct mprsas_target *targ; int i; sassc = sc->sassc; /* * The number of targets is based on IOC Facts, so free all of * the allocated LUNs for each target and then the target buffer * itself. */ for (i=0; i< maxtargets; i++) { targ = &sassc->targets[i]; SLIST_FOREACH_SAFE(lun, &targ->luns, lun_link, lun_tmp) { free(lun, M_MPR); } } free(sassc->targets, M_MPR); sassc->targets = malloc(sizeof(struct mprsas_target) * maxtargets, M_MPR, M_WAITOK|M_ZERO); if (!sassc->targets) { panic("%s failed to alloc targets with error %d\n", __func__, ENOMEM); } } Index: releng/12.1/sys/dev/mpr/mpr_sas.h =================================================================== --- releng/12.1/sys/dev/mpr/mpr_sas.h (revision 352760) +++ releng/12.1/sys/dev/mpr/mpr_sas.h (revision 352761) @@ -1,178 +1,180 @@ /*- * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2016 Avago Technologies + * Copyright 2000-2020 Broadcom Inc. * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. 
(LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ struct mpr_fw_event_work; struct mprsas_lun { SLIST_ENTRY(mprsas_lun) lun_link; lun_id_t lun_id; uint8_t eedp_formatted; uint32_t eedp_block_size; }; struct mprsas_target { uint16_t handle; uint8_t linkrate; uint8_t encl_level_valid; uint8_t encl_level; char connector_name[4]; uint64_t devname; uint32_t devinfo; uint16_t encl_handle; uint16_t encl_slot; uint8_t flags; #define MPRSAS_TARGET_INABORT (1 << 0) #define MPRSAS_TARGET_INRESET (1 << 1) #define MPRSAS_TARGET_INDIAGRESET (1 << 2) #define MPRSAS_TARGET_INREMOVAL (1 << 3) #define MPR_TARGET_FLAGS_RAID_COMPONENT (1 << 4) #define MPR_TARGET_FLAGS_VOLUME (1 << 5) #define MPR_TARGET_IS_SATA_SSD (1 << 6) #define MPRSAS_TARGET_INRECOVERY (MPRSAS_TARGET_INABORT | \ MPRSAS_TARGET_INRESET | MPRSAS_TARGET_INCHIPRESET) uint16_t tid; SLIST_HEAD(, mprsas_lun) luns; TAILQ_HEAD(, mpr_command) commands; struct mpr_command *tm; TAILQ_HEAD(, mpr_command) timedout_commands; uint16_t exp_dev_handle; uint16_t phy_num; uint64_t sasaddr; uint16_t parent_handle; uint64_t parent_sasaddr; uint32_t parent_devinfo; uint64_t issued; uint64_t completed; unsigned int outstanding; unsigned int timeouts; unsigned int aborts; unsigned int logical_unit_resets; unsigned int target_resets; uint8_t scsi_req_desc_type; uint8_t stop_at_shutdown; uint8_t supports_SSU; uint8_t is_nvme; uint32_t MDTS; + uint8_t controller_reset_timeout; }; struct mprsas_softc { struct mpr_softc *sc; u_int flags; #define MPRSAS_IN_DISCOVERY (1 << 0) #define MPRSAS_IN_STARTUP (1 << 1) #define MPRSAS_DISCOVERY_TIMEOUT_PENDING (1 << 2) #define MPRSAS_QUEUE_FROZEN (1 << 3) #define MPRSAS_SHUTDOWN (1 << 4) u_int maxtargets; struct mprsas_target *targets; struct cam_devq *devq; struct cam_sim *sim; struct cam_path *path; struct intr_config_hook sas_ich; struct callout discovery_callout; struct mpr_event_handle *mprsas_eh; u_int startup_refcount; struct proc *sysctl_proc; struct taskqueue *ev_tq; struct task ev_task; 
TAILQ_HEAD(, mpr_fw_event_work) ev_queue; }; MALLOC_DECLARE(M_MPRSAS); /* * Abstracted so that the driver can be backwards and forwards compatible * with future versions of CAM that will provide this functionality. */ #define MPR_SET_LUN(lun, ccblun) \ mprsas_set_lun(lun, ccblun) static __inline int mprsas_set_lun(uint8_t *lun, u_int ccblun) { uint64_t *newlun; newlun = (uint64_t *)lun; *newlun = 0; if (ccblun <= 0xff) { /* Peripheral device address method, LUN is 0 to 255 */ lun[1] = ccblun; } else if (ccblun <= 0x3fff) { /* Flat space address method, LUN is <= 16383 */ scsi_ulto2b(ccblun, lun); lun[0] |= 0x40; } else if (ccblun <= 0xffffff) { /* Extended flat space address method, LUN is <= 16777215 */ scsi_ulto3b(ccblun, &lun[1]); /* Extended Flat space address method */ lun[0] = 0xc0; /* Length = 1, i.e. LUN is 3 bytes long */ lun[0] |= 0x10; /* Extended Address Method */ lun[0] |= 0x02; } else { return (EINVAL); } return (0); } static __inline void mprsas_set_ccbstatus(union ccb *ccb, int status) { ccb->ccb_h.status &= ~CAM_STATUS_MASK; ccb->ccb_h.status |= status; } static __inline int mprsas_get_ccbstatus(union ccb *ccb) { return (ccb->ccb_h.status & CAM_STATUS_MASK); } #define MPR_SET_SINGLE_LUN(req, lun) \ do { \ bzero((req)->LUN, 8); \ (req)->LUN[1] = lun; \ } while(0) void mprsas_rescan_target(struct mpr_softc *sc, struct mprsas_target *targ); void mprsas_discovery_end(struct mprsas_softc *sassc); void mprsas_prepare_for_tm(struct mpr_softc *sc, struct mpr_command *tm, struct mprsas_target *target, lun_id_t lun_id); void mprsas_startup_increment(struct mprsas_softc *sassc); void mprsas_startup_decrement(struct mprsas_softc *sassc); void mprsas_firmware_event_work(void *arg, int pending); int mprsas_check_id(struct mprsas_softc *sassc, int id); Index: releng/12.1/sys/dev/mpr/mpr_sas_lsi.c =================================================================== --- releng/12.1/sys/dev/mpr/mpr_sas_lsi.c (revision 352760) +++ releng/12.1/sys/dev/mpr/mpr_sas_lsi.c 
(revision 352761) @@ -1,1715 +1,1701 @@ /*- * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2016 Avago Technologies + * Copyright 2000-2020 Broadcom Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. 
(LSI) MPT-Fusion Host Adapter FreeBSD */ #include __FBSDID("$FreeBSD$"); /* Communications core for Avago Technologies (LSI) MPT3 */ /* TODO Move headers to mprvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* For Hashed SAS Address creation for SATA Drives */ #define MPT2SAS_SN_LEN 20 #define MPT2SAS_MN_LEN 40 struct mpr_fw_event_work { u16 event; void *event_data; TAILQ_ENTRY(mpr_fw_event_work) ev_link; }; union _sata_sas_address { u8 wwid[8]; struct { u32 high; u32 low; } word; }; /* * define the IDENTIFY DEVICE structure */ struct _ata_identify_device_data { u16 reserved1[10]; /* 0-9 */ u16 serial_number[10]; /* 10-19 */ u16 reserved2[7]; /* 20-26 */ u16 model_number[20]; /* 27-46*/ u16 reserved3[170]; /* 47-216 */ u16 rotational_speed; /* 217 */ u16 reserved4[38]; /* 218-255 */ }; static u32 event_count; static void mprsas_fw_work(struct mpr_softc *sc, struct mpr_fw_event_work *fw_event); static void mprsas_fw_event_free(struct mpr_softc *, struct mpr_fw_event_work *); static int mprsas_add_device(struct mpr_softc *sc, u16 handle, u8 linkrate); static int mprsas_add_pcie_device(struct mpr_softc *sc, u16 handle, u8 linkrate); static int mprsas_get_sata_identify(struct mpr_softc *sc, u16 handle, Mpi2SataPassthroughReply_t *mpi_reply, char *id_buffer, int sz, u32 devinfo); -static void mprsas_ata_id_timeout(void *data); +static void mprsas_ata_id_timeout(struct mpr_softc *, struct mpr_command *); int mprsas_get_sas_address_for_sata_disk(struct mpr_softc *sc, u64 *sas_address, u16 handle, u32 device_info, u8 *is_SATA_SSD); static int mprsas_volume_add(struct mpr_softc *sc, u16 handle); static void 
mprsas_SSU_to_SATA_devices(struct mpr_softc *sc, int howto); static void mprsas_stop_unit_done(struct cam_periph *periph, union ccb *done_ccb); void mprsas_evt_handler(struct mpr_softc *sc, uintptr_t data, MPI2_EVENT_NOTIFICATION_REPLY *event) { struct mpr_fw_event_work *fw_event; u16 sz; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); MPR_DPRINT_EVENT(sc, sas, event); mprsas_record_event(sc, event); fw_event = malloc(sizeof(struct mpr_fw_event_work), M_MPR, M_ZERO|M_NOWAIT); if (!fw_event) { printf("%s: allocate failed for fw_event\n", __func__); return; } sz = le16toh(event->EventDataLength) * 4; fw_event->event_data = malloc(sz, M_MPR, M_ZERO|M_NOWAIT); if (!fw_event->event_data) { printf("%s: allocate failed for event_data\n", __func__); free(fw_event, M_MPR); return; } bcopy(event->EventData, fw_event->event_data, sz); fw_event->event = event->Event; if ((event->Event == MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST || event->Event == MPI2_EVENT_PCIE_TOPOLOGY_CHANGE_LIST || event->Event == MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE || event->Event == MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST) && sc->track_mapping_events) sc->pending_map_events++; /* * When wait_for_port_enable flag is set, make sure that all the events * are processed. Increment the startup_refcount and decrement it after * events are processed. */ if ((event->Event == MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST || event->Event == MPI2_EVENT_PCIE_TOPOLOGY_CHANGE_LIST || event->Event == MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST) && sc->wait_for_port_enable) mprsas_startup_increment(sc->sassc); TAILQ_INSERT_TAIL(&sc->sassc->ev_queue, fw_event, ev_link); taskqueue_enqueue(sc->sassc->ev_tq, &sc->sassc->ev_task); } static void mprsas_fw_event_free(struct mpr_softc *sc, struct mpr_fw_event_work *fw_event) { free(fw_event->event_data, M_MPR); free(fw_event, M_MPR); } /** * _mpr_fw_work - delayed task for processing firmware events * @sc: per adapter object * @fw_event: The fw_event_work object * Context: user. 
* * Return nothing. */ static void mprsas_fw_work(struct mpr_softc *sc, struct mpr_fw_event_work *fw_event) { struct mprsas_softc *sassc; sassc = sc->sassc; mpr_dprint(sc, MPR_EVENT, "(%d)->(%s) Working on Event: [%x]\n", event_count++, __func__, fw_event->event); switch (fw_event->event) { case MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST: { MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST *data; MPI2_EVENT_SAS_TOPO_PHY_ENTRY *phy; uint8_t i; data = (MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST *) fw_event->event_data; mpr_mapping_topology_change_event(sc, fw_event->event_data); for (i = 0; i < data->NumEntries; i++) { phy = &data->PHY[i]; switch (phy->PhyStatus & MPI2_EVENT_SAS_TOPO_RC_MASK) { case MPI2_EVENT_SAS_TOPO_RC_TARG_ADDED: if (mprsas_add_device(sc, le16toh(phy->AttachedDevHandle), phy->LinkRate)) { mpr_dprint(sc, MPR_ERROR, "%s: " "failed to add device with handle " "0x%x\n", __func__, le16toh(phy->AttachedDevHandle)); mprsas_prepare_remove(sassc, le16toh( phy->AttachedDevHandle)); } break; case MPI2_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING: mprsas_prepare_remove(sassc, le16toh( phy->AttachedDevHandle)); break; case MPI2_EVENT_SAS_TOPO_RC_PHY_CHANGED: case MPI2_EVENT_SAS_TOPO_RC_NO_CHANGE: case MPI2_EVENT_SAS_TOPO_RC_DELAY_NOT_RESPONDING: default: break; } } /* * refcount was incremented for this event in * mprsas_evt_handler. Decrement it here because the event has * been processed. 
*/ mprsas_startup_decrement(sassc); break; } case MPI2_EVENT_SAS_DISCOVERY: { MPI2_EVENT_DATA_SAS_DISCOVERY *data; data = (MPI2_EVENT_DATA_SAS_DISCOVERY *)fw_event->event_data; if (data->ReasonCode & MPI2_EVENT_SAS_DISC_RC_STARTED) mpr_dprint(sc, MPR_TRACE,"SAS discovery start event\n"); if (data->ReasonCode & MPI2_EVENT_SAS_DISC_RC_COMPLETED) { mpr_dprint(sc, MPR_TRACE,"SAS discovery stop event\n"); sassc->flags &= ~MPRSAS_IN_DISCOVERY; mprsas_discovery_end(sassc); } break; } case MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE: { Mpi2EventDataSasEnclDevStatusChange_t *data; data = (Mpi2EventDataSasEnclDevStatusChange_t *) fw_event->event_data; mpr_mapping_enclosure_dev_status_change_event(sc, fw_event->event_data); break; } case MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST: { Mpi2EventIrConfigElement_t *element; int i; u8 foreign_config, reason; u16 elementType; Mpi2EventDataIrConfigChangeList_t *event_data; struct mprsas_target *targ; unsigned int id; event_data = fw_event->event_data; foreign_config = (le32toh(event_data->Flags) & MPI2_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG) ? 
1 : 0; element = (Mpi2EventIrConfigElement_t *)&event_data->ConfigElement[0]; id = mpr_mapping_get_raid_tid_from_handle(sc, element->VolDevHandle); mpr_mapping_ir_config_change_event(sc, event_data); for (i = 0; i < event_data->NumElements; i++, element++) { reason = element->ReasonCode; elementType = le16toh(element->ElementFlags) & MPI2_EVENT_IR_CHANGE_EFLAGS_ELEMENT_TYPE_MASK; /* * check for element type of Phys Disk or Hot Spare */ if ((elementType != MPI2_EVENT_IR_CHANGE_EFLAGS_VOLPHYSDISK_ELEMENT) && (elementType != MPI2_EVENT_IR_CHANGE_EFLAGS_HOTSPARE_ELEMENT)) // do next element goto skip_fp_send; /* * check for reason of Hide, Unhide, PD Created, or PD * Deleted */ if ((reason != MPI2_EVENT_IR_CHANGE_RC_HIDE) && (reason != MPI2_EVENT_IR_CHANGE_RC_UNHIDE) && (reason != MPI2_EVENT_IR_CHANGE_RC_PD_CREATED) && (reason != MPI2_EVENT_IR_CHANGE_RC_PD_DELETED)) goto skip_fp_send; // check for a reason of Hide or PD Created if ((reason == MPI2_EVENT_IR_CHANGE_RC_HIDE) || (reason == MPI2_EVENT_IR_CHANGE_RC_PD_CREATED)) { // build RAID Action message Mpi2RaidActionRequest_t *action; Mpi2RaidActionReply_t *reply = NULL; struct mpr_command *cm; int error = 0; if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed\n", __func__); return; } mpr_dprint(sc, MPR_EVENT, "Sending FP action " "from " "MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST " ":\n"); action = (MPI2_RAID_ACTION_REQUEST *)cm->cm_req; action->Function = MPI2_FUNCTION_RAID_ACTION; action->Action = MPI2_RAID_ACTION_PHYSDISK_HIDDEN; action->PhysDiskNum = element->PhysDiskNum; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; error = mpr_request_polled(sc, &cm); if (cm != NULL) reply = (Mpi2RaidActionReply_t *) cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the poll returns error then we * need to do diag reset */ printf("%s: poll for page completed " "with error %d", __func__, error); } if (reply && (le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) != 
MPI2_IOCSTATUS_SUCCESS) { mpr_dprint(sc, MPR_ERROR, "%s: error " "sending RaidActionPage; " "iocstatus = 0x%x\n", __func__, le16toh(reply->IOCStatus)); } if (cm) mpr_free_command(sc, cm); } skip_fp_send: mpr_dprint(sc, MPR_EVENT, "Received " "MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST Reason " "code %x:\n", element->ReasonCode); switch (element->ReasonCode) { case MPI2_EVENT_IR_CHANGE_RC_VOLUME_CREATED: case MPI2_EVENT_IR_CHANGE_RC_ADDED: if (!foreign_config) { if (mprsas_volume_add(sc, le16toh(element->VolDevHandle))) { printf("%s: failed to add RAID " "volume with handle 0x%x\n", __func__, le16toh(element-> VolDevHandle)); } } break; case MPI2_EVENT_IR_CHANGE_RC_VOLUME_DELETED: case MPI2_EVENT_IR_CHANGE_RC_REMOVED: /* * Rescan after volume is deleted or removed. */ if (!foreign_config) { if (id == MPR_MAP_BAD_ID) { printf("%s: could not get ID " "for volume with handle " "0x%04x\n", __func__, le16toh(element-> VolDevHandle)); break; } targ = &sassc->targets[id]; targ->handle = 0x0; targ->encl_slot = 0x0; targ->encl_handle = 0x0; targ->encl_level_valid = 0x0; targ->encl_level = 0x0; targ->connector_name[0] = ' '; targ->connector_name[1] = ' '; targ->connector_name[2] = ' '; targ->connector_name[3] = ' '; targ->exp_dev_handle = 0x0; targ->phy_num = 0x0; targ->linkrate = 0x0; mprsas_rescan_target(sc, targ); printf("RAID target id 0x%x removed\n", targ->tid); } break; case MPI2_EVENT_IR_CHANGE_RC_PD_CREATED: case MPI2_EVENT_IR_CHANGE_RC_HIDE: /* * Phys Disk of a volume has been created. Hide * it from the OS. */ targ = mprsas_find_target_by_handle(sassc, 0, element->PhysDiskDevHandle); if (targ == NULL) break; targ->flags |= MPR_TARGET_FLAGS_RAID_COMPONENT; mprsas_rescan_target(sc, targ); break; case MPI2_EVENT_IR_CHANGE_RC_PD_DELETED: /* * Phys Disk of a volume has been deleted. * Expose it to the OS. 
 */
			if (mprsas_add_device(sc,
			    le16toh(element->PhysDiskDevHandle), 0)) {
				printf("%s: failed to add device with "
				    "handle 0x%x\n", __func__,
				    le16toh(element->PhysDiskDevHandle));
				mprsas_prepare_remove(sassc,
				    le16toh(element->PhysDiskDevHandle));
			}
			break;
		}
	}

	/*
	 * refcount was incremented for this event in
	 * mprsas_evt_handler.  Decrement it here because the event has
	 * been processed.
	 */
	mprsas_startup_decrement(sassc);
	break;
}
case MPI2_EVENT_IR_VOLUME:
{
	Mpi2EventDataIrVolume_t *event_data = fw_event->event_data;

	/*
	 * Informational only.
	 */
	mpr_dprint(sc, MPR_EVENT, "Received IR Volume event:\n");
	switch (event_data->ReasonCode) {
	case MPI2_EVENT_IR_VOLUME_RC_SETTINGS_CHANGED:
		mpr_dprint(sc, MPR_EVENT, " Volume Settings "
		    "changed from 0x%x to 0x%x for Volume with "
		    "handle 0x%x", le32toh(event_data->PreviousValue),
		    le32toh(event_data->NewValue),
		    le16toh(event_data->VolDevHandle));
		break;
	case MPI2_EVENT_IR_VOLUME_RC_STATUS_FLAGS_CHANGED:
		mpr_dprint(sc, MPR_EVENT, " Volume Status "
		    "changed from 0x%x to 0x%x for Volume with "
		    "handle 0x%x", le32toh(event_data->PreviousValue),
		    le32toh(event_data->NewValue),
		    le16toh(event_data->VolDevHandle));
		break;
	case MPI2_EVENT_IR_VOLUME_RC_STATE_CHANGED:
		mpr_dprint(sc, MPR_EVENT, " Volume State "
		    "changed from 0x%x to 0x%x for Volume with "
		    "handle 0x%x", le32toh(event_data->PreviousValue),
		    le32toh(event_data->NewValue),
		    le16toh(event_data->VolDevHandle));
		u32 state;
		struct mprsas_target *targ;
		state = le32toh(event_data->NewValue);
		switch (state) {
		case MPI2_RAID_VOL_STATE_MISSING:
		case MPI2_RAID_VOL_STATE_FAILED:
			mprsas_prepare_volume_remove(sassc,
			    event_data->VolDevHandle);
			break;
		case MPI2_RAID_VOL_STATE_ONLINE:
		case MPI2_RAID_VOL_STATE_DEGRADED:
		case MPI2_RAID_VOL_STATE_OPTIMAL:
			targ = mprsas_find_target_by_handle(sassc, 0,
			    event_data->VolDevHandle);
			if (targ) {
				printf("%s %d: Volume handle "
				    "0x%x is already added \n",
				    __func__, __LINE__,
				    event_data->VolDevHandle);
				break;
			}
			if (mprsas_volume_add(sc, le16toh(event_data->
VolDevHandle))) { printf("%s: failed to add RAID " "volume with handle 0x%x\n", __func__, le16toh( event_data->VolDevHandle)); } break; default: break; } break; default: break; } break; } case MPI2_EVENT_IR_PHYSICAL_DISK: { Mpi2EventDataIrPhysicalDisk_t *event_data = fw_event->event_data; struct mprsas_target *targ; /* * Informational only. */ mpr_dprint(sc, MPR_EVENT, "Received IR Phys Disk event:\n"); switch (event_data->ReasonCode) { case MPI2_EVENT_IR_PHYSDISK_RC_SETTINGS_CHANGED: mpr_dprint(sc, MPR_EVENT, " Phys Disk Settings " "changed from 0x%x to 0x%x for Phys Disk Number " "%d and handle 0x%x at Enclosure handle 0x%x, Slot " "%d", le32toh(event_data->PreviousValue), le32toh(event_data->NewValue), event_data->PhysDiskNum, le16toh(event_data->PhysDiskDevHandle), le16toh(event_data->EnclosureHandle), le16toh(event_data->Slot)); break; case MPI2_EVENT_IR_PHYSDISK_RC_STATUS_FLAGS_CHANGED: mpr_dprint(sc, MPR_EVENT, " Phys Disk Status changed " "from 0x%x to 0x%x for Phys Disk Number %d and " "handle 0x%x at Enclosure handle 0x%x, Slot %d", le32toh(event_data->PreviousValue), le32toh(event_data->NewValue), event_data->PhysDiskNum, le16toh(event_data->PhysDiskDevHandle), le16toh(event_data->EnclosureHandle), le16toh(event_data->Slot)); break; case MPI2_EVENT_IR_PHYSDISK_RC_STATE_CHANGED: mpr_dprint(sc, MPR_EVENT, " Phys Disk State changed " "from 0x%x to 0x%x for Phys Disk Number %d and " "handle 0x%x at Enclosure handle 0x%x, Slot %d", le32toh(event_data->PreviousValue), le32toh(event_data->NewValue), event_data->PhysDiskNum, le16toh(event_data->PhysDiskDevHandle), le16toh(event_data->EnclosureHandle), le16toh(event_data->Slot)); switch (event_data->NewValue) { case MPI2_RAID_PD_STATE_ONLINE: case MPI2_RAID_PD_STATE_DEGRADED: case MPI2_RAID_PD_STATE_REBUILDING: case MPI2_RAID_PD_STATE_OPTIMAL: case MPI2_RAID_PD_STATE_HOT_SPARE: targ = mprsas_find_target_by_handle( sassc, 0, event_data->PhysDiskDevHandle); if (targ) { targ->flags |= 
MPR_TARGET_FLAGS_RAID_COMPONENT; printf("%s %d: Found Target " "for handle 0x%x.\n", __func__, __LINE__ , event_data-> PhysDiskDevHandle); } break; case MPI2_RAID_PD_STATE_OFFLINE: case MPI2_RAID_PD_STATE_NOT_CONFIGURED: case MPI2_RAID_PD_STATE_NOT_COMPATIBLE: default: targ = mprsas_find_target_by_handle( sassc, 0, event_data->PhysDiskDevHandle); if (targ) { targ->flags &= ~MPR_TARGET_FLAGS_RAID_COMPONENT; printf("%s %d: Found Target " "for handle 0x%x.\n", __func__, __LINE__ , event_data-> PhysDiskDevHandle); } break; } default: break; } break; } case MPI2_EVENT_IR_OPERATION_STATUS: { Mpi2EventDataIrOperationStatus_t *event_data = fw_event->event_data; /* * Informational only. */ mpr_dprint(sc, MPR_EVENT, "Received IR Op Status event:\n"); mpr_dprint(sc, MPR_EVENT, " RAID Operation of %d is %d " "percent complete for Volume with handle 0x%x", event_data->RAIDOperation, event_data->PercentComplete, le16toh(event_data->VolDevHandle)); break; } case MPI2_EVENT_TEMP_THRESHOLD: { pMpi2EventDataTemperature_t temp_event; temp_event = (pMpi2EventDataTemperature_t)fw_event->event_data; /* * The Temp Sensor Count must be greater than the event's Sensor * Num to be valid. If valid, print the temp thresholds that * have been exceeded. */ if (sc->iounit_pg8.NumSensors > temp_event->SensorNum) { mpr_dprint(sc, MPR_FAULT, "Temperature Threshold flags " "%s %s %s %s exceeded for Sensor: %d !!!\n", ((temp_event->Status & 0x01) == 1) ? "0 " : " ", ((temp_event->Status & 0x02) == 2) ? "1 " : " ", ((temp_event->Status & 0x04) == 4) ? "2 " : " ", ((temp_event->Status & 0x08) == 8) ?
"3 " : " ", temp_event->SensorNum); mpr_dprint(sc, MPR_FAULT, "Current Temp in Celsius: " "%d\n", temp_event->CurrentTemperature); } break; } case MPI2_EVENT_ACTIVE_CABLE_EXCEPTION: { pMpi26EventDataActiveCableExcept_t ace_event_data; ace_event_data = (pMpi26EventDataActiveCableExcept_t)fw_event->event_data; switch(ace_event_data->ReasonCode) { case MPI26_EVENT_ACTIVE_CABLE_INSUFFICIENT_POWER: { mpr_printf(sc, "Currently a cable with " "ReceptacleID %d cannot be powered and device " "connected to this active cable will not be seen. " "This active cable requires %d mW of power.\n", ace_event_data->ReceptacleID, ace_event_data->ActiveCablePowerRequirement); break; } case MPI26_EVENT_ACTIVE_CABLE_DEGRADED: { mpr_printf(sc, "Currently a cable with " "ReceptacleID %d is not running at optimal speed " "(12 Gb/s rate)\n", ace_event_data->ReceptacleID); break; } default: break; } break; } + case MPI2_EVENT_PCIE_DEVICE_STATUS_CHANGE: + { + pMpi26EventDataPCIeDeviceStatusChange_t pcie_status_event_data; + pcie_status_event_data = + (pMpi26EventDataPCIeDeviceStatusChange_t)fw_event->event_data; + + switch (pcie_status_event_data->ReasonCode) { + case MPI26_EVENT_PCIDEV_STAT_RC_PCIE_HOT_RESET_FAILED: + { + mpr_printf(sc, "PCIe Host Reset failed on DevHandle " + "0x%x\n", pcie_status_event_data->DevHandle); + break; + } + default: + break; + } + break; + } case MPI2_EVENT_SAS_DEVICE_DISCOVERY_ERROR: { pMpi25EventDataSasDeviceDiscoveryError_t discovery_error_data; uint64_t sas_address; discovery_error_data = (pMpi25EventDataSasDeviceDiscoveryError_t) fw_event->event_data; sas_address = discovery_error_data->SASAddress.High; sas_address = (sas_address << 32) | discovery_error_data->SASAddress.Low; switch(discovery_error_data->ReasonCode) { case MPI25_EVENT_SAS_DISC_ERR_SMP_FAILED: { mpr_printf(sc, "SMP command failed during discovery " "for expander with SAS Address %jx and " "handle 0x%x.\n", sas_address, discovery_error_data->DevHandle); break; } case 
MPI25_EVENT_SAS_DISC_ERR_SMP_TIMEOUT: { mpr_printf(sc, "SMP command timed out during " "discovery for expander with SAS Address %jx and " "handle 0x%x.\n", sas_address, discovery_error_data->DevHandle); break; } default: break; } break; } case MPI2_EVENT_PCIE_TOPOLOGY_CHANGE_LIST: { MPI26_EVENT_DATA_PCIE_TOPOLOGY_CHANGE_LIST *data; MPI26_EVENT_PCIE_TOPO_PORT_ENTRY *port_entry; uint8_t i, link_rate; uint16_t handle; data = (MPI26_EVENT_DATA_PCIE_TOPOLOGY_CHANGE_LIST *) fw_event->event_data; mpr_mapping_pcie_topology_change_event(sc, fw_event->event_data); for (i = 0; i < data->NumEntries; i++) { port_entry = &data->PortEntry[i]; handle = le16toh(port_entry->AttachedDevHandle); link_rate = port_entry->CurrentPortInfo & MPI26_EVENT_PCIE_TOPO_PI_RATE_MASK; switch (port_entry->PortStatus) { case MPI26_EVENT_PCIE_TOPO_PS_DEV_ADDED: if (link_rate < MPI26_EVENT_PCIE_TOPO_PI_RATE_2_5) { mpr_dprint(sc, MPR_ERROR, "%s: Cannot " "add PCIe device with handle 0x%x " "with unknown link rate.\n", __func__, handle); break; } if (mprsas_add_pcie_device(sc, handle, link_rate)) { mpr_dprint(sc, MPR_ERROR, "%s: failed " "to add PCIe device with handle " "0x%x\n", __func__, handle); mprsas_prepare_remove(sassc, handle); } break; case MPI26_EVENT_PCIE_TOPO_PS_NOT_RESPONDING: mprsas_prepare_remove(sassc, handle); break; case MPI26_EVENT_PCIE_TOPO_PS_PORT_CHANGED: case MPI26_EVENT_PCIE_TOPO_PS_NO_CHANGE: case MPI26_EVENT_PCIE_TOPO_PS_DELAY_NOT_RESPONDING: default: break; } } /* * refcount was incremented for this event in * mprsas_evt_handler. Decrement it here because the event has * been processed. 
*/ mprsas_startup_decrement(sassc); break; } case MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE: case MPI2_EVENT_SAS_BROADCAST_PRIMITIVE: default: mpr_dprint(sc, MPR_TRACE,"Unhandled event 0x%0X\n", fw_event->event); break; } mpr_dprint(sc, MPR_EVENT, "(%d)->(%s) Event Free: [%x]\n", event_count, __func__, fw_event->event); mprsas_fw_event_free(sc, fw_event); } void mprsas_firmware_event_work(void *arg, int pending) { struct mpr_fw_event_work *fw_event; struct mpr_softc *sc; sc = (struct mpr_softc *)arg; mpr_lock(sc); while ((fw_event = TAILQ_FIRST(&sc->sassc->ev_queue)) != NULL) { TAILQ_REMOVE(&sc->sassc->ev_queue, fw_event, ev_link); mprsas_fw_work(sc, fw_event); } mpr_unlock(sc); } static int mprsas_add_device(struct mpr_softc *sc, u16 handle, u8 linkrate) { char devstring[80]; struct mprsas_softc *sassc; struct mprsas_target *targ; Mpi2ConfigReply_t mpi_reply; Mpi2SasDevicePage0_t config_page; uint64_t sas_address, parent_sas_address = 0; u32 device_info, parent_devinfo = 0; unsigned int id; int ret = 1, error = 0, i; struct mprsas_lun *lun; u8 is_SATA_SSD = 0; struct mpr_command *cm; sassc = sc->sassc; mprsas_startup_increment(sassc); if (mpr_config_get_sas_device_pg0(sc, &mpi_reply, &config_page, MPI2_SAS_DEVICE_PGAD_FORM_HANDLE, handle) != 0) { mpr_dprint(sc, MPR_INFO|MPR_MAPPING|MPR_FAULT, "Error reading SAS device %#x page0, iocstatus= 0x%x\n", handle, mpi_reply.IOCStatus); error = ENXIO; goto out; } device_info = le32toh(config_page.DeviceInfo); if (((device_info & MPI2_SAS_DEVICE_INFO_SMP_TARGET) == 0) && (le16toh(config_page.ParentDevHandle) != 0)) { Mpi2ConfigReply_t tmp_mpi_reply; Mpi2SasDevicePage0_t parent_config_page; if (mpr_config_get_sas_device_pg0(sc, &tmp_mpi_reply, &parent_config_page, MPI2_SAS_DEVICE_PGAD_FORM_HANDLE, le16toh(config_page.ParentDevHandle)) != 0) { mpr_dprint(sc, MPR_MAPPING|MPR_FAULT, "Error reading parent SAS device %#x page0, " "iocstatus= 0x%x\n", le16toh(config_page.ParentDevHandle), tmp_mpi_reply.IOCStatus); } else { 
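The SAS address assembly that follows (for parent_sas_address here, and again for sas_address next to the "TODO Check proper endianness" note) widens two 32-bit config-page words into one 64-bit value by shift-and-or. A minimal standalone sketch of that open-coded combine, with a hypothetical function name and le32toh modeled as already applied:

```c
#include <stdint.h>

/*
 * Hypothetical standalone version of the driver's open-coded
 * shift-and-or: a SAS address arrives from the config page as two
 * 32-bit words (SASAddress.High / SASAddress.Low) and is widened
 * into a single 64-bit value.  Any le32toh() byte-order conversion
 * is assumed to have happened before this is called.
 */
static uint64_t
combine_sas_address(uint32_t high, uint32_t low)
{
	uint64_t sas_address;

	sas_address = high;
	sas_address = (sas_address << 32) | low;
	return (sas_address);
}
```

The widening assignment before the shift matters: shifting the 32-bit `high` directly would discard the upper bits.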
parent_sas_address = parent_config_page.SASAddress.High; parent_sas_address = (parent_sas_address << 32) | parent_config_page.SASAddress.Low; parent_devinfo = le32toh(parent_config_page.DeviceInfo); } } /* TODO Check proper endianness */ sas_address = config_page.SASAddress.High; sas_address = (sas_address << 32) | config_page.SASAddress.Low; mpr_dprint(sc, MPR_MAPPING, "Handle 0x%04x SAS Address from SAS device " "page0 = %jx\n", handle, sas_address); /* * Always get SATA Identify information because this is used to * determine if Start/Stop Unit should be sent to the drive when the * system is shutdown. */ if (device_info & MPI2_SAS_DEVICE_INFO_SATA_DEVICE) { ret = mprsas_get_sas_address_for_sata_disk(sc, &sas_address, handle, device_info, &is_SATA_SSD); if (ret) { mpr_dprint(sc, MPR_MAPPING|MPR_ERROR, "%s: failed to get disk type (SSD or HDD) for SATA " "device with handle 0x%04x\n", __func__, handle); } else { mpr_dprint(sc, MPR_MAPPING, "Handle 0x%04x SAS Address " "from SATA device = %jx\n", handle, sas_address); } } /* * use_phynum: * 1 - use the PhyNum field as a fallback to the mapping logic * 0 - never use the PhyNum field * -1 - only use the PhyNum field * * Note that using the Phy number to map a device can cause device adds * to fail if multiple enclosures/expanders are in the topology. For * example, if two devices are in the same slot number in two different * enclosures within the topology, only one of those devices will be * added. PhyNum mapping should not be used if multiple enclosures are * in the topology. */ id = MPR_MAP_BAD_ID; if (sc->use_phynum != -1) id = mpr_mapping_get_tid(sc, sas_address, handle); if (id == MPR_MAP_BAD_ID) { if ((sc->use_phynum == 0) || ((id = config_page.PhyNum) > sassc->maxtargets)) { mpr_dprint(sc, MPR_INFO, "failure at %s:%d/%s()! 
" "Could not get ID for device with handle 0x%04x\n", __FILE__, __LINE__, __func__, handle); error = ENXIO; goto out; } } mpr_dprint(sc, MPR_MAPPING, "%s: Target ID for added device is %d.\n", __func__, id); /* * Only do the ID check and reuse check if the target is not from a * RAID Component. For Physical Disks of a Volume, the ID will be reused * when a volume is deleted because the mapping entry for the PD will * still be in the mapping table. The ID check should not be done here * either since this PD is already being used. */ targ = &sassc->targets[id]; if (!(targ->flags & MPR_TARGET_FLAGS_RAID_COMPONENT)) { if (mprsas_check_id(sassc, id) != 0) { mpr_dprint(sc, MPR_MAPPING|MPR_INFO, "Excluding target id %d\n", id); error = ENXIO; goto out; } if (targ->handle != 0x0) { mpr_dprint(sc, MPR_MAPPING, "Attempting to reuse " "target id %d handle 0x%04x\n", id, targ->handle); error = ENXIO; goto out; } } targ->devinfo = device_info; targ->devname = le32toh(config_page.DeviceName.High); targ->devname = (targ->devname << 32) | le32toh(config_page.DeviceName.Low); targ->encl_handle = le16toh(config_page.EnclosureHandle); targ->encl_slot = le16toh(config_page.Slot); targ->encl_level = config_page.EnclosureLevel; targ->connector_name[0] = config_page.ConnectorName[0]; targ->connector_name[1] = config_page.ConnectorName[1]; targ->connector_name[2] = config_page.ConnectorName[2]; targ->connector_name[3] = config_page.ConnectorName[3]; targ->handle = handle; targ->parent_handle = le16toh(config_page.ParentDevHandle); targ->sasaddr = mpr_to_u64(&config_page.SASAddress); targ->parent_sasaddr = le64toh(parent_sas_address); targ->parent_devinfo = parent_devinfo; targ->tid = id; targ->linkrate = (linkrate>>4); targ->flags = 0; if (is_SATA_SSD) { targ->flags = MPR_TARGET_IS_SATA_SSD; } if ((le16toh(config_page.Flags) & MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH) && (le16toh(config_page.Flags) & MPI25_SAS_DEVICE0_FLAGS_FAST_PATH_CAPABLE)) { targ->scsi_req_desc_type = 
MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO; } if (le16toh(config_page.Flags) & MPI2_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID) { targ->encl_level_valid = TRUE; } TAILQ_INIT(&targ->commands); TAILQ_INIT(&targ->timedout_commands); while (!SLIST_EMPTY(&targ->luns)) { lun = SLIST_FIRST(&targ->luns); SLIST_REMOVE_HEAD(&targ->luns, lun_link); free(lun, M_MPR); } SLIST_INIT(&targ->luns); mpr_describe_devinfo(targ->devinfo, devstring, 80); mpr_dprint(sc, (MPR_INFO|MPR_MAPPING), "Found device <%s> <%s> " "handle<0x%04x> enclosureHandle<0x%04x> slot %d\n", devstring, mpr_describe_table(mpr_linkrate_names, targ->linkrate), targ->handle, targ->encl_handle, targ->encl_slot); if (targ->encl_level_valid) { mpr_dprint(sc, (MPR_INFO|MPR_MAPPING), "At enclosure level %d " "and connector name (%4s)\n", targ->encl_level, targ->connector_name); } #if ((__FreeBSD_version >= 1000000) && (__FreeBSD_version < 1000039)) || \ (__FreeBSD_version < 902502) if ((sassc->flags & MPRSAS_IN_STARTUP) == 0) #endif mprsas_rescan_target(sc, targ); mpr_dprint(sc, MPR_MAPPING, "Target id 0x%x added\n", targ->tid); /* * Check all commands to see if the SATA_ID_TIMEOUT flag has been set. * If so, send a Target Reset TM to the target that was just created. * An Abort Task TM should be used instead of a Target Reset, but that * would be much more difficult because targets have not been fully * discovered yet, and LUN's haven't been setup. So, just reset the * target instead of the LUN. 
*/ for (i = 1; i < sc->num_reqs; i++) { cm = &sc->commands[i]; if (cm->cm_flags & MPR_CM_FLAGS_SATA_ID_TIMEOUT) { targ->timeouts++; - cm->cm_state = MPR_CM_STATE_TIMEDOUT; + cm->cm_flags |= MPR_CM_FLAGS_TIMEDOUT; if ((targ->tm = mprsas_alloc_tm(sc)) != NULL) { mpr_dprint(sc, MPR_INFO, "%s: sending Target " "Reset for stuck SATA identify command " "(cm = %p)\n", __func__, cm); targ->tm->cm_targ = targ; mprsas_send_reset(sc, targ->tm, MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET); } else { mpr_dprint(sc, MPR_ERROR, "Failed to allocate " "tm for Target Reset after SATA ID command " "timed out (cm %p)\n", cm); } /* * No need to check for more since the target is * already being reset. */ break; } } out: /* * Free the commands that may not have been freed from the SATA ID call */ for (i = 1; i < sc->num_reqs; i++) { cm = &sc->commands[i]; if (cm->cm_flags & MPR_CM_FLAGS_SATA_ID_TIMEOUT) { + free(cm->cm_data, M_MPR); mpr_free_command(sc, cm); } } mprsas_startup_decrement(sassc); return (error); } int mprsas_get_sas_address_for_sata_disk(struct mpr_softc *sc, u64 *sas_address, u16 handle, u32 device_info, u8 *is_SATA_SSD) { Mpi2SataPassthroughReply_t mpi_reply; int i, rc, try_count; u32 *bufferptr; union _sata_sas_address hash_address; struct _ata_identify_device_data ata_identify; u8 buffer[MPT2SAS_MN_LEN + MPT2SAS_SN_LEN]; u32 ioc_status; u8 sas_status; memset(&ata_identify, 0, sizeof(ata_identify)); memset(&mpi_reply, 0, sizeof(mpi_reply)); try_count = 0; do { rc = mprsas_get_sata_identify(sc, handle, &mpi_reply, (char *)&ata_identify, sizeof(ata_identify), device_info); try_count++; ioc_status = le16toh(mpi_reply.IOCStatus) & MPI2_IOCSTATUS_MASK; sas_status = mpi_reply.SASStatus; switch (ioc_status) { case MPI2_IOCSTATUS_SUCCESS: break; case MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR: /* No sense sleeping. 
this error won't get better */ break; default: if (sc->spinup_wait_time > 0) { mpr_dprint(sc, MPR_INFO, "Sleeping %d seconds " "after SATA ID error to wait for spinup\n", sc->spinup_wait_time); msleep(&sc->msleep_fake_chan, &sc->mpr_mtx, 0, "mprid", sc->spinup_wait_time * hz); } } } while (((rc && (rc != EWOULDBLOCK)) || (ioc_status && (ioc_status != MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR)) || sas_status) && (try_count < 5)); if (rc == 0 && !ioc_status && !sas_status) { mpr_dprint(sc, MPR_MAPPING, "%s: got SATA identify " "successfully for handle = 0x%x with try_count = %d\n", __func__, handle, try_count); } else { mpr_dprint(sc, MPR_MAPPING, "%s: handle = 0x%x failed\n", __func__, handle); return -1; } /* Copy & byteswap the 40 byte model number to a buffer */ for (i = 0; i < MPT2SAS_MN_LEN; i += 2) { buffer[i] = ((u8 *)ata_identify.model_number)[i + 1]; buffer[i + 1] = ((u8 *)ata_identify.model_number)[i]; } /* Copy & byteswap the 20 byte serial number to a buffer */ for (i = 0; i < MPT2SAS_SN_LEN; i += 2) { buffer[MPT2SAS_MN_LEN + i] = ((u8 *)ata_identify.serial_number)[i + 1]; buffer[MPT2SAS_MN_LEN + i + 1] = ((u8 *)ata_identify.serial_number)[i]; } bufferptr = (u32 *)buffer; /* There are 60 bytes to hash down to 8. 60 isn't divisible by 8, * so loop through the first 56 bytes (7*8), * and then add in the last dword. */ hash_address.word.low = 0; hash_address.word.high = 0; for (i = 0; (i < ((MPT2SAS_MN_LEN+MPT2SAS_SN_LEN)/8)); i++) { hash_address.word.low += *bufferptr; bufferptr++; hash_address.word.high += *bufferptr; bufferptr++; } /* Add the last dword */ hash_address.word.low += *bufferptr; /* Make sure the hash doesn't start with 5, because it could clash * with a SAS address. Change 5 to a D. 
*/ if ((hash_address.word.high & 0x000000F0) == (0x00000050)) hash_address.word.high |= 0x00000080; *sas_address = (u64)hash_address.wwid[0] << 56 | (u64)hash_address.wwid[1] << 48 | (u64)hash_address.wwid[2] << 40 | (u64)hash_address.wwid[3] << 32 | (u64)hash_address.wwid[4] << 24 | (u64)hash_address.wwid[5] << 16 | (u64)hash_address.wwid[6] << 8 | (u64)hash_address.wwid[7]; if (ata_identify.rotational_speed == 1) { *is_SATA_SSD = 1; } return 0; } static int mprsas_get_sata_identify(struct mpr_softc *sc, u16 handle, Mpi2SataPassthroughReply_t *mpi_reply, char *id_buffer, int sz, u32 devinfo) { Mpi2SataPassthroughRequest_t *mpi_request; Mpi2SataPassthroughReply_t *reply; struct mpr_command *cm; char *buffer; int error = 0; buffer = malloc( sz, M_MPR, M_NOWAIT | M_ZERO); if (!buffer) return ENOMEM; if ((cm = mpr_alloc_command(sc)) == NULL) { free(buffer, M_MPR); return (EBUSY); } mpi_request = (MPI2_SATA_PASSTHROUGH_REQUEST *)cm->cm_req; bzero(mpi_request,sizeof(MPI2_SATA_PASSTHROUGH_REQUEST)); mpi_request->Function = MPI2_FUNCTION_SATA_PASSTHROUGH; mpi_request->VF_ID = 0; mpi_request->DevHandle = htole16(handle); mpi_request->PassthroughFlags = (MPI2_SATA_PT_REQ_PT_FLAGS_PIO | MPI2_SATA_PT_REQ_PT_FLAGS_READ); mpi_request->DataLength = htole32(sz); mpi_request->CommandFIS[0] = 0x27; mpi_request->CommandFIS[1] = 0x80; mpi_request->CommandFIS[2] = (devinfo & MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE) ? 0xA1 : 0xEC; cm->cm_sge = &mpi_request->SGL; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPR_CM_FLAGS_DATAIN; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = buffer; cm->cm_length = htole32(sz); /* - * Start a timeout counter specifically for the SATA ID command. This - * is used to fix a problem where the FW does not send a reply sometimes + * Use a custom handler to avoid reinit'ing the controller on timeout. + * This fixes a problem where the FW does not send a reply sometimes * when a bad disk is in the topology. 
So, this is used to timeout the * command so that processing can continue normally. */ - mpr_dprint(sc, MPR_XINFO, "%s start timeout counter for SATA ID " - "command\n", __func__); - callout_reset(&cm->cm_callout, MPR_ATA_ID_TIMEOUT * hz, - mprsas_ata_id_timeout, cm); - error = mpr_wait_command(sc, &cm, 60, CAN_SLEEP); - mpr_dprint(sc, MPR_XINFO, "%s stop timeout counter for SATA ID " - "command\n", __func__); - /* XXX KDM need to fix the case where this command is destroyed */ - callout_stop(&cm->cm_callout); + cm->cm_timeout_handler = mprsas_ata_id_timeout; - if (cm != NULL) - reply = (Mpi2SataPassthroughReply_t *)cm->cm_reply; + error = mpr_wait_command(sc, &cm, MPR_ATA_ID_TIMEOUT, CAN_SLEEP); + + /* mprsas_ata_id_timeout does not reset controller */ + KASSERT(cm != NULL, ("%s: surprise command freed", __func__)); + + reply = (Mpi2SataPassthroughReply_t *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ mpr_dprint(sc, MPR_INFO|MPR_FAULT|MPR_MAPPING, - "Request for SATA PASSTHROUGH page completed with error %d", + "Request for SATA PASSTHROUGH page completed with error %d\n", error); error = ENXIO; goto out; } bcopy(buffer, id_buffer, sz); bcopy(reply, mpi_reply, sizeof(Mpi2SataPassthroughReply_t)); if ((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) { mpr_dprint(sc, MPR_INFO|MPR_MAPPING|MPR_FAULT, "Error reading device %#x SATA PASSTHRU; iocstatus= 0x%x\n", handle, reply->IOCStatus); error = ENXIO; goto out; } out: /* * If the SATA_ID_TIMEOUT flag has been set for this command, don't free - * it. The command will be freed after sending a target reset TM. If - * the command did timeout, use EWOULDBLOCK. + * it. The command and buffer will be freed after sending an Abort + * Task TM. 
*/ - if ((cm->cm_flags & MPR_CM_FLAGS_SATA_ID_TIMEOUT) == 0) + if ((cm->cm_flags & MPR_CM_FLAGS_SATA_ID_TIMEOUT) == 0) { mpr_free_command(sc, cm); - else if (error == 0) - error = EWOULDBLOCK; - cm->cm_data = NULL; - free(buffer, M_MPR); + free(buffer, M_MPR); + } return (error); } static void -mprsas_ata_id_timeout(void *data) +mprsas_ata_id_timeout(struct mpr_softc *sc, struct mpr_command *cm) { - struct mpr_softc *sc; - struct mpr_command *cm; - cm = (struct mpr_command *)data; - sc = cm->cm_sc; - mtx_assert(&sc->mpr_mtx, MA_OWNED); - - mpr_dprint(sc, MPR_INFO, "%s checking ATA ID command %p sc %p\n", + mpr_dprint(sc, MPR_INFO, "%s ATA ID command timeout cm %p sc %p\n", __func__, cm, sc); - if ((callout_pending(&cm->cm_callout)) || - (!callout_active(&cm->cm_callout))) { - mpr_dprint(sc, MPR_INFO, "%s ATA ID command almost timed out\n", - __func__); - return; - } - callout_deactivate(&cm->cm_callout); /* - * Run the interrupt handler to make sure it's not pending. This - * isn't perfect because the command could have already completed - * and been re-used, though this is unlikely. + * The Abort Task cannot be sent from here because the driver has not + * completed setting up targets. Instead, the command is flagged so + * that special handling will be used to send the abort. Now that + * this command has timed out, it's no longer in the queue. */ - mpr_intr_locked(sc); - if (cm->cm_state == MPR_CM_STATE_FREE) { - mpr_dprint(sc, MPR_INFO, "%s ATA ID command almost timed out\n", - __func__); - return; - } - - mpr_dprint(sc, MPR_INFO, "ATA ID command timeout cm %p\n", cm); - - /* - * Send wakeup() to the sleeping thread that issued this ATA ID command. - * wakeup() will cause msleep to return a 0 (not EWOULDBLOCK), and this - * will keep reinit() from being called. This way, an Abort Task TM can - * be issued so that the timed out command can be cleared. The Abort - * Task cannot be sent from here because the driver has not completed - * setting up targets. 
Instead, the command is flagged so that special - * handling will be used to send the abort. - */ cm->cm_flags |= MPR_CM_FLAGS_SATA_ID_TIMEOUT; - wakeup(cm); + cm->cm_state = MPR_CM_STATE_BUSY; } static int mprsas_add_pcie_device(struct mpr_softc *sc, u16 handle, u8 linkrate) { char devstring[80]; struct mprsas_softc *sassc; struct mprsas_target *targ; Mpi2ConfigReply_t mpi_reply; Mpi26PCIeDevicePage0_t config_page; Mpi26PCIeDevicePage2_t config_page2; uint64_t pcie_wwid, parent_wwid = 0; u32 device_info, parent_devinfo = 0; unsigned int id; int error = 0; struct mprsas_lun *lun; sassc = sc->sassc; mprsas_startup_increment(sassc); if ((mpr_config_get_pcie_device_pg0(sc, &mpi_reply, &config_page, MPI26_PCIE_DEVICE_PGAD_FORM_HANDLE, handle))) { printf("%s: error reading PCIe device page0\n", __func__); error = ENXIO; goto out; } device_info = le32toh(config_page.DeviceInfo); if (((device_info & MPI26_PCIE_DEVINFO_PCI_SWITCH) == 0) && (le16toh(config_page.ParentDevHandle) != 0)) { Mpi2ConfigReply_t tmp_mpi_reply; Mpi26PCIeDevicePage0_t parent_config_page; if ((mpr_config_get_pcie_device_pg0(sc, &tmp_mpi_reply, &parent_config_page, MPI26_PCIE_DEVICE_PGAD_FORM_HANDLE, le16toh(config_page.ParentDevHandle)))) { printf("%s: error reading PCIe device %#x page0\n", __func__, le16toh(config_page.ParentDevHandle)); } else { parent_wwid = parent_config_page.WWID.High; parent_wwid = (parent_wwid << 32) | parent_config_page.WWID.Low; parent_devinfo = le32toh(parent_config_page.DeviceInfo); } } /* TODO Check proper endianness */ pcie_wwid = config_page.WWID.High; pcie_wwid = (pcie_wwid << 32) | config_page.WWID.Low; mpr_dprint(sc, MPR_INFO, "PCIe WWID from PCIe device page0 = %jx\n", pcie_wwid); if ((mpr_config_get_pcie_device_pg2(sc, &mpi_reply, &config_page2, MPI26_PCIE_DEVICE_PGAD_FORM_HANDLE, handle))) { printf("%s: error reading PCIe device page2\n", __func__); error = ENXIO; goto out; } id = mpr_mapping_get_tid(sc, pcie_wwid, handle); if (id == MPR_MAP_BAD_ID) { 
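The WWID hashing in mprsas_get_sas_address_for_sata_disk above folds the 60 bytes of byteswapped model and serial number (40 + 20) into two 32-bit accumulators, then remaps a leading 5 nibble so the synthetic address cannot collide with a real SAS address. A sketch of that fold under stated assumptions (hypothetical function name, little-endian host so the dword loads match the driver's `(u32 *)` cast):

```c
#include <stdint.h>
#include <string.h>

#define MN_LEN	40	/* ATA model number length, bytes */
#define SN_LEN	20	/* ATA serial number length, bytes */

/*
 * Hypothetical standalone version of the driver's fold: 60 bytes
 * (15 dwords) are summed alternately into a low and a high 32-bit
 * half.  60 is not divisible by 8, so seven low/high pairs are
 * summed and the 15th dword is added to the low half on its own.
 * A high half whose low nibble-pair reads 0x5_ is remapped so the
 * result cannot look like a real SAS address (which begins with 5).
 * Assumes a little-endian host, matching the driver's (u32 *) cast.
 */
static void
hash_sata_identity(const uint8_t buf[MN_LEN + SN_LEN],
    uint32_t *high, uint32_t *low)
{
	uint32_t dwords[(MN_LEN + SN_LEN) / 4];
	int i;

	*low = 0;
	*high = 0;
	memcpy(dwords, buf, sizeof(dwords));
	for (i = 0; i < (MN_LEN + SN_LEN) / 8; i++) {
		*low += dwords[2 * i];
		*high += dwords[2 * i + 1];
	}
	*low += dwords[14];	/* the odd dword out */
	if ((*high & 0x000000F0) == 0x00000050)
		*high |= 0x00000080;	/* turn the 5 into a D */
}
```

Because the fold is a plain sum, the same disk always hashes to the same WWID, which is what lets the mapping code treat it like a stable SAS address.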
mpr_dprint(sc, MPR_ERROR | MPR_INFO, "failure at %s:%d/%s()! " "Could not get ID for device with handle 0x%04x\n", __FILE__, __LINE__, __func__, handle); error = ENXIO; goto out; } mpr_dprint(sc, MPR_MAPPING, "%s: Target ID for added device is %d.\n", __func__, id); if (mprsas_check_id(sassc, id) != 0) { mpr_dprint(sc, MPR_MAPPING|MPR_INFO, "Excluding target id %d\n", id); error = ENXIO; goto out; } mpr_dprint(sc, MPR_MAPPING, "WWID from PCIe device page0 = %jx\n", pcie_wwid); targ = &sassc->targets[id]; targ->devinfo = device_info; targ->encl_handle = le16toh(config_page.EnclosureHandle); targ->encl_slot = le16toh(config_page.Slot); targ->encl_level = config_page.EnclosureLevel; targ->connector_name[0] = ((char *)&config_page.ConnectorName)[0]; targ->connector_name[1] = ((char *)&config_page.ConnectorName)[1]; targ->connector_name[2] = ((char *)&config_page.ConnectorName)[2]; targ->connector_name[3] = ((char *)&config_page.ConnectorName)[3]; targ->is_nvme = device_info & MPI26_PCIE_DEVINFO_NVME; targ->MDTS = config_page2.MaximumDataTransferSize; + if (targ->is_nvme) + targ->controller_reset_timeout = config_page2.ControllerResetTO; /* * Assume always TRUE for encl_level_valid because there is no valid * flag for PCIe. 
*/ targ->encl_level_valid = TRUE; targ->handle = handle; targ->parent_handle = le16toh(config_page.ParentDevHandle); targ->sasaddr = mpr_to_u64(&config_page.WWID); targ->parent_sasaddr = le64toh(parent_wwid); targ->parent_devinfo = parent_devinfo; targ->tid = id; targ->linkrate = linkrate; targ->flags = 0; if ((le16toh(config_page.Flags) & MPI26_PCIEDEV0_FLAGS_ENABLED_FAST_PATH) && (le16toh(config_page.Flags) & MPI26_PCIEDEV0_FLAGS_FAST_PATH_CAPABLE)) { targ->scsi_req_desc_type = MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO; } TAILQ_INIT(&targ->commands); TAILQ_INIT(&targ->timedout_commands); while (!SLIST_EMPTY(&targ->luns)) { lun = SLIST_FIRST(&targ->luns); SLIST_REMOVE_HEAD(&targ->luns, lun_link); free(lun, M_MPR); } SLIST_INIT(&targ->luns); mpr_describe_devinfo(targ->devinfo, devstring, 80); mpr_dprint(sc, (MPR_INFO|MPR_MAPPING), "Found PCIe device <%s> <%s> " "handle<0x%04x> enclosureHandle<0x%04x> slot %d\n", devstring, mpr_describe_table(mpr_pcie_linkrate_names, targ->linkrate), targ->handle, targ->encl_handle, targ->encl_slot); if (targ->encl_level_valid) { mpr_dprint(sc, (MPR_INFO|MPR_MAPPING), "At enclosure level %d " "and connector name (%4s)\n", targ->encl_level, targ->connector_name); } #if ((__FreeBSD_version >= 1000000) && (__FreeBSD_version < 1000039)) || \ (__FreeBSD_version < 902502) if ((sassc->flags & MPRSAS_IN_STARTUP) == 0) #endif mprsas_rescan_target(sc, targ); mpr_dprint(sc, MPR_MAPPING, "Target id 0x%x added\n", targ->tid); out: mprsas_startup_decrement(sassc); return (error); } static int mprsas_volume_add(struct mpr_softc *sc, u16 handle) { struct mprsas_softc *sassc; struct mprsas_target *targ; u64 wwid; unsigned int id; int error = 0; struct mprsas_lun *lun; sassc = sc->sassc; mprsas_startup_increment(sassc); /* wwid is endian safe */ mpr_config_get_volume_wwid(sc, handle, &wwid); if (!wwid) { printf("%s: invalid WWID; cannot add volume to mapping table\n", __func__); error = ENXIO; goto out; } id = mpr_mapping_get_raid_tid(sc, wwid, 
handle); if (id == MPR_MAP_BAD_ID) { printf("%s: could not get ID for volume with handle 0x%04x and " "WWID 0x%016llx\n", __func__, handle, (unsigned long long)wwid); error = ENXIO; goto out; } targ = &sassc->targets[id]; targ->tid = id; targ->handle = handle; targ->devname = wwid; TAILQ_INIT(&targ->commands); TAILQ_INIT(&targ->timedout_commands); while (!SLIST_EMPTY(&targ->luns)) { lun = SLIST_FIRST(&targ->luns); SLIST_REMOVE_HEAD(&targ->luns, lun_link); free(lun, M_MPR); } SLIST_INIT(&targ->luns); #if ((__FreeBSD_version >= 1000000) && (__FreeBSD_version < 1000039)) || \ (__FreeBSD_version < 902502) if ((sassc->flags & MPRSAS_IN_STARTUP) == 0) #endif mprsas_rescan_target(sc, targ); mpr_dprint(sc, MPR_MAPPING, "RAID target id %d added (WWID = 0x%jx)\n", targ->tid, wwid); out: mprsas_startup_decrement(sassc); return (error); } /** * mprsas_SSU_to_SATA_devices * @sc: per adapter object * * Looks through the target list and issues a StartStopUnit SCSI command to each * SATA direct-access device. This helps to ensure that data corruption is * avoided when the system is being shut down. This must be called after the IR * System Shutdown RAID Action is sent if in IR mode. * * Return nothing. */ static void mprsas_SSU_to_SATA_devices(struct mpr_softc *sc, int howto) { struct mprsas_softc *sassc = sc->sassc; union ccb *ccb; path_id_t pathid = cam_sim_path(sassc->sim); target_id_t targetid; struct mprsas_target *target; char path_str[64]; int timeout; mpr_lock(sc); /* * For each target, issue a StartStopUnit command to stop the device. */ sc->SSU_started = TRUE; sc->SSU_refcount = 0; for (targetid = 0; targetid < sc->max_devices; targetid++) { target = &sassc->targets[targetid]; if (target->handle == 0x0) { continue; } /* * The stop_at_shutdown flag will be set if this device is * a SATA direct-access end device. 
*/ if (target->stop_at_shutdown) { ccb = xpt_alloc_ccb_nowait(); if (ccb == NULL) { mpr_dprint(sc, MPR_FAULT, "Unable to alloc CCB " "to stop unit.\n"); return; } if (xpt_create_path(&ccb->ccb_h.path, xpt_periph, pathid, targetid, CAM_LUN_WILDCARD) != CAM_REQ_CMP) { mpr_dprint(sc, MPR_ERROR, "Unable to create " "path to stop unit.\n"); xpt_free_ccb(ccb); return; } xpt_path_string(ccb->ccb_h.path, path_str, sizeof(path_str)); mpr_dprint(sc, MPR_INFO, "Sending StopUnit: path %s " "handle %d\n", path_str, target->handle); /* * Issue a START STOP UNIT command for the target. * Increment the SSU counter to be used to count the * number of required replies. */ mpr_dprint(sc, MPR_INFO, "Incrementing SSU count\n"); sc->SSU_refcount++; ccb->ccb_h.target_id = xpt_path_target_id(ccb->ccb_h.path); ccb->ccb_h.ppriv_ptr1 = sassc; scsi_start_stop(&ccb->csio, /*retries*/0, mprsas_stop_unit_done, MSG_SIMPLE_Q_TAG, /*start*/FALSE, /*load/eject*/0, /*immediate*/FALSE, MPR_SENSE_LEN, /*timeout*/10000); xpt_action(ccb); } } mpr_unlock(sc); /* * Timeout after 60 seconds by default or 10 seconds if howto has * RB_NOSYNC set which indicates we're likely handling a panic. */ timeout = 600; if (howto & RB_NOSYNC) timeout = 100; /* * Wait until all of the SSU commands have completed or time * has expired. Pause for 100ms each time through. If any * command times out, the target will be reset in the SCSI * command timeout routine. 
*/ while (sc->SSU_refcount > 0) { pause("mprwait", hz/10); if (SCHEDULER_STOPPED()) xpt_sim_poll(sassc->sim); if (--timeout == 0) { mpr_dprint(sc, MPR_ERROR, "Time has expired waiting " "for SSU commands to complete.\n"); break; } } } static void mprsas_stop_unit_done(struct cam_periph *periph, union ccb *done_ccb) { struct mprsas_softc *sassc; char path_str[64]; if (done_ccb == NULL) return; sassc = (struct mprsas_softc *)done_ccb->ccb_h.ppriv_ptr1; xpt_path_string(done_ccb->ccb_h.path, path_str, sizeof(path_str)); mpr_dprint(sassc->sc, MPR_INFO, "Completing stop unit for %s\n", path_str); /* * Nothing more to do except free the CCB and path. If the command * timed out, an abort reset, then target reset will be issued during * the SCSI Command process. */ xpt_free_path(done_ccb->ccb_h.path); xpt_free_ccb(done_ccb); } /** * mprsas_ir_shutdown - IR shutdown notification * @sc: per adapter object * * Sending RAID Action to alert the Integrated RAID subsystem of the IOC that * the host system is shutting down. * * Return nothing. */ void mprsas_ir_shutdown(struct mpr_softc *sc, int howto) { u16 volume_mapping_flags; u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags); struct dev_mapping_table *mt_entry; u32 start_idx, end_idx; unsigned int id, found_volume = 0; struct mpr_command *cm; Mpi2RaidActionRequest_t *action; target_id_t targetid; struct mprsas_target *target; mpr_dprint(sc, MPR_TRACE, "%s\n", __func__); /* is IR firmware build loaded? */ if (!sc->ir_firmware) goto out; /* are there any volumes? Look at IR target IDs. */ // TODO-later, this should be looked up in the RAID config structure // when it is implemented. 
volume_mapping_flags = le16toh(sc->ioc_pg8.IRVolumeMappingFlags) & MPI2_IOCPAGE8_IRFLAGS_MASK_VOLUME_MAPPING_MODE; if (volume_mapping_flags == MPI2_IOCPAGE8_IRFLAGS_LOW_VOLUME_MAPPING) { start_idx = 0; if (ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_RESERVED_TARGETID_0) start_idx = 1; } else start_idx = sc->max_devices - sc->max_volumes; end_idx = start_idx + sc->max_volumes - 1; for (id = start_idx; id < end_idx; id++) { mt_entry = &sc->mapping_table[id]; if ((mt_entry->physical_id != 0) && (mt_entry->missing_count == 0)) { found_volume = 1; break; } } if (!found_volume) goto out; if ((cm = mpr_alloc_command(sc)) == NULL) { printf("%s: command alloc failed\n", __func__); goto out; } action = (MPI2_RAID_ACTION_REQUEST *)cm->cm_req; action->Function = MPI2_FUNCTION_RAID_ACTION; action->Action = MPI2_RAID_ACTION_SYSTEM_SHUTDOWN_INITIATED; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; mpr_lock(sc); mpr_wait_command(sc, &cm, 5, CAN_SLEEP); mpr_unlock(sc); /* * Don't check for reply, just leave. */ if (cm) mpr_free_command(sc, cm); out: /* * All of the targets must have the correct value set for * 'stop_at_shutdown' for the current 'enable_ssu' sysctl variable. * * The possible values for the 'enable_ssu' variable are: * 0: disable to SSD and HDD * 1: disable only to HDD (default) * 2: disable only to SSD * 3: enable to SSD and HDD * anything else will default to 1. 
*/ for (targetid = 0; targetid < sc->max_devices; targetid++) { target = &sc->sassc->targets[targetid]; if (target->handle == 0x0) { continue; } if (target->supports_SSU) { switch (sc->enable_ssu) { case MPR_SSU_DISABLE_SSD_DISABLE_HDD: target->stop_at_shutdown = FALSE; break; case MPR_SSU_DISABLE_SSD_ENABLE_HDD: target->stop_at_shutdown = TRUE; if (target->flags & MPR_TARGET_IS_SATA_SSD) { target->stop_at_shutdown = FALSE; } break; case MPR_SSU_ENABLE_SSD_ENABLE_HDD: target->stop_at_shutdown = TRUE; break; case MPR_SSU_ENABLE_SSD_DISABLE_HDD: default: target->stop_at_shutdown = TRUE; if ((target->flags & MPR_TARGET_IS_SATA_SSD) == 0) { target->stop_at_shutdown = FALSE; } break; } } } mprsas_SSU_to_SATA_devices(sc, howto); } Index: releng/12.1/sys/dev/mpr/mpr_table.c =================================================================== --- releng/12.1/sys/dev/mpr/mpr_table.c (revision 352760) +++ releng/12.1/sys/dev/mpr/mpr_table.c (revision 352761) @@ -1,590 +1,602 @@ /*- * Copyright (c) 2009 Yahoo! Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* Debugging tables for MPT2 */ /* TODO Move headers to mprvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include char * mpr_describe_table(struct mpr_table_lookup *table, u_int code) { int i; for (i = 0; table[i].string != NULL; i++) { if (table[i].code == code) return(table[i].string); } return(table[i+1].string); } //SLM-Add new PCIe info to all of these tables struct mpr_table_lookup mpr_event_names[] = { {"LogData", 0x01}, {"StateChange", 0x02}, {"HardResetReceived", 0x05}, {"EventChange", 0x0a}, {"TaskSetFull", 0x0e}, {"SasDeviceStatusChange", 0x0f}, {"IrOperationStatus", 0x14}, {"SasDiscovery", 0x16}, {"SasBroadcastPrimitive", 0x17}, {"SasInitDeviceStatusChange", 0x18}, {"SasInitTableOverflow", 0x19}, {"SasTopologyChangeList", 0x1c}, {"SasEnclDeviceStatusChange", 0x1d}, {"IrVolume", 0x1e}, {"IrPhysicalDisk", 0x1f}, {"IrConfigurationChangeList", 0x20}, {"LogEntryAdded", 0x21}, {"SasPhyCounter", 0x22}, {"GpioInterrupt", 0x23}, {"HbdPhyEvent", 0x24}, {"SasQuiesce", 0x25}, {"SasNotifyPrimitive", 0x26}, {"TempThreshold", 0x27}, {"HostMessage", 0x28}, {"PowerPerformanceChange", 0x29}, {"PCIeDeviceStatusChange", 0x30}, {"PCIeEnumeration", 0x31}, {"PCIeTopologyChangeList", 0x32}, {"PCIeLinkCounter", 0x33}, 
{"CableEvent", 0x34}, {NULL, 0}, {"Unknown Event", 0} }; struct mpr_table_lookup mpr_phystatus_names[] = { {"NewTargetAdded", 0x01}, {"TargetGone", 0x02}, {"PHYLinkStatusChange", 0x03}, {"PHYLinkStatusUnchanged", 0x04}, {"TargetMissing", 0x05}, {NULL, 0}, {"Unknown Status", 0} }; struct mpr_table_lookup mpr_linkrate_names[] = { {"PHY disabled", 0x01}, {"Speed Negotiation Failed", 0x02}, {"SATA OOB Complete", 0x03}, {"SATA Port Selector", 0x04}, {"SMP Reset in Progress", 0x05}, {"1.5Gbps", 0x08}, {"3.0Gbps", 0x09}, {"6.0Gbps", 0x0a}, {"12.0Gbps", 0x0b}, {NULL, 0}, {"LinkRate Unknown", 0x00} }; struct mpr_table_lookup mpr_sasdev0_devtype[] = { {"End Device", 0x01}, {"Edge Expander", 0x02}, {"Fanout Expander", 0x03}, {NULL, 0}, {"No Device", 0x00} }; struct mpr_table_lookup mpr_phyinfo_reason_names[] = { {"Power On", 0x01}, {"Hard Reset", 0x02}, {"SMP Phy Control Link Reset", 0x03}, {"Loss DWORD Sync", 0x04}, {"Multiplex Sequence", 0x05}, {"I-T Nexus Loss Timer", 0x06}, {"Break Timeout Timer", 0x07}, {"PHY Test Function", 0x08}, {NULL, 0}, {"Unknown Reason", 0x00} }; struct mpr_table_lookup mpr_whoinit_names[] = { {"System BIOS", 0x01}, {"ROM BIOS", 0x02}, {"PCI Peer", 0x03}, {"Host Driver", 0x04}, {"Manufacturing", 0x05}, {NULL, 0}, {"Not Initialized", 0x00} }; struct mpr_table_lookup mpr_sasdisc_reason[] = { {"Discovery Started", 0x01}, {"Discovery Complete", 0x02}, {NULL, 0}, {"Unknown", 0x00} }; struct mpr_table_lookup mpr_sastopo_exp[] = { {"Added", 0x01}, {"Not Responding", 0x02}, {"Responding", 0x03}, {"Delay Not Responding", 0x04}, {NULL, 0}, {"Unknown", 0x00} }; struct mpr_table_lookup mpr_sasdev_reason[] = { {"SMART Data", 0x05}, {"Unsupported", 0x07}, {"Internal Device Reset", 0x08}, {"Task Abort Internal", 0x09}, {"Abort Task Set Internal", 0x0a}, {"Clear Task Set Internal", 0x0b}, {"Query Task Internal", 0x0c}, {"Async Notification", 0x0d}, {"Cmp Internal Device Reset", 0x0e}, {"Cmp Task Abort Internal", 0x0f}, {"Sata Init Failure", 0x10}, {NULL, 0}, 
{"Unknown", 0x00} }; struct mpr_table_lookup mpr_pcie_linkrate_names[] = { {"Port disabled", 0x01}, {"2.5GT/sec", 0x02}, {"5.0GT/sec", 0x03}, {"8.0GT/sec", 0x04}, {"16.0GT/sec", 0x05}, {NULL, 0}, {"LinkRate Unknown", 0x00} }; struct mpr_table_lookup mpr_iocstatus_string[] = { {"success", MPI2_IOCSTATUS_SUCCESS}, {"invalid function", MPI2_IOCSTATUS_INVALID_FUNCTION}, {"scsi recovered error", MPI2_IOCSTATUS_SCSI_RECOVERED_ERROR}, {"scsi invalid dev handle", MPI2_IOCSTATUS_SCSI_INVALID_DEVHANDLE}, {"scsi device not there", MPI2_IOCSTATUS_SCSI_DEVICE_NOT_THERE}, {"scsi data overrun", MPI2_IOCSTATUS_SCSI_DATA_OVERRUN}, {"scsi data underrun", MPI2_IOCSTATUS_SCSI_DATA_UNDERRUN}, {"scsi io data error", MPI2_IOCSTATUS_SCSI_IO_DATA_ERROR}, {"scsi protocol error", MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR}, {"scsi task terminated", MPI2_IOCSTATUS_SCSI_TASK_TERMINATED}, {"scsi residual mismatch", MPI2_IOCSTATUS_SCSI_RESIDUAL_MISMATCH}, {"scsi task mgmt failed", MPI2_IOCSTATUS_SCSI_TASK_MGMT_FAILED}, {"scsi ioc terminated", MPI2_IOCSTATUS_SCSI_IOC_TERMINATED}, {"scsi ext terminated", MPI2_IOCSTATUS_SCSI_EXT_TERMINATED}, {"eedp guard error", MPI2_IOCSTATUS_EEDP_GUARD_ERROR}, {"eedp ref tag error", MPI2_IOCSTATUS_EEDP_REF_TAG_ERROR}, {"eedp app tag error", MPI2_IOCSTATUS_EEDP_APP_TAG_ERROR}, {NULL, 0}, {"unknown", 0x00} }; struct mpr_table_lookup mpr_scsi_status_string[] = { {"good", MPI2_SCSI_STATUS_GOOD}, {"check condition", MPI2_SCSI_STATUS_CHECK_CONDITION}, {"condition met", MPI2_SCSI_STATUS_CONDITION_MET}, {"busy", MPI2_SCSI_STATUS_BUSY}, {"intermediate", MPI2_SCSI_STATUS_INTERMEDIATE}, {"intermediate condmet", MPI2_SCSI_STATUS_INTERMEDIATE_CONDMET}, {"reservation conflict", MPI2_SCSI_STATUS_RESERVATION_CONFLICT}, {"command terminated", MPI2_SCSI_STATUS_COMMAND_TERMINATED}, {"task set full", MPI2_SCSI_STATUS_TASK_SET_FULL}, {"aca active", MPI2_SCSI_STATUS_ACA_ACTIVE}, {"task aborted", MPI2_SCSI_STATUS_TASK_ABORTED}, {NULL, 0}, {"unknown", 0x00} }; struct mpr_table_lookup 
mpr_scsi_taskmgmt_string[] = { {"task mgmt request completed", MPI2_SCSITASKMGMT_RSP_TM_COMPLETE}, {"invalid frame", MPI2_SCSITASKMGMT_RSP_INVALID_FRAME}, {"task mgmt request not supp", MPI2_SCSITASKMGMT_RSP_TM_NOT_SUPPORTED}, {"task mgmt request failed", MPI2_SCSITASKMGMT_RSP_TM_FAILED}, {"task mgmt request_succeeded", MPI2_SCSITASKMGMT_RSP_TM_SUCCEEDED}, {"invalid lun", MPI2_SCSITASKMGMT_RSP_TM_INVALID_LUN}, {"overlapped tag attempt", 0xA}, {"task queued on IOC", MPI2_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC}, {NULL, 0}, {"unknown", 0x00} }; void mpr_describe_devinfo(uint32_t devinfo, char *string, int len) { snprintf(string, len, "%b,%s", devinfo, "\20" "\4SataHost" "\5SmpInit" "\6StpInit" "\7SspInit" "\10SataDev" "\11SmpTarg" "\12StpTarg" "\13SspTarg" "\14Direct" "\15LsiDev" "\16AtapiDev" "\17SepDev", mpr_describe_table(mpr_sasdev0_devtype, devinfo & 0x03)); } void mpr_print_iocfacts(struct mpr_softc *sc, MPI2_IOC_FACTS_REPLY *facts) { MPR_PRINTFIELD_START(sc, "IOCFacts"); MPR_PRINTFIELD(sc, facts, MsgVersion, 0x%x); MPR_PRINTFIELD(sc, facts, HeaderVersion, 0x%x); MPR_PRINTFIELD(sc, facts, IOCNumber, %d); MPR_PRINTFIELD(sc, facts, IOCExceptions, 0x%x); MPR_PRINTFIELD(sc, facts, MaxChainDepth, %d); mpr_print_field(sc, "WhoInit: %s\n", mpr_describe_table(mpr_whoinit_names, facts->WhoInit)); MPR_PRINTFIELD(sc, facts, NumberOfPorts, %d); MPR_PRINTFIELD(sc, facts, MaxMSIxVectors, %d); MPR_PRINTFIELD(sc, facts, RequestCredit, %d); MPR_PRINTFIELD(sc, facts, ProductID, 0x%x); mpr_print_field(sc, "IOCCapabilities: %b\n", facts->IOCCapabilities, "\20" "\3ScsiTaskFull" "\4DiagTrace" "\5SnapBuf" "\6ExtBuf" "\7EEDP" "\10BiDirTarg" "\11Multicast" "\14TransRetry" "\15IR" "\16EventReplay" "\17RaidAccel" "\20MSIXIndex" "\21HostDisc"); mpr_print_field(sc, "FWVersion= %d-%d-%d-%d\n", facts->FWVersion.Struct.Major, facts->FWVersion.Struct.Minor, facts->FWVersion.Struct.Unit, facts->FWVersion.Struct.Dev); MPR_PRINTFIELD(sc, facts, IOCRequestFrameSize, %d); MPR_PRINTFIELD(sc, facts, 
MaxInitiators, %d); MPR_PRINTFIELD(sc, facts, MaxTargets, %d); MPR_PRINTFIELD(sc, facts, MaxSasExpanders, %d); MPR_PRINTFIELD(sc, facts, MaxEnclosures, %d); mpr_print_field(sc, "ProtocolFlags: %b\n", facts->ProtocolFlags, "\20" "\1ScsiTarg" "\2ScsiInit"); MPR_PRINTFIELD(sc, facts, HighPriorityCredit, %d); MPR_PRINTFIELD(sc, facts, MaxReplyDescriptorPostQueueDepth, %d); MPR_PRINTFIELD(sc, facts, ReplyFrameSize, %d); MPR_PRINTFIELD(sc, facts, MaxVolumes, %d); MPR_PRINTFIELD(sc, facts, MaxDevHandle, %d); MPR_PRINTFIELD(sc, facts, MaxPersistentEntries, %d); } void mpr_print_portfacts(struct mpr_softc *sc, MPI2_PORT_FACTS_REPLY *facts) { MPR_PRINTFIELD_START(sc, "PortFacts"); MPR_PRINTFIELD(sc, facts, PortNumber, %d); MPR_PRINTFIELD(sc, facts, PortType, 0x%x); MPR_PRINTFIELD(sc, facts, MaxPostedCmdBuffers, %d); } void mpr_print_evt_generic(struct mpr_softc *sc, MPI2_EVENT_NOTIFICATION_REPLY *event) { MPR_PRINTFIELD_START(sc, "EventReply"); MPR_PRINTFIELD(sc, event, EventDataLength, %d); MPR_PRINTFIELD(sc, event, AckRequired, %d); mpr_print_field(sc, "Event: %s (0x%x)\n", mpr_describe_table(mpr_event_names, event->Event), event->Event); MPR_PRINTFIELD(sc, event, EventContext, 0x%x); } void mpr_print_sasdev0(struct mpr_softc *sc, MPI2_CONFIG_PAGE_SAS_DEV_0 *buf) { MPR_PRINTFIELD_START(sc, "SAS Device Page 0"); MPR_PRINTFIELD(sc, buf, Slot, %d); MPR_PRINTFIELD(sc, buf, EnclosureHandle, 0x%x); mpr_print_field(sc, "SASAddress: 0x%jx\n", mpr_to_u64(&buf->SASAddress)); MPR_PRINTFIELD(sc, buf, ParentDevHandle, 0x%x); MPR_PRINTFIELD(sc, buf, PhyNum, %d); MPR_PRINTFIELD(sc, buf, AccessStatus, 0x%x); MPR_PRINTFIELD(sc, buf, DevHandle, 0x%x); MPR_PRINTFIELD(sc, buf, AttachedPhyIdentifier, 0x%x); MPR_PRINTFIELD(sc, buf, ZoneGroup, %d); mpr_print_field(sc, "DeviceInfo: %b,%s\n", buf->DeviceInfo, "\20" "\4SataHost" "\5SmpInit" "\6StpInit" "\7SspInit" "\10SataDev" "\11SmpTarg" "\12StpTarg" "\13SspTarg" "\14Direct" "\15LsiDev" "\16AtapiDev" "\17SepDev", 
mpr_describe_table(mpr_sasdev0_devtype, buf->DeviceInfo & 0x03)); MPR_PRINTFIELD(sc, buf, Flags, 0x%x); MPR_PRINTFIELD(sc, buf, PhysicalPort, %d); MPR_PRINTFIELD(sc, buf, MaxPortConnections, %d); mpr_print_field(sc, "DeviceName: 0x%jx\n", mpr_to_u64(&buf->DeviceName)); MPR_PRINTFIELD(sc, buf, PortGroups, %d); MPR_PRINTFIELD(sc, buf, DmaGroup, %d); MPR_PRINTFIELD(sc, buf, ControlGroup, %d); } void mpr_print_evt_sas(struct mpr_softc *sc, MPI2_EVENT_NOTIFICATION_REPLY *event) { mpr_print_evt_generic(sc, event); switch(event->Event) { case MPI2_EVENT_SAS_DISCOVERY: { MPI2_EVENT_DATA_SAS_DISCOVERY *data; data = (MPI2_EVENT_DATA_SAS_DISCOVERY *)&event->EventData; mpr_print_field(sc, "Flags: %b\n", data->Flags, "\20" "\1InProgress" "\2DeviceChange"); mpr_print_field(sc, "ReasonCode: %s\n", mpr_describe_table(mpr_sasdisc_reason, data->ReasonCode)); MPR_PRINTFIELD(sc, data, PhysicalPort, %d); mpr_print_field(sc, "DiscoveryStatus: %b\n", data->DiscoveryStatus, "\20" "\1Loop" "\2UnaddressableDev" "\3DupSasAddr" "\5SmpTimeout" "\6ExpRouteFull" "\7RouteIndexError" "\10SmpFailed" "\11SmpCrcError" "\12SubSubLink" "\13TableTableLink" "\14UnsupDevice" "\15TableSubLink" "\16MultiDomain" "\17MultiSub" "\20MultiSubSub" "\34DownstreamInit" "\35MaxPhys" "\36MaxTargs" "\37MaxExpanders" "\40MaxEnclosures"); break; } //SLM-add for PCIE EVENT too case MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST: { MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST *data; MPI2_EVENT_SAS_TOPO_PHY_ENTRY *phy; int i, phynum; data = (MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST *) &event->EventData; MPR_PRINTFIELD(sc, data, EnclosureHandle, 0x%x); MPR_PRINTFIELD(sc, data, ExpanderDevHandle, 0x%x); MPR_PRINTFIELD(sc, data, NumPhys, %d); MPR_PRINTFIELD(sc, data, NumEntries, %d); MPR_PRINTFIELD(sc, data, StartPhyNum, %d); mpr_print_field(sc, "ExpStatus: %s (0x%x)\n", mpr_describe_table(mpr_sastopo_exp, data->ExpStatus), data->ExpStatus); MPR_PRINTFIELD(sc, data, PhysicalPort, %d); for (i = 0; i < data->NumEntries; i++) { phy = 
&data->PHY[i]; phynum = data->StartPhyNum + i; mpr_print_field(sc, "PHY[%d].AttachedDevHandle: 0x%04x\n", phynum, phy->AttachedDevHandle); mpr_print_field(sc, "PHY[%d].LinkRate: %s (0x%x)\n", phynum, mpr_describe_table(mpr_linkrate_names, (phy->LinkRate >> 4) & 0xf), phy->LinkRate); mpr_print_field(sc, "PHY[%d].PhyStatus: %s\n", phynum, mpr_describe_table(mpr_phystatus_names, phy->PhyStatus)); } break; } case MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE: { MPI2_EVENT_DATA_SAS_ENCL_DEV_STATUS_CHANGE *data; data = (MPI2_EVENT_DATA_SAS_ENCL_DEV_STATUS_CHANGE *) &event->EventData; MPR_PRINTFIELD(sc, data, EnclosureHandle, 0x%x); mpr_print_field(sc, "ReasonCode: %s\n", mpr_describe_table(mpr_sastopo_exp, data->ReasonCode)); MPR_PRINTFIELD(sc, data, PhysicalPort, %d); MPR_PRINTFIELD(sc, data, NumSlots, %d); MPR_PRINTFIELD(sc, data, StartSlot, %d); MPR_PRINTFIELD(sc, data, PhyBits, 0x%x); break; } case MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE: { MPI2_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE *data; data = (MPI2_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE *) &event->EventData; MPR_PRINTFIELD(sc, data, TaskTag, 0x%x); mpr_print_field(sc, "ReasonCode: %s\n", mpr_describe_table(mpr_sasdev_reason, data->ReasonCode)); MPR_PRINTFIELD(sc, data, ASC, 0x%x); MPR_PRINTFIELD(sc, data, ASCQ, 0x%x); MPR_PRINTFIELD(sc, data, DevHandle, 0x%x); mpr_print_field(sc, "SASAddress: 0x%jx\n", mpr_to_u64(&data->SASAddress)); + break; + } + case MPI2_EVENT_SAS_BROADCAST_PRIMITIVE: + { + MPI2_EVENT_DATA_SAS_BROADCAST_PRIMITIVE *data; + + data = (MPI2_EVENT_DATA_SAS_BROADCAST_PRIMITIVE *)&event->EventData; + MPR_PRINTFIELD(sc, data, PhyNum, %d); + MPR_PRINTFIELD(sc, data, Port, %d); + MPR_PRINTFIELD(sc, data, PortWidth, %d); + MPR_PRINTFIELD(sc, data, Primitive, 0x%x); + break; } default: break; } } void mpr_print_expander1(struct mpr_softc *sc, MPI2_CONFIG_PAGE_EXPANDER_1 *buf) { MPR_PRINTFIELD_START(sc, "SAS Expander Page 1 #%d", buf->Phy); MPR_PRINTFIELD(sc, buf, PhysicalPort, %d); MPR_PRINTFIELD(sc, buf, NumPhys, 
%d); MPR_PRINTFIELD(sc, buf, Phy, %d); MPR_PRINTFIELD(sc, buf, NumTableEntriesProgrammed, %d); mpr_print_field(sc, "ProgrammedLinkRate: %s (0x%x)\n", mpr_describe_table(mpr_linkrate_names, (buf->ProgrammedLinkRate >> 4) & 0xf), buf->ProgrammedLinkRate); mpr_print_field(sc, "HwLinkRate: %s (0x%x)\n", mpr_describe_table(mpr_linkrate_names, (buf->HwLinkRate >> 4) & 0xf), buf->HwLinkRate); MPR_PRINTFIELD(sc, buf, AttachedDevHandle, 0x%04x); mpr_print_field(sc, "PhyInfo Reason: %s (0x%x)\n", mpr_describe_table(mpr_phyinfo_reason_names, (buf->PhyInfo >> 16) & 0xf), buf->PhyInfo); mpr_print_field(sc, "AttachedDeviceInfo: %b,%s\n", buf->AttachedDeviceInfo, "\20" "\4SATAhost" "\5SMPinit" "\6STPinit" "\7SSPinit" "\10SATAdev" "\11SMPtarg" "\12STPtarg" "\13SSPtarg" "\14Direct" "\15LSIdev" "\16ATAPIdev" "\17SEPdev", mpr_describe_table(mpr_sasdev0_devtype, buf->AttachedDeviceInfo & 0x03)); MPR_PRINTFIELD(sc, buf, ExpanderDevHandle, 0x%04x); MPR_PRINTFIELD(sc, buf, ChangeCount, %d); mpr_print_field(sc, "NegotiatedLinkRate: %s (0x%x)\n", mpr_describe_table(mpr_linkrate_names, buf->NegotiatedLinkRate & 0xf), buf->NegotiatedLinkRate); MPR_PRINTFIELD(sc, buf, PhyIdentifier, %d); MPR_PRINTFIELD(sc, buf, AttachedPhyIdentifier, %d); MPR_PRINTFIELD(sc, buf, DiscoveryInfo, 0x%x); MPR_PRINTFIELD(sc, buf, AttachedPhyInfo, 0x%x); mpr_print_field(sc, "AttachedPhyInfo Reason: %s (0x%x)\n", mpr_describe_table(mpr_phyinfo_reason_names, buf->AttachedPhyInfo & 0xf), buf->AttachedPhyInfo); MPR_PRINTFIELD(sc, buf, ZoneGroup, %d); MPR_PRINTFIELD(sc, buf, SelfConfigStatus, 0x%x); } void mpr_print_sasphy0(struct mpr_softc *sc, MPI2_CONFIG_PAGE_SAS_PHY_0 *buf) { MPR_PRINTFIELD_START(sc, "SAS PHY Page 0"); MPR_PRINTFIELD(sc, buf, OwnerDevHandle, 0x%04x); MPR_PRINTFIELD(sc, buf, AttachedDevHandle, 0x%04x); MPR_PRINTFIELD(sc, buf, AttachedPhyIdentifier, %d); mpr_print_field(sc, "AttachedPhyInfo Reason: %s (0x%x)\n", mpr_describe_table(mpr_phyinfo_reason_names, buf->AttachedPhyInfo & 0xf), 
buf->AttachedPhyInfo); mpr_print_field(sc, "ProgrammedLinkRate: %s (0x%x)\n", mpr_describe_table(mpr_linkrate_names, (buf->ProgrammedLinkRate >> 4) & 0xf), buf->ProgrammedLinkRate); mpr_print_field(sc, "HwLinkRate: %s (0x%x)\n", mpr_describe_table(mpr_linkrate_names, (buf->HwLinkRate >> 4) & 0xf), buf->HwLinkRate); MPR_PRINTFIELD(sc, buf, ChangeCount, %d); MPR_PRINTFIELD(sc, buf, Flags, 0x%x); mpr_print_field(sc, "PhyInfo Reason: %s (0x%x)\n", mpr_describe_table(mpr_phyinfo_reason_names, (buf->PhyInfo >> 16) & 0xf), buf->PhyInfo); mpr_print_field(sc, "NegotiatedLinkRate: %s (0x%x)\n", mpr_describe_table(mpr_linkrate_names, buf->NegotiatedLinkRate & 0xf), buf->NegotiatedLinkRate); } void mpr_print_sgl(struct mpr_softc *sc, struct mpr_command *cm, int offset) { MPI2_IEEE_SGE_SIMPLE64 *ieee_sge; MPI25_IEEE_SGE_CHAIN64 *ieee_sgc; MPI2_SGE_SIMPLE64 *sge; MPI2_REQUEST_HEADER *req; struct mpr_chain *chain = NULL; char *frame; u_int i = 0, flags, length; req = (MPI2_REQUEST_HEADER *)cm->cm_req; frame = (char *)cm->cm_req; ieee_sge = (MPI2_IEEE_SGE_SIMPLE64 *)&frame[offset * 4]; sge = (MPI2_SGE_SIMPLE64 *)&frame[offset * 4]; printf("SGL for command %p\n", cm); hexdump(frame, 128, NULL, 0); while ((frame != NULL) && (!(cm->cm_flags & MPR_CM_FLAGS_SGE_SIMPLE))) { flags = ieee_sge->Flags; length = le32toh(ieee_sge->Length); printf("IEEE seg%d flags=0x%02x len=0x%08x addr=0x%016jx\n", i, flags, length, mpr_to_u64(&ieee_sge->Address)); if (flags & MPI25_IEEE_SGE_FLAGS_END_OF_LIST) break; ieee_sge++; i++; if (flags & MPI2_IEEE_SGE_FLAGS_CHAIN_ELEMENT) { ieee_sgc = (MPI25_IEEE_SGE_CHAIN64 *)ieee_sge; printf("IEEE chain flags=0x%x len=0x%x Offset=0x%x " "Address=0x%016jx\n", ieee_sgc->Flags, le32toh(ieee_sgc->Length), ieee_sgc->NextChainOffset, mpr_to_u64(&ieee_sgc->Address)); if (chain == NULL) chain = TAILQ_FIRST(&cm->cm_chain_list); else chain = TAILQ_NEXT(chain, chain_link); frame = (char *)chain->chain; ieee_sge = (MPI2_IEEE_SGE_SIMPLE64 *)frame; hexdump(frame, 128, NULL, 0); 
} } while ((frame != NULL) && (cm->cm_flags & MPR_CM_FLAGS_SGE_SIMPLE)) { flags = le32toh(sge->FlagsLength) >> MPI2_SGE_FLAGS_SHIFT; printf("seg%d flags=0x%02x len=0x%06x addr=0x%016jx\n", i, flags, le32toh(sge->FlagsLength) & 0xffffff, mpr_to_u64(&sge->Address)); if (flags & (MPI2_SGE_FLAGS_END_OF_LIST | MPI2_SGE_FLAGS_END_OF_BUFFER)) break; sge++; i++; } } void mpr_print_scsiio_cmd(struct mpr_softc *sc, struct mpr_command *cm) { MPI2_SCSI_IO_REQUEST *req; req = (MPI2_SCSI_IO_REQUEST *)cm->cm_req; mpr_print_sgl(sc, cm, req->SGLOffset0); } Index: releng/12.1/sys/dev/mpr/mpr_user.c =================================================================== --- releng/12.1/sys/dev/mpr/mpr_user.c (revision 352760) +++ releng/12.1/sys/dev/mpr/mpr_user.c (revision 352761) @@ -1,2626 +1,2627 @@ /*- * Copyright (c) 2008 Yahoo!, Inc. * All rights reserved. * Written by: John Baldwin * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD userland interface + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD userland interface */ /*- * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2016 Avago Technologies + * Copyright 2000-2020 Broadcom Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ #include __FBSDID("$FreeBSD$"); /* TODO Move headers to mprvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include static d_open_t mpr_open; static d_close_t mpr_close; static d_ioctl_t mpr_ioctl_devsw; static struct cdevsw mpr_cdevsw = { .d_version = D_VERSION, .d_flags = 0, .d_open = mpr_open, .d_close = mpr_close, .d_ioctl = mpr_ioctl_devsw, .d_name = "mpr", }; typedef int (mpr_user_f)(struct mpr_command *, struct mpr_usr_command *); static mpr_user_f mpi_pre_ioc_facts; static mpr_user_f mpi_pre_port_facts; static mpr_user_f mpi_pre_fw_download; static mpr_user_f mpi_pre_fw_upload; static mpr_user_f mpi_pre_sata_passthrough; static mpr_user_f mpi_pre_smp_passthrough; static mpr_user_f mpi_pre_config; static mpr_user_f mpi_pre_sas_io_unit_control; static int mpr_user_read_cfg_header(struct mpr_softc *, struct mpr_cfg_page_req *); static int mpr_user_read_cfg_page(struct mpr_softc *, struct mpr_cfg_page_req *, void *); static int mpr_user_read_extcfg_header(struct mpr_softc *, struct mpr_ext_cfg_page_req *); static int 
mpr_user_read_extcfg_page(struct mpr_softc *, struct mpr_ext_cfg_page_req *, void *); static int mpr_user_write_cfg_page(struct mpr_softc *, struct mpr_cfg_page_req *, void *); static int mpr_user_setup_request(struct mpr_command *, struct mpr_usr_command *); static int mpr_user_command(struct mpr_softc *, struct mpr_usr_command *); static int mpr_user_pass_thru(struct mpr_softc *sc, mpr_pass_thru_t *data); static void mpr_user_get_adapter_data(struct mpr_softc *sc, mpr_adapter_data_t *data); static void mpr_user_read_pci_info(struct mpr_softc *sc, mpr_pci_info_t *data); static uint8_t mpr_get_fw_diag_buffer_number(struct mpr_softc *sc, uint32_t unique_id); static int mpr_post_fw_diag_buffer(struct mpr_softc *sc, mpr_fw_diagnostic_buffer_t *pBuffer, uint32_t *return_code); static int mpr_release_fw_diag_buffer(struct mpr_softc *sc, mpr_fw_diagnostic_buffer_t *pBuffer, uint32_t *return_code, uint32_t diag_type); static int mpr_diag_register(struct mpr_softc *sc, mpr_fw_diag_register_t *diag_register, uint32_t *return_code); static int mpr_diag_unregister(struct mpr_softc *sc, mpr_fw_diag_unregister_t *diag_unregister, uint32_t *return_code); static int mpr_diag_query(struct mpr_softc *sc, mpr_fw_diag_query_t *diag_query, uint32_t *return_code); static int mpr_diag_read_buffer(struct mpr_softc *sc, mpr_diag_read_buffer_t *diag_read_buffer, uint8_t *ioctl_buf, uint32_t *return_code); static int mpr_diag_release(struct mpr_softc *sc, mpr_fw_diag_release_t *diag_release, uint32_t *return_code); static int mpr_do_diag_action(struct mpr_softc *sc, uint32_t action, uint8_t *diag_action, uint32_t length, uint32_t *return_code); static int mpr_user_diag_action(struct mpr_softc *sc, mpr_diag_action_t *data); static void mpr_user_event_query(struct mpr_softc *sc, mpr_event_query_t *data); static void mpr_user_event_enable(struct mpr_softc *sc, mpr_event_enable_t *data); static int mpr_user_event_report(struct mpr_softc *sc, mpr_event_report_t *data); static int 
mpr_user_reg_access(struct mpr_softc *sc, mpr_reg_access_t *data); static int mpr_user_btdh(struct mpr_softc *sc, mpr_btdh_mapping_t *data); static MALLOC_DEFINE(M_MPRUSER, "mpr_user", "Buffers for mpr(4) ioctls"); /* Macros from compat/freebsd32/freebsd32.h */ #define PTRIN(v) (void *)(uintptr_t)(v) #define PTROUT(v) (uint32_t)(uintptr_t)(v) #define CP(src,dst,fld) do { (dst).fld = (src).fld; } while (0) #define PTRIN_CP(src,dst,fld) \ do { (dst).fld = PTRIN((src).fld); } while (0) #define PTROUT_CP(src,dst,fld) \ do { (dst).fld = PTROUT((src).fld); } while (0) /* * MPI functions that support IEEE SGLs for SAS3. */ static uint8_t ieee_sgl_func_list[] = { MPI2_FUNCTION_SCSI_IO_REQUEST, MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH, MPI2_FUNCTION_SMP_PASSTHROUGH, MPI2_FUNCTION_SATA_PASSTHROUGH, MPI2_FUNCTION_FW_UPLOAD, MPI2_FUNCTION_FW_DOWNLOAD, MPI2_FUNCTION_TARGET_ASSIST, MPI2_FUNCTION_TARGET_STATUS_SEND, MPI2_FUNCTION_TOOLBOX }; int mpr_attach_user(struct mpr_softc *sc) { int unit; unit = device_get_unit(sc->mpr_dev); sc->mpr_cdev = make_dev(&mpr_cdevsw, unit, UID_ROOT, GID_OPERATOR, 0640, "mpr%d", unit); if (sc->mpr_cdev == NULL) return (ENOMEM); sc->mpr_cdev->si_drv1 = sc; return (0); } void mpr_detach_user(struct mpr_softc *sc) { /* XXX: do a purge of pending requests? 
*/ if (sc->mpr_cdev != NULL) destroy_dev(sc->mpr_cdev); } static int mpr_open(struct cdev *dev, int flags, int fmt, struct thread *td) { return (0); } static int mpr_close(struct cdev *dev, int flags, int fmt, struct thread *td) { return (0); } static int mpr_user_read_cfg_header(struct mpr_softc *sc, struct mpr_cfg_page_req *page_req) { MPI2_CONFIG_PAGE_HEADER *hdr; struct mpr_config_params params; int error; hdr = ¶ms.hdr.Struct; params.action = MPI2_CONFIG_ACTION_PAGE_HEADER; params.page_address = le32toh(page_req->page_address); hdr->PageVersion = 0; hdr->PageLength = 0; hdr->PageNumber = page_req->header.PageNumber; hdr->PageType = page_req->header.PageType; params.buffer = NULL; params.length = 0; params.callback = NULL; if ((error = mpr_read_config_page(sc, ¶ms)) != 0) { /* * Leave the request. Without resetting the chip, it's * still owned by it and we'll just get into trouble * freeing it now. Mark it as abandoned so that if it * shows up later it can be freed. */ mpr_printf(sc, "read_cfg_header timed out\n"); return (ETIMEDOUT); } page_req->ioc_status = htole16(params.status); if ((page_req->ioc_status & MPI2_IOCSTATUS_MASK) == MPI2_IOCSTATUS_SUCCESS) { bcopy(hdr, &page_req->header, sizeof(page_req->header)); } return (0); } static int mpr_user_read_cfg_page(struct mpr_softc *sc, struct mpr_cfg_page_req *page_req, void *buf) { MPI2_CONFIG_PAGE_HEADER *reqhdr, *hdr; struct mpr_config_params params; int error; reqhdr = buf; hdr = ¶ms.hdr.Struct; hdr->PageVersion = reqhdr->PageVersion; hdr->PageLength = reqhdr->PageLength; hdr->PageNumber = reqhdr->PageNumber; hdr->PageType = reqhdr->PageType & MPI2_CONFIG_PAGETYPE_MASK; params.action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; params.page_address = le32toh(page_req->page_address); params.buffer = buf; params.length = le32toh(page_req->len); params.callback = NULL; if ((error = mpr_read_config_page(sc, ¶ms)) != 0) { mpr_printf(sc, "mpr_user_read_cfg_page timed out\n"); return (ETIMEDOUT); } page_req->ioc_status 
= htole16(params.status); return (0); } static int mpr_user_read_extcfg_header(struct mpr_softc *sc, struct mpr_ext_cfg_page_req *ext_page_req) { MPI2_CONFIG_EXTENDED_PAGE_HEADER *hdr; struct mpr_config_params params; int error; hdr = &params.hdr.Ext; params.action = MPI2_CONFIG_ACTION_PAGE_HEADER; hdr->PageVersion = ext_page_req->header.PageVersion; hdr->PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; hdr->ExtPageLength = 0; hdr->PageNumber = ext_page_req->header.PageNumber; hdr->ExtPageType = ext_page_req->header.ExtPageType; params.page_address = le32toh(ext_page_req->page_address); params.buffer = NULL; params.length = 0; params.callback = NULL; if ((error = mpr_read_config_page(sc, &params)) != 0) { /* * Leave the request. Without resetting the chip, it's * still owned by it and we'll just get into trouble * freeing it now. Mark it as abandoned so that if it * shows up later it can be freed. */ mpr_printf(sc, "mpr_user_read_extcfg_header timed out\n"); return (ETIMEDOUT); } ext_page_req->ioc_status = htole16(params.status); if ((ext_page_req->ioc_status & MPI2_IOCSTATUS_MASK) == MPI2_IOCSTATUS_SUCCESS) { ext_page_req->header.PageVersion = hdr->PageVersion; ext_page_req->header.PageNumber = hdr->PageNumber; ext_page_req->header.PageType = hdr->PageType; ext_page_req->header.ExtPageLength = hdr->ExtPageLength; ext_page_req->header.ExtPageType = hdr->ExtPageType; } return (0); } static int mpr_user_read_extcfg_page(struct mpr_softc *sc, struct mpr_ext_cfg_page_req *ext_page_req, void *buf) { MPI2_CONFIG_EXTENDED_PAGE_HEADER *reqhdr, *hdr; struct mpr_config_params params; int error; reqhdr = buf; hdr = &params.hdr.Ext; params.action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; params.page_address = le32toh(ext_page_req->page_address); hdr->PageVersion = reqhdr->PageVersion; hdr->PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; hdr->PageNumber = reqhdr->PageNumber; hdr->ExtPageType = reqhdr->ExtPageType; hdr->ExtPageLength = reqhdr->ExtPageLength; params.buffer = buf; params.length =
le32toh(ext_page_req->len); params.callback = NULL; if ((error = mpr_read_config_page(sc, &params)) != 0) { mpr_printf(sc, "mpr_user_read_extcfg_page timed out\n"); return (ETIMEDOUT); } ext_page_req->ioc_status = htole16(params.status); return (0); } static int mpr_user_write_cfg_page(struct mpr_softc *sc, struct mpr_cfg_page_req *page_req, void *buf) { MPI2_CONFIG_PAGE_HEADER *reqhdr, *hdr; struct mpr_config_params params; u_int hdr_attr; int error; reqhdr = buf; hdr = &params.hdr.Struct; hdr_attr = reqhdr->PageType & MPI2_CONFIG_PAGEATTR_MASK; if (hdr_attr != MPI2_CONFIG_PAGEATTR_CHANGEABLE && hdr_attr != MPI2_CONFIG_PAGEATTR_PERSISTENT) { mpr_printf(sc, "page type 0x%x not changeable\n", reqhdr->PageType & MPI2_CONFIG_PAGETYPE_MASK); return (EINVAL); } /* * There isn't any point in restoring stripped out attributes * if you then mask them going down to issue the request. */ hdr->PageVersion = reqhdr->PageVersion; hdr->PageLength = reqhdr->PageLength; hdr->PageNumber = reqhdr->PageNumber; hdr->PageType = reqhdr->PageType; params.action = MPI2_CONFIG_ACTION_PAGE_WRITE_CURRENT; params.page_address = le32toh(page_req->page_address); params.buffer = buf; params.length = le32toh(page_req->len); params.callback = NULL; if ((error = mpr_write_config_page(sc, &params)) != 0) { mpr_printf(sc, "mpr_write_cfg_page timed out\n"); return (ETIMEDOUT); } page_req->ioc_status = htole16(params.status); return (0); } void mpr_init_sge(struct mpr_command *cm, void *req, void *sge) { int off, space; space = (int)cm->cm_sc->reqframesz; off = (uintptr_t)sge - (uintptr_t)req; KASSERT(off < space, ("bad pointers %p %p, off %d, space %d", req, sge, off, space)); cm->cm_sge = sge; cm->cm_sglsize = space - off; } /* * Prepare the mpr_command for an IOC_FACTS request.
*/ static int mpi_pre_ioc_facts(struct mpr_command *cm, struct mpr_usr_command *cmd) { MPI2_IOC_FACTS_REQUEST *req = (void *)cm->cm_req; MPI2_IOC_FACTS_REPLY *rpl; if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); cm->cm_sge = NULL; cm->cm_sglsize = 0; return (0); } /* * Prepare the mpr_command for a PORT_FACTS request. */ static int mpi_pre_port_facts(struct mpr_command *cm, struct mpr_usr_command *cmd) { MPI2_PORT_FACTS_REQUEST *req = (void *)cm->cm_req; MPI2_PORT_FACTS_REPLY *rpl; if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); cm->cm_sge = NULL; cm->cm_sglsize = 0; return (0); } /* * Prepare the mpr_command for a FW_DOWNLOAD request. */ static int mpi_pre_fw_download(struct mpr_command *cm, struct mpr_usr_command *cmd) { MPI25_FW_DOWNLOAD_REQUEST *req = (void *)cm->cm_req; MPI2_FW_DOWNLOAD_REPLY *rpl; int error; if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); if (cmd->len == 0) return (EINVAL); error = copyin(cmd->buf, cm->cm_data, cmd->len); if (error != 0) return (error); mpr_init_sge(cm, req, &req->SGL); /* * For now, the F/W image must be provided in a single request. */ if ((req->MsgFlags & MPI2_FW_DOWNLOAD_MSGFLGS_LAST_SEGMENT) == 0) return (EINVAL); if (req->TotalImageSize != cmd->len) return (EINVAL); req->ImageOffset = 0; req->ImageSize = cmd->len; cm->cm_flags |= MPR_CM_FLAGS_DATAOUT; return (mpr_push_ieee_sge(cm, &req->SGL, 0)); } /* * Prepare the mpr_command for a FW_UPLOAD request. */ static int mpi_pre_fw_upload(struct mpr_command *cm, struct mpr_usr_command *cmd) { MPI25_FW_UPLOAD_REQUEST *req = (void *)cm->cm_req; MPI2_FW_UPLOAD_REPLY *rpl; if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); mpr_init_sge(cm, req, &req->SGL); if (cmd->len == 0) { /* Perhaps just asking what the size of the fw is? 
*/ return (0); } req->ImageOffset = 0; req->ImageSize = cmd->len; cm->cm_flags |= MPR_CM_FLAGS_DATAIN; return (mpr_push_ieee_sge(cm, &req->SGL, 0)); } /* * Prepare the mpr_command for a SATA_PASSTHROUGH request. */ static int mpi_pre_sata_passthrough(struct mpr_command *cm, struct mpr_usr_command *cmd) { MPI2_SATA_PASSTHROUGH_REQUEST *req = (void *)cm->cm_req; MPI2_SATA_PASSTHROUGH_REPLY *rpl; if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); mpr_init_sge(cm, req, &req->SGL); return (0); } /* * Prepare the mpr_command for a SMP_PASSTHROUGH request. */ static int mpi_pre_smp_passthrough(struct mpr_command *cm, struct mpr_usr_command *cmd) { MPI2_SMP_PASSTHROUGH_REQUEST *req = (void *)cm->cm_req; MPI2_SMP_PASSTHROUGH_REPLY *rpl; if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); mpr_init_sge(cm, req, &req->SGL); return (0); } /* * Prepare the mpr_command for a CONFIG request. */ static int mpi_pre_config(struct mpr_command *cm, struct mpr_usr_command *cmd) { MPI2_CONFIG_REQUEST *req = (void *)cm->cm_req; MPI2_CONFIG_REPLY *rpl; if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); mpr_init_sge(cm, req, &req->PageBufferSGE); return (0); } /* * Prepare the mpr_command for a SAS_IO_UNIT_CONTROL request. */ static int mpi_pre_sas_io_unit_control(struct mpr_command *cm, struct mpr_usr_command *cmd) { cm->cm_sge = NULL; cm->cm_sglsize = 0; return (0); } /* * A set of functions to prepare an mpr_command for the various * supported requests. 
*/ struct mpr_user_func { U8 Function; mpr_user_f *f_pre; } mpr_user_func_list[] = { { MPI2_FUNCTION_IOC_FACTS, mpi_pre_ioc_facts }, { MPI2_FUNCTION_PORT_FACTS, mpi_pre_port_facts }, { MPI2_FUNCTION_FW_DOWNLOAD, mpi_pre_fw_download }, { MPI2_FUNCTION_FW_UPLOAD, mpi_pre_fw_upload }, { MPI2_FUNCTION_SATA_PASSTHROUGH, mpi_pre_sata_passthrough }, { MPI2_FUNCTION_SMP_PASSTHROUGH, mpi_pre_smp_passthrough}, { MPI2_FUNCTION_CONFIG, mpi_pre_config}, { MPI2_FUNCTION_SAS_IO_UNIT_CONTROL, mpi_pre_sas_io_unit_control }, { 0xFF, NULL } /* list end */ }; static int mpr_user_setup_request(struct mpr_command *cm, struct mpr_usr_command *cmd) { MPI2_REQUEST_HEADER *hdr = (MPI2_REQUEST_HEADER *)cm->cm_req; struct mpr_user_func *f; for (f = mpr_user_func_list; f->f_pre != NULL; f++) { if (hdr->Function == f->Function) return (f->f_pre(cm, cmd)); } return (EINVAL); } static int mpr_user_command(struct mpr_softc *sc, struct mpr_usr_command *cmd) { MPI2_REQUEST_HEADER *hdr; MPI2_DEFAULT_REPLY *rpl = NULL; void *buf = NULL; struct mpr_command *cm = NULL; int err = 0; int sz; mpr_lock(sc); cm = mpr_alloc_command(sc); if (cm == NULL) { mpr_printf(sc, "%s: no mpr requests\n", __func__); err = ENOMEM; goto RetFree; } mpr_unlock(sc); hdr = (MPI2_REQUEST_HEADER *)cm->cm_req; mpr_dprint(sc, MPR_USER, "%s: req %p %d rpl %p %d\n", __func__, cmd->req, cmd->req_len, cmd->rpl, cmd->rpl_len); if (cmd->req_len > (int)sc->reqframesz) { err = EINVAL; goto RetFreeUnlocked; } err = copyin(cmd->req, hdr, cmd->req_len); if (err != 0) goto RetFreeUnlocked; mpr_dprint(sc, MPR_USER, "%s: Function %02X MsgFlags %02X\n", __func__, hdr->Function, hdr->MsgFlags); if (cmd->len > 0) { buf = malloc(cmd->len, M_MPRUSER, M_WAITOK|M_ZERO); cm->cm_data = buf; cm->cm_length = cmd->len; } else { cm->cm_data = NULL; cm->cm_length = 0; } cm->cm_flags = MPR_CM_FLAGS_SGE_SIMPLE; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; err = mpr_user_setup_request(cm, cmd); if (err == EINVAL) { mpr_printf(sc, 
"%s: unsupported parameter or unsupported " "function in request (function = 0x%X)\n", __func__, hdr->Function); } if (err != 0) goto RetFreeUnlocked; mpr_lock(sc); err = mpr_wait_command(sc, &cm, 30, CAN_SLEEP); if (err || (cm == NULL)) { mpr_printf(sc, "%s: invalid request: error %d\n", __func__, err); goto RetFree; } if (cm != NULL) rpl = (MPI2_DEFAULT_REPLY *)cm->cm_reply; if (rpl != NULL) sz = rpl->MsgLength * 4; else sz = 0; if (sz > cmd->rpl_len) { mpr_printf(sc, "%s: user reply buffer (%d) smaller than " "returned buffer (%d)\n", __func__, cmd->rpl_len, sz); sz = cmd->rpl_len; } mpr_unlock(sc); copyout(rpl, cmd->rpl, sz); if (buf != NULL) copyout(buf, cmd->buf, cmd->len); mpr_dprint(sc, MPR_USER, "%s: reply size %d\n", __func__, sz); RetFreeUnlocked: mpr_lock(sc); RetFree: if (cm != NULL) mpr_free_command(sc, cm); mpr_unlock(sc); if (buf != NULL) free(buf, M_MPRUSER); return (err); } static int mpr_user_pass_thru(struct mpr_softc *sc, mpr_pass_thru_t *data) { MPI2_REQUEST_HEADER *hdr, tmphdr; MPI2_DEFAULT_REPLY *rpl; Mpi26NVMeEncapsulatedErrorReply_t *nvme_error_reply = NULL; Mpi26NVMeEncapsulatedRequest_t *nvme_encap_request = NULL; struct mpr_command *cm = NULL; int i, err = 0, dir = 0, sz; uint8_t tool, function = 0; u_int sense_len; struct mprsas_target *targ = NULL; /* * Only allow one passthru command at a time. Use the MPR_FLAGS_BUSY * bit to denote that a passthru is being processed. */ mpr_lock(sc); if (sc->mpr_flags & MPR_FLAGS_BUSY) { mpr_dprint(sc, MPR_USER, "%s: Only one passthru command " "allowed at a single time.", __func__); mpr_unlock(sc); return (EBUSY); } sc->mpr_flags |= MPR_FLAGS_BUSY; mpr_unlock(sc); /* * Do some validation on data direction. Valid cases are: * 1) DataSize is 0 and direction is NONE * 2) DataSize is non-zero and one of: * a) direction is READ or * b) direction is WRITE or * c) direction is BOTH and DataOutSize is non-zero * If valid and the direction is BOTH, change the direction to READ. 
* If valid and the direction is not BOTH, make sure DataOutSize is 0. */ if (((data->DataSize == 0) && (data->DataDirection == MPR_PASS_THRU_DIRECTION_NONE)) || ((data->DataSize != 0) && ((data->DataDirection == MPR_PASS_THRU_DIRECTION_READ) || (data->DataDirection == MPR_PASS_THRU_DIRECTION_WRITE) || ((data->DataDirection == MPR_PASS_THRU_DIRECTION_BOTH) && (data->DataOutSize != 0))))) { if (data->DataDirection == MPR_PASS_THRU_DIRECTION_BOTH) data->DataDirection = MPR_PASS_THRU_DIRECTION_READ; else data->DataOutSize = 0; } else { err = EINVAL; goto RetFreeUnlocked; } mpr_dprint(sc, MPR_USER, "%s: req 0x%jx %d rpl 0x%jx %d " "data in 0x%jx %d data out 0x%jx %d data dir %d\n", __func__, data->PtrRequest, data->RequestSize, data->PtrReply, data->ReplySize, data->PtrData, data->DataSize, data->PtrDataOut, data->DataOutSize, data->DataDirection); /* * Copy in the header so we know what we're dealing with before we * commit to allocating a command for it. */ err = copyin(PTRIN(data->PtrRequest), &tmphdr, data->RequestSize); if (err != 0) goto RetFreeUnlocked; if (data->RequestSize > (int)sc->reqframesz) { err = EINVAL; goto RetFreeUnlocked; } function = tmphdr.Function; mpr_dprint(sc, MPR_USER, "%s: Function %02X MsgFlags %02X\n", __func__, function, tmphdr.MsgFlags); /* * Handle a passthru TM request. */ if (function == MPI2_FUNCTION_SCSI_TASK_MGMT) { MPI2_SCSI_TASK_MANAGE_REQUEST *task; mpr_lock(sc); cm = mprsas_alloc_tm(sc); if (cm == NULL) { err = EINVAL; goto Ret; } /* Copy the header in. Only a small fixup is needed.
*/ task = (MPI2_SCSI_TASK_MANAGE_REQUEST *)cm->cm_req; bcopy(&tmphdr, task, data->RequestSize); task->TaskMID = cm->cm_desc.Default.SMID; cm->cm_data = NULL; - cm->cm_desc.HighPriority.RequestFlags = - MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY; cm->cm_complete = NULL; cm->cm_complete_data = NULL; targ = mprsas_find_target_by_handle(sc->sassc, 0, task->DevHandle); if (targ == NULL) { mpr_dprint(sc, MPR_INFO, "%s %d : invalid handle for requested TM 0x%x \n", __func__, __LINE__, task->DevHandle); err = 1; } else { mprsas_prepare_for_tm(sc, cm, targ, CAM_LUN_WILDCARD); err = mpr_wait_command(sc, &cm, 30, CAN_SLEEP); } if (err != 0) { err = EIO; mpr_dprint(sc, MPR_FAULT, "%s: task management failed", __func__); } /* * Copy the reply data and sense data to user space. */ if ((cm != NULL) && (cm->cm_reply != NULL)) { rpl = (MPI2_DEFAULT_REPLY *)cm->cm_reply; sz = rpl->MsgLength * 4; if (sz > data->ReplySize) { mpr_printf(sc, "%s: user reply buffer (%d) " "smaller than returned buffer (%d)\n", __func__, data->ReplySize, sz); } mpr_unlock(sc); copyout(cm->cm_reply, PTRIN(data->PtrReply), data->ReplySize); mpr_lock(sc); } mprsas_free_tm(sc, cm); goto Ret; } mpr_lock(sc); cm = mpr_alloc_command(sc); if (cm == NULL) { mpr_printf(sc, "%s: no mpr requests\n", __func__); err = ENOMEM; goto Ret; } mpr_unlock(sc); hdr = (MPI2_REQUEST_HEADER *)cm->cm_req; bcopy(&tmphdr, hdr, data->RequestSize); /* * Do some checking to make sure the IOCTL request contains a valid * request. Then set the SGL info. */ mpr_init_sge(cm, hdr, (void *)((uint8_t *)hdr + data->RequestSize)); /* * Set up for read, write or both. From check above, DataOutSize will * be 0 if direction is READ or WRITE, but it will have some non-zero * value if the direction is BOTH. So, just use the biggest size to get * the cm_data buffer size. If direction is BOTH, 2 SGLs need to be set * up; the first is for the request and the second will contain the * response data. 
cm_out_len needs to be set here and this will be used * when the SGLs are set up. */ cm->cm_data = NULL; cm->cm_length = MAX(data->DataSize, data->DataOutSize); cm->cm_out_len = data->DataOutSize; cm->cm_flags = 0; if (cm->cm_length != 0) { cm->cm_data = malloc(cm->cm_length, M_MPRUSER, M_WAITOK | M_ZERO); cm->cm_flags = MPR_CM_FLAGS_DATAIN; if (data->DataOutSize) { cm->cm_flags |= MPR_CM_FLAGS_DATAOUT; err = copyin(PTRIN(data->PtrDataOut), cm->cm_data, data->DataOutSize); } else if (data->DataDirection == MPR_PASS_THRU_DIRECTION_WRITE) { cm->cm_flags = MPR_CM_FLAGS_DATAOUT; err = copyin(PTRIN(data->PtrData), cm->cm_data, data->DataSize); } if (err != 0) mpr_dprint(sc, MPR_FAULT, "%s: failed to copy IOCTL " "data from user space\n", __func__); } /* * Set this flag only if processing a command that does not need an * IEEE SGL. The CLI Tool within the Toolbox uses IEEE SGLs, so clear * the flag only for that tool if processing a Toolbox function. */ cm->cm_flags |= MPR_CM_FLAGS_SGE_SIMPLE; for (i = 0; i < sizeof (ieee_sgl_func_list); i++) { if (function == ieee_sgl_func_list[i]) { if (function == MPI2_FUNCTION_TOOLBOX) { tool = (uint8_t)hdr->FunctionDependent1; if (tool != MPI2_TOOLBOX_DIAGNOSTIC_CLI_TOOL) break; } cm->cm_flags &= ~MPR_CM_FLAGS_SGE_SIMPLE; break; } } cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; if (function == MPI2_FUNCTION_NVME_ENCAPSULATED) { nvme_encap_request = (Mpi26NVMeEncapsulatedRequest_t *)cm->cm_req; cm->cm_desc.Default.RequestFlags = MPI26_REQ_DESCRIPT_FLAGS_PCIE_ENCAPSULATED; /* * Get the Physical Address of the sense buffer. * Save the user's Error Response buffer address and use that * field to hold the sense buffer address. * Clear the internal sense buffer, which will potentially hold * the Completion Queue Entry on return, or 0 if no Entry. * Build the PRPs and set direction bits. * Send the request. 
*/ cm->nvme_error_response = (uint64_t *)(uintptr_t)(((uint64_t)nvme_encap_request-> ErrorResponseBaseAddress.High << 32) | (uint64_t)nvme_encap_request-> ErrorResponseBaseAddress.Low); nvme_encap_request->ErrorResponseBaseAddress.High = htole32((uint32_t)((uint64_t)cm->cm_sense_busaddr >> 32)); nvme_encap_request->ErrorResponseBaseAddress.Low = htole32(cm->cm_sense_busaddr); memset(cm->cm_sense, 0, NVME_ERROR_RESPONSE_SIZE); mpr_build_nvme_prp(sc, cm, nvme_encap_request, cm->cm_data, data->DataSize, data->DataOutSize); } /* * Set up Sense buffer and SGL offset for IO passthru. SCSI IO request * uses SCSI IO or Fast Path SCSI IO descriptor. */ if ((function == MPI2_FUNCTION_SCSI_IO_REQUEST) || (function == MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH)) { MPI2_SCSI_IO_REQUEST *scsi_io_req; scsi_io_req = (MPI2_SCSI_IO_REQUEST *)hdr; /* * Put SGE for data and data_out buffer at the end of * scsi_io_request message header (64 bytes in total). * Following above SGEs, the residual space will be used by * sense data. */ scsi_io_req->SenseBufferLength = (uint8_t)(data->RequestSize - 64); scsi_io_req->SenseBufferLowAddress = htole32(cm->cm_sense_busaddr); /* * Set SGLOffset0 value. This is the number of dwords that SGL * is offset from the beginning of MPI2_SCSI_IO_REQUEST struct. */ scsi_io_req->SGLOffset0 = 24; /* * Setup descriptor info. RAID passthrough must use the * default request descriptor which is already set, so if this * is a SCSI IO request, change the descriptor to SCSI IO or * Fast Path SCSI IO. Also, if this is a SCSI IO request, * handle the reply in the mprsas_scsio_complete function. 
*/ if (function == MPI2_FUNCTION_SCSI_IO_REQUEST) { targ = mprsas_find_target_by_handle(sc->sassc, 0, scsi_io_req->DevHandle); if (!targ) { printf("No Target found for handle %d\n", scsi_io_req->DevHandle); err = EINVAL; goto RetFreeUnlocked; } if (targ->scsi_req_desc_type == MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO) { cm->cm_desc.FastPathSCSIIO.RequestFlags = MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO; if (!sc->atomic_desc_capable) { cm->cm_desc.FastPathSCSIIO.DevHandle = scsi_io_req->DevHandle; } scsi_io_req->IoFlags |= MPI25_SCSIIO_IOFLAGS_FAST_PATH; } else { cm->cm_desc.SCSIIO.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO; if (!sc->atomic_desc_capable) { cm->cm_desc.SCSIIO.DevHandle = scsi_io_req->DevHandle; } } /* * Make sure the DevHandle is not 0 because this is a * likely error. */ if (scsi_io_req->DevHandle == 0) { err = EINVAL; goto RetFreeUnlocked; } } } mpr_lock(sc); err = mpr_wait_command(sc, &cm, 30, CAN_SLEEP); if (err || (cm == NULL)) { mpr_printf(sc, "%s: invalid request: error %d\n", __func__, err); goto RetFree; } /* * Sync the DMA data, if any. Then copy the data to user space. */ if (cm->cm_data != NULL) { if (cm->cm_flags & MPR_CM_FLAGS_DATAIN) dir = BUS_DMASYNC_POSTREAD; else if (cm->cm_flags & MPR_CM_FLAGS_DATAOUT) dir = BUS_DMASYNC_POSTWRITE; bus_dmamap_sync(sc->buffer_dmat, cm->cm_dmamap, dir); bus_dmamap_unload(sc->buffer_dmat, cm->cm_dmamap); if (cm->cm_flags & MPR_CM_FLAGS_DATAIN) { mpr_unlock(sc); err = copyout(cm->cm_data, PTRIN(data->PtrData), data->DataSize); mpr_lock(sc); if (err != 0) mpr_dprint(sc, MPR_FAULT, "%s: failed to copy " "IOCTL data to user space\n", __func__); } } /* * Copy the reply data and sense data to user space. 
*/ if (cm->cm_reply != NULL) { rpl = (MPI2_DEFAULT_REPLY *)cm->cm_reply; sz = rpl->MsgLength * 4; if (sz > data->ReplySize) { mpr_printf(sc, "%s: user reply buffer (%d) smaller " "than returned buffer (%d)\n", __func__, data->ReplySize, sz); } mpr_unlock(sc); copyout(cm->cm_reply, PTRIN(data->PtrReply), data->ReplySize); mpr_lock(sc); if ((function == MPI2_FUNCTION_SCSI_IO_REQUEST) || (function == MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH)) { if (((MPI2_SCSI_IO_REPLY *)rpl)->SCSIState & MPI2_SCSI_STATE_AUTOSENSE_VALID) { sense_len = MIN((le32toh(((MPI2_SCSI_IO_REPLY *)rpl)-> SenseCount)), sizeof(struct scsi_sense_data)); mpr_unlock(sc); - copyout(cm->cm_sense, cm->cm_req + 64, - sense_len); + copyout(cm->cm_sense, (PTRIN(data->PtrReply + + sizeof(MPI2_SCSI_IO_REPLY))), sense_len); mpr_lock(sc); } } /* * Copy out the NVMe Error Response to user. The Error Response * buffer is given by the user, but a sense buffer is used to * get that data from the IOC. The user's * ErrorResponseBaseAddress is saved in the * 'nvme_error_response' field before the command because that * field is set to a sense buffer. When the command is * complete, the Error Response data from the IOC is copied to * that user address after it is checked for validity. * Also note that 'sense' buffers are not defined for * NVMe commands. Sense terminology is only used here so that * the same IOCTL structure and sense buffers can be used for * NVMe. */ if (function == MPI2_FUNCTION_NVME_ENCAPSULATED) { if (cm->nvme_error_response == NULL) { mpr_dprint(sc, MPR_INFO, "NVMe Error Response " "buffer is NULL.
Response data will not be " "returned.\n"); mpr_unlock(sc); goto RetFreeUnlocked; } nvme_error_reply = (Mpi26NVMeEncapsulatedErrorReply_t *)cm->cm_reply; sz = MIN(le32toh(nvme_error_reply->ErrorResponseCount), NVME_ERROR_RESPONSE_SIZE); mpr_unlock(sc); - copyout(cm->cm_sense, cm->nvme_error_response, sz); + copyout(cm->cm_sense, + (PTRIN(data->PtrReply + + sizeof(MPI2_SCSI_IO_REPLY))), sz); mpr_lock(sc); } } mpr_unlock(sc); RetFreeUnlocked: mpr_lock(sc); RetFree: if (cm != NULL) { if (cm->cm_data) free(cm->cm_data, M_MPRUSER); mpr_free_command(sc, cm); } Ret: sc->mpr_flags &= ~MPR_FLAGS_BUSY; mpr_unlock(sc); return (err); } static void mpr_user_get_adapter_data(struct mpr_softc *sc, mpr_adapter_data_t *data) { Mpi2ConfigReply_t mpi_reply; Mpi2BiosPage3_t config_page; /* * Use the PCI interface functions to get the Bus, Device, and Function * information. */ data->PciInformation.u.bits.BusNumber = pci_get_bus(sc->mpr_dev); data->PciInformation.u.bits.DeviceNumber = pci_get_slot(sc->mpr_dev); data->PciInformation.u.bits.FunctionNumber = pci_get_function(sc->mpr_dev); /* * Get the FW version that should already be saved in IOC Facts. */ data->MpiFirmwareVersion = sc->facts->FWVersion.Word; /* * General device info. */ if (sc->mpr_flags & MPR_FLAGS_GEN35_IOC) data->AdapterType = MPRIOCTL_ADAPTER_TYPE_SAS35; else data->AdapterType = MPRIOCTL_ADAPTER_TYPE_SAS3; data->PCIDeviceHwId = pci_get_device(sc->mpr_dev); data->PCIDeviceHwRev = pci_read_config(sc->mpr_dev, PCIR_REVID, 1); data->SubSystemId = pci_get_subdevice(sc->mpr_dev); data->SubsystemVendorId = pci_get_subvendor(sc->mpr_dev); /* * Get the driver version. */ strcpy((char *)&data->DriverVersion[0], MPR_DRIVER_VERSION); /* * Need to get BIOS Config Page 3 for the BIOS Version. 
*/ data->BiosVersion = 0; mpr_lock(sc); if (mpr_config_get_bios_pg3(sc, &mpi_reply, &config_page)) printf("%s: Error while retrieving BIOS Version\n", __func__); else data->BiosVersion = config_page.BiosVersion; mpr_unlock(sc); } static void mpr_user_read_pci_info(struct mpr_softc *sc, mpr_pci_info_t *data) { int i; /* * Use the PCI interface functions to get the Bus, Device, and Function * information. */ data->BusNumber = pci_get_bus(sc->mpr_dev); data->DeviceNumber = pci_get_slot(sc->mpr_dev); data->FunctionNumber = pci_get_function(sc->mpr_dev); /* * Now get the interrupt vector and the pci header. The vector can * only be 0 right now. The header is the first 256 bytes of config * space. */ data->InterruptVector = 0; for (i = 0; i < sizeof (data->PciHeader); i++) { data->PciHeader[i] = pci_read_config(sc->mpr_dev, i, 1); } } static uint8_t mpr_get_fw_diag_buffer_number(struct mpr_softc *sc, uint32_t unique_id) { uint8_t index; for (index = 0; index < MPI2_DIAG_BUF_TYPE_COUNT; index++) { if (sc->fw_diag_buffer_list[index].unique_id == unique_id) { return (index); } } return (MPR_FW_DIAGNOSTIC_UID_NOT_FOUND); } static int mpr_post_fw_diag_buffer(struct mpr_softc *sc, mpr_fw_diagnostic_buffer_t *pBuffer, uint32_t *return_code) { MPI2_DIAG_BUFFER_POST_REQUEST *req; MPI2_DIAG_BUFFER_POST_REPLY *reply; struct mpr_command *cm = NULL; int i, status; /* * If buffer is not enabled, just leave. */ *return_code = MPR_FW_DIAG_ERROR_POST_FAILED; if (!pBuffer->enabled) { return (MPR_DIAG_FAILURE); } /* * Clear some flags initially. */ pBuffer->force_release = FALSE; pBuffer->valid_data = FALSE; pBuffer->owned_by_firmware = FALSE; /* * Get a command. */ cm = mpr_alloc_command(sc); if (cm == NULL) { mpr_printf(sc, "%s: no mpr requests\n", __func__); return (MPR_DIAG_FAILURE); } /* * Build the request for posting the FW Diag Buffer and send it.
*/ req = (MPI2_DIAG_BUFFER_POST_REQUEST *)cm->cm_req; req->Function = MPI2_FUNCTION_DIAG_BUFFER_POST; req->BufferType = pBuffer->buffer_type; req->ExtendedType = pBuffer->extended_type; req->BufferLength = pBuffer->size; for (i = 0; i < (sizeof(req->ProductSpecific) / 4); i++) req->ProductSpecific[i] = pBuffer->product_specific[i]; mpr_from_u64(sc->fw_diag_busaddr, &req->BufferAddress); cm->cm_data = NULL; cm->cm_length = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_complete_data = NULL; /* * Send command synchronously. */ status = mpr_wait_command(sc, &cm, 30, CAN_SLEEP); if (status || (cm == NULL)) { mpr_printf(sc, "%s: invalid request: error %d\n", __func__, status); status = MPR_DIAG_FAILURE; goto done; } /* * Process POST reply. */ reply = (MPI2_DIAG_BUFFER_POST_REPLY *)cm->cm_reply; if (reply == NULL) { mpr_printf(sc, "%s: reply is NULL, probably due to " "reinitialization", __func__); status = MPR_DIAG_FAILURE; goto done; } if ((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) { status = MPR_DIAG_FAILURE; mpr_dprint(sc, MPR_FAULT, "%s: post of FW Diag Buffer failed " "with IOCStatus = 0x%x, IOCLogInfo = 0x%x and " "TransferLength = 0x%x\n", __func__, le16toh(reply->IOCStatus), le32toh(reply->IOCLogInfo), le32toh(reply->TransferLength)); goto done; } /* * Post was successful. */ pBuffer->valid_data = TRUE; pBuffer->owned_by_firmware = TRUE; *return_code = MPR_FW_DIAG_ERROR_SUCCESS; status = MPR_DIAG_SUCCESS; done: if (cm != NULL) mpr_free_command(sc, cm); return (status); } static int mpr_release_fw_diag_buffer(struct mpr_softc *sc, mpr_fw_diagnostic_buffer_t *pBuffer, uint32_t *return_code, uint32_t diag_type) { MPI2_DIAG_RELEASE_REQUEST *req; MPI2_DIAG_RELEASE_REPLY *reply; struct mpr_command *cm = NULL; int status; /* * If buffer is not enabled, just leave. 
*/ *return_code = MPR_FW_DIAG_ERROR_RELEASE_FAILED; if (!pBuffer->enabled) { mpr_dprint(sc, MPR_USER, "%s: This buffer type is not " "supported by the IOC", __func__); return (MPR_DIAG_FAILURE); } /* * Clear some flags initially. */ pBuffer->force_release = FALSE; pBuffer->valid_data = FALSE; pBuffer->owned_by_firmware = FALSE; /* * Get a command. */ cm = mpr_alloc_command(sc); if (cm == NULL) { mpr_printf(sc, "%s: no mpr requests\n", __func__); return (MPR_DIAG_FAILURE); } /* * Build the request for releasing the FW Diag Buffer and send it. */ req = (MPI2_DIAG_RELEASE_REQUEST *)cm->cm_req; req->Function = MPI2_FUNCTION_DIAG_RELEASE; req->BufferType = pBuffer->buffer_type; cm->cm_data = NULL; cm->cm_length = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_complete_data = NULL; /* * Send command synchronously. */ status = mpr_wait_command(sc, &cm, 30, CAN_SLEEP); if (status || (cm == NULL)) { mpr_printf(sc, "%s: invalid request: error %d\n", __func__, status); status = MPR_DIAG_FAILURE; goto done; } /* * Process RELEASE reply. */ reply = (MPI2_DIAG_RELEASE_REPLY *)cm->cm_reply; if (reply == NULL) { mpr_printf(sc, "%s: reply is NULL, probably due to " "reinitialization", __func__); status = MPR_DIAG_FAILURE; goto done; } if (((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) || pBuffer->owned_by_firmware) { status = MPR_DIAG_FAILURE; mpr_dprint(sc, MPR_FAULT, "%s: release of FW Diag Buffer " "failed with IOCStatus = 0x%x and IOCLogInfo = 0x%x\n", __func__, le16toh(reply->IOCStatus), le32toh(reply->IOCLogInfo)); goto done; } /* * Release was successful. */ *return_code = MPR_FW_DIAG_ERROR_SUCCESS; status = MPR_DIAG_SUCCESS; /* * If this was for an UNREGISTER diag type command, clear the unique ID. 
*/ if (diag_type == MPR_FW_DIAG_TYPE_UNREGISTER) { pBuffer->unique_id = MPR_FW_DIAG_INVALID_UID; } done: if (cm != NULL) mpr_free_command(sc, cm); return (status); } static int mpr_diag_register(struct mpr_softc *sc, mpr_fw_diag_register_t *diag_register, uint32_t *return_code) { mpr_fw_diagnostic_buffer_t *pBuffer; struct mpr_busdma_context *ctx; uint8_t extended_type, buffer_type, i; uint32_t buffer_size; uint32_t unique_id; int status; int error; extended_type = diag_register->ExtendedType; buffer_type = diag_register->BufferType; buffer_size = diag_register->RequestedBufferSize; unique_id = diag_register->UniqueId; ctx = NULL; error = 0; /* * Check for valid buffer type */ if (buffer_type >= MPI2_DIAG_BUF_TYPE_COUNT) { *return_code = MPR_FW_DIAG_ERROR_INVALID_PARAMETER; return (MPR_DIAG_FAILURE); } /* * Get the current buffer and look up the unique ID. The unique ID * should not be found. If it is, the ID is already in use. */ i = mpr_get_fw_diag_buffer_number(sc, unique_id); pBuffer = &sc->fw_diag_buffer_list[buffer_type]; if (i != MPR_FW_DIAGNOSTIC_UID_NOT_FOUND) { *return_code = MPR_FW_DIAG_ERROR_INVALID_UID; return (MPR_DIAG_FAILURE); } /* * The buffer's unique ID should not be registered yet, and the given * unique ID cannot be 0. */ if ((pBuffer->unique_id != MPR_FW_DIAG_INVALID_UID) || (unique_id == MPR_FW_DIAG_INVALID_UID)) { *return_code = MPR_FW_DIAG_ERROR_INVALID_UID; return (MPR_DIAG_FAILURE); } /* * If this buffer is already posted as immediate, just change owner. */ if (pBuffer->immediate && pBuffer->owned_by_firmware && (pBuffer->unique_id == MPR_FW_DIAG_INVALID_UID)) { pBuffer->immediate = FALSE; pBuffer->unique_id = unique_id; return (MPR_DIAG_SUCCESS); } /* * Post a new buffer after checking if it's enabled. The DMA buffer * that is allocated will be contiguous (nsegments = 1). 
*/ if (!pBuffer->enabled) { *return_code = MPR_FW_DIAG_ERROR_NO_BUFFER; return (MPR_DIAG_FAILURE); } if (bus_dma_tag_create( sc->mpr_parent_dmat, /* parent */ 1, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ buffer_size, /* maxsize */ 1, /* nsegments */ buffer_size, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->fw_diag_dmat)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate FW diag buffer DMA tag\n"); *return_code = MPR_FW_DIAG_ERROR_NO_BUFFER; status = MPR_DIAG_FAILURE; goto bailout; } if (bus_dmamem_alloc(sc->fw_diag_dmat, (void **)&sc->fw_diag_buffer, BUS_DMA_NOWAIT, &sc->fw_diag_map)) { mpr_dprint(sc, MPR_ERROR, "Cannot allocate FW diag buffer memory\n"); *return_code = MPR_FW_DIAG_ERROR_NO_BUFFER; status = MPR_DIAG_FAILURE; goto bailout; } bzero(sc->fw_diag_buffer, buffer_size); ctx = malloc(sizeof(*ctx), M_MPR, M_WAITOK | M_ZERO); if (ctx == NULL) { device_printf(sc->mpr_dev, "%s: context malloc failed\n", __func__); *return_code = MPR_FW_DIAG_ERROR_NO_BUFFER; status = MPR_DIAG_FAILURE; goto bailout; } ctx->addr = &sc->fw_diag_busaddr; ctx->buffer_dmat = sc->fw_diag_dmat; ctx->buffer_dmamap = sc->fw_diag_map; ctx->softc = sc; error = bus_dmamap_load(sc->fw_diag_dmat, sc->fw_diag_map, sc->fw_diag_buffer, buffer_size, mpr_memaddr_wait_cb, ctx, 0); if (error == EINPROGRESS) { /* XXX KDM */ device_printf(sc->mpr_dev, "%s: Deferred bus_dmamap_load\n", __func__); /* * Wait for the load to complete. If we're interrupted, * bail out. */ mpr_lock(sc); if (ctx->completed == 0) { error = msleep(ctx, &sc->mpr_mtx, PCATCH, "mprwait", 0); if (error != 0) { /* * We got an error from msleep(9). This is * most likely due to a signal. Tell * mpr_memaddr_wait_cb() that we've abandoned * the context, so it needs to clean up when * it is called. 
*/ ctx->abandoned = 1; /* The callback will free this memory */ ctx = NULL; mpr_unlock(sc); device_printf(sc->mpr_dev, "Cannot " "bus_dmamap_load FW diag buffer, error = " "%d returned from msleep\n", error); *return_code = MPR_FW_DIAG_ERROR_NO_BUFFER; status = MPR_DIAG_FAILURE; goto bailout; } } mpr_unlock(sc); } if ((error != 0) || (ctx->error != 0)) { device_printf(sc->mpr_dev, "Cannot bus_dmamap_load FW diag " "buffer, %serror = %d\n", error ? "" : "callback ", error ? error : ctx->error); *return_code = MPR_FW_DIAG_ERROR_NO_BUFFER; status = MPR_DIAG_FAILURE; goto bailout; } bus_dmamap_sync(sc->fw_diag_dmat, sc->fw_diag_map, BUS_DMASYNC_PREREAD); pBuffer->size = buffer_size; /* * Copy the given info to the diag buffer and post the buffer. */ pBuffer->buffer_type = buffer_type; pBuffer->immediate = FALSE; if (buffer_type == MPI2_DIAG_BUF_TYPE_TRACE) { for (i = 0; i < (sizeof (pBuffer->product_specific) / 4); i++) { pBuffer->product_specific[i] = diag_register->ProductSpecific[i]; } } pBuffer->extended_type = extended_type; pBuffer->unique_id = unique_id; status = mpr_post_fw_diag_buffer(sc, pBuffer, return_code); bailout: /* * In case there was a failure, free the DMA buffer. */ if (status == MPR_DIAG_FAILURE) { if (sc->fw_diag_busaddr != 0) { bus_dmamap_unload(sc->fw_diag_dmat, sc->fw_diag_map); sc->fw_diag_busaddr = 0; } if (sc->fw_diag_buffer != NULL) { bus_dmamem_free(sc->fw_diag_dmat, sc->fw_diag_buffer, sc->fw_diag_map); sc->fw_diag_buffer = NULL; } if (sc->fw_diag_dmat != NULL) { bus_dma_tag_destroy(sc->fw_diag_dmat); sc->fw_diag_dmat = NULL; } } if (ctx != NULL) free(ctx, M_MPR); return (status); } static int mpr_diag_unregister(struct mpr_softc *sc, mpr_fw_diag_unregister_t *diag_unregister, uint32_t *return_code) { mpr_fw_diagnostic_buffer_t *pBuffer; uint8_t i; uint32_t unique_id; int status; unique_id = diag_unregister->UniqueId; /* * Get the current buffer and look up the unique ID. The unique ID * should be there. 
*/ i = mpr_get_fw_diag_buffer_number(sc, unique_id); if (i == MPR_FW_DIAGNOSTIC_UID_NOT_FOUND) { *return_code = MPR_FW_DIAG_ERROR_INVALID_UID; return (MPR_DIAG_FAILURE); } pBuffer = &sc->fw_diag_buffer_list[i]; /* * Try to release the buffer from FW before freeing it. If release * fails, don't free the DMA buffer in case FW tries to access it * later. If buffer is not owned by firmware, can't release it. */ if (!pBuffer->owned_by_firmware) { status = MPR_DIAG_SUCCESS; } else { status = mpr_release_fw_diag_buffer(sc, pBuffer, return_code, MPR_FW_DIAG_TYPE_UNREGISTER); } /* * At this point, return the current status no matter what happens with * the DMA buffer. */ pBuffer->unique_id = MPR_FW_DIAG_INVALID_UID; if (status == MPR_DIAG_SUCCESS) { if (sc->fw_diag_busaddr != 0) { bus_dmamap_unload(sc->fw_diag_dmat, sc->fw_diag_map); sc->fw_diag_busaddr = 0; } if (sc->fw_diag_buffer != NULL) { bus_dmamem_free(sc->fw_diag_dmat, sc->fw_diag_buffer, sc->fw_diag_map); sc->fw_diag_buffer = NULL; } if (sc->fw_diag_dmat != NULL) { bus_dma_tag_destroy(sc->fw_diag_dmat); sc->fw_diag_dmat = NULL; } } return (status); } static int mpr_diag_query(struct mpr_softc *sc, mpr_fw_diag_query_t *diag_query, uint32_t *return_code) { mpr_fw_diagnostic_buffer_t *pBuffer; uint8_t i; uint32_t unique_id; unique_id = diag_query->UniqueId; /* * If ID is valid, query on ID. * If ID is invalid, query on buffer type. */ if (unique_id == MPR_FW_DIAG_INVALID_UID) { i = diag_query->BufferType; if (i >= MPI2_DIAG_BUF_TYPE_COUNT) { *return_code = MPR_FW_DIAG_ERROR_INVALID_UID; return (MPR_DIAG_FAILURE); } } else { i = mpr_get_fw_diag_buffer_number(sc, unique_id); if (i == MPR_FW_DIAGNOSTIC_UID_NOT_FOUND) { *return_code = MPR_FW_DIAG_ERROR_INVALID_UID; return (MPR_DIAG_FAILURE); } } /* * Fill query structure with the diag buffer info. 
*/ pBuffer = &sc->fw_diag_buffer_list[i]; diag_query->BufferType = pBuffer->buffer_type; diag_query->ExtendedType = pBuffer->extended_type; if (diag_query->BufferType == MPI2_DIAG_BUF_TYPE_TRACE) { for (i = 0; i < (sizeof(diag_query->ProductSpecific) / 4); i++) { diag_query->ProductSpecific[i] = pBuffer->product_specific[i]; } } diag_query->TotalBufferSize = pBuffer->size; diag_query->DriverAddedBufferSize = 0; diag_query->UniqueId = pBuffer->unique_id; diag_query->ApplicationFlags = 0; diag_query->DiagnosticFlags = 0; /* * Set/Clear application flags */ if (pBuffer->immediate) { diag_query->ApplicationFlags &= ~MPR_FW_DIAG_FLAG_APP_OWNED; } else { diag_query->ApplicationFlags |= MPR_FW_DIAG_FLAG_APP_OWNED; } if (pBuffer->valid_data || pBuffer->owned_by_firmware) { diag_query->ApplicationFlags |= MPR_FW_DIAG_FLAG_BUFFER_VALID; } else { diag_query->ApplicationFlags &= ~MPR_FW_DIAG_FLAG_BUFFER_VALID; } if (pBuffer->owned_by_firmware) { diag_query->ApplicationFlags |= MPR_FW_DIAG_FLAG_FW_BUFFER_ACCESS; } else { diag_query->ApplicationFlags &= ~MPR_FW_DIAG_FLAG_FW_BUFFER_ACCESS; } return (MPR_DIAG_SUCCESS); } static int mpr_diag_read_buffer(struct mpr_softc *sc, mpr_diag_read_buffer_t *diag_read_buffer, uint8_t *ioctl_buf, uint32_t *return_code) { mpr_fw_diagnostic_buffer_t *pBuffer; uint8_t i, *pData; uint32_t unique_id; int status; unique_id = diag_read_buffer->UniqueId; /* * Get the current buffer and look up the unique ID. The unique ID * should be there. */ i = mpr_get_fw_diag_buffer_number(sc, unique_id); if (i == MPR_FW_DIAGNOSTIC_UID_NOT_FOUND) { *return_code = MPR_FW_DIAG_ERROR_INVALID_UID; return (MPR_DIAG_FAILURE); } pBuffer = &sc->fw_diag_buffer_list[i]; /* * Make sure requested read is within limits */ if (diag_read_buffer->StartingOffset + diag_read_buffer->BytesToRead > pBuffer->size) { *return_code = MPR_FW_DIAG_ERROR_INVALID_PARAMETER; return (MPR_DIAG_FAILURE); } /* Sync the DMA map before we copy to userland. 
*/ bus_dmamap_sync(sc->fw_diag_dmat, sc->fw_diag_map, BUS_DMASYNC_POSTREAD); /* * Copy the requested data from DMA to the diag_read_buffer. The DMA * buffer that was allocated is one contiguous buffer. */ pData = (uint8_t *)(sc->fw_diag_buffer + diag_read_buffer->StartingOffset); if (copyout(pData, ioctl_buf, diag_read_buffer->BytesToRead) != 0) return (MPR_DIAG_FAILURE); diag_read_buffer->Status = 0; /* * Set or clear the Force Release flag. */ if (pBuffer->force_release) { diag_read_buffer->Flags |= MPR_FW_DIAG_FLAG_FORCE_RELEASE; } else { diag_read_buffer->Flags &= ~MPR_FW_DIAG_FLAG_FORCE_RELEASE; } /* * If buffer is to be reregistered, make sure it's not already owned by * firmware first. */ status = MPR_DIAG_SUCCESS; if (!pBuffer->owned_by_firmware) { if (diag_read_buffer->Flags & MPR_FW_DIAG_FLAG_REREGISTER) { status = mpr_post_fw_diag_buffer(sc, pBuffer, return_code); } } return (status); } static int mpr_diag_release(struct mpr_softc *sc, mpr_fw_diag_release_t *diag_release, uint32_t *return_code) { mpr_fw_diagnostic_buffer_t *pBuffer; uint8_t i; uint32_t unique_id; int status; unique_id = diag_release->UniqueId; /* * Get the current buffer and look up the unique ID. The unique ID * should be there. */ i = mpr_get_fw_diag_buffer_number(sc, unique_id); if (i == MPR_FW_DIAGNOSTIC_UID_NOT_FOUND) { *return_code = MPR_FW_DIAG_ERROR_INVALID_UID; return (MPR_DIAG_FAILURE); } pBuffer = &sc->fw_diag_buffer_list[i]; /* * If buffer is not owned by firmware, it's already been released. */ if (!pBuffer->owned_by_firmware) { *return_code = MPR_FW_DIAG_ERROR_ALREADY_RELEASED; return (MPR_DIAG_FAILURE); } /* * Release the buffer. 
*/ status = mpr_release_fw_diag_buffer(sc, pBuffer, return_code, MPR_FW_DIAG_TYPE_RELEASE); return (status); } static int mpr_do_diag_action(struct mpr_softc *sc, uint32_t action, uint8_t *diag_action, uint32_t length, uint32_t *return_code) { mpr_fw_diag_register_t diag_register; mpr_fw_diag_unregister_t diag_unregister; mpr_fw_diag_query_t diag_query; mpr_diag_read_buffer_t diag_read_buffer; mpr_fw_diag_release_t diag_release; int status = MPR_DIAG_SUCCESS; uint32_t original_return_code; original_return_code = *return_code; *return_code = MPR_FW_DIAG_ERROR_SUCCESS; switch (action) { case MPR_FW_DIAG_TYPE_REGISTER: if (!length) { *return_code = MPR_FW_DIAG_ERROR_INVALID_PARAMETER; status = MPR_DIAG_FAILURE; break; } if (copyin(diag_action, &diag_register, sizeof(diag_register)) != 0) return (MPR_DIAG_FAILURE); status = mpr_diag_register(sc, &diag_register, return_code); break; case MPR_FW_DIAG_TYPE_UNREGISTER: if (length < sizeof(diag_unregister)) { *return_code = MPR_FW_DIAG_ERROR_INVALID_PARAMETER; status = MPR_DIAG_FAILURE; break; } if (copyin(diag_action, &diag_unregister, sizeof(diag_unregister)) != 0) return (MPR_DIAG_FAILURE); status = mpr_diag_unregister(sc, &diag_unregister, return_code); break; case MPR_FW_DIAG_TYPE_QUERY: if (length < sizeof (diag_query)) { *return_code = MPR_FW_DIAG_ERROR_INVALID_PARAMETER; status = MPR_DIAG_FAILURE; break; } if (copyin(diag_action, &diag_query, sizeof(diag_query)) != 0) return (MPR_DIAG_FAILURE); status = mpr_diag_query(sc, &diag_query, return_code); if (status == MPR_DIAG_SUCCESS) if (copyout(&diag_query, diag_action, sizeof (diag_query)) != 0) return (MPR_DIAG_FAILURE); break; case MPR_FW_DIAG_TYPE_READ_BUFFER: if (copyin(diag_action, &diag_read_buffer, sizeof(diag_read_buffer)) != 0) return (MPR_DIAG_FAILURE); if (length < diag_read_buffer.BytesToRead) { *return_code = MPR_FW_DIAG_ERROR_INVALID_PARAMETER; status = MPR_DIAG_FAILURE; break; } status = mpr_diag_read_buffer(sc, &diag_read_buffer, 
PTRIN(diag_read_buffer.PtrDataBuffer), return_code); if (status == MPR_DIAG_SUCCESS) { if (copyout(&diag_read_buffer, diag_action, sizeof(diag_read_buffer) - sizeof(diag_read_buffer.PtrDataBuffer)) != 0) return (MPR_DIAG_FAILURE); } break; case MPR_FW_DIAG_TYPE_RELEASE: if (length < sizeof(diag_release)) { *return_code = MPR_FW_DIAG_ERROR_INVALID_PARAMETER; status = MPR_DIAG_FAILURE; break; } if (copyin(diag_action, &diag_release, sizeof(diag_release)) != 0) return (MPR_DIAG_FAILURE); status = mpr_diag_release(sc, &diag_release, return_code); break; default: *return_code = MPR_FW_DIAG_ERROR_INVALID_PARAMETER; status = MPR_DIAG_FAILURE; break; } if ((status == MPR_DIAG_FAILURE) && (original_return_code == MPR_FW_DIAG_NEW) && (*return_code != MPR_FW_DIAG_ERROR_SUCCESS)) status = MPR_DIAG_SUCCESS; return (status); } static int mpr_user_diag_action(struct mpr_softc *sc, mpr_diag_action_t *data) { int status; /* * Only allow one diag action at one time. */ if (sc->mpr_flags & MPR_FLAGS_BUSY) { mpr_dprint(sc, MPR_USER, "%s: Only one FW diag command " "allowed at a single time.", __func__); return (EBUSY); } sc->mpr_flags |= MPR_FLAGS_BUSY; /* * Send diag action request */ if (data->Action == MPR_FW_DIAG_TYPE_REGISTER || data->Action == MPR_FW_DIAG_TYPE_UNREGISTER || data->Action == MPR_FW_DIAG_TYPE_QUERY || data->Action == MPR_FW_DIAG_TYPE_READ_BUFFER || data->Action == MPR_FW_DIAG_TYPE_RELEASE) { status = mpr_do_diag_action(sc, data->Action, PTRIN(data->PtrDiagAction), data->Length, &data->ReturnCode); } else status = EINVAL; sc->mpr_flags &= ~MPR_FLAGS_BUSY; return (status); } /* * Copy the event recording mask and the event queue size out. For * clarification, the event recording mask (events_to_record) is not the same * thing as the event mask (event_mask). events_to_record has a bit set for * every event type that is to be recorded by the driver, and event_mask has a * bit cleared for every event that is allowed into the driver from the IOC. 
* They really have nothing to do with each other. */ static void mpr_user_event_query(struct mpr_softc *sc, mpr_event_query_t *data) { uint8_t i; mpr_lock(sc); data->Entries = MPR_EVENT_QUEUE_SIZE; for (i = 0; i < 4; i++) { data->Types[i] = sc->events_to_record[i]; } mpr_unlock(sc); } /* * Set the driver's event mask according to what's been given. See * mpr_user_event_query for explanation of the event recording mask and the IOC * event mask. It's the app's responsibility to enable event logging by setting * the bits in events_to_record. Initially, no events will be logged. */ static void mpr_user_event_enable(struct mpr_softc *sc, mpr_event_enable_t *data) { uint8_t i; mpr_lock(sc); for (i = 0; i < 4; i++) { sc->events_to_record[i] = data->Types[i]; } mpr_unlock(sc); } /* * Copy out the events that have been recorded, up to the max events allowed. */ static int mpr_user_event_report(struct mpr_softc *sc, mpr_event_report_t *data) { int status = 0; uint32_t size; mpr_lock(sc); size = data->Size; if ((size >= sizeof(sc->recorded_events)) && (status == 0)) { mpr_unlock(sc); if (copyout((void *)sc->recorded_events, PTRIN(data->PtrEvents), size) != 0) status = EFAULT; mpr_lock(sc); } else { /* * data->Size value is not large enough to copy event data. */ status = EFAULT; } /* * Change size value to match the number of bytes that were copied. */ if (status == 0) data->Size = sizeof(sc->recorded_events); mpr_unlock(sc); return (status); } /* * Record events into the driver from the IOC if they are not masked. */ void mprsas_record_event(struct mpr_softc *sc, MPI2_EVENT_NOTIFICATION_REPLY *event_reply) { uint32_t event; int i, j; uint16_t event_data_len; boolean_t sendAEN = FALSE; event = event_reply->Event; /* * Generate a system event to let anyone who cares know that a * LOG_ENTRY_ADDED event has occurred. This is sent no matter what the * event mask is set to. 
*/ if (event == MPI2_EVENT_LOG_ENTRY_ADDED) { sendAEN = TRUE; } /* * Record the event only if its corresponding bit is set in * events_to_record. event_index is the index into recorded_events and * event_number is the overall number of an event being recorded since * start-of-day. event_index will roll over; event_number will never * roll over. */ i = (uint8_t)(event / 32); j = (uint8_t)(event % 32); if ((i < 4) && ((1 << j) & sc->events_to_record[i])) { i = sc->event_index; sc->recorded_events[i].Type = event; sc->recorded_events[i].Number = ++sc->event_number; bzero(sc->recorded_events[i].Data, MPR_MAX_EVENT_DATA_LENGTH * 4); event_data_len = event_reply->EventDataLength; if (event_data_len > 0) { /* * Limit data to size in m_event entry */ if (event_data_len > MPR_MAX_EVENT_DATA_LENGTH) { event_data_len = MPR_MAX_EVENT_DATA_LENGTH; } for (j = 0; j < event_data_len; j++) { sc->recorded_events[i].Data[j] = event_reply->EventData[j]; } /* * check for index wrap-around */ if (++i == MPR_EVENT_QUEUE_SIZE) { i = 0; } sc->event_index = (uint8_t)i; /* * Set flag to send the event. */ sendAEN = TRUE; } } /* * Generate a system event if flag is set to let anyone who cares know * that an event has occurred. */ if (sendAEN) { //SLM-how to send a system event (see kqueue, kevent) // (void) ddi_log_sysevent(mpt->m_dip, DDI_VENDOR_LSI, "MPT_SAS", // "SAS", NULL, NULL, DDI_NOSLEEP); } } static int mpr_user_reg_access(struct mpr_softc *sc, mpr_reg_access_t *data) { int status = 0; switch (data->Command) { /* * IO access is not supported. */ case REG_IO_READ: case REG_IO_WRITE: mpr_dprint(sc, MPR_USER, "IO access is not supported. 
" "Use memory access."); status = EINVAL; break; case REG_MEM_READ: data->RegData = mpr_regread(sc, data->RegOffset); break; case REG_MEM_WRITE: mpr_regwrite(sc, data->RegOffset, data->RegData); break; default: status = EINVAL; break; } return (status); } static int mpr_user_btdh(struct mpr_softc *sc, mpr_btdh_mapping_t *data) { uint8_t bt2dh = FALSE; uint8_t dh2bt = FALSE; uint16_t dev_handle, bus, target; bus = data->Bus; target = data->TargetID; dev_handle = data->DevHandle; /* * When DevHandle is 0xFFFF and Bus/Target are not 0xFFFF, use Bus/ * Target to get DevHandle. When Bus/Target are 0xFFFF and DevHandle is * not 0xFFFF, use DevHandle to get Bus/Target. Anything else is * invalid. */ if ((bus == 0xFFFF) && (target == 0xFFFF) && (dev_handle != 0xFFFF)) dh2bt = TRUE; if ((dev_handle == 0xFFFF) && (bus != 0xFFFF) && (target != 0xFFFF)) bt2dh = TRUE; if (!dh2bt && !bt2dh) return (EINVAL); /* * Only handle bus of 0. Make sure target is within range. */ if (bt2dh) { if (bus != 0) return (EINVAL); if (target > sc->max_devices) { mpr_dprint(sc, MPR_XINFO, "Target ID is out of range " "for Bus/Target to DevHandle mapping."); return (EINVAL); } dev_handle = sc->mapping_table[target].dev_handle; if (dev_handle) data->DevHandle = dev_handle; } else { bus = 0; target = mpr_mapping_get_tid_from_handle(sc, dev_handle); data->Bus = bus; data->TargetID = target; } return (0); } static int mpr_ioctl(struct cdev *dev, u_long cmd, void *arg, int flag, struct thread *td) { struct mpr_softc *sc; struct mpr_cfg_page_req *page_req; struct mpr_ext_cfg_page_req *ext_page_req; void *mpr_page; int error, msleep_ret; mpr_page = NULL; sc = dev->si_drv1; page_req = (void *)arg; ext_page_req = (void *)arg; switch (cmd) { case MPRIO_READ_CFG_HEADER: mpr_lock(sc); error = mpr_user_read_cfg_header(sc, page_req); mpr_unlock(sc); break; case MPRIO_READ_CFG_PAGE: mpr_page = malloc(page_req->len, M_MPRUSER, M_WAITOK | M_ZERO); error = copyin(page_req->buf, mpr_page, 
		    sizeof(MPI2_CONFIG_PAGE_HEADER));
		if (error)
			break;
		mpr_lock(sc);
		error = mpr_user_read_cfg_page(sc, page_req, mpr_page);
		mpr_unlock(sc);
		if (error)
			break;
		error = copyout(mpr_page, page_req->buf, page_req->len);
		break;
	case MPRIO_READ_EXT_CFG_HEADER:
		mpr_lock(sc);
		error = mpr_user_read_extcfg_header(sc, ext_page_req);
		mpr_unlock(sc);
		break;
	case MPRIO_READ_EXT_CFG_PAGE:
		mpr_page = malloc(ext_page_req->len, M_MPRUSER,
		    M_WAITOK | M_ZERO);
		error = copyin(ext_page_req->buf, mpr_page,
		    sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER));
		if (error)
			break;
		mpr_lock(sc);
		error = mpr_user_read_extcfg_page(sc, ext_page_req, mpr_page);
		mpr_unlock(sc);
		if (error)
			break;
		error = copyout(mpr_page, ext_page_req->buf,
		    ext_page_req->len);
		break;
	case MPRIO_WRITE_CFG_PAGE:
		mpr_page = malloc(page_req->len, M_MPRUSER, M_WAITOK|M_ZERO);
		error = copyin(page_req->buf, mpr_page, page_req->len);
		if (error)
			break;
		mpr_lock(sc);
		error = mpr_user_write_cfg_page(sc, page_req, mpr_page);
		mpr_unlock(sc);
		break;
	case MPRIO_MPR_COMMAND:
		error = mpr_user_command(sc, (struct mpr_usr_command *)arg);
		break;
	case MPTIOCTL_PASS_THRU:
		/*
		 * The user has requested to pass through a command to be
		 * executed by the MPT firmware.  Call our routine which does
		 * this.  Only allow one passthru IOCTL at a time.
		 */
		error = mpr_user_pass_thru(sc, (mpr_pass_thru_t *)arg);
		break;
	case MPTIOCTL_GET_ADAPTER_DATA:
		/*
		 * The user has requested to read adapter data.  Call our
		 * routine which does this.
		 */
		error = 0;
		mpr_user_get_adapter_data(sc, (mpr_adapter_data_t *)arg);
		break;
	case MPTIOCTL_GET_PCI_INFO:
		/*
		 * The user has requested to read pci info.  Call our
		 * routine which does this.
		 */
		mpr_lock(sc);
		error = 0;
		mpr_user_read_pci_info(sc, (mpr_pci_info_t *)arg);
		mpr_unlock(sc);
		break;
	case MPTIOCTL_RESET_ADAPTER:
		mpr_lock(sc);
		sc->port_enable_complete = 0;
		uint32_t reinit_start = time_uptime;
		error = mpr_reinit(sc);
		/* Sleep for up to 300 seconds.
*/ msleep_ret = msleep(&sc->port_enable_complete, &sc->mpr_mtx, PRIBIO, "mpr_porten", 300 * hz); mpr_unlock(sc); if (msleep_ret) printf("Port Enable did not complete after Diag " "Reset msleep error %d.\n", msleep_ret); else mpr_dprint(sc, MPR_USER, "Hard Reset with Port Enable " "completed in %d seconds.\n", (uint32_t)(time_uptime - reinit_start)); break; case MPTIOCTL_DIAG_ACTION: /* * The user has done a diag buffer action. Call our routine * which does this. Only allow one diag action at one time. */ mpr_lock(sc); error = mpr_user_diag_action(sc, (mpr_diag_action_t *)arg); mpr_unlock(sc); break; case MPTIOCTL_EVENT_QUERY: /* * The user has done an event query. Call our routine which does * this. */ error = 0; mpr_user_event_query(sc, (mpr_event_query_t *)arg); break; case MPTIOCTL_EVENT_ENABLE: /* * The user has done an event enable. Call our routine which * does this. */ error = 0; mpr_user_event_enable(sc, (mpr_event_enable_t *)arg); break; case MPTIOCTL_EVENT_REPORT: /* * The user has done an event report. Call our routine which * does this. */ error = mpr_user_event_report(sc, (mpr_event_report_t *)arg); break; case MPTIOCTL_REG_ACCESS: /* * The user has requested register access. Call our routine * which does this. */ mpr_lock(sc); error = mpr_user_reg_access(sc, (mpr_reg_access_t *)arg); mpr_unlock(sc); break; case MPTIOCTL_BTDH_MAPPING: /* * The user has requested to translate a bus/target to a * DevHandle or a DevHandle to a bus/target. Call our routine * which does this. 
*/ error = mpr_user_btdh(sc, (mpr_btdh_mapping_t *)arg); break; default: error = ENOIOCTL; break; } if (mpr_page != NULL) free(mpr_page, M_MPRUSER); return (error); } #ifdef COMPAT_FREEBSD32 struct mpr_cfg_page_req32 { MPI2_CONFIG_PAGE_HEADER header; uint32_t page_address; uint32_t buf; int len; uint16_t ioc_status; }; struct mpr_ext_cfg_page_req32 { MPI2_CONFIG_EXTENDED_PAGE_HEADER header; uint32_t page_address; uint32_t buf; int len; uint16_t ioc_status; }; struct mpr_raid_action32 { uint8_t action; uint8_t volume_bus; uint8_t volume_id; uint8_t phys_disk_num; uint32_t action_data_word; uint32_t buf; int len; uint32_t volume_status; uint32_t action_data[4]; uint16_t action_status; uint16_t ioc_status; uint8_t write; }; struct mpr_usr_command32 { uint32_t req; uint32_t req_len; uint32_t rpl; uint32_t rpl_len; uint32_t buf; int len; uint32_t flags; }; #define MPRIO_READ_CFG_HEADER32 _IOWR('M', 200, struct mpr_cfg_page_req32) #define MPRIO_READ_CFG_PAGE32 _IOWR('M', 201, struct mpr_cfg_page_req32) #define MPRIO_READ_EXT_CFG_HEADER32 _IOWR('M', 202, struct mpr_ext_cfg_page_req32) #define MPRIO_READ_EXT_CFG_PAGE32 _IOWR('M', 203, struct mpr_ext_cfg_page_req32) #define MPRIO_WRITE_CFG_PAGE32 _IOWR('M', 204, struct mpr_cfg_page_req32) #define MPRIO_RAID_ACTION32 _IOWR('M', 205, struct mpr_raid_action32) #define MPRIO_MPR_COMMAND32 _IOWR('M', 210, struct mpr_usr_command32) static int mpr_ioctl32(struct cdev *dev, u_long cmd32, void *_arg, int flag, struct thread *td) { struct mpr_cfg_page_req32 *page32 = _arg; struct mpr_ext_cfg_page_req32 *ext32 = _arg; struct mpr_raid_action32 *raid32 = _arg; struct mpr_usr_command32 *user32 = _arg; union { struct mpr_cfg_page_req page; struct mpr_ext_cfg_page_req ext; struct mpr_raid_action raid; struct mpr_usr_command user; } arg; u_long cmd; int error; switch (cmd32) { case MPRIO_READ_CFG_HEADER32: case MPRIO_READ_CFG_PAGE32: case MPRIO_WRITE_CFG_PAGE32: if (cmd32 == MPRIO_READ_CFG_HEADER32) cmd = MPRIO_READ_CFG_HEADER; else if 
(cmd32 == MPRIO_READ_CFG_PAGE32) cmd = MPRIO_READ_CFG_PAGE; else cmd = MPRIO_WRITE_CFG_PAGE; CP(*page32, arg.page, header); CP(*page32, arg.page, page_address); PTRIN_CP(*page32, arg.page, buf); CP(*page32, arg.page, len); CP(*page32, arg.page, ioc_status); break; case MPRIO_READ_EXT_CFG_HEADER32: case MPRIO_READ_EXT_CFG_PAGE32: if (cmd32 == MPRIO_READ_EXT_CFG_HEADER32) cmd = MPRIO_READ_EXT_CFG_HEADER; else cmd = MPRIO_READ_EXT_CFG_PAGE; CP(*ext32, arg.ext, header); CP(*ext32, arg.ext, page_address); PTRIN_CP(*ext32, arg.ext, buf); CP(*ext32, arg.ext, len); CP(*ext32, arg.ext, ioc_status); break; case MPRIO_RAID_ACTION32: cmd = MPRIO_RAID_ACTION; CP(*raid32, arg.raid, action); CP(*raid32, arg.raid, volume_bus); CP(*raid32, arg.raid, volume_id); CP(*raid32, arg.raid, phys_disk_num); CP(*raid32, arg.raid, action_data_word); PTRIN_CP(*raid32, arg.raid, buf); CP(*raid32, arg.raid, len); CP(*raid32, arg.raid, volume_status); bcopy(raid32->action_data, arg.raid.action_data, sizeof arg.raid.action_data); CP(*raid32, arg.raid, ioc_status); CP(*raid32, arg.raid, write); break; case MPRIO_MPR_COMMAND32: cmd = MPRIO_MPR_COMMAND; PTRIN_CP(*user32, arg.user, req); CP(*user32, arg.user, req_len); PTRIN_CP(*user32, arg.user, rpl); CP(*user32, arg.user, rpl_len); PTRIN_CP(*user32, arg.user, buf); CP(*user32, arg.user, len); CP(*user32, arg.user, flags); break; default: return (ENOIOCTL); } error = mpr_ioctl(dev, cmd, &arg, flag, td); if (error == 0 && (cmd32 & IOC_OUT) != 0) { switch (cmd32) { case MPRIO_READ_CFG_HEADER32: case MPRIO_READ_CFG_PAGE32: case MPRIO_WRITE_CFG_PAGE32: CP(arg.page, *page32, header); CP(arg.page, *page32, page_address); PTROUT_CP(arg.page, *page32, buf); CP(arg.page, *page32, len); CP(arg.page, *page32, ioc_status); break; case MPRIO_READ_EXT_CFG_HEADER32: case MPRIO_READ_EXT_CFG_PAGE32: CP(arg.ext, *ext32, header); CP(arg.ext, *ext32, page_address); PTROUT_CP(arg.ext, *ext32, buf); CP(arg.ext, *ext32, len); CP(arg.ext, *ext32, ioc_status); break; case 
MPRIO_RAID_ACTION32: CP(arg.raid, *raid32, action); CP(arg.raid, *raid32, volume_bus); CP(arg.raid, *raid32, volume_id); CP(arg.raid, *raid32, phys_disk_num); CP(arg.raid, *raid32, action_data_word); PTROUT_CP(arg.raid, *raid32, buf); CP(arg.raid, *raid32, len); CP(arg.raid, *raid32, volume_status); bcopy(arg.raid.action_data, raid32->action_data, sizeof arg.raid.action_data); CP(arg.raid, *raid32, ioc_status); CP(arg.raid, *raid32, write); break; case MPRIO_MPR_COMMAND32: PTROUT_CP(arg.user, *user32, req); CP(arg.user, *user32, req_len); PTROUT_CP(arg.user, *user32, rpl); CP(arg.user, *user32, rpl_len); PTROUT_CP(arg.user, *user32, buf); CP(arg.user, *user32, len); CP(arg.user, *user32, flags); break; } } return (error); } #endif /* COMPAT_FREEBSD32 */ static int mpr_ioctl_devsw(struct cdev *dev, u_long com, caddr_t arg, int flag, struct thread *td) { #ifdef COMPAT_FREEBSD32 if (SV_CURPROC_FLAG(SV_ILP32)) return (mpr_ioctl32(dev, com, arg, flag, td)); #endif return (mpr_ioctl(dev, com, arg, flag, td)); } Index: releng/12.1/sys/dev/mpr/mprvar.h =================================================================== --- releng/12.1/sys/dev/mpr/mprvar.h (revision 352760) +++ releng/12.1/sys/dev/mpr/mprvar.h (revision 352761) @@ -1,936 +1,987 @@ /*- * Copyright (c) 2009 Yahoo! Inc. * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2016 Avago Technologies + * Copyright 2000-2020 Broadcom Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * - * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD + * Broadcom Inc. (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ #ifndef _MPRVAR_H #define _MPRVAR_H -#define MPR_DRIVER_VERSION "18.03.00.00-fbsd" +#define MPR_DRIVER_VERSION "23.00.00.00-fbsd" #define MPR_DB_MAX_WAIT 2500 #define MPR_REQ_FRAMES 2048 #define MPR_PRI_REQ_FRAMES 128 #define MPR_EVT_REPLY_FRAMES 32 #define MPR_REPLY_FRAMES MPR_REQ_FRAMES #define MPR_CHAIN_FRAMES 16384 #define MPR_MAXIO_PAGES (-1) #define MPR_SENSE_LEN SSD_FULL_SIZE #define MPR_MSI_MAX 1 #define MPR_MSIX_MAX 96 #define MPR_SGE64_SIZE 12 #define MPR_SGE32_SIZE 8 #define MPR_SGC_SIZE 8 #define MPR_DEFAULT_CHAIN_SEG_SIZE 8 #define MPR_MAX_CHAIN_ELEMENT_SIZE 16 /* * PCIe NVMe Specific defines */ //SLM-for now just use the same value as a SAS disk #define NVME_QDEPTH MPR_REQ_FRAMES #define PRP_ENTRY_SIZE 8 #define NVME_CMD_PRP1_OFFSET 24 /* PRP1 offset in NVMe cmd */ #define NVME_CMD_PRP2_OFFSET 32 /* PRP2 offset in NVMe cmd */ #define NVME_ERROR_RESPONSE_SIZE 16 /* Max NVME Error Response */ #define HOST_PAGE_SIZE_4K 12 #define MPR_FUNCTRACE(sc) \ mpr_dprint((sc), MPR_TRACE, "%s\n", __func__) #define CAN_SLEEP 1 #define NO_SLEEP 0 #define MPR_PERIODIC_DELAY 1 /* 1 second 
					   heartbeat/watchdog check */
#define MPR_ATA_ID_TIMEOUT	5	/* 5 second timeout for SATA ID cmd */
#define MPR_MISSING_CHECK_DELAY	10	/* 10 seconds between missing check */

#define IFAULT_IOP_OVER_TEMP_THRESHOLD_EXCEEDED	0x2810

#define MPR_SCSI_RI_INVALID_FRAME	(0x00000002)

#define DEFAULT_SPINUP_WAIT	3	/* seconds to wait for spinup */

#include

/*
 * host mapping related macro definitions
 */
#define MPR_MAPTABLE_BAD_IDX	0xFFFFFFFF
#define MPR_DPM_BAD_IDX		0xFFFF
#define MPR_ENCTABLE_BAD_IDX	0xFF
#define MPR_MAX_MISSING_COUNT	0x0F

#define MPR_DEV_RESERVED	0x20000000
#define MPR_MAP_IN_USE		0x10000000
#define MPR_MAP_BAD_ID		0xFFFFFFFF

typedef uint8_t u8;
typedef uint16_t u16;
typedef uint32_t u32;
typedef uint64_t u64;

+typedef struct _MPI2_CONFIG_PAGE_MAN_11
+{
+	MPI2_CONFIG_PAGE_HEADER	Header;			/* 0x00 */
+	U8	FlashTime;				/* 0x04 */
+	U8	NVTime;					/* 0x05 */
+	U16	Flag;					/* 0x06 */
+	U8	RFIoTimeout;				/* 0x08 */
+	U8	EEDPTagMode;				/* 0x09 */
+	U8	AWTValue;				/* 0x0A */
+	U8	Reserve1;				/* 0x0B */
+	U8	MaxCmdFrames;				/* 0x0C */
+	U8	Reserve2;				/* 0x0D */
+	U16	AddlFlags;				/* 0x0E */
+	U32	SysRefClk;				/* 0x10 */
+	U64	Reserve3[3];				/* 0x14 */
+	U16	AddlFlags2;				/* 0x2C */
+	U8	AddlFlags3;				/* 0x2E */
+	U8	Reserve4;				/* 0x2F */
+	U64	opDebugEnable;				/* 0x30 */
+	U64	PlDebugEnable;				/* 0x38 */
+	U64	IrDebugEnable;				/* 0x40 */
+	U32	BoardPowerRequirement;			/* 0x48 */
+	U8	NVMeAbortTO;				/* 0x4C */
+	U8	Reserve5;				/* 0x4D */
+	U16	Reserve6;				/* 0x4E */
+	U32	Reserve7[3];				/* 0x50 */
+} MPI2_CONFIG_PAGE_MAN_11,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_11,
+  Mpi2ManufacturingPage11_t, MPI2_POINTER pMpi2ManufacturingPage11_t;
+
+#define MPI2_MAN_PG11_ADDLFLAGS2_CUSTOM_TM_HANDLING_MASK	(0x0010)
+
/**
 * struct dev_mapping_table - device mapping information
 * @physical_id: SAS address for drives or WWID for RAID volumes
 * @device_info: bitfield provides detailed info about the device
 * @phy_bits: bitfields indicating controller phys
 * @dpm_entry_num: index of this device in device persistent map table
 *
@dev_handle: device handle for the device pointed by this entry * @id: target id * @missing_count: number of times the device not detected by driver * @hide_flag: Hide this physical disk/not (foreign configuration) * @init_complete: Whether the start of the day checks completed or not * @TLR_bits: Turn TLR support on or off */ struct dev_mapping_table { u64 physical_id; u32 device_info; u32 phy_bits; u16 dpm_entry_num; u16 dev_handle; u16 reserved1; u16 id; u8 missing_count; u8 init_complete; u8 TLR_bits; u8 reserved2; }; /** * struct enc_mapping_table - mapping information about an enclosure * @enclosure_id: Logical ID of this enclosure * @start_index: index to the entry in dev_mapping_table * @phy_bits: bitfields indicating controller phys * @dpm_entry_num: index of this enclosure in device persistent map table * @enc_handle: device handle for the enclosure pointed by this entry * @num_slots: number of slots in the enclosure * @start_slot: Starting slot id * @missing_count: number of times the device not detected by driver * @removal_flag: used to mark the device for removal * @skip_search: used as a flag to include/exclude enclosure for search * @init_complete: Whether the start of the day checks completed or not */ struct enc_mapping_table { u64 enclosure_id; u32 start_index; u32 phy_bits; u16 dpm_entry_num; u16 enc_handle; u16 num_slots; u16 start_slot; u8 missing_count; u8 removal_flag; u8 skip_search; u8 init_complete; }; /** * struct map_removal_table - entries to be removed from mapping table * @dpm_entry_num: index of this device in device persistent map table * @dev_handle: device handle for the device pointed by this entry */ struct map_removal_table{ u16 dpm_entry_num; u16 dev_handle; }; typedef struct mpr_fw_diagnostic_buffer { size_t size; uint8_t extended_type; uint8_t buffer_type; uint8_t force_release; uint32_t product_specific[23]; uint8_t immediate; uint8_t enabled; uint8_t valid_data; uint8_t owned_by_firmware; uint32_t unique_id; } 
mpr_fw_diagnostic_buffer_t; struct mpr_softc; struct mpr_command; struct mprsas_softc; union ccb; struct mprsas_target; struct mpr_column_map; MALLOC_DECLARE(M_MPR); typedef void mpr_evt_callback_t(struct mpr_softc *, uintptr_t, MPI2_EVENT_NOTIFICATION_REPLY *reply); typedef void mpr_command_callback_t(struct mpr_softc *, struct mpr_command *cm); struct mpr_chain { TAILQ_ENTRY(mpr_chain) chain_link; void *chain; uint64_t chain_busaddr; }; struct mpr_prp_page { TAILQ_ENTRY(mpr_prp_page) prp_page_link; uint64_t *prp_page; uint64_t prp_page_busaddr; }; /* * This needs to be at least 2 to support SMP passthrough. */ #define MPR_IOVEC_COUNT 2 struct mpr_command { TAILQ_ENTRY(mpr_command) cm_link; TAILQ_ENTRY(mpr_command) cm_recovery; struct mpr_softc *cm_sc; union ccb *cm_ccb; void *cm_data; u_int cm_length; u_int cm_out_len; struct uio cm_uio; struct iovec cm_iovec[MPR_IOVEC_COUNT]; u_int cm_max_segs; u_int cm_sglsize; void *cm_sge; uint8_t *cm_req; uint8_t *cm_reply; uint32_t cm_reply_data; mpr_command_callback_t *cm_complete; void *cm_complete_data; struct mprsas_target *cm_targ; MPI2_REQUEST_DESCRIPTOR_UNION cm_desc; u_int cm_lun; u_int cm_flags; #define MPR_CM_FLAGS_POLLED (1 << 0) #define MPR_CM_FLAGS_COMPLETE (1 << 1) #define MPR_CM_FLAGS_SGE_SIMPLE (1 << 2) #define MPR_CM_FLAGS_DATAOUT (1 << 3) #define MPR_CM_FLAGS_DATAIN (1 << 4) #define MPR_CM_FLAGS_WAKEUP (1 << 5) #define MPR_CM_FLAGS_USE_UIO (1 << 6) #define MPR_CM_FLAGS_SMP_PASS (1 << 7) #define MPR_CM_FLAGS_CHAIN_FAILED (1 << 8) #define MPR_CM_FLAGS_ERROR_MASK MPR_CM_FLAGS_CHAIN_FAILED #define MPR_CM_FLAGS_USE_CCB (1 << 9) #define MPR_CM_FLAGS_SATA_ID_TIMEOUT (1 << 10) +#define MPR_CM_FLAGS_ON_RECOVERY (1 << 12) +#define MPR_CM_FLAGS_TIMEDOUT (1 << 13) u_int cm_state; #define MPR_CM_STATE_FREE 0 #define MPR_CM_STATE_BUSY 1 -#define MPR_CM_STATE_TIMEDOUT 2 -#define MPR_CM_STATE_INQUEUE 3 +#define MPR_CM_STATE_INQUEUE 2 bus_dmamap_t cm_dmamap; struct scsi_sense_data *cm_sense; uint64_t *nvme_error_response; 
TAILQ_HEAD(, mpr_chain) cm_chain_list; TAILQ_HEAD(, mpr_prp_page) cm_prp_page_list; uint32_t cm_req_busaddr; bus_addr_t cm_sense_busaddr; struct callout cm_callout; + mpr_command_callback_t *cm_timeout_handler; }; struct mpr_column_map { uint16_t dev_handle; uint8_t phys_disk_num; }; struct mpr_event_handle { TAILQ_ENTRY(mpr_event_handle) eh_list; mpr_evt_callback_t *callback; void *data; uint8_t mask[16]; }; struct mpr_busdma_context { int completed; int abandoned; int error; bus_addr_t *addr; struct mpr_softc *softc; bus_dmamap_t buffer_dmamap; bus_dma_tag_t buffer_dmat; }; struct mpr_queue { struct mpr_softc *sc; int qnum; MPI2_REPLY_DESCRIPTORS_UNION *post_queue; int replypostindex; #ifdef notyet ck_ring_buffer_t *ringmem; ck_ring_buffer_t *chainmem; ck_ring_t req_ring; ck_ring_t chain_ring; #endif bus_dma_tag_t buffer_dmat; int io_cmds_highwater; int chain_free_lowwater; int chain_alloc_fail; struct resource *irq; void *intrhand; int irq_rid; }; struct mpr_softc { device_t mpr_dev; struct cdev *mpr_cdev; u_int mpr_flags; #define MPR_FLAGS_INTX (1 << 0) #define MPR_FLAGS_MSI (1 << 1) #define MPR_FLAGS_BUSY (1 << 2) #define MPR_FLAGS_SHUTDOWN (1 << 3) #define MPR_FLAGS_DIAGRESET (1 << 4) #define MPR_FLAGS_ATTACH_DONE (1 << 5) #define MPR_FLAGS_GEN35_IOC (1 << 6) #define MPR_FLAGS_REALLOCATED (1 << 7) +#define MPR_FLAGS_SEA_IOC (1 << 8) u_int mpr_debug; int msi_msgs; u_int reqframesz; u_int replyframesz; u_int atomic_desc_capable; int tm_cmds_active; int io_cmds_active; int io_cmds_highwater; int chain_free; int max_chains; int max_io_pages; u_int maxio; int chain_free_lowwater; uint32_t chain_frame_size; int prp_buffer_size; int prp_pages_free; int prp_pages_free_lowwater; u_int enable_ssu; int spinup_wait_time; int use_phynum; uint64_t chain_alloc_fail; uint64_t prp_page_alloc_fail; struct sysctl_ctx_list sysctl_ctx; struct sysctl_oid *sysctl_tree; char fw_version[16]; struct mpr_command *commands; struct mpr_chain *chains; struct mpr_prp_page *prps; struct 
callout periodic; struct callout device_check_callout; struct mpr_queue *queues; struct mprsas_softc *sassc; TAILQ_HEAD(, mpr_command) req_list; TAILQ_HEAD(, mpr_command) high_priority_req_list; TAILQ_HEAD(, mpr_chain) chain_list; TAILQ_HEAD(, mpr_prp_page) prp_page_list; TAILQ_HEAD(, mpr_command) tm_list; int replypostindex; int replyfreeindex; struct resource *mpr_regs_resource; bus_space_handle_t mpr_bhandle; bus_space_tag_t mpr_btag; int mpr_regs_rid; bus_dma_tag_t mpr_parent_dmat; bus_dma_tag_t buffer_dmat; MPI2_IOC_FACTS_REPLY *facts; int num_reqs; int num_prireqs; int num_replies; int num_chains; int fqdepth; /* Free queue */ int pqdepth; /* Post queue */ uint8_t event_mask[16]; TAILQ_HEAD(, mpr_event_handle) event_list; struct mpr_event_handle *mpr_log_eh; struct mtx mpr_mtx; struct intr_config_hook mpr_ich; uint8_t *req_frames; bus_addr_t req_busaddr; bus_dma_tag_t req_dmat; bus_dmamap_t req_map; uint8_t *reply_frames; bus_addr_t reply_busaddr; bus_dma_tag_t reply_dmat; bus_dmamap_t reply_map; struct scsi_sense_data *sense_frames; bus_addr_t sense_busaddr; bus_dma_tag_t sense_dmat; bus_dmamap_t sense_map; uint8_t *chain_frames; bus_dma_tag_t chain_dmat; bus_dmamap_t chain_map; uint8_t *prp_pages; bus_addr_t prp_page_busaddr; bus_dma_tag_t prp_page_dmat; bus_dmamap_t prp_page_map; MPI2_REPLY_DESCRIPTORS_UNION *post_queue; bus_addr_t post_busaddr; uint32_t *free_queue; bus_addr_t free_busaddr; bus_dma_tag_t queues_dmat; bus_dmamap_t queues_map; uint8_t *fw_diag_buffer; bus_addr_t fw_diag_busaddr; bus_dma_tag_t fw_diag_dmat; bus_dmamap_t fw_diag_map; uint8_t ir_firmware; /* static config pages */ Mpi2IOCPage8_t ioc_pg8; Mpi2IOUnitPage8_t iounit_pg8; /* host mapping support */ struct dev_mapping_table *mapping_table; struct enc_mapping_table *enclosure_table; struct map_removal_table *removal_table; uint8_t *dpm_entry_used; uint8_t *dpm_flush_entry; Mpi2DriverMappingPage0_t *dpm_pg0; uint16_t max_devices; uint16_t max_enclosures; uint16_t max_expanders; 
uint8_t max_volumes; uint8_t num_enc_table_entries; uint8_t num_rsvd_entries; uint16_t max_dpm_entries; uint8_t is_dpm_enable; uint8_t track_mapping_events; uint32_t pending_map_events; /* FW diag Buffer List */ mpr_fw_diagnostic_buffer_t fw_diag_buffer_list[MPI2_DIAG_BUF_TYPE_COUNT]; /* Event Recording IOCTL support */ uint32_t events_to_record[4]; mpr_event_entry_t recorded_events[MPR_EVENT_QUEUE_SIZE]; uint8_t event_index; uint32_t event_number; /* EEDP and TLR support */ uint8_t eedp_enabled; uint8_t control_TLR; /* Shutdown Event Handler */ eventhandler_tag shutdown_eh; /* To track topo events during reset */ #define MPR_DIAG_RESET_TIMEOUT 300000 uint8_t wait_for_port_enable; uint8_t port_enable_complete; uint8_t msleep_fake_chan; /* StartStopUnit command handling at shutdown */ uint32_t SSU_refcount; uint8_t SSU_started; /* Configuration tunables */ u_int disable_msix; u_int disable_msi; u_int max_msix; u_int max_reqframes; u_int max_prireqframes; u_int max_replyframes; u_int max_evtframes; char exclude_ids[80]; struct timeval lastfail; + uint8_t custom_nvme_tm_handling; + uint8_t nvme_abort_timeout; }; struct mpr_config_params { MPI2_CONFIG_EXT_PAGE_HEADER_UNION hdr; u_int action; u_int page_address; /* Attributes, not a phys address */ u_int status; void *buffer; u_int length; int timeout; void (*callback)(struct mpr_softc *, struct mpr_config_params *); void *cbdata; }; struct scsi_read_capacity_eedp { uint8_t addr[8]; uint8_t length[4]; uint8_t protect; }; static __inline uint32_t mpr_regread(struct mpr_softc *sc, uint32_t offset) { - return (bus_space_read_4(sc->mpr_btag, sc->mpr_bhandle, offset)); + uint32_t ret_val, i = 0; + do { + ret_val = + bus_space_read_4(sc->mpr_btag, sc->mpr_bhandle, offset); + } while((sc->mpr_flags & MPR_FLAGS_SEA_IOC) && + (ret_val == 0) && (++i < 3)); + + return ret_val; } static __inline void mpr_regwrite(struct mpr_softc *sc, uint32_t offset, uint32_t val) { bus_space_write_4(sc->mpr_btag, sc->mpr_bhandle, offset, val); } 
/* free_queue must have Little Endian address * TODO- cm_reply_data is unwanted. We can remove it. * */ static __inline void mpr_free_reply(struct mpr_softc *sc, uint32_t busaddr) { if (++sc->replyfreeindex >= sc->fqdepth) sc->replyfreeindex = 0; sc->free_queue[sc->replyfreeindex] = htole32(busaddr); mpr_regwrite(sc, MPI2_REPLY_FREE_HOST_INDEX_OFFSET, sc->replyfreeindex); } static __inline struct mpr_chain * mpr_alloc_chain(struct mpr_softc *sc) { struct mpr_chain *chain; if ((chain = TAILQ_FIRST(&sc->chain_list)) != NULL) { TAILQ_REMOVE(&sc->chain_list, chain, chain_link); sc->chain_free--; if (sc->chain_free < sc->chain_free_lowwater) sc->chain_free_lowwater = sc->chain_free; } else sc->chain_alloc_fail++; return (chain); } static __inline void mpr_free_chain(struct mpr_softc *sc, struct mpr_chain *chain) { #if 0 bzero(chain->chain, 128); #endif sc->chain_free++; TAILQ_INSERT_TAIL(&sc->chain_list, chain, chain_link); } static __inline struct mpr_prp_page * mpr_alloc_prp_page(struct mpr_softc *sc) { struct mpr_prp_page *prp_page; if ((prp_page = TAILQ_FIRST(&sc->prp_page_list)) != NULL) { TAILQ_REMOVE(&sc->prp_page_list, prp_page, prp_page_link); sc->prp_pages_free--; if (sc->prp_pages_free < sc->prp_pages_free_lowwater) sc->prp_pages_free_lowwater = sc->prp_pages_free; } else sc->prp_page_alloc_fail++; return (prp_page); } static __inline void mpr_free_prp_page(struct mpr_softc *sc, struct mpr_prp_page *prp_page) { sc->prp_pages_free++; TAILQ_INSERT_TAIL(&sc->prp_page_list, prp_page, prp_page_link); } static __inline void mpr_free_command(struct mpr_softc *sc, struct mpr_command *cm) { struct mpr_chain *chain, *chain_temp; struct mpr_prp_page *prp_page, *prp_page_temp; KASSERT(cm->cm_state == MPR_CM_STATE_BUSY, ("state not busy\n")); if (cm->cm_reply != NULL) mpr_free_reply(sc, cm->cm_reply_data); cm->cm_reply = NULL; cm->cm_flags = 0; cm->cm_complete = NULL; cm->cm_complete_data = NULL; cm->cm_ccb = NULL; cm->cm_targ = NULL; cm->cm_max_segs = 0; cm->cm_lun = 0; 
cm->cm_state = MPR_CM_STATE_FREE; cm->cm_data = NULL; cm->cm_length = 0; cm->cm_out_len = 0; cm->cm_sglsize = 0; cm->cm_sge = NULL; TAILQ_FOREACH_SAFE(chain, &cm->cm_chain_list, chain_link, chain_temp) { TAILQ_REMOVE(&cm->cm_chain_list, chain, chain_link); mpr_free_chain(sc, chain); } TAILQ_FOREACH_SAFE(prp_page, &cm->cm_prp_page_list, prp_page_link, prp_page_temp) { TAILQ_REMOVE(&cm->cm_prp_page_list, prp_page, prp_page_link); mpr_free_prp_page(sc, prp_page); } TAILQ_INSERT_TAIL(&sc->req_list, cm, cm_link); } static __inline struct mpr_command * mpr_alloc_command(struct mpr_softc *sc) { struct mpr_command *cm; cm = TAILQ_FIRST(&sc->req_list); if (cm == NULL) return (NULL); KASSERT(cm->cm_state == MPR_CM_STATE_FREE, ("mpr: Allocating busy command\n")); TAILQ_REMOVE(&sc->req_list, cm, cm_link); cm->cm_state = MPR_CM_STATE_BUSY; + cm->cm_timeout_handler = NULL; return (cm); } static __inline void mpr_free_high_priority_command(struct mpr_softc *sc, struct mpr_command *cm) { struct mpr_chain *chain, *chain_temp; KASSERT(cm->cm_state == MPR_CM_STATE_BUSY, ("state not busy\n")); if (cm->cm_reply != NULL) mpr_free_reply(sc, cm->cm_reply_data); cm->cm_reply = NULL; cm->cm_flags = 0; cm->cm_complete = NULL; cm->cm_complete_data = NULL; cm->cm_ccb = NULL; cm->cm_targ = NULL; cm->cm_lun = 0; cm->cm_state = MPR_CM_STATE_FREE; TAILQ_FOREACH_SAFE(chain, &cm->cm_chain_list, chain_link, chain_temp) { TAILQ_REMOVE(&cm->cm_chain_list, chain, chain_link); mpr_free_chain(sc, chain); } TAILQ_INSERT_TAIL(&sc->high_priority_req_list, cm, cm_link); } static __inline struct mpr_command * mpr_alloc_high_priority_command(struct mpr_softc *sc) { struct mpr_command *cm; cm = TAILQ_FIRST(&sc->high_priority_req_list); if (cm == NULL) return (NULL); KASSERT(cm->cm_state == MPR_CM_STATE_FREE, ("mpr: Allocating busy command\n")); TAILQ_REMOVE(&sc->high_priority_req_list, cm, cm_link); cm->cm_state = MPR_CM_STATE_BUSY; + cm->cm_timeout_handler = NULL; + cm->cm_desc.HighPriority.RequestFlags = + 
MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY; return (cm); } static __inline void mpr_lock(struct mpr_softc *sc) { mtx_lock(&sc->mpr_mtx); } static __inline void mpr_unlock(struct mpr_softc *sc) { mtx_unlock(&sc->mpr_mtx); } #define MPR_INFO (1 << 0) /* Basic info */ #define MPR_FAULT (1 << 1) /* Hardware faults */ #define MPR_EVENT (1 << 2) /* Event data from the controller */ #define MPR_LOG (1 << 3) /* Log data from the controller */ #define MPR_RECOVERY (1 << 4) /* Command error recovery tracing */ #define MPR_ERROR (1 << 5) /* Parameter errors, programming bugs */ #define MPR_INIT (1 << 6) /* Things related to system init */ #define MPR_XINFO (1 << 7) /* More detailed/noisy info */ #define MPR_USER (1 << 8) /* Trace user-generated commands */ #define MPR_MAPPING (1 << 9) /* Trace device mappings */ #define MPR_TRACE (1 << 10) /* Function-by-function trace */ #define MPR_SSU_DISABLE_SSD_DISABLE_HDD 0 #define MPR_SSU_ENABLE_SSD_DISABLE_HDD 1 #define MPR_SSU_DISABLE_SSD_ENABLE_HDD 2 #define MPR_SSU_ENABLE_SSD_ENABLE_HDD 3 #define mpr_printf(sc, args...) \ device_printf((sc)->mpr_dev, ##args) #define mpr_print_field(sc, msg, args...) \ printf("\t" msg, ##args) #define mpr_vprintf(sc, args...) \ do { \ if (bootverbose) \ mpr_printf(sc, ##args); \ } while (0) #define mpr_dprint(sc, level, msg, args...) \ do { \ if ((sc)->mpr_debug & (level)) \ device_printf((sc)->mpr_dev, msg, ##args); \ } while (0) #define MPR_PRINTFIELD_START(sc, tag...) 
\ mpr_printf((sc), ##tag); \ mpr_print_field((sc), ":\n") #define MPR_PRINTFIELD_END(sc, tag) \ mpr_printf((sc), tag "\n") #define MPR_PRINTFIELD(sc, facts, attr, fmt) \ mpr_print_field((sc), #attr ": " #fmt "\n", (facts)->attr) static __inline void mpr_from_u64(uint64_t data, U64 *mpr) { (mpr)->High = htole32((uint32_t)((data) >> 32)); (mpr)->Low = htole32((uint32_t)((data) & 0xffffffff)); } static __inline uint64_t mpr_to_u64(U64 *data) { return (((uint64_t)le32toh(data->High) << 32) | le32toh(data->Low)); } static __inline void mpr_mask_intr(struct mpr_softc *sc) { uint32_t mask; mask = mpr_regread(sc, MPI2_HOST_INTERRUPT_MASK_OFFSET); mask |= MPI2_HIM_REPLY_INT_MASK; mpr_regwrite(sc, MPI2_HOST_INTERRUPT_MASK_OFFSET, mask); } static __inline void mpr_unmask_intr(struct mpr_softc *sc) { uint32_t mask; mask = mpr_regread(sc, MPI2_HOST_INTERRUPT_MASK_OFFSET); mask &= ~MPI2_HIM_REPLY_INT_MASK; mpr_regwrite(sc, MPI2_HOST_INTERRUPT_MASK_OFFSET, mask); } int mpr_pci_setup_interrupts(struct mpr_softc *sc); void mpr_pci_free_interrupts(struct mpr_softc *sc); int mpr_pci_restore(struct mpr_softc *sc); void mpr_get_tunables(struct mpr_softc *sc); int mpr_attach(struct mpr_softc *sc); int mpr_free(struct mpr_softc *sc); void mpr_intr(void *); void mpr_intr_msi(void *); void mpr_intr_locked(void *); int mpr_register_events(struct mpr_softc *, uint8_t *, mpr_evt_callback_t *, void *, struct mpr_event_handle **); int mpr_restart(struct mpr_softc *); int mpr_update_events(struct mpr_softc *, struct mpr_event_handle *, uint8_t *); int mpr_deregister_events(struct mpr_softc *, struct mpr_event_handle *); void mpr_build_nvme_prp(struct mpr_softc *sc, struct mpr_command *cm, Mpi26NVMeEncapsulatedRequest_t *nvme_encap_request, void *data, uint32_t data_in_sz, uint32_t data_out_sz); int mpr_push_sge(struct mpr_command *, MPI2_SGE_SIMPLE64 *, size_t, int); int mpr_push_ieee_sge(struct mpr_command *, void *, int); int mpr_add_dmaseg(struct mpr_command *, vm_paddr_t, size_t, u_int, 
int); int mpr_attach_sas(struct mpr_softc *sc); int mpr_detach_sas(struct mpr_softc *sc); int mpr_read_config_page(struct mpr_softc *, struct mpr_config_params *); int mpr_write_config_page(struct mpr_softc *, struct mpr_config_params *); void mpr_memaddr_cb(void *, bus_dma_segment_t *, int , int ); void mpr_memaddr_wait_cb(void *, bus_dma_segment_t *, int , int ); void mpr_init_sge(struct mpr_command *cm, void *req, void *sge); int mpr_attach_user(struct mpr_softc *); void mpr_detach_user(struct mpr_softc *); void mprsas_record_event(struct mpr_softc *sc, MPI2_EVENT_NOTIFICATION_REPLY *event_reply); int mpr_map_command(struct mpr_softc *sc, struct mpr_command *cm); int mpr_wait_command(struct mpr_softc *sc, struct mpr_command **cm, int timeout, int sleep_flag); int mpr_request_polled(struct mpr_softc *sc, struct mpr_command **cm); int mpr_config_get_bios_pg3(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2BiosPage3_t *config_page); int mpr_config_get_raid_volume_pg0(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2RaidVolPage0_t *config_page, u32 page_address); int mpr_config_get_ioc_pg8(struct mpr_softc *sc, Mpi2ConfigReply_t *, Mpi2IOCPage8_t *); int mpr_config_get_iounit_pg8(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2IOUnitPage8_t *config_page); int mpr_config_get_sas_device_pg0(struct mpr_softc *, Mpi2ConfigReply_t *, Mpi2SasDevicePage0_t *, u32 , u16 ); int mpr_config_get_pcie_device_pg0(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi26PCIeDevicePage0_t *config_page, u32 form, u16 handle); int mpr_config_get_pcie_device_pg2(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi26PCIeDevicePage2_t *config_page, u32 form, u16 handle); int mpr_config_get_dpm_pg0(struct mpr_softc *, Mpi2ConfigReply_t *, Mpi2DriverMappingPage0_t *, u16 ); int mpr_config_get_raid_volume_pg1(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2RaidVolPage1_t *config_page, u32 form, u16 handle); int mpr_config_get_volume_wwid(struct mpr_softc 
*sc, u16 volume_handle, u64 *wwid); int mpr_config_get_raid_pd_pg0(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2RaidPhysDiskPage0_t *config_page, u32 page_address); +int mpr_config_get_man_pg11(struct mpr_softc *sc, Mpi2ConfigReply_t *mpi_reply, + Mpi2ManufacturingPage11_t *config_page); void mprsas_ir_shutdown(struct mpr_softc *sc, int howto); int mpr_reinit(struct mpr_softc *sc); void mprsas_handle_reinit(struct mpr_softc *sc); void mpr_base_static_config_pages(struct mpr_softc *sc); int mpr_mapping_initialize(struct mpr_softc *); void mpr_mapping_topology_change_event(struct mpr_softc *, Mpi2EventDataSasTopologyChangeList_t *); void mpr_mapping_pcie_topology_change_event(struct mpr_softc *sc, Mpi26EventDataPCIeTopologyChangeList_t *event_data); void mpr_mapping_free_memory(struct mpr_softc *sc); int mpr_config_set_dpm_pg0(struct mpr_softc *, Mpi2ConfigReply_t *, Mpi2DriverMappingPage0_t *, u16 ); void mpr_mapping_exit(struct mpr_softc *); void mpr_mapping_check_devices(void *); int mpr_mapping_allocate_memory(struct mpr_softc *sc); unsigned int mpr_mapping_get_tid(struct mpr_softc *, uint64_t , u16); unsigned int mpr_mapping_get_tid_from_handle(struct mpr_softc *sc, u16 handle); unsigned int mpr_mapping_get_raid_tid(struct mpr_softc *sc, u64 wwid, u16 volHandle); unsigned int mpr_mapping_get_raid_tid_from_handle(struct mpr_softc *sc, u16 volHandle); void mpr_mapping_enclosure_dev_status_change_event(struct mpr_softc *, Mpi2EventDataSasEnclDevStatusChange_t *event_data); void mpr_mapping_ir_config_change_event(struct mpr_softc *sc, Mpi2EventDataIrConfigChangeList_t *event_data); void mprsas_evt_handler(struct mpr_softc *sc, uintptr_t data, MPI2_EVENT_NOTIFICATION_REPLY *event); void mprsas_prepare_remove(struct mprsas_softc *sassc, uint16_t handle); void mprsas_prepare_volume_remove(struct mprsas_softc *sassc, uint16_t handle); int mprsas_startup(struct mpr_softc *sc); struct mprsas_target * mprsas_find_target_by_handle(struct mprsas_softc *, int, 
uint16_t); void mprsas_realloc_targets(struct mpr_softc *sc, int maxtargets); struct mpr_command * mprsas_alloc_tm(struct mpr_softc *sc); void mprsas_free_tm(struct mpr_softc *sc, struct mpr_command *tm); void mprsas_release_simq_reinit(struct mprsas_softc *sassc); int mprsas_send_reset(struct mpr_softc *sc, struct mpr_command *tm, uint8_t type); SYSCTL_DECL(_hw_mpr); /* Compatibility shims for different OS versions */ #if __FreeBSD_version >= 800001 #define mpr_kproc_create(func, farg, proc_ptr, flags, stackpgs, fmtstr, arg) \ kproc_create(func, farg, proc_ptr, flags, stackpgs, fmtstr, arg) #define mpr_kproc_exit(arg) kproc_exit(arg) #else #define mpr_kproc_create(func, farg, proc_ptr, flags, stackpgs, fmtstr, arg) \ kthread_create(func, farg, proc_ptr, flags, stackpgs, fmtstr, arg) #define mpr_kproc_exit(arg) kthread_exit(arg) #endif #if defined(CAM_PRIORITY_XPT) #define MPR_PRIORITY_XPT CAM_PRIORITY_XPT #else #define MPR_PRIORITY_XPT 5 #endif #if __FreeBSD_version < 800107 // Prior to FreeBSD-8.0 spc3_flags was not defined.
#define spc3_flags reserved #define SPC3_SID_PROTECT 0x01 #define SPC3_SID_3PC 0x08 #define SPC3_SID_TPGS_MASK 0x30 #define SPC3_SID_TPGS_IMPLICIT 0x10 #define SPC3_SID_TPGS_EXPLICIT 0x20 #define SPC3_SID_ACC 0x40 #define SPC3_SID_SCCS 0x80 #define CAM_PRIORITY_NORMAL CAM_PRIORITY_NONE #endif /* Definitions for SCSI unmap translation to NVMe DSM command */ /* UNMAP block descriptor structure */ struct unmap_blk_desc { uint64_t slba; uint32_t nlb; uint32_t resv; }; /* UNMAP command's data */ struct unmap_parm_list { uint16_t unmap_data_len; uint16_t unmap_blk_desc_data_len; uint32_t resv; struct unmap_blk_desc desc[0]; }; /* SCSI ADDITIONAL SENSE Codes */ #define FIXED_SENSE_DATA 0x70 #define SCSI_ASC_NO_SENSE 0x00 #define SCSI_ASC_PERIPHERAL_DEV_WRITE_FAULT 0x03 #define SCSI_ASC_LUN_NOT_READY 0x04 #define SCSI_ASC_WARNING 0x0B #define SCSI_ASC_LOG_BLOCK_GUARD_CHECK_FAILED 0x10 #define SCSI_ASC_LOG_BLOCK_APPTAG_CHECK_FAILED 0x10 #define SCSI_ASC_LOG_BLOCK_REFTAG_CHECK_FAILED 0x10 #define SCSI_ASC_UNRECOVERED_READ_ERROR 0x11 #define SCSI_ASC_MISCOMPARE_DURING_VERIFY 0x1D #define SCSI_ASC_ACCESS_DENIED_INVALID_LUN_ID 0x20 #define SCSI_ASC_ILLEGAL_COMMAND 0x20 #define SCSI_ASC_ILLEGAL_BLOCK 0x21 #define SCSI_ASC_INVALID_CDB 0x24 #define SCSI_ASC_INVALID_LUN 0x25 #define SCSI_ASC_INVALID_PARAMETER 0x26 #define SCSI_ASC_FORMAT_COMMAND_FAILED 0x31 #define SCSI_ASC_INTERNAL_TARGET_FAILURE 0x44 /* SCSI ADDITIONAL SENSE Code Qualifiers */ #define SCSI_ASCQ_CAUSE_NOT_REPORTABLE 0x00 #define SCSI_ASCQ_FORMAT_COMMAND_FAILED 0x01 #define SCSI_ASCQ_LOG_BLOCK_GUARD_CHECK_FAILED 0x01 #define SCSI_ASCQ_LOG_BLOCK_APPTAG_CHECK_FAILED 0x02 #define SCSI_ASCQ_LOG_BLOCK_REFTAG_CHECK_FAILED 0x03 #define SCSI_ASCQ_FORMAT_IN_PROGRESS 0x04 #define SCSI_ASCQ_POWER_LOSS_EXPECTED 0x08 #define SCSI_ASCQ_INVALID_LUN_ID 0x09 #endif Index: releng/12.1/sys/dev/mps/mps.c =================================================================== --- releng/12.1/sys/dev/mps/mps.c (revision 352760) +++ 
releng/12.1/sys/dev/mps/mps.c (revision 352761) @@ -1,3245 +1,3267 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2009 Yahoo! Inc. * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2015 Avago Technologies * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ #include __FBSDID("$FreeBSD$"); /* Communications core for Avago Technologies (LSI) MPT2 */ /* TODO Move headers to mpsvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include static int mps_diag_reset(struct mps_softc *sc, int sleep_flag); static int mps_init_queues(struct mps_softc *sc); static void mps_resize_queues(struct mps_softc *sc); static int mps_message_unit_reset(struct mps_softc *sc, int sleep_flag); static int mps_transition_operational(struct mps_softc *sc); static int mps_iocfacts_allocate(struct mps_softc *sc, uint8_t attaching); static void mps_iocfacts_free(struct mps_softc *sc); static void mps_startup(void *arg); static int mps_send_iocinit(struct mps_softc *sc); static int mps_alloc_queues(struct mps_softc *sc); static int mps_alloc_hw_queues(struct mps_softc *sc); static int mps_alloc_replies(struct mps_softc *sc); static int mps_alloc_requests(struct mps_softc *sc); static int mps_attach_log(struct mps_softc *sc); static __inline void mps_complete_command(struct mps_softc *sc, struct mps_command *cm); static void mps_dispatch_event(struct mps_softc *sc, uintptr_t data, MPI2_EVENT_NOTIFICATION_REPLY *reply); static void mps_config_complete(struct mps_softc *sc, struct mps_command *cm); static void mps_periodic(void *); static int mps_reregister_events(struct mps_softc *sc); static void mps_enqueue_request(struct mps_softc *sc, struct mps_command *cm); static int mps_get_iocfacts(struct mps_softc *sc, MPI2_IOC_FACTS_REPLY *facts); static int mps_wait_db_ack(struct mps_softc *sc, int timeout, int sleep_flag); static int mps_debug_sysctl(SYSCTL_HANDLER_ARGS); static int 
mps_dump_reqs(SYSCTL_HANDLER_ARGS); static void mps_parse_debug(struct mps_softc *sc, char *list); SYSCTL_NODE(_hw, OID_AUTO, mps, CTLFLAG_RD, 0, "MPS Driver Parameters"); MALLOC_DEFINE(M_MPT2, "mps", "mpt2 driver memory"); MALLOC_DECLARE(M_MPSUSER); /* * Do a "Diagnostic Reset" aka a hard reset. This should get the chip out of * any state and back to its initialization state machine. */ static char mpt2_reset_magic[] = { 0x00, 0x0f, 0x04, 0x0b, 0x02, 0x07, 0x0d }; /* Added this union to smoothly convert le64toh cm->cm_desc.Words. * The compiler only supports uint64_t being passed as an argument. * Otherwise it throws the error * "aggregate value used where an integer was expected" */ typedef union _reply_descriptor { u64 word; struct { u32 low; u32 high; } u; } reply_descriptor, address_descriptor; /* Rate limit chain-fail messages to 1 per minute */ static struct timeval mps_chainfail_interval = { 60, 0 }; /* * sleep_flag can be either CAN_SLEEP or NO_SLEEP. * If this function is called from process context, it can sleep and * there is no harm in sleeping, but if it is called from an interrupt * handler, we cannot sleep and need the NO_SLEEP flag set. * Based on the sleep flag, the driver calls msleep, pause or DELAY. * msleep and pause are similar, but pause is used when mps_mtx * is not held by the driver. */ static int mps_diag_reset(struct mps_softc *sc, int sleep_flag) { uint32_t reg; int i, error, tries = 0; uint8_t first_wait_done = FALSE; mps_dprint(sc, MPS_INIT, "%s entered\n", __func__); /* Clear any pending interrupts */ mps_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); /* * Force NO_SLEEP for threads that are prohibited from sleeping, * e.g. threads in an interrupt handler are prohibited from sleeping.
*/ if (curthread->td_no_sleeping != 0) sleep_flag = NO_SLEEP; mps_dprint(sc, MPS_INIT, "sequence start, sleep_flag= %d\n", sleep_flag); /* Push the magic sequence */ error = ETIMEDOUT; while (tries++ < 20) { for (i = 0; i < sizeof(mpt2_reset_magic); i++) mps_regwrite(sc, MPI2_WRITE_SEQUENCE_OFFSET, mpt2_reset_magic[i]); /* wait 100 msec */ if (mtx_owned(&sc->mps_mtx) && sleep_flag == CAN_SLEEP) msleep(&sc->msleep_fake_chan, &sc->mps_mtx, 0, "mpsdiag", hz/10); else if (sleep_flag == CAN_SLEEP) pause("mpsdiag", hz/10); else DELAY(100 * 1000); reg = mps_regread(sc, MPI2_HOST_DIAGNOSTIC_OFFSET); if (reg & MPI2_DIAG_DIAG_WRITE_ENABLE) { error = 0; break; } } if (error) { mps_dprint(sc, MPS_INIT, "sequence failed, error=%d, exit\n", error); return (error); } /* Send the actual reset. XXX need to refresh the reg? */ reg |= MPI2_DIAG_RESET_ADAPTER; mps_dprint(sc, MPS_INIT, "sequence success, sending reset, reg= 0x%x\n", reg); mps_regwrite(sc, MPI2_HOST_DIAGNOSTIC_OFFSET, reg); /* Wait up to 300 seconds in 50ms intervals */ error = ETIMEDOUT; for (i = 0; i < 6000; i++) { /* * Wait 50 msec. If this is the first time through, wait 256 * msec to satisfy Diag Reset timing requirements. */ if (first_wait_done) { if (mtx_owned(&sc->mps_mtx) && sleep_flag == CAN_SLEEP) msleep(&sc->msleep_fake_chan, &sc->mps_mtx, 0, "mpsdiag", hz/20); else if (sleep_flag == CAN_SLEEP) pause("mpsdiag", hz/20); else DELAY(50 * 1000); } else { DELAY(256 * 1000); first_wait_done = TRUE; } /* * Check for the RESET_ADAPTER bit to be cleared first, then * wait for the RESET state to be cleared, which takes a little * longer. 
*/ reg = mps_regread(sc, MPI2_HOST_DIAGNOSTIC_OFFSET); if (reg & MPI2_DIAG_RESET_ADAPTER) { continue; } reg = mps_regread(sc, MPI2_DOORBELL_OFFSET); if ((reg & MPI2_IOC_STATE_MASK) != MPI2_IOC_STATE_RESET) { error = 0; break; } } if (error) { mps_dprint(sc, MPS_INIT, "reset failed, error= %d, exit\n", error); return (error); } mps_regwrite(sc, MPI2_WRITE_SEQUENCE_OFFSET, 0x0); mps_dprint(sc, MPS_INIT, "diag reset success, exit\n"); return (0); } static int mps_message_unit_reset(struct mps_softc *sc, int sleep_flag) { int error; MPS_FUNCTRACE(sc); mps_dprint(sc, MPS_INIT, "%s entered\n", __func__); error = 0; mps_regwrite(sc, MPI2_DOORBELL_OFFSET, MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET << MPI2_DOORBELL_FUNCTION_SHIFT); if (mps_wait_db_ack(sc, 5, sleep_flag) != 0) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "Doorbell handshake failed\n"); error = ETIMEDOUT; } mps_dprint(sc, MPS_INIT, "%s exit\n", __func__); return (error); } static int mps_transition_ready(struct mps_softc *sc) { uint32_t reg, state; int error, tries = 0; int sleep_flags; MPS_FUNCTRACE(sc); /* If we are in attach call, do not sleep */ sleep_flags = (sc->mps_flags & MPS_FLAGS_ATTACH_DONE) ? CAN_SLEEP:NO_SLEEP; error = 0; mps_dprint(sc, MPS_INIT, "%s entered, sleep_flags= %d\n", __func__, sleep_flags); while (tries++ < 1200) { reg = mps_regread(sc, MPI2_DOORBELL_OFFSET); mps_dprint(sc, MPS_INIT, " Doorbell= 0x%x\n", reg); /* * Ensure the IOC is ready to talk. If it's not, try * resetting it. */ if (reg & MPI2_DOORBELL_USED) { mps_dprint(sc, MPS_INIT, " Not ready, sending diag " "reset\n"); mps_diag_reset(sc, sleep_flags); DELAY(50000); continue; } /* Is the adapter owned by another peer? 
*/ if ((reg & MPI2_DOORBELL_WHO_INIT_MASK) == (MPI2_WHOINIT_PCI_PEER << MPI2_DOORBELL_WHO_INIT_SHIFT)) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "IOC is under the " "control of another peer host, aborting " "initialization.\n"); error = ENXIO; break; } state = reg & MPI2_IOC_STATE_MASK; if (state == MPI2_IOC_STATE_READY) { /* Ready to go! */ error = 0; break; } else if (state == MPI2_IOC_STATE_FAULT) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "IOC in fault " "state 0x%x, resetting\n", state & MPI2_DOORBELL_FAULT_CODE_MASK); mps_diag_reset(sc, sleep_flags); } else if (state == MPI2_IOC_STATE_OPERATIONAL) { /* Need to take ownership */ mps_message_unit_reset(sc, sleep_flags); } else if (state == MPI2_IOC_STATE_RESET) { /* Wait a bit, IOC might be in transition */ mps_dprint(sc, MPS_INIT|MPS_FAULT, "IOC in unexpected reset state\n"); } else { mps_dprint(sc, MPS_INIT|MPS_FAULT, "IOC in unknown state 0x%x\n", state); error = EINVAL; break; } /* Wait 50ms for things to settle down. */ DELAY(50000); } if (error) mps_dprint(sc, MPS_INIT|MPS_FAULT, "Cannot transition IOC to ready\n"); mps_dprint(sc, MPS_INIT, "%s exit\n", __func__); return (error); } static int mps_transition_operational(struct mps_softc *sc) { uint32_t reg, state; int error; MPS_FUNCTRACE(sc); error = 0; reg = mps_regread(sc, MPI2_DOORBELL_OFFSET); mps_dprint(sc, MPS_INIT, "%s entered, Doorbell= 0x%x\n", __func__, reg); state = reg & MPI2_IOC_STATE_MASK; if (state != MPI2_IOC_STATE_READY) { mps_dprint(sc, MPS_INIT, "IOC not ready\n"); if ((error = mps_transition_ready(sc)) != 0) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "failed to transition ready, exit\n"); return (error); } } error = mps_send_iocinit(sc); mps_dprint(sc, MPS_INIT, "%s exit\n", __func__); return (error); } static void mps_resize_queues(struct mps_softc *sc) { u_int reqcr, prireqcr, maxio, sges_per_frame; /* * Size the queues. Since the reply queues always need one free * entry, we'll deduct one reply message here. 
The LSI documents * suggest instead to add a count to the request queue, but I think * that it's better to deduct from reply queue. */ prireqcr = MAX(1, sc->max_prireqframes); prireqcr = MIN(prireqcr, sc->facts->HighPriorityCredit); reqcr = MAX(2, sc->max_reqframes); reqcr = MIN(reqcr, sc->facts->RequestCredit); sc->num_reqs = prireqcr + reqcr; sc->num_prireqs = prireqcr; sc->num_replies = MIN(sc->max_replyframes + sc->max_evtframes, sc->facts->MaxReplyDescriptorPostQueueDepth) - 1; /* Store the request frame size in bytes rather than as 32bit words */ sc->reqframesz = sc->facts->IOCRequestFrameSize * 4; /* * Max IO Size is Page Size * the following: * ((SGEs per frame - 1 for chain element) * Max Chain Depth) * + 1 for no chain needed in last frame * * If user suggests a Max IO size to use, use the smaller of the * user's value and the calculated value as long as the user's * value is larger than 0. The user's value is in pages. */ sges_per_frame = sc->reqframesz / sizeof(MPI2_SGE_SIMPLE64) - 1; maxio = (sges_per_frame * sc->facts->MaxChainDepth + 1) * PAGE_SIZE; /* * If I/O size limitation requested, then use it and pass up to CAM. * If not, use MAXPHYS as an optimization hint, but report HW limit. */ if (sc->max_io_pages > 0) { maxio = min(maxio, sc->max_io_pages * PAGE_SIZE); sc->maxio = maxio; } else { sc->maxio = maxio; maxio = min(maxio, MAXPHYS); } sc->num_chains = (maxio / PAGE_SIZE + sges_per_frame - 2) / sges_per_frame * reqcr; if (sc->max_chains > 0 && sc->max_chains < sc->num_chains) sc->num_chains = sc->max_chains; /* * Figure out the number of MSIx-based queues. If the firmware or * user has done something crazy and not allowed enough credit for * the queues to be useful then don't enable multi-queue. 
*/ if (sc->facts->MaxMSIxVectors < 2) sc->msi_msgs = 1; if (sc->msi_msgs > 1) { sc->msi_msgs = MIN(sc->msi_msgs, mp_ncpus); sc->msi_msgs = MIN(sc->msi_msgs, sc->facts->MaxMSIxVectors); if (sc->num_reqs / sc->msi_msgs < 2) sc->msi_msgs = 1; } mps_dprint(sc, MPS_INIT, "Sized queues to q=%d reqs=%d replies=%d\n", sc->msi_msgs, sc->num_reqs, sc->num_replies); } /* * This is called during attach and when re-initializing due to a Diag Reset. * IOC Facts is used to allocate many of the structures needed by the driver. * If called from attach, de-allocation is not required because the driver has * not allocated any structures yet, but if called from a Diag Reset, previously * allocated structures based on IOC Facts will need to be freed and re- * allocated based on the latest IOC Facts. */ static int mps_iocfacts_allocate(struct mps_softc *sc, uint8_t attaching) { int error; Mpi2IOCFactsReply_t saved_facts; uint8_t saved_mode, reallocating; mps_dprint(sc, MPS_INIT|MPS_TRACE, "%s entered\n", __func__); /* Save old IOC Facts and then only reallocate if Facts have changed */ if (!attaching) { bcopy(sc->facts, &saved_facts, sizeof(MPI2_IOC_FACTS_REPLY)); } /* * Get IOC Facts. In all cases throughout this function, panic if doing * a re-initialization and only return the error if attaching so the OS * can handle it. 
*/ if ((error = mps_get_iocfacts(sc, sc->facts)) != 0) { if (attaching) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "Failed to get " "IOC Facts with error %d, exit\n", error); return (error); } else { panic("%s failed to get IOC Facts with error %d\n", __func__, error); } } MPS_DPRINT_PAGE(sc, MPS_XINFO, iocfacts, sc->facts); snprintf(sc->fw_version, sizeof(sc->fw_version), "%02d.%02d.%02d.%02d", sc->facts->FWVersion.Struct.Major, sc->facts->FWVersion.Struct.Minor, sc->facts->FWVersion.Struct.Unit, sc->facts->FWVersion.Struct.Dev); mps_dprint(sc, MPS_INFO, "Firmware: %s, Driver: %s\n", sc->fw_version, MPS_DRIVER_VERSION); mps_dprint(sc, MPS_INFO, "IOCCapabilities: %b\n", sc->facts->IOCCapabilities, "\20" "\3ScsiTaskFull" "\4DiagTrace" "\5SnapBuf" "\6ExtBuf" "\7EEDP" "\10BiDirTarg" "\11Multicast" "\14TransRetry" "\15IR" "\16EventReplay" "\17RaidAccel" "\20MSIXIndex" "\21HostDisc"); /* * If the chip doesn't support event replay then a hard reset will be * required to trigger a full discovery. Do the reset here then * retransition to Ready. A hard reset might have already been done, * but it doesn't hurt to do it again. Only do this if attaching, not * for a Diag Reset. */ if (attaching && ((sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_EVENT_REPLAY) == 0)) { mps_dprint(sc, MPS_INIT, "No event replay, resetting\n"); mps_diag_reset(sc, NO_SLEEP); if ((error = mps_transition_ready(sc)) != 0) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "Failed to " "transition to ready with error %d, exit\n", error); return (error); } } /* * Set flag if IR Firmware is loaded. If the RAID Capability has * changed from the previous IOC Facts, log a warning, but only if * checking this after a Diag Reset and not during attach. 
*/ saved_mode = sc->ir_firmware; if (sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID) sc->ir_firmware = 1; if (!attaching) { if (sc->ir_firmware != saved_mode) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "new IR/IT mode " "in IOC Facts does not match previous mode\n"); } } /* Only deallocate and reallocate if relevant IOC Facts have changed */ reallocating = FALSE; sc->mps_flags &= ~MPS_FLAGS_REALLOCATED; if ((!attaching) && ((saved_facts.MsgVersion != sc->facts->MsgVersion) || (saved_facts.HeaderVersion != sc->facts->HeaderVersion) || (saved_facts.MaxChainDepth != sc->facts->MaxChainDepth) || (saved_facts.RequestCredit != sc->facts->RequestCredit) || (saved_facts.ProductID != sc->facts->ProductID) || (saved_facts.IOCCapabilities != sc->facts->IOCCapabilities) || (saved_facts.IOCRequestFrameSize != sc->facts->IOCRequestFrameSize) || (saved_facts.MaxTargets != sc->facts->MaxTargets) || (saved_facts.MaxSasExpanders != sc->facts->MaxSasExpanders) || (saved_facts.MaxEnclosures != sc->facts->MaxEnclosures) || (saved_facts.HighPriorityCredit != sc->facts->HighPriorityCredit) || (saved_facts.MaxReplyDescriptorPostQueueDepth != sc->facts->MaxReplyDescriptorPostQueueDepth) || (saved_facts.ReplyFrameSize != sc->facts->ReplyFrameSize) || (saved_facts.MaxVolumes != sc->facts->MaxVolumes) || (saved_facts.MaxPersistentEntries != sc->facts->MaxPersistentEntries))) { reallocating = TRUE; /* Record that we reallocated everything */ sc->mps_flags |= MPS_FLAGS_REALLOCATED; } /* * Some things should be done if attaching or re-allocating after a Diag * Reset, but are not needed after a Diag Reset if the FW has not * changed. */ if (attaching || reallocating) { /* * Check if controller supports FW diag buffers and set flag to * enable each type. */ if (sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_DIAG_TRACE_BUFFER) sc->fw_diag_buffer_list[MPI2_DIAG_BUF_TYPE_TRACE]. 
enabled = TRUE; if (sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_SNAPSHOT_BUFFER) sc->fw_diag_buffer_list[MPI2_DIAG_BUF_TYPE_SNAPSHOT]. enabled = TRUE; if (sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_EXTENDED_BUFFER) sc->fw_diag_buffer_list[MPI2_DIAG_BUF_TYPE_EXTENDED]. enabled = TRUE; /* * Set flag if EEDP is supported and if TLR is supported. */ if (sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_EEDP) sc->eedp_enabled = TRUE; if (sc->facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_TLR) sc->control_TLR = TRUE; mps_resize_queues(sc); /* * Initialize all Tail Queues */ TAILQ_INIT(&sc->req_list); TAILQ_INIT(&sc->high_priority_req_list); TAILQ_INIT(&sc->chain_list); TAILQ_INIT(&sc->tm_list); } /* * If doing a Diag Reset and the FW is significantly different * (reallocating will be set above in IOC Facts comparison), then all * buffers based on the IOC Facts will need to be freed before they are * reallocated. */ if (reallocating) { mps_iocfacts_free(sc); mpssas_realloc_targets(sc, saved_facts.MaxTargets + saved_facts.MaxVolumes); } /* * Any deallocation has been completed. Now start reallocating * if needed. Will only need to reallocate if attaching or if the new * IOC Facts are different from the previous IOC Facts after a Diag * Reset. Targets have already been allocated above if needed. */ error = 0; while (attaching || reallocating) { if ((error = mps_alloc_hw_queues(sc)) != 0) break; if ((error = mps_alloc_replies(sc)) != 0) break; if ((error = mps_alloc_requests(sc)) != 0) break; if ((error = mps_alloc_queues(sc)) != 0) break; break; } if (error) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "Failed to alloc queues with error %d\n", error); mps_free(sc); return (error); } /* Always initialize the queues */ bzero(sc->free_queue, sc->fqdepth * 4); mps_init_queues(sc); /* * Always get the chip out of the reset state, but only panic if not * attaching. If attaching and there is an error, that is handled by * the OS. 
*/ error = mps_transition_operational(sc); if (error != 0) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "Failed to " "transition to operational with error %d\n", error); mps_free(sc); return (error); } /* * Finish the queue initialization. * These are set here instead of in mps_init_queues() because the * IOC resets these values during the state transition in * mps_transition_operational(). The free index is set to 1 * because the corresponding index in the IOC is set to 0, and the * IOC treats the queues as full if both are set to the same value. * Hence the reason that the queue can't hold all of the possible * replies. */ sc->replypostindex = 0; mps_regwrite(sc, MPI2_REPLY_FREE_HOST_INDEX_OFFSET, sc->replyfreeindex); mps_regwrite(sc, MPI2_REPLY_POST_HOST_INDEX_OFFSET, 0); /* * Attach the subsystems so they can prepare their event masks. * XXX Should be dynamic so that IM/IR and user modules can attach */ error = 0; while (attaching) { mps_dprint(sc, MPS_INIT, "Attaching subsystems\n"); if ((error = mps_attach_log(sc)) != 0) break; if ((error = mps_attach_sas(sc)) != 0) break; if ((error = mps_attach_user(sc)) != 0) break; break; } if (error) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "Failed to attach all " "subsystems: error %d\n", error); mps_free(sc); return (error); } /* * XXX If the number of MSI-X vectors changes during re-init, this * won't see it and adjust. */ if (attaching && (error = mps_pci_setup_interrupts(sc)) != 0) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "Failed to setup " "interrupts\n"); mps_free(sc); return (error); } /* * Set flag if this is a WD controller. This shouldn't ever change, but * reset it after a Diag Reset, just in case. */ sc->WD_available = FALSE; if (pci_get_device(sc->mps_dev) == MPI2_MFGPAGE_DEVID_SSS6200) sc->WD_available = TRUE; return (error); } /* * This is called if memory is being freed (during detach for example) and when * buffers need to be reallocated due to a Diag Reset. 
*/ static void mps_iocfacts_free(struct mps_softc *sc) { struct mps_command *cm; int i; mps_dprint(sc, MPS_TRACE, "%s\n", __func__); if (sc->free_busaddr != 0) bus_dmamap_unload(sc->queues_dmat, sc->queues_map); if (sc->free_queue != NULL) bus_dmamem_free(sc->queues_dmat, sc->free_queue, sc->queues_map); if (sc->queues_dmat != NULL) bus_dma_tag_destroy(sc->queues_dmat); if (sc->chain_frames != NULL) { bus_dmamap_unload(sc->chain_dmat, sc->chain_map); bus_dmamem_free(sc->chain_dmat, sc->chain_frames, sc->chain_map); } if (sc->chain_dmat != NULL) bus_dma_tag_destroy(sc->chain_dmat); if (sc->sense_busaddr != 0) bus_dmamap_unload(sc->sense_dmat, sc->sense_map); if (sc->sense_frames != NULL) bus_dmamem_free(sc->sense_dmat, sc->sense_frames, sc->sense_map); if (sc->sense_dmat != NULL) bus_dma_tag_destroy(sc->sense_dmat); if (sc->reply_busaddr != 0) bus_dmamap_unload(sc->reply_dmat, sc->reply_map); if (sc->reply_frames != NULL) bus_dmamem_free(sc->reply_dmat, sc->reply_frames, sc->reply_map); if (sc->reply_dmat != NULL) bus_dma_tag_destroy(sc->reply_dmat); if (sc->req_busaddr != 0) bus_dmamap_unload(sc->req_dmat, sc->req_map); if (sc->req_frames != NULL) bus_dmamem_free(sc->req_dmat, sc->req_frames, sc->req_map); if (sc->req_dmat != NULL) bus_dma_tag_destroy(sc->req_dmat); if (sc->chains != NULL) free(sc->chains, M_MPT2); if (sc->commands != NULL) { for (i = 1; i < sc->num_reqs; i++) { cm = &sc->commands[i]; bus_dmamap_destroy(sc->buffer_dmat, cm->cm_dmamap); } free(sc->commands, M_MPT2); } if (sc->buffer_dmat != NULL) bus_dma_tag_destroy(sc->buffer_dmat); mps_pci_free_interrupts(sc); free(sc->queues, M_MPT2); sc->queues = NULL; } /* * The terms diag reset and hard reset are used interchangeably in the MPI * docs to mean resetting the controller chip. In this code diag reset * cleans everything up, and the hard reset function just sends the reset * sequence to the chip. 
This should probably be refactored so that every * subsystem gets a reset notification of some sort, and can clean up * appropriately. */ int mps_reinit(struct mps_softc *sc) { int error; struct mpssas_softc *sassc; sassc = sc->sassc; MPS_FUNCTRACE(sc); mtx_assert(&sc->mps_mtx, MA_OWNED); mps_dprint(sc, MPS_INIT|MPS_INFO, "Reinitializing controller\n"); if (sc->mps_flags & MPS_FLAGS_DIAGRESET) { mps_dprint(sc, MPS_INIT, "Reset already in progress\n"); return 0; } /* make sure the completion callbacks can recognize they're getting * a NULL cm_reply due to a reset. */ sc->mps_flags |= MPS_FLAGS_DIAGRESET; /* * Mask interrupts here. */ mps_dprint(sc, MPS_INIT, "masking interrupts and resetting\n"); mps_mask_intr(sc); error = mps_diag_reset(sc, CAN_SLEEP); if (error != 0) { /* XXXSL No need to panic here */ panic("%s hard reset failed with error %d\n", __func__, error); } /* Restore the PCI state, including the MSI-X registers */ mps_pci_restore(sc); /* Give the I/O subsystem special priority to get itself prepared */ mpssas_handle_reinit(sc); /* * Get IOC Facts and allocate all structures based on this information. * The attach function will also call mps_iocfacts_allocate at startup. * If relevant values have changed in IOC Facts, this function will free * all of the memory based on IOC Facts and reallocate that memory. */ if ((error = mps_iocfacts_allocate(sc, FALSE)) != 0) { panic("%s IOC Facts based allocation failed with error %d\n", __func__, error); } /* * Mapping structures will be re-allocated after getting IOC Page8, so * free these structures here. */ mps_mapping_exit(sc); /* * The static page function currently read is IOC Page8. Others can be * added in future. It's possible that the values in IOC Page8 have * changed after a Diag Reset due to user modification, so always read * these. Interrupts are masked, so unmask them before getting config * pages. 
*/ mps_unmask_intr(sc); sc->mps_flags &= ~MPS_FLAGS_DIAGRESET; mps_base_static_config_pages(sc); /* * Some mapping info is based in IOC Page8 data, so re-initialize the * mapping tables. */ mps_mapping_initialize(sc); /* * Restart will reload the event masks clobbered by the reset, and * then enable the port. */ mps_reregister_events(sc); /* the end of discovery will release the simq, so we're done. */ mps_dprint(sc, MPS_INIT|MPS_XINFO, "Finished sc %p post %u free %u\n", sc, sc->replypostindex, sc->replyfreeindex); mpssas_release_simq_reinit(sassc); mps_dprint(sc, MPS_INIT, "%s exit\n", __func__); return 0; } /* Wait for the chip to ACK a word that we've put into its FIFO * Wait for <timeout> seconds. In single loop wait for busy loop * for 500 microseconds. * Total is [ 0.5 * (2000 * <timeout>) ] in milliseconds. * */ static int mps_wait_db_ack(struct mps_softc *sc, int timeout, int sleep_flag) { u32 cntdn, count; u32 int_status; u32 doorbell; count = 0; cntdn = (sleep_flag == CAN_SLEEP) ? 1000*timeout : 2000*timeout; do { int_status = mps_regread(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET); if (!(int_status & MPI2_HIS_SYS2IOC_DB_STATUS)) { mps_dprint(sc, MPS_TRACE, "%s: successful count(%d), timeout(%d)\n", __func__, count, timeout); return 0; } else if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) { doorbell = mps_regread(sc, MPI2_DOORBELL_OFFSET); if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) { mps_dprint(sc, MPS_FAULT, "fault_state(0x%04x)!\n", doorbell); return (EFAULT); } } else if (int_status == 0xFFFFFFFF) goto out; /* If it can sleep, sleep for 1 millisecond, else busy loop for * 0.5 millisecond */ if (mtx_owned(&sc->mps_mtx) && sleep_flag == CAN_SLEEP) msleep(&sc->msleep_fake_chan, &sc->mps_mtx, 0, "mpsdba", hz/1000); else if (sleep_flag == CAN_SLEEP) pause("mpsdba", hz/1000); else DELAY(500); count++; } while (--cntdn); out: mps_dprint(sc, MPS_FAULT, "%s: failed due to timeout count(%d), " "int_status(%x)!\n", __func__, count, int_status); return (ETIMEDOUT); } /* 
Wait for the chip to signal that the next word in its FIFO can be fetched */ static int mps_wait_db_int(struct mps_softc *sc) { int retry; for (retry = 0; retry < MPS_DB_MAX_WAIT; retry++) { if ((mps_regread(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET) & MPI2_HIS_IOC2SYS_DB_STATUS) != 0) return (0); DELAY(2000); } return (ETIMEDOUT); } /* Step through the synchronous command state machine, i.e. "Doorbell mode" */ static int mps_request_sync(struct mps_softc *sc, void *req, MPI2_DEFAULT_REPLY *reply, int req_sz, int reply_sz, int timeout) { uint32_t *data32; uint16_t *data16; int i, count, ioc_sz, residual; int sleep_flags = CAN_SLEEP; if (curthread->td_no_sleeping != 0) sleep_flags = NO_SLEEP; /* Step 1 */ mps_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); /* Step 2 */ if (mps_regread(sc, MPI2_DOORBELL_OFFSET) & MPI2_DOORBELL_USED) return (EBUSY); /* Step 3 * Announce that a message is coming through the doorbell. Messages * are pushed at 32bit words, so round up if needed. */ count = (req_sz + 3) / 4; mps_regwrite(sc, MPI2_DOORBELL_OFFSET, (MPI2_FUNCTION_HANDSHAKE << MPI2_DOORBELL_FUNCTION_SHIFT) | (count << MPI2_DOORBELL_ADD_DWORDS_SHIFT)); /* Step 4 */ if (mps_wait_db_int(sc) || (mps_regread(sc, MPI2_DOORBELL_OFFSET) & MPI2_DOORBELL_USED) == 0) { mps_dprint(sc, MPS_FAULT, "Doorbell failed to activate\n"); return (ENXIO); } mps_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); if (mps_wait_db_ack(sc, 5, sleep_flags) != 0) { mps_dprint(sc, MPS_FAULT, "Doorbell handshake failed\n"); return (ENXIO); } /* Step 5 */ /* Clock out the message data synchronously in 32-bit dwords*/ data32 = (uint32_t *)req; for (i = 0; i < count; i++) { mps_regwrite(sc, MPI2_DOORBELL_OFFSET, htole32(data32[i])); if (mps_wait_db_ack(sc, 5, sleep_flags) != 0) { mps_dprint(sc, MPS_FAULT, "Timeout while writing doorbell\n"); return (ENXIO); } } /* Step 6 */ /* Clock in the reply in 16-bit words. 
The total length of the * message is always in the 4th byte, so clock out the first 2 words * manually, then loop the rest. */ data16 = (uint16_t *)reply; if (mps_wait_db_int(sc) != 0) { mps_dprint(sc, MPS_FAULT, "Timeout reading doorbell 0\n"); return (ENXIO); } data16[0] = mps_regread(sc, MPI2_DOORBELL_OFFSET) & MPI2_DOORBELL_DATA_MASK; mps_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); if (mps_wait_db_int(sc) != 0) { mps_dprint(sc, MPS_FAULT, "Timeout reading doorbell 1\n"); return (ENXIO); } data16[1] = mps_regread(sc, MPI2_DOORBELL_OFFSET) & MPI2_DOORBELL_DATA_MASK; mps_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); /* Number of 32bit words in the message */ ioc_sz = reply->MsgLength; /* * Figure out how many 16bit words to clock in without overrunning. * The precision loss with dividing reply_sz can safely be * ignored because the messages can only be multiples of 32bits. */ residual = 0; count = MIN((reply_sz / 4), ioc_sz) * 2; if (count < ioc_sz * 2) { residual = ioc_sz * 2 - count; mps_dprint(sc, MPS_ERROR, "Driver error, throwing away %d " "residual message words\n", residual); } for (i = 2; i < count; i++) { if (mps_wait_db_int(sc) != 0) { mps_dprint(sc, MPS_FAULT, "Timeout reading doorbell %d\n", i); return (ENXIO); } data16[i] = mps_regread(sc, MPI2_DOORBELL_OFFSET) & MPI2_DOORBELL_DATA_MASK; mps_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); } /* * Pull out residual words that won't fit into the provided buffer. * This keeps the chip from hanging due to a driver programming * error. 
*/ while (residual--) { if (mps_wait_db_int(sc) != 0) { mps_dprint(sc, MPS_FAULT, "Timeout reading doorbell\n"); return (ENXIO); } (void)mps_regread(sc, MPI2_DOORBELL_OFFSET); mps_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); } /* Step 7 */ if (mps_wait_db_int(sc) != 0) { mps_dprint(sc, MPS_FAULT, "Timeout waiting to exit doorbell\n"); return (ENXIO); } if (mps_regread(sc, MPI2_DOORBELL_OFFSET) & MPI2_DOORBELL_USED) mps_dprint(sc, MPS_FAULT, "Warning, doorbell still active\n"); mps_regwrite(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET, 0x0); return (0); } static void mps_enqueue_request(struct mps_softc *sc, struct mps_command *cm) { reply_descriptor rd; MPS_FUNCTRACE(sc); mps_dprint(sc, MPS_TRACE, "SMID %u cm %p ccb %p\n", cm->cm_desc.Default.SMID, cm, cm->cm_ccb); if (sc->mps_flags & MPS_FLAGS_ATTACH_DONE && !(sc->mps_flags & MPS_FLAGS_SHUTDOWN)) mtx_assert(&sc->mps_mtx, MA_OWNED); if (++sc->io_cmds_active > sc->io_cmds_highwater) sc->io_cmds_highwater++; rd.u.low = cm->cm_desc.Words.Low; rd.u.high = cm->cm_desc.Words.High; rd.word = htole64(rd.word); KASSERT(cm->cm_state == MPS_CM_STATE_BUSY, ("command not busy\n")); cm->cm_state = MPS_CM_STATE_INQUEUE; /* TODO-We may need to make below regwrite atomic */ mps_regwrite(sc, MPI2_REQUEST_DESCRIPTOR_POST_LOW_OFFSET, rd.u.low); mps_regwrite(sc, MPI2_REQUEST_DESCRIPTOR_POST_HIGH_OFFSET, rd.u.high); } /* * Just the FACTS, ma'am. 
*/ static int mps_get_iocfacts(struct mps_softc *sc, MPI2_IOC_FACTS_REPLY *facts) { MPI2_DEFAULT_REPLY *reply; MPI2_IOC_FACTS_REQUEST request; int error, req_sz, reply_sz; MPS_FUNCTRACE(sc); mps_dprint(sc, MPS_INIT, "%s entered\n", __func__); req_sz = sizeof(MPI2_IOC_FACTS_REQUEST); reply_sz = sizeof(MPI2_IOC_FACTS_REPLY); reply = (MPI2_DEFAULT_REPLY *)facts; bzero(&request, req_sz); request.Function = MPI2_FUNCTION_IOC_FACTS; error = mps_request_sync(sc, &request, reply, req_sz, reply_sz, 5); mps_dprint(sc, MPS_INIT, "%s exit error= %d\n", __func__, error); return (error); } static int mps_send_iocinit(struct mps_softc *sc) { MPI2_IOC_INIT_REQUEST init; MPI2_DEFAULT_REPLY reply; int req_sz, reply_sz, error; struct timeval now; uint64_t time_in_msec; MPS_FUNCTRACE(sc); mps_dprint(sc, MPS_INIT, "%s entered\n", __func__); /* Do a quick sanity check on proper initialization */ if ((sc->pqdepth == 0) || (sc->fqdepth == 0) || (sc->reqframesz == 0) || (sc->replyframesz == 0)) { mps_dprint(sc, MPS_INIT|MPS_ERROR, "Driver not fully initialized for IOCInit\n"); return (EINVAL); } req_sz = sizeof(MPI2_IOC_INIT_REQUEST); reply_sz = sizeof(MPI2_IOC_INIT_REPLY); bzero(&init, req_sz); bzero(&reply, reply_sz); /* * Fill in the init block. Note that most addresses are * deliberately in the lower 32bits of memory. This is a micro- * optimization for PCI/PCIX, though it's not clear if it helps PCIe. 
*/ init.Function = MPI2_FUNCTION_IOC_INIT; init.WhoInit = MPI2_WHOINIT_HOST_DRIVER; init.MsgVersion = htole16(MPI2_VERSION); init.HeaderVersion = htole16(MPI2_HEADER_VERSION); init.SystemRequestFrameSize = htole16((uint16_t)(sc->reqframesz / 4)); init.ReplyDescriptorPostQueueDepth = htole16(sc->pqdepth); init.ReplyFreeQueueDepth = htole16(sc->fqdepth); init.SenseBufferAddressHigh = 0; init.SystemReplyAddressHigh = 0; init.SystemRequestFrameBaseAddress.High = 0; init.SystemRequestFrameBaseAddress.Low = htole32((uint32_t)sc->req_busaddr); init.ReplyDescriptorPostQueueAddress.High = 0; init.ReplyDescriptorPostQueueAddress.Low = htole32((uint32_t)sc->post_busaddr); init.ReplyFreeQueueAddress.High = 0; init.ReplyFreeQueueAddress.Low = htole32((uint32_t)sc->free_busaddr); getmicrotime(&now); time_in_msec = (now.tv_sec * 1000 + now.tv_usec/1000); init.TimeStamp.High = htole32((time_in_msec >> 32) & 0xFFFFFFFF); init.TimeStamp.Low = htole32(time_in_msec & 0xFFFFFFFF); error = mps_request_sync(sc, &init, &reply, req_sz, reply_sz, 5); if ((reply.IOCStatus & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) error = ENXIO; mps_dprint(sc, MPS_INIT, "IOCInit status= 0x%x\n", reply.IOCStatus); mps_dprint(sc, MPS_INIT, "%s exit\n", __func__); return (error); } void mps_memaddr_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error) { bus_addr_t *addr; addr = arg; *addr = segs[0].ds_addr; } void mps_memaddr_wait_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error) { struct mps_busdma_context *ctx; int need_unload, need_free; ctx = (struct mps_busdma_context *)arg; need_unload = 0; need_free = 0; mps_lock(ctx->softc); ctx->error = error; ctx->completed = 1; if ((error == 0) && (ctx->abandoned == 0)) { *ctx->addr = segs[0].ds_addr; } else { if (nsegs != 0) need_unload = 1; if (ctx->abandoned != 0) need_free = 1; } if (need_free == 0) wakeup(ctx); mps_unlock(ctx->softc); if (need_unload != 0) { bus_dmamap_unload(ctx->buffer_dmat, ctx->buffer_dmamap); *ctx->addr = 0; } if 
(need_free != 0) free(ctx, M_MPSUSER); } static int mps_alloc_queues(struct mps_softc *sc) { struct mps_queue *q; u_int nq, i; nq = sc->msi_msgs; mps_dprint(sc, MPS_INIT|MPS_XINFO, "Allocating %d I/O queues\n", nq); sc->queues = malloc(sizeof(struct mps_queue) * nq, M_MPT2, M_NOWAIT|M_ZERO); if (sc->queues == NULL) return (ENOMEM); for (i = 0; i < nq; i++) { q = &sc->queues[i]; mps_dprint(sc, MPS_INIT, "Configuring queue %d %p\n", i, q); q->sc = sc; q->qnum = i; } return (0); } static int mps_alloc_hw_queues(struct mps_softc *sc) { bus_addr_t queues_busaddr; uint8_t *queues; int qsize, fqsize, pqsize; /* * The reply free queue contains 4 byte entries in multiples of 16 and * aligned on a 16 byte boundary. There must always be an unused entry. * This queue supplies fresh reply frames for the firmware to use. * * The reply descriptor post queue contains 8 byte entries in * multiples of 16 and aligned on a 16 byte boundary. This queue * contains filled-in reply frames sent from the firmware to the host. * * These two queues are allocated together for simplicity. 
*/ sc->fqdepth = roundup2(sc->num_replies + 1, 16); sc->pqdepth = roundup2(sc->num_replies + 1, 16); fqsize= sc->fqdepth * 4; pqsize = sc->pqdepth * 8; qsize = fqsize + pqsize; if (bus_dma_tag_create( sc->mps_parent_dmat, /* parent */ 16, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ qsize, /* maxsize */ 1, /* nsegments */ qsize, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->queues_dmat)) { mps_dprint(sc, MPS_ERROR, "Cannot allocate queues DMA tag\n"); return (ENOMEM); } if (bus_dmamem_alloc(sc->queues_dmat, (void **)&queues, BUS_DMA_NOWAIT, &sc->queues_map)) { mps_dprint(sc, MPS_ERROR, "Cannot allocate queues memory\n"); return (ENOMEM); } bzero(queues, qsize); bus_dmamap_load(sc->queues_dmat, sc->queues_map, queues, qsize, mps_memaddr_cb, &queues_busaddr, 0); sc->free_queue = (uint32_t *)queues; sc->free_busaddr = queues_busaddr; sc->post_queue = (MPI2_REPLY_DESCRIPTORS_UNION *)(queues + fqsize); sc->post_busaddr = queues_busaddr + fqsize; mps_dprint(sc, MPS_INIT, "free queue busaddr= %#016jx size= %d\n", (uintmax_t)sc->free_busaddr, fqsize); mps_dprint(sc, MPS_INIT, "reply queue busaddr= %#016jx size= %d\n", (uintmax_t)sc->post_busaddr, pqsize); return (0); } static int mps_alloc_replies(struct mps_softc *sc) { int rsize, num_replies; /* Store the reply frame size in bytes rather than as 32bit words */ sc->replyframesz = sc->facts->ReplyFrameSize * 4; /* * sc->num_replies should be one less than sc->fqdepth. We need to * allocate space for sc->fqdepth replies, but only sc->num_replies * replies can be used at once. 
*/ num_replies = max(sc->fqdepth, sc->num_replies); rsize = sc->replyframesz * num_replies; if (bus_dma_tag_create( sc->mps_parent_dmat, /* parent */ 4, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ rsize, /* maxsize */ 1, /* nsegments */ rsize, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->reply_dmat)) { mps_dprint(sc, MPS_ERROR, "Cannot allocate replies DMA tag\n"); return (ENOMEM); } if (bus_dmamem_alloc(sc->reply_dmat, (void **)&sc->reply_frames, BUS_DMA_NOWAIT, &sc->reply_map)) { mps_dprint(sc, MPS_ERROR, "Cannot allocate replies memory\n"); return (ENOMEM); } bzero(sc->reply_frames, rsize); bus_dmamap_load(sc->reply_dmat, sc->reply_map, sc->reply_frames, rsize, mps_memaddr_cb, &sc->reply_busaddr, 0); mps_dprint(sc, MPS_INIT, "reply frames busaddr= %#016jx size= %d\n", (uintmax_t)sc->reply_busaddr, rsize); return (0); } static void mps_load_chains_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error) { struct mps_softc *sc = arg; struct mps_chain *chain; bus_size_t bo; int i, o, s; if (error != 0) return; for (i = 0, o = 0, s = 0; s < nsegs; s++) { for (bo = 0; bo + sc->reqframesz <= segs[s].ds_len; bo += sc->reqframesz) { chain = &sc->chains[i++]; chain->chain =(MPI2_SGE_IO_UNION *)(sc->chain_frames+o); chain->chain_busaddr = segs[s].ds_addr + bo; o += sc->reqframesz; mps_free_chain(sc, chain); } if (bo != segs[s].ds_len) o += segs[s].ds_len - bo; } sc->chain_free_lowwater = i; } static int mps_alloc_requests(struct mps_softc *sc) { struct mps_command *cm; int i, rsize, nsegs; rsize = sc->reqframesz * sc->num_reqs; if (bus_dma_tag_create( sc->mps_parent_dmat, /* parent */ 16, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ rsize, /* maxsize */ 1, /* nsegments */ rsize, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->req_dmat)) { 
mps_dprint(sc, MPS_ERROR, "Cannot allocate request DMA tag\n"); return (ENOMEM); } if (bus_dmamem_alloc(sc->req_dmat, (void **)&sc->req_frames, BUS_DMA_NOWAIT, &sc->req_map)) { mps_dprint(sc, MPS_ERROR, "Cannot allocate request memory\n"); return (ENOMEM); } bzero(sc->req_frames, rsize); bus_dmamap_load(sc->req_dmat, sc->req_map, sc->req_frames, rsize, mps_memaddr_cb, &sc->req_busaddr, 0); mps_dprint(sc, MPS_INIT, "request frames busaddr= %#016jx size= %d\n", (uintmax_t)sc->req_busaddr, rsize); sc->chains = malloc(sizeof(struct mps_chain) * sc->num_chains, M_MPT2, M_NOWAIT | M_ZERO); if (!sc->chains) { mps_dprint(sc, MPS_ERROR, "Cannot allocate chain memory\n"); return (ENOMEM); } rsize = sc->reqframesz * sc->num_chains; if (bus_dma_tag_create( sc->mps_parent_dmat, /* parent */ 16, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ rsize, /* maxsize */ howmany(rsize, PAGE_SIZE), /* nsegments */ rsize, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->chain_dmat)) { mps_dprint(sc, MPS_ERROR, "Cannot allocate chain DMA tag\n"); return (ENOMEM); } if (bus_dmamem_alloc(sc->chain_dmat, (void **)&sc->chain_frames, BUS_DMA_NOWAIT | BUS_DMA_ZERO, &sc->chain_map)) { mps_dprint(sc, MPS_ERROR, "Cannot allocate chain memory\n"); return (ENOMEM); } if (bus_dmamap_load(sc->chain_dmat, sc->chain_map, sc->chain_frames, rsize, mps_load_chains_cb, sc, BUS_DMA_NOWAIT)) { mps_dprint(sc, MPS_ERROR, "Cannot load chain memory\n"); bus_dmamem_free(sc->chain_dmat, sc->chain_frames, sc->chain_map); return (ENOMEM); } rsize = MPS_SENSE_LEN * sc->num_reqs; if (bus_dma_tag_create( sc->mps_parent_dmat, /* parent */ 1, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ rsize, /* maxsize */ 1, /* nsegments */ rsize, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->sense_dmat)) { 
mps_dprint(sc, MPS_ERROR, "Cannot allocate sense DMA tag\n"); return (ENOMEM); } if (bus_dmamem_alloc(sc->sense_dmat, (void **)&sc->sense_frames, BUS_DMA_NOWAIT, &sc->sense_map)) { mps_dprint(sc, MPS_ERROR, "Cannot allocate sense memory\n"); return (ENOMEM); } bzero(sc->sense_frames, rsize); bus_dmamap_load(sc->sense_dmat, sc->sense_map, sc->sense_frames, rsize, mps_memaddr_cb, &sc->sense_busaddr, 0); mps_dprint(sc, MPS_INIT, "sense frames busaddr= %#016jx size= %d\n", (uintmax_t)sc->sense_busaddr, rsize); nsegs = (sc->maxio / PAGE_SIZE) + 1; if (bus_dma_tag_create( sc->mps_parent_dmat, /* parent */ 1, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ BUS_SPACE_MAXSIZE_32BIT,/* maxsize */ nsegs, /* nsegments */ BUS_SPACE_MAXSIZE_24BIT,/* maxsegsize */ BUS_DMA_ALLOCNOW, /* flags */ busdma_lock_mutex, /* lockfunc */ &sc->mps_mtx, /* lockarg */ &sc->buffer_dmat)) { mps_dprint(sc, MPS_ERROR, "Cannot allocate buffer DMA tag\n"); return (ENOMEM); } /* * SMID 0 cannot be used as a free command per the firmware spec. * Just drop that command instead of risking accounting bugs. */ sc->commands = malloc(sizeof(struct mps_command) * sc->num_reqs, M_MPT2, M_WAITOK | M_ZERO); if(!sc->commands) { mps_dprint(sc, MPS_ERROR, "Cannot allocate command memory\n"); return (ENOMEM); } for (i = 1; i < sc->num_reqs; i++) { cm = &sc->commands[i]; cm->cm_req = sc->req_frames + i * sc->reqframesz; cm->cm_req_busaddr = sc->req_busaddr + i * sc->reqframesz; cm->cm_sense = &sc->sense_frames[i]; cm->cm_sense_busaddr = sc->sense_busaddr + i * MPS_SENSE_LEN; cm->cm_desc.Default.SMID = i; cm->cm_sc = sc; cm->cm_state = MPS_CM_STATE_BUSY; TAILQ_INIT(&cm->cm_chain_list); callout_init_mtx(&cm->cm_callout, &sc->mps_mtx, 0); /* XXX Is a failure here a critical problem? 
*/ if (bus_dmamap_create(sc->buffer_dmat, 0, &cm->cm_dmamap) == 0) if (i <= sc->num_prireqs) mps_free_high_priority_command(sc, cm); else mps_free_command(sc, cm); else { panic("failed to allocate command %d\n", i); sc->num_reqs = i; break; } } return (0); } static int mps_init_queues(struct mps_softc *sc) { int i; memset((uint8_t *)sc->post_queue, 0xff, sc->pqdepth * 8); /* * According to the spec, we need to use one less reply than we * have space for on the queue. So sc->num_replies (the number we * use) should be less than sc->fqdepth (allocated size). */ if (sc->num_replies >= sc->fqdepth) return (EINVAL); /* * Initialize all of the free queue entries. */ for (i = 0; i < sc->fqdepth; i++) sc->free_queue[i] = sc->reply_busaddr + (i * sc->replyframesz); sc->replyfreeindex = sc->num_replies; return (0); } /* Get the driver parameter tunables. Lowest priority are the driver defaults. * Next are the global settings, if they exist. Highest are the per-unit * settings, if they exist. */ void mps_get_tunables(struct mps_softc *sc) { char tmpstr[80], mps_debug[80]; /* XXX default to some debugging for now */ sc->mps_debug = MPS_INFO|MPS_FAULT; sc->disable_msix = 0; sc->disable_msi = 0; sc->max_msix = MPS_MSIX_MAX; sc->max_chains = MPS_CHAIN_FRAMES; sc->max_io_pages = MPS_MAXIO_PAGES; sc->enable_ssu = MPS_SSU_ENABLE_SSD_DISABLE_HDD; sc->spinup_wait_time = DEFAULT_SPINUP_WAIT; sc->use_phynum = 1; sc->max_reqframes = MPS_REQ_FRAMES; sc->max_prireqframes = MPS_PRI_REQ_FRAMES; sc->max_replyframes = MPS_REPLY_FRAMES; sc->max_evtframes = MPS_EVT_REPLY_FRAMES; /* * Grab the global variables. 
*/ bzero(mps_debug, 80); if (TUNABLE_STR_FETCH("hw.mps.debug_level", mps_debug, 80) != 0) mps_parse_debug(sc, mps_debug); TUNABLE_INT_FETCH("hw.mps.disable_msix", &sc->disable_msix); TUNABLE_INT_FETCH("hw.mps.disable_msi", &sc->disable_msi); TUNABLE_INT_FETCH("hw.mps.max_msix", &sc->max_msix); TUNABLE_INT_FETCH("hw.mps.max_chains", &sc->max_chains); TUNABLE_INT_FETCH("hw.mps.max_io_pages", &sc->max_io_pages); TUNABLE_INT_FETCH("hw.mps.enable_ssu", &sc->enable_ssu); TUNABLE_INT_FETCH("hw.mps.spinup_wait_time", &sc->spinup_wait_time); TUNABLE_INT_FETCH("hw.mps.use_phy_num", &sc->use_phynum); TUNABLE_INT_FETCH("hw.mps.max_reqframes", &sc->max_reqframes); TUNABLE_INT_FETCH("hw.mps.max_prireqframes", &sc->max_prireqframes); TUNABLE_INT_FETCH("hw.mps.max_replyframes", &sc->max_replyframes); TUNABLE_INT_FETCH("hw.mps.max_evtframes", &sc->max_evtframes); /* Grab the unit-instance variables */ snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.debug_level", device_get_unit(sc->mps_dev)); bzero(mps_debug, 80); if (TUNABLE_STR_FETCH(tmpstr, mps_debug, 80) != 0) mps_parse_debug(sc, mps_debug); snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.disable_msix", device_get_unit(sc->mps_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->disable_msix); snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.disable_msi", device_get_unit(sc->mps_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->disable_msi); snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.max_msix", device_get_unit(sc->mps_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_msix); snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.max_chains", device_get_unit(sc->mps_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_chains); snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.max_io_pages", device_get_unit(sc->mps_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_io_pages); bzero(sc->exclude_ids, sizeof(sc->exclude_ids)); snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.exclude_ids", device_get_unit(sc->mps_dev)); TUNABLE_STR_FETCH(tmpstr, sc->exclude_ids, sizeof(sc->exclude_ids)); snprintf(tmpstr, 
sizeof(tmpstr), "dev.mps.%d.enable_ssu", device_get_unit(sc->mps_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->enable_ssu); snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.spinup_wait_time", device_get_unit(sc->mps_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->spinup_wait_time); snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.use_phy_num", device_get_unit(sc->mps_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->use_phynum); snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.max_reqframes", device_get_unit(sc->mps_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_reqframes); snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.max_prireqframes", device_get_unit(sc->mps_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_prireqframes); snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.max_replyframes", device_get_unit(sc->mps_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_replyframes); snprintf(tmpstr, sizeof(tmpstr), "dev.mps.%d.max_evtframes", device_get_unit(sc->mps_dev)); TUNABLE_INT_FETCH(tmpstr, &sc->max_evtframes); } static void mps_setup_sysctl(struct mps_softc *sc) { struct sysctl_ctx_list *sysctl_ctx = NULL; struct sysctl_oid *sysctl_tree = NULL; char tmpstr[80], tmpstr2[80]; /* * Setup the sysctl variable so the user can change the debug level * on the fly. 
*/ snprintf(tmpstr, sizeof(tmpstr), "MPS controller %d", device_get_unit(sc->mps_dev)); snprintf(tmpstr2, sizeof(tmpstr2), "%d", device_get_unit(sc->mps_dev)); sysctl_ctx = device_get_sysctl_ctx(sc->mps_dev); if (sysctl_ctx != NULL) sysctl_tree = device_get_sysctl_tree(sc->mps_dev); if (sysctl_tree == NULL) { sysctl_ctx_init(&sc->sysctl_ctx); sc->sysctl_tree = SYSCTL_ADD_NODE(&sc->sysctl_ctx, SYSCTL_STATIC_CHILDREN(_hw_mps), OID_AUTO, tmpstr2, CTLFLAG_RD, 0, tmpstr); if (sc->sysctl_tree == NULL) return; sysctl_ctx = &sc->sysctl_ctx; sysctl_tree = sc->sysctl_tree; } SYSCTL_ADD_PROC(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "debug_level", CTLTYPE_STRING | CTLFLAG_RW |CTLFLAG_MPSAFE, sc, 0, mps_debug_sysctl, "A", "mps debug level"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "disable_msix", CTLFLAG_RD, &sc->disable_msix, 0, "Disable the use of MSI-X interrupts"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "disable_msi", CTLFLAG_RD, &sc->disable_msi, 0, "Disable the use of MSI interrupts"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_msix", CTLFLAG_RD, &sc->max_msix, 0, "User-defined maximum number of MSIX queues"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "msix_msgs", CTLFLAG_RD, &sc->msi_msgs, 0, "Negotiated number of MSIX queues"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_reqframes", CTLFLAG_RD, &sc->max_reqframes, 0, "Total number of allocated request frames"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_prireqframes", CTLFLAG_RD, &sc->max_prireqframes, 0, "Total number of allocated high priority request frames"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_replyframes", CTLFLAG_RD, &sc->max_replyframes, 0, "Total number of allocated reply frames"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_evtframes", CTLFLAG_RD, &sc->max_evtframes, 0, 
"Total number of event frames allocated"); SYSCTL_ADD_STRING(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "firmware_version", CTLFLAG_RW, sc->fw_version, strlen(sc->fw_version), "firmware version"); SYSCTL_ADD_STRING(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "driver_version", CTLFLAG_RW, MPS_DRIVER_VERSION, strlen(MPS_DRIVER_VERSION), "driver version"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "io_cmds_active", CTLFLAG_RD, &sc->io_cmds_active, 0, "number of currently active commands"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "io_cmds_highwater", CTLFLAG_RD, &sc->io_cmds_highwater, 0, "maximum active commands seen"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "chain_free", CTLFLAG_RD, &sc->chain_free, 0, "number of free chain elements"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "chain_free_lowwater", CTLFLAG_RD, &sc->chain_free_lowwater, 0,"lowest number of free chain elements"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_chains", CTLFLAG_RD, &sc->max_chains, 0,"maximum chain frames that will be allocated"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "max_io_pages", CTLFLAG_RD, &sc->max_io_pages, 0,"maximum pages to allow per I/O (if <1 use " "IOCFacts)"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "enable_ssu", CTLFLAG_RW, &sc->enable_ssu, 0, "enable SSU to SATA SSD/HDD at shutdown"); SYSCTL_ADD_UQUAD(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "chain_alloc_fail", CTLFLAG_RD, &sc->chain_alloc_fail, "chain allocation failures"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "spinup_wait_time", CTLFLAG_RD, &sc->spinup_wait_time, DEFAULT_SPINUP_WAIT, "seconds to wait for " "spinup after SATA ID error"); SYSCTL_ADD_PROC(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "mapping_table_dump", CTLTYPE_STRING | CTLFLAG_RD, sc, 0, mps_mapping_dump, 
"A", "Mapping Table Dump"); SYSCTL_ADD_PROC(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "encl_table_dump", CTLTYPE_STRING | CTLFLAG_RD, sc, 0, mps_mapping_encl_dump, "A", "Enclosure Table Dump"); SYSCTL_ADD_PROC(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "dump_reqs", CTLTYPE_OPAQUE | CTLFLAG_RD | CTLFLAG_SKIP, sc, 0, mps_dump_reqs, "I", "Dump Active Requests"); SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO, "use_phy_num", CTLFLAG_RD, &sc->use_phynum, 0, "Use the phy number for enumeration"); } static struct mps_debug_string { char *name; int flag; } mps_debug_strings[] = { {"info", MPS_INFO}, {"fault", MPS_FAULT}, {"event", MPS_EVENT}, {"log", MPS_LOG}, {"recovery", MPS_RECOVERY}, {"error", MPS_ERROR}, {"init", MPS_INIT}, {"xinfo", MPS_XINFO}, {"user", MPS_USER}, {"mapping", MPS_MAPPING}, {"trace", MPS_TRACE} }; enum mps_debug_level_combiner { COMB_NONE, COMB_ADD, COMB_SUB }; static int mps_debug_sysctl(SYSCTL_HANDLER_ARGS) { struct mps_softc *sc; struct mps_debug_string *string; struct sbuf *sbuf; char *buffer; size_t sz; int i, len, debug, error; sc = (struct mps_softc *)arg1; error = sysctl_wire_old_buffer(req, 0); if (error != 0) return (error); sbuf = sbuf_new_for_sysctl(NULL, NULL, 128, req); debug = sc->mps_debug; sbuf_printf(sbuf, "%#x", debug); sz = sizeof(mps_debug_strings) / sizeof(mps_debug_strings[0]); for (i = 0; i < sz; i++) { string = &mps_debug_strings[i]; if (debug & string->flag) sbuf_printf(sbuf, ",%s", string->name); } error = sbuf_finish(sbuf); sbuf_delete(sbuf); if (error || req->newptr == NULL) return (error); len = req->newlen - req->newidx; if (len == 0) return (0); buffer = malloc(len, M_MPT2, M_ZERO|M_WAITOK); error = SYSCTL_IN(req, buffer, len); mps_parse_debug(sc, buffer); free(buffer, M_MPT2); return (error); } static void mps_parse_debug(struct mps_softc *sc, char *list) { struct mps_debug_string *string; enum mps_debug_level_combiner op; char *token, *endtoken; size_t sz; int flags, i; if (list 
== NULL || *list == '\0') return; if (*list == '+') { op = COMB_ADD; list++; } else if (*list == '-') { op = COMB_SUB; list++; } else op = COMB_NONE; if (*list == '\0') return; flags = 0; sz = sizeof(mps_debug_strings) / sizeof(mps_debug_strings[0]); while ((token = strsep(&list, ":,")) != NULL) { /* Handle integer flags */ flags |= strtol(token, &endtoken, 0); if (token != endtoken) continue; /* Handle text flags */ for (i = 0; i < sz; i++) { string = &mps_debug_strings[i]; if (strcasecmp(token, string->name) == 0) { flags |= string->flag; break; } } } switch (op) { case COMB_NONE: sc->mps_debug = flags; break; case COMB_ADD: sc->mps_debug |= flags; break; case COMB_SUB: sc->mps_debug &= (~flags); break; } return; } struct mps_dumpreq_hdr { uint32_t smid; uint32_t state; uint32_t numframes; uint32_t deschi; uint32_t desclo; }; static int mps_dump_reqs(SYSCTL_HANDLER_ARGS) { struct mps_softc *sc; struct mps_chain *chain, *chain1; struct mps_command *cm; struct mps_dumpreq_hdr hdr; struct sbuf *sb; uint32_t smid, state; int i, numreqs, error = 0; sc = (struct mps_softc *)arg1; if ((error = priv_check(curthread, PRIV_DRIVER)) != 0) { printf("priv check error %d\n", error); return (error); } state = MPS_CM_STATE_INQUEUE; smid = 1; numreqs = sc->num_reqs; if (req->newptr != NULL) return (EINVAL); if (smid == 0 || smid > sc->num_reqs) return (EINVAL); if (numreqs <= 0 || (numreqs + smid > sc->num_reqs)) numreqs = sc->num_reqs; sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req); /* Best effort, no locking */ for (i = smid; i < numreqs; i++) { cm = &sc->commands[i]; if (cm->cm_state != state) continue; hdr.smid = i; hdr.state = cm->cm_state; hdr.numframes = 1; hdr.deschi = cm->cm_desc.Words.High; hdr.desclo = cm->cm_desc.Words.Low; TAILQ_FOREACH_SAFE(chain, &cm->cm_chain_list, chain_link, chain1) hdr.numframes++; sbuf_bcat(sb, &hdr, sizeof(hdr)); sbuf_bcat(sb, cm->cm_req, 128); TAILQ_FOREACH_SAFE(chain, &cm->cm_chain_list, chain_link, chain1) sbuf_bcat(sb, chain->chain, 
128); } error = sbuf_finish(sb); sbuf_delete(sb); return (error); } int mps_attach(struct mps_softc *sc) { int error; MPS_FUNCTRACE(sc); mps_dprint(sc, MPS_INIT, "%s entered\n", __func__); mtx_init(&sc->mps_mtx, "MPT2SAS lock", NULL, MTX_DEF); callout_init_mtx(&sc->periodic, &sc->mps_mtx, 0); callout_init_mtx(&sc->device_check_callout, &sc->mps_mtx, 0); TAILQ_INIT(&sc->event_list); timevalclear(&sc->lastfail); if ((error = mps_transition_ready(sc)) != 0) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "failed to transition " "ready\n"); return (error); } sc->facts = malloc(sizeof(MPI2_IOC_FACTS_REPLY), M_MPT2, M_ZERO|M_NOWAIT); if(!sc->facts) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "Cannot allocate memory, " "exit\n"); return (ENOMEM); } /* * Get IOC Facts and allocate all structures based on this information. * A Diag Reset will also call mps_iocfacts_allocate and re-read the IOC * Facts. If relevant values have changed in IOC Facts, this function * will free all of the memory based on IOC Facts and reallocate that * memory. If this fails, any allocated memory should already be freed. */ if ((error = mps_iocfacts_allocate(sc, TRUE)) != 0) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "IOC Facts based allocation " "failed with error %d, exit\n", error); return (error); } /* Start the periodic watchdog check on the IOC Doorbell */ mps_periodic(sc); /* * The portenable will kick off discovery events that will drive the * rest of the initialization process. The CAM/SAS module will * hold up the boot sequence until discovery is complete. */ sc->mps_ich.ich_func = mps_startup; sc->mps_ich.ich_arg = sc; if (config_intrhook_establish(&sc->mps_ich) != 0) { mps_dprint(sc, MPS_INIT|MPS_ERROR, "Cannot establish MPS config hook\n"); error = EINVAL; } /* * Allow IR to shutdown gracefully when shutdown occurs. 
*/ sc->shutdown_eh = EVENTHANDLER_REGISTER(shutdown_final, mpssas_ir_shutdown, sc, SHUTDOWN_PRI_DEFAULT); if (sc->shutdown_eh == NULL) mps_dprint(sc, MPS_INIT|MPS_ERROR, "shutdown event registration failed\n"); mps_setup_sysctl(sc); sc->mps_flags |= MPS_FLAGS_ATTACH_DONE; mps_dprint(sc, MPS_INIT, "%s exit error= %d\n", __func__, error); return (error); } /* Run through any late-start handlers. */ static void mps_startup(void *arg) { struct mps_softc *sc; sc = (struct mps_softc *)arg; mps_dprint(sc, MPS_INIT, "%s entered\n", __func__); mps_lock(sc); mps_unmask_intr(sc); /* initialize device mapping tables */ mps_base_static_config_pages(sc); mps_mapping_initialize(sc); mpssas_startup(sc); mps_unlock(sc); mps_dprint(sc, MPS_INIT, "disestablish config intrhook\n"); config_intrhook_disestablish(&sc->mps_ich); sc->mps_ich.ich_arg = NULL; mps_dprint(sc, MPS_INIT, "%s exit\n", __func__); } /* Periodic watchdog. Is called with the driver lock already held. */ static void mps_periodic(void *arg) { struct mps_softc *sc; uint32_t db; sc = (struct mps_softc *)arg; if (sc->mps_flags & MPS_FLAGS_SHUTDOWN) return; db = mps_regread(sc, MPI2_DOORBELL_OFFSET); if ((db & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) { mps_dprint(sc, MPS_FAULT, "IOC Fault 0x%08x, Resetting\n", db); mps_reinit(sc); } callout_reset(&sc->periodic, MPS_PERIODIC_DELAY * hz, mps_periodic, sc); } static void mps_log_evt_handler(struct mps_softc *sc, uintptr_t data, MPI2_EVENT_NOTIFICATION_REPLY *event) { MPI2_EVENT_DATA_LOG_ENTRY_ADDED *entry; MPS_DPRINT_EVENT(sc, generic, event); switch (event->Event) { case MPI2_EVENT_LOG_DATA: mps_dprint(sc, MPS_EVENT, "MPI2_EVENT_LOG_DATA:\n"); if (sc->mps_debug & MPS_EVENT) hexdump(event->EventData, event->EventDataLength, NULL, 0); break; case MPI2_EVENT_LOG_ENTRY_ADDED: entry = (MPI2_EVENT_DATA_LOG_ENTRY_ADDED *)event->EventData; mps_dprint(sc, MPS_EVENT, "MPI2_EVENT_LOG_ENTRY_ADDED event " "0x%x Sequence %d:\n", entry->LogEntryQualifier, entry->LogSequence); break; 
default: break; } return; } static int mps_attach_log(struct mps_softc *sc) { u32 events[MPI2_EVENT_NOTIFY_EVENTMASK_WORDS]; bzero(events, 16); setbit(events, MPI2_EVENT_LOG_DATA); setbit(events, MPI2_EVENT_LOG_ENTRY_ADDED); mps_register_events(sc, events, mps_log_evt_handler, NULL, &sc->mps_log_eh); return (0); } static int mps_detach_log(struct mps_softc *sc) { if (sc->mps_log_eh != NULL) mps_deregister_events(sc, sc->mps_log_eh); return (0); } /* * Free all of the driver resources and detach submodules. Should be called * without the lock held. */ int mps_free(struct mps_softc *sc) { int error; mps_dprint(sc, MPS_INIT, "%s entered\n", __func__); /* Turn off the watchdog */ mps_lock(sc); sc->mps_flags |= MPS_FLAGS_SHUTDOWN; mps_unlock(sc); /* Lock must not be held for this */ callout_drain(&sc->periodic); callout_drain(&sc->device_check_callout); if (((error = mps_detach_log(sc)) != 0) || ((error = mps_detach_sas(sc)) != 0)) { mps_dprint(sc, MPS_INIT|MPS_FAULT, "failed to detach " "subsystems, exit\n"); return (error); } mps_detach_user(sc); /* Put the IOC back in the READY state. */ mps_lock(sc); if ((error = mps_transition_ready(sc)) != 0) { mps_unlock(sc); return (error); } mps_unlock(sc); if (sc->facts != NULL) free(sc->facts, M_MPT2); /* * Free all buffers that are based on IOC Facts. A Diag Reset may need * to free these buffers too. 
mps_iocfacts_free(sc); if (sc->sysctl_tree != NULL) sysctl_ctx_free(&sc->sysctl_ctx); /* Deregister the shutdown function */ if (sc->shutdown_eh != NULL) EVENTHANDLER_DEREGISTER(shutdown_final, sc->shutdown_eh); mtx_destroy(&sc->mps_mtx); mps_dprint(sc, MPS_INIT, "%s exit\n", __func__); return (0); } static __inline void mps_complete_command(struct mps_softc *sc, struct mps_command *cm) { MPS_FUNCTRACE(sc); if (cm == NULL) { mps_dprint(sc, MPS_ERROR, "Completing NULL command\n"); return; } if (cm->cm_flags & MPS_CM_FLAGS_POLLED) cm->cm_flags |= MPS_CM_FLAGS_COMPLETE; if (cm->cm_complete != NULL) { mps_dprint(sc, MPS_TRACE, "%s cm %p calling cm_complete %p data %p reply %p\n", __func__, cm, cm->cm_complete, cm->cm_complete_data, cm->cm_reply); cm->cm_complete(sc, cm); } if (cm->cm_flags & MPS_CM_FLAGS_WAKEUP) { mps_dprint(sc, MPS_TRACE, "waking up %p\n", cm); wakeup(cm); } if (cm->cm_sc->io_cmds_active != 0) { cm->cm_sc->io_cmds_active--; } else { mps_dprint(sc, MPS_ERROR, "Warning: io_cmds_active is " "out of sync - resynching to 0\n"); } } static void mps_sas_log_info(struct mps_softc *sc, u32 log_info) { union loginfo_type { u32 loginfo; struct { u32 subcode:16; u32 code:8; u32 originator:4; u32 bus_type:4; } dw; }; union loginfo_type sas_loginfo; char *originator_str = NULL; sas_loginfo.loginfo = log_info; if (sas_loginfo.dw.bus_type != 3 /*SAS*/) return; /* each nexus loss loginfo */ if (log_info == 0x31170000) return; /* eat the loginfos associated with task aborts */ if ((log_info == 0x30050000 || log_info == 0x31140000 || log_info == 0x31130000)) return; switch (sas_loginfo.dw.originator) { case 0: originator_str = "IOP"; break; case 1: originator_str = "PL"; break; case 2: originator_str = "IR"; break; } mps_dprint(sc, MPS_LOG, "log_info(0x%08x): originator(%s), " "code(0x%02x), sub_code(0x%04x)\n", log_info, originator_str, sas_loginfo.dw.code, sas_loginfo.dw.subcode); } static void mps_display_reply_info(struct mps_softc *sc, uint8_t *reply) {
MPI2DefaultReply_t *mpi_reply; u16 sc_status; mpi_reply = (MPI2DefaultReply_t*)reply; sc_status = le16toh(mpi_reply->IOCStatus); if (sc_status & MPI2_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE) mps_sas_log_info(sc, le32toh(mpi_reply->IOCLogInfo)); } void mps_intr(void *data) { struct mps_softc *sc; uint32_t status; sc = (struct mps_softc *)data; mps_dprint(sc, MPS_TRACE, "%s\n", __func__); /* * Check interrupt status register to flush the bus. This is * needed for both INTx interrupts and driver-driven polling */ status = mps_regread(sc, MPI2_HOST_INTERRUPT_STATUS_OFFSET); if ((status & MPI2_HIS_REPLY_DESCRIPTOR_INTERRUPT) == 0) return; mps_lock(sc); mps_intr_locked(data); mps_unlock(sc); return; } /* * In theory, MSI/MSIX interrupts shouldn't need to read any registers on the * chip. Hopefully this theory is correct. */ void mps_intr_msi(void *data) { struct mps_softc *sc; sc = (struct mps_softc *)data; mps_dprint(sc, MPS_TRACE, "%s\n", __func__); mps_lock(sc); mps_intr_locked(data); mps_unlock(sc); return; } /* * The locking is overly broad and simplistic, but easy to deal with for now. */ void mps_intr_locked(void *data) { MPI2_REPLY_DESCRIPTORS_UNION *desc; + MPI2_DIAG_RELEASE_REPLY *rel_rep; + mps_fw_diagnostic_buffer_t *pBuffer; struct mps_softc *sc; struct mps_command *cm = NULL; + uint64_t tdesc; uint8_t flags; u_int pq; - MPI2_DIAG_RELEASE_REPLY *rel_rep; - mps_fw_diagnostic_buffer_t *pBuffer; sc = (struct mps_softc *)data; pq = sc->replypostindex; mps_dprint(sc, MPS_TRACE, "%s sc %p starting with replypostindex %u\n", __func__, sc, sc->replypostindex); for ( ;; ) { cm = NULL; desc = &sc->post_queue[sc->replypostindex]; + + /* + * Copy and clear out the descriptor so that any reentry will + * immediately know that this descriptor has already been + * looked at. There is unfortunate casting magic because the + * MPI API doesn't have a cardinal 64bit type. 
+ */ + tdesc = 0xffffffffffffffff; + tdesc = atomic_swap_64((uint64_t *)desc, tdesc); + desc = (MPI2_REPLY_DESCRIPTORS_UNION *)&tdesc; + flags = desc->Default.ReplyFlags & MPI2_RPY_DESCRIPT_FLAGS_TYPE_MASK; if ((flags == MPI2_RPY_DESCRIPT_FLAGS_UNUSED) || (le32toh(desc->Words.High) == 0xffffffff)) break; /* increment the replypostindex now, so that event handlers * and cm completion handlers which decide to do a diag * reset can zero it without it getting incremented again * afterwards, and we break out of this loop on the next * iteration since the reply post queue has been cleared to * 0xFF and all descriptors look unused (which they are). */ if (++sc->replypostindex >= sc->pqdepth) sc->replypostindex = 0; switch (flags) { case MPI2_RPY_DESCRIPT_FLAGS_SCSI_IO_SUCCESS: cm = &sc->commands[le16toh(desc->SCSIIOSuccess.SMID)]; KASSERT(cm->cm_state == MPS_CM_STATE_INQUEUE, ("command not inqueue\n")); cm->cm_state = MPS_CM_STATE_BUSY; cm->cm_reply = NULL; break; case MPI2_RPY_DESCRIPT_FLAGS_ADDRESS_REPLY: { uint32_t baddr; uint8_t *reply; /* * Re-compose the reply address from the address * sent back from the chip. The ReplyFrameAddress * is the lower 32 bits of the physical address of * a particular reply frame. Convert that address to * host format, and then use that to provide the * offset against the virtual address base * (sc->reply_frames). */ baddr = le32toh(desc->AddressReply.ReplyFrameAddress); reply = sc->reply_frames + (baddr - ((uint32_t)sc->reply_busaddr)); /* * Make sure the reply we got back is in a valid * range. If not, go ahead and panic here, since * we'll probably panic as soon as we dereference the * reply pointer anyway.
*/ if ((reply < sc->reply_frames) || (reply > (sc->reply_frames + (sc->fqdepth * sc->replyframesz)))) { printf("%s: WARNING: reply %p out of range!\n", __func__, reply); printf("%s: reply_frames %p, fqdepth %d, " "frame size %d\n", __func__, sc->reply_frames, sc->fqdepth, sc->replyframesz); printf("%s: baddr %#x,\n", __func__, baddr); /* LSI-TODO. See Linux Code for Graceful exit */ panic("Reply address out of range"); } if (le16toh(desc->AddressReply.SMID) == 0) { if (((MPI2_DEFAULT_REPLY *)reply)->Function == MPI2_FUNCTION_DIAG_BUFFER_POST) { /* * If SMID is 0 for Diag Buffer Post, * this implies that the reply is due to * a release function with a status that * the buffer has been released. Set * the buffer flags accordingly. */ rel_rep = (MPI2_DIAG_RELEASE_REPLY *)reply; if ((le16toh(rel_rep->IOCStatus) & MPI2_IOCSTATUS_MASK) == MPI2_IOCSTATUS_DIAGNOSTIC_RELEASED) { pBuffer = &sc->fw_diag_buffer_list[ rel_rep->BufferType]; pBuffer->valid_data = TRUE; pBuffer->owned_by_firmware = FALSE; pBuffer->immediate = FALSE; } } else mps_dispatch_event(sc, baddr, (MPI2_EVENT_NOTIFICATION_REPLY *) reply); } else { + /* + * Ignore commands not in INQUEUE state + * since they've already been completed + * via another path. 
+ */ cm = &sc->commands[ le16toh(desc->AddressReply.SMID)]; - KASSERT(cm->cm_state == MPS_CM_STATE_INQUEUE, - ("command not inqueue\n")); - cm->cm_state = MPS_CM_STATE_BUSY; - cm->cm_reply = reply; - cm->cm_reply_data = le32toh( - desc->AddressReply.ReplyFrameAddress); + if (cm->cm_state == MPS_CM_STATE_INQUEUE) { + cm->cm_state = MPS_CM_STATE_BUSY; + cm->cm_reply = reply; + cm->cm_reply_data = le32toh( + desc->AddressReply.ReplyFrameAddress); + } else { + mps_dprint(sc, MPS_RECOVERY, + "Bad state for ADDRESS_REPLY status," + " ignoring state %d cm %p\n", + cm->cm_state, cm); + } } break; } case MPI2_RPY_DESCRIPT_FLAGS_TARGETASSIST_SUCCESS: case MPI2_RPY_DESCRIPT_FLAGS_TARGET_COMMAND_BUFFER: case MPI2_RPY_DESCRIPT_FLAGS_RAID_ACCELERATOR_SUCCESS: default: /* Unhandled */ mps_dprint(sc, MPS_ERROR, "Unhandled reply 0x%x\n", desc->Default.ReplyFlags); cm = NULL; break; } if (cm != NULL) { // Print Error reply frame if (cm->cm_reply) mps_display_reply_info(sc,cm->cm_reply); mps_complete_command(sc, cm); } - - desc->Words.Low = 0xffffffff; - desc->Words.High = 0xffffffff; } if (pq != sc->replypostindex) { mps_dprint(sc, MPS_TRACE, "%s sc %p writing postindex %d\n", __func__, sc, sc->replypostindex); mps_regwrite(sc, MPI2_REPLY_POST_HOST_INDEX_OFFSET, sc->replypostindex); } return; } static void mps_dispatch_event(struct mps_softc *sc, uintptr_t data, MPI2_EVENT_NOTIFICATION_REPLY *reply) { struct mps_event_handle *eh; int event, handled = 0; event = le16toh(reply->Event); TAILQ_FOREACH(eh, &sc->event_list, eh_list) { if (isset(eh->mask, event)) { eh->callback(sc, data, reply); handled++; } } if (handled == 0) mps_dprint(sc, MPS_EVENT, "Unhandled event 0x%x\n", le16toh(event)); /* * This is the only place that the event/reply should be freed. * Anything wanting to hold onto the event data should have * already copied it into their own storage. 
*/ mps_free_reply(sc, data); } static void mps_reregister_events_complete(struct mps_softc *sc, struct mps_command *cm) { mps_dprint(sc, MPS_TRACE, "%s\n", __func__); if (cm->cm_reply) MPS_DPRINT_EVENT(sc, generic, (MPI2_EVENT_NOTIFICATION_REPLY *)cm->cm_reply); mps_free_command(sc, cm); /* next, send a port enable */ mpssas_startup(sc); } /* * For both register_events and update_events, the caller supplies a bitmap * of events that it _wants_. These functions then turn that into a bitmask * suitable for the controller. */ int mps_register_events(struct mps_softc *sc, u32 *mask, mps_evt_callback_t *cb, void *data, struct mps_event_handle **handle) { struct mps_event_handle *eh; int error = 0; eh = malloc(sizeof(struct mps_event_handle), M_MPT2, M_WAITOK|M_ZERO); if(!eh) { mps_dprint(sc, MPS_ERROR, "Cannot allocate event memory\n"); return (ENOMEM); } eh->callback = cb; eh->data = data; TAILQ_INSERT_TAIL(&sc->event_list, eh, eh_list); if (mask != NULL) error = mps_update_events(sc, eh, mask); *handle = eh; return (error); } int mps_update_events(struct mps_softc *sc, struct mps_event_handle *handle, u32 *mask) { MPI2_EVENT_NOTIFICATION_REQUEST *evtreq; MPI2_EVENT_NOTIFICATION_REPLY *reply = NULL; struct mps_command *cm; int error, i; mps_dprint(sc, MPS_TRACE, "%s\n", __func__); if ((mask != NULL) && (handle != NULL)) bcopy(mask, &handle->mask[0], sizeof(u32) * MPI2_EVENT_NOTIFY_EVENTMASK_WORDS); for (i = 0; i < MPI2_EVENT_NOTIFY_EVENTMASK_WORDS; i++) sc->event_mask[i] = -1; for (i = 0; i < MPI2_EVENT_NOTIFY_EVENTMASK_WORDS; i++) sc->event_mask[i] &= ~handle->mask[i]; if ((cm = mps_alloc_command(sc)) == NULL) return (EBUSY); evtreq = (MPI2_EVENT_NOTIFICATION_REQUEST *)cm->cm_req; evtreq->Function = MPI2_FUNCTION_EVENT_NOTIFICATION; evtreq->MsgFlags = 0; evtreq->SASBroadcastPrimitiveMasks = 0; #ifdef MPS_DEBUG_ALL_EVENTS { u_char fullmask[16]; memset(fullmask, 0x00, 16); bcopy(fullmask, &evtreq->EventMasks[0], sizeof(u32) * MPI2_EVENT_NOTIFY_EVENTMASK_WORDS); } #else 
for (i = 0; i < MPI2_EVENT_NOTIFY_EVENTMASK_WORDS; i++) evtreq->EventMasks[i] = htole32(sc->event_mask[i]); #endif cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; error = mps_wait_command(sc, &cm, 60, 0); if (cm != NULL) reply = (MPI2_EVENT_NOTIFICATION_REPLY *)cm->cm_reply; if ((reply == NULL) || (reply->IOCStatus & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) error = ENXIO; if (reply) MPS_DPRINT_EVENT(sc, generic, reply); mps_dprint(sc, MPS_TRACE, "%s finished error %d\n", __func__, error); if (cm != NULL) mps_free_command(sc, cm); return (error); } static int mps_reregister_events(struct mps_softc *sc) { MPI2_EVENT_NOTIFICATION_REQUEST *evtreq; struct mps_command *cm; struct mps_event_handle *eh; int error, i; mps_dprint(sc, MPS_TRACE, "%s\n", __func__); /* first, reregister events */ for (i = 0; i < MPI2_EVENT_NOTIFY_EVENTMASK_WORDS; i++) sc->event_mask[i] = -1; TAILQ_FOREACH(eh, &sc->event_list, eh_list) { for (i = 0; i < MPI2_EVENT_NOTIFY_EVENTMASK_WORDS; i++) sc->event_mask[i] &= ~eh->mask[i]; } if ((cm = mps_alloc_command(sc)) == NULL) return (EBUSY); evtreq = (MPI2_EVENT_NOTIFICATION_REQUEST *)cm->cm_req; evtreq->Function = MPI2_FUNCTION_EVENT_NOTIFICATION; evtreq->MsgFlags = 0; evtreq->SASBroadcastPrimitiveMasks = 0; #ifdef MPS_DEBUG_ALL_EVENTS { u_char fullmask[16]; memset(fullmask, 0x00, 16); bcopy(fullmask, &evtreq->EventMasks[0], sizeof(u32) * MPI2_EVENT_NOTIFY_EVENTMASK_WORDS); } #else for (i = 0; i < MPI2_EVENT_NOTIFY_EVENTMASK_WORDS; i++) evtreq->EventMasks[i] = htole32(sc->event_mask[i]); #endif cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = NULL; cm->cm_complete = mps_reregister_events_complete; error = mps_map_command(sc, cm); mps_dprint(sc, MPS_TRACE, "%s finished with error %d\n", __func__, error); return (error); } void mps_deregister_events(struct mps_softc *sc, struct mps_event_handle *handle) { TAILQ_REMOVE(&sc->event_list, handle, eh_list); 
free(handle, M_MPT2); } /* * Add a chain element as the next SGE for the specified command. * Reset cm_sge and cm_sglsize to indicate all the available space. */ static int mps_add_chain(struct mps_command *cm) { MPI2_SGE_CHAIN32 *sgc; struct mps_chain *chain; u_int space; if (cm->cm_sglsize < MPS_SGC_SIZE) panic("MPS: Need SGE Error Code\n"); chain = mps_alloc_chain(cm->cm_sc); if (chain == NULL) return (ENOBUFS); space = cm->cm_sc->reqframesz; /* * Note: a double-linked list is used to make it easier to * walk for debugging. */ TAILQ_INSERT_TAIL(&cm->cm_chain_list, chain, chain_link); sgc = (MPI2_SGE_CHAIN32 *)&cm->cm_sge->MpiChain; sgc->Length = htole16(space); sgc->NextChainOffset = 0; /* TODO: looks like a bug in setting sgc->Flags. * sgc->Flags = ( MPI2_SGE_FLAGS_CHAIN_ELEMENT | MPI2_SGE_FLAGS_64_BIT_ADDRESSING | * MPI2_SGE_FLAGS_SYSTEM_ADDRESS) << MPI2_SGE_FLAGS_SHIFT * This is fine, because we are not using a simple element. In the case of * MPI2_SGE_CHAIN32, we have separate Length and Flags fields. */ sgc->Flags = MPI2_SGE_FLAGS_CHAIN_ELEMENT; sgc->Address = htole32(chain->chain_busaddr); cm->cm_sge = (MPI2_SGE_IO_UNION *)&chain->chain->MpiSimple; cm->cm_sglsize = space; return (0); } /* * Add one scatter-gather element (chain, simple, transaction context) * to the scatter-gather list for a command. Maintain cm_sglsize and * cm_sge as the remaining size and pointer to the next SGE to fill * in, respectively.
*/ int mps_push_sge(struct mps_command *cm, void *sgep, size_t len, int segsleft) { MPI2_SGE_TRANSACTION_UNION *tc = sgep; MPI2_SGE_SIMPLE64 *sge = sgep; int error, type; uint32_t saved_buf_len, saved_address_low, saved_address_high; type = (tc->Flags & MPI2_SGE_FLAGS_ELEMENT_MASK); #ifdef INVARIANTS switch (type) { case MPI2_SGE_FLAGS_TRANSACTION_ELEMENT: { if (len != tc->DetailsLength + 4) panic("TC %p length %u or %zu?", tc, tc->DetailsLength + 4, len); } break; case MPI2_SGE_FLAGS_CHAIN_ELEMENT: /* Driver only uses 32-bit chain elements */ if (len != MPS_SGC_SIZE) panic("CHAIN %p length %u or %zu?", sgep, MPS_SGC_SIZE, len); break; case MPI2_SGE_FLAGS_SIMPLE_ELEMENT: /* Driver only uses 64-bit SGE simple elements */ if (len != MPS_SGE64_SIZE) panic("SGE simple %p length %u or %zu?", sge, MPS_SGE64_SIZE, len); if (((le32toh(sge->FlagsLength) >> MPI2_SGE_FLAGS_SHIFT) & MPI2_SGE_FLAGS_ADDRESS_SIZE) == 0) panic("SGE simple %p not marked 64-bit?", sge); break; default: panic("Unexpected SGE %p, flags %02x", tc, tc->Flags); } #endif /* * case 1: 1 more segment, enough room for it * case 2: 2 more segments, enough room for both * case 3: >=2 more segments, only enough room for 1 and a chain * case 4: >=1 more segment, enough room for only a chain * case 5: >=1 more segment, no room for anything (error) */ /* * There should be room for at least a chain element, or this * code is buggy. Case (5). */ if (cm->cm_sglsize < MPS_SGC_SIZE) panic("MPS: Need SGE Error Code\n"); if (segsleft >= 1 && cm->cm_sglsize < len + MPS_SGC_SIZE) { /* * 1 or more segment, enough room for only a chain. * Hope the previous element wasn't a Simple entry * that needed to be marked with * MPI2_SGE_FLAGS_LAST_ELEMENT. Case (4). */ if ((error = mps_add_chain(cm)) != 0) return (error); } if (segsleft >= 2 && cm->cm_sglsize < len + MPS_SGC_SIZE + MPS_SGE64_SIZE) { /* * There are 2 or more segments left to add, and only * enough room for 1 and a chain. Case (3). 
* * Mark as last element in this chain if necessary. */ if (type == MPI2_SGE_FLAGS_SIMPLE_ELEMENT) { sge->FlagsLength |= htole32( MPI2_SGE_FLAGS_LAST_ELEMENT << MPI2_SGE_FLAGS_SHIFT); } /* * Add the item then a chain. Do the chain now, * rather than on the next iteration, to simplify * understanding the code. */ cm->cm_sglsize -= len; bcopy(sgep, cm->cm_sge, len); cm->cm_sge = (MPI2_SGE_IO_UNION *)((uintptr_t)cm->cm_sge + len); return (mps_add_chain(cm)); } #ifdef INVARIANTS /* Case 1: 1 more segment, enough room for it. */ if (segsleft == 1 && cm->cm_sglsize < len) panic("1 seg left and no room? %u versus %zu", cm->cm_sglsize, len); /* Case 2: 2 more segments, enough room for both */ if (segsleft == 2 && cm->cm_sglsize < len + MPS_SGE64_SIZE) panic("2 segs left and no room? %u versus %zu", cm->cm_sglsize, len); #endif if (segsleft == 1 && type == MPI2_SGE_FLAGS_SIMPLE_ELEMENT) { /* * If this is a bi-directional request, need to account for that * here. Save the pre-filled sge values. These will be used * either for the 2nd SGL or for a single direction SGL. If * cm_out_len is non-zero, this is a bi-directional request, so * fill in the OUT SGL first, then the IN SGL, otherwise just * fill in the IN SGL. Note that at this time, when filling in * 2 SGL's for a bi-directional request, they both use the same * DMA buffer (same cm command). 
*/ saved_buf_len = le32toh(sge->FlagsLength) & 0x00FFFFFF; saved_address_low = sge->Address.Low; saved_address_high = sge->Address.High; if (cm->cm_out_len) { sge->FlagsLength = htole32(cm->cm_out_len | ((uint32_t)(MPI2_SGE_FLAGS_SIMPLE_ELEMENT | MPI2_SGE_FLAGS_END_OF_BUFFER | MPI2_SGE_FLAGS_HOST_TO_IOC | MPI2_SGE_FLAGS_64_BIT_ADDRESSING) << MPI2_SGE_FLAGS_SHIFT)); cm->cm_sglsize -= len; bcopy(sgep, cm->cm_sge, len); cm->cm_sge = (MPI2_SGE_IO_UNION *)((uintptr_t)cm->cm_sge + len); } saved_buf_len |= ((uint32_t)(MPI2_SGE_FLAGS_SIMPLE_ELEMENT | MPI2_SGE_FLAGS_END_OF_BUFFER | MPI2_SGE_FLAGS_LAST_ELEMENT | MPI2_SGE_FLAGS_END_OF_LIST | MPI2_SGE_FLAGS_64_BIT_ADDRESSING) << MPI2_SGE_FLAGS_SHIFT); if (cm->cm_flags & MPS_CM_FLAGS_DATAIN) { saved_buf_len |= ((uint32_t)(MPI2_SGE_FLAGS_IOC_TO_HOST) << MPI2_SGE_FLAGS_SHIFT); } else { saved_buf_len |= ((uint32_t)(MPI2_SGE_FLAGS_HOST_TO_IOC) << MPI2_SGE_FLAGS_SHIFT); } sge->FlagsLength = htole32(saved_buf_len); sge->Address.Low = saved_address_low; sge->Address.High = saved_address_high; } cm->cm_sglsize -= len; bcopy(sgep, cm->cm_sge, len); cm->cm_sge = (MPI2_SGE_IO_UNION *)((uintptr_t)cm->cm_sge + len); return (0); } /* * Add one dma segment to the scatter-gather list for a command. */ int mps_add_dmaseg(struct mps_command *cm, vm_paddr_t pa, size_t len, u_int flags, int segsleft) { MPI2_SGE_SIMPLE64 sge; /* * This driver always uses 64-bit address elements for simplicity. */ bzero(&sge, sizeof(sge)); flags |= MPI2_SGE_FLAGS_SIMPLE_ELEMENT | MPI2_SGE_FLAGS_64_BIT_ADDRESSING; sge.FlagsLength = htole32(len | (flags << MPI2_SGE_FLAGS_SHIFT)); mps_from_u64(pa, &sge.Address); return (mps_push_sge(cm, &sge, sizeof sge, segsleft)); } static void mps_data_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error) { struct mps_softc *sc; struct mps_command *cm; u_int i, dir, sflags; cm = (struct mps_command *)arg; sc = cm->cm_sc; /* * In this case, just print out a warning and let the chip tell the * user they did the wrong thing. 
*/ if ((cm->cm_max_segs != 0) && (nsegs > cm->cm_max_segs)) { mps_dprint(sc, MPS_ERROR, "%s: warning: busdma returned %d segments, " "more than the %d allowed\n", __func__, nsegs, cm->cm_max_segs); } /* * Set up DMA direction flags. Bi-directional requests are also handled * here. In that case, both direction flags will be set. */ sflags = 0; if (cm->cm_flags & MPS_CM_FLAGS_SMP_PASS) { /* * We have to add a special case for SMP passthrough, there * is no easy way to generically handle it. The first * S/G element is used for the command (therefore the * direction bit needs to be set). The second one is used * for the reply. We'll leave it to the caller to make * sure we only have two buffers. */ /* * Even though the busdma man page says it doesn't make * sense to have both direction flags, it does in this case. * We have one s/g element being accessed in each direction. */ dir = BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD; /* * Set the direction flag on the first buffer in the SMP * passthrough request. We'll clear it for the second one. */ sflags |= MPI2_SGE_FLAGS_DIRECTION | MPI2_SGE_FLAGS_END_OF_BUFFER; } else if (cm->cm_flags & MPS_CM_FLAGS_DATAOUT) { sflags |= MPI2_SGE_FLAGS_HOST_TO_IOC; dir = BUS_DMASYNC_PREWRITE; } else dir = BUS_DMASYNC_PREREAD; for (i = 0; i < nsegs; i++) { if ((cm->cm_flags & MPS_CM_FLAGS_SMP_PASS) && (i != 0)) { sflags &= ~MPI2_SGE_FLAGS_DIRECTION; } error = mps_add_dmaseg(cm, segs[i].ds_addr, segs[i].ds_len, sflags, nsegs - i); if (error != 0) { /* Resource shortage, roll back! 
			 */
			if (ratecheck(&sc->lastfail, &mps_chainfail_interval))
				mps_dprint(sc, MPS_INFO, "Out of chain frames, "
				    "consider increasing hw.mps.max_chains.\n");
			cm->cm_flags |= MPS_CM_FLAGS_CHAIN_FAILED;
			mps_complete_command(sc, cm);
			return;
		}
	}

	bus_dmamap_sync(sc->buffer_dmat, cm->cm_dmamap, dir);

	mps_enqueue_request(sc, cm);

	return;
}

static void
mps_data_cb2(void *arg, bus_dma_segment_t *segs, int nsegs, bus_size_t mapsize,
    int error)
{

	mps_data_cb(arg, segs, nsegs, error);
}

/*
 * This is the routine to enqueue commands asynchronously.
 * Note that the only error path here is from bus_dmamap_load(), which can
 * return EINPROGRESS if it is waiting for resources.  Other than this, it's
 * assumed that if you have a command in-hand, then you have enough credits
 * to use it.
 */
int
mps_map_command(struct mps_softc *sc, struct mps_command *cm)
{
	int error = 0;

	if (cm->cm_flags & MPS_CM_FLAGS_USE_UIO) {
		error = bus_dmamap_load_uio(sc->buffer_dmat, cm->cm_dmamap,
		    &cm->cm_uio, mps_data_cb2, cm, 0);
	} else if (cm->cm_flags & MPS_CM_FLAGS_USE_CCB) {
		error = bus_dmamap_load_ccb(sc->buffer_dmat, cm->cm_dmamap,
		    cm->cm_data, mps_data_cb, cm, 0);
	} else if ((cm->cm_data != NULL) && (cm->cm_length != 0)) {
		error = bus_dmamap_load(sc->buffer_dmat, cm->cm_dmamap,
		    cm->cm_data, cm->cm_length, mps_data_cb, cm, 0);
	} else {
		/* Add a zero-length element as needed */
		if (cm->cm_sge != NULL)
			mps_add_dmaseg(cm, 0, 0, 0, 1);
		mps_enqueue_request(sc, cm);
	}

	return (error);
}

/*
 * This is the routine to enqueue commands synchronously.  An error of
 * EINPROGRESS from mps_map_command() is ignored since the command will
 * be executed and enqueued automatically.  Other errors come from msleep().
*/ int mps_wait_command(struct mps_softc *sc, struct mps_command **cmp, int timeout, int sleep_flag) { int error, rc; struct timeval cur_time, start_time; struct mps_command *cm = *cmp; if (sc->mps_flags & MPS_FLAGS_DIAGRESET) return EBUSY; cm->cm_complete = NULL; cm->cm_flags |= MPS_CM_FLAGS_POLLED; error = mps_map_command(sc, cm); if ((error != 0) && (error != EINPROGRESS)) return (error); /* * Check for context and wait for 50 mSec at a time until time has * expired or the command has finished. If msleep can't be used, need * to poll. */ if (curthread->td_no_sleeping != 0) sleep_flag = NO_SLEEP; getmicrouptime(&start_time); if (mtx_owned(&sc->mps_mtx) && sleep_flag == CAN_SLEEP) { cm->cm_flags |= MPS_CM_FLAGS_WAKEUP; error = msleep(cm, &sc->mps_mtx, 0, "mpswait", timeout*hz); if (error == EWOULDBLOCK) { /* * Record the actual elapsed time in the case of a * timeout for the message below. */ getmicrouptime(&cur_time); timevalsub(&cur_time, &start_time); } } else { while ((cm->cm_flags & MPS_CM_FLAGS_COMPLETE) == 0) { mps_intr_locked(sc); if (sleep_flag == CAN_SLEEP) pause("mpswait", hz/20); else DELAY(50000); getmicrouptime(&cur_time); timevalsub(&cur_time, &start_time); if (cur_time.tv_sec > timeout) { error = EWOULDBLOCK; break; } } } if (error == EWOULDBLOCK) { - mps_dprint(sc, MPS_FAULT, "Calling Reinit from %s, timeout=%d," - " elapsed=%jd\n", __func__, timeout, - (intmax_t)cur_time.tv_sec); - rc = mps_reinit(sc); - mps_dprint(sc, MPS_FAULT, "Reinit %s\n", (rc == 0) ? "success" : - "failed"); + if (cm->cm_timeout_handler == NULL) { + mps_dprint(sc, MPS_FAULT, "Calling Reinit from %s, timeout=%d," + " elapsed=%jd\n", __func__, timeout, + (intmax_t)cur_time.tv_sec); + rc = mps_reinit(sc); + mps_dprint(sc, MPS_FAULT, "Reinit %s\n", (rc == 0) ? "success" : + "failed"); + } else + cm->cm_timeout_handler(sc, cm); if (sc->mps_flags & MPS_FLAGS_REALLOCATED) { /* * Tell the caller that we freed the command in a * reinit. 
*/ *cmp = NULL; } error = ETIMEDOUT; } return (error); } /* * The MPT driver had a verbose interface for config pages. In this driver, * reduce it to much simpler terms, similar to the Linux driver. */ int mps_read_config_page(struct mps_softc *sc, struct mps_config_params *params) { MPI2_CONFIG_REQUEST *req; struct mps_command *cm; int error; if (sc->mps_flags & MPS_FLAGS_BUSY) { return (EBUSY); } cm = mps_alloc_command(sc); if (cm == NULL) { return (EBUSY); } req = (MPI2_CONFIG_REQUEST *)cm->cm_req; req->Function = MPI2_FUNCTION_CONFIG; req->Action = params->action; req->SGLFlags = 0; req->ChainOffset = 0; req->PageAddress = params->page_address; if (params->hdr.Struct.PageType == MPI2_CONFIG_PAGETYPE_EXTENDED) { MPI2_CONFIG_EXTENDED_PAGE_HEADER *hdr; hdr = ¶ms->hdr.Ext; req->ExtPageType = hdr->ExtPageType; req->ExtPageLength = hdr->ExtPageLength; req->Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; req->Header.PageLength = 0; /* Must be set to zero */ req->Header.PageNumber = hdr->PageNumber; req->Header.PageVersion = hdr->PageVersion; } else { MPI2_CONFIG_PAGE_HEADER *hdr; hdr = ¶ms->hdr.Struct; req->Header.PageType = hdr->PageType; req->Header.PageNumber = hdr->PageNumber; req->Header.PageLength = hdr->PageLength; req->Header.PageVersion = hdr->PageVersion; } cm->cm_data = params->buffer; cm->cm_length = params->length; if (cm->cm_data != NULL) { cm->cm_sge = &req->PageBufferSGE; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPS_CM_FLAGS_SGE_SIMPLE | MPS_CM_FLAGS_DATAIN; } else cm->cm_sge = NULL; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_complete_data = params; if (params->callback != NULL) { cm->cm_complete = mps_config_complete; return (mps_map_command(sc, cm)); } else { error = mps_wait_command(sc, &cm, 0, CAN_SLEEP); if (error) { mps_dprint(sc, MPS_FAULT, "Error %d reading config page\n", error); if (cm != NULL) mps_free_command(sc, cm); return (error); } mps_config_complete(sc, cm); } return (0); } int 
mps_write_config_page(struct mps_softc *sc, struct mps_config_params *params) { return (EINVAL); } static void mps_config_complete(struct mps_softc *sc, struct mps_command *cm) { MPI2_CONFIG_REPLY *reply; struct mps_config_params *params; MPS_FUNCTRACE(sc); params = cm->cm_complete_data; if (cm->cm_data != NULL) { bus_dmamap_sync(sc->buffer_dmat, cm->cm_dmamap, BUS_DMASYNC_POSTREAD); bus_dmamap_unload(sc->buffer_dmat, cm->cm_dmamap); } /* * XXX KDM need to do more error recovery? This results in the * device in question not getting probed. */ if ((cm->cm_flags & MPS_CM_FLAGS_ERROR_MASK) != 0) { params->status = MPI2_IOCSTATUS_BUSY; goto done; } reply = (MPI2_CONFIG_REPLY *)cm->cm_reply; if (reply == NULL) { params->status = MPI2_IOCSTATUS_BUSY; goto done; } params->status = reply->IOCStatus; if (params->hdr.Struct.PageType == MPI2_CONFIG_PAGETYPE_EXTENDED) { params->hdr.Ext.ExtPageType = reply->ExtPageType; params->hdr.Ext.ExtPageLength = reply->ExtPageLength; params->hdr.Ext.PageType = reply->Header.PageType; params->hdr.Ext.PageNumber = reply->Header.PageNumber; params->hdr.Ext.PageVersion = reply->Header.PageVersion; } else { params->hdr.Struct.PageType = reply->Header.PageType; params->hdr.Struct.PageNumber = reply->Header.PageNumber; params->hdr.Struct.PageLength = reply->Header.PageLength; params->hdr.Struct.PageVersion = reply->Header.PageVersion; } done: mps_free_command(sc, cm); if (params->callback != NULL) params->callback(sc, params); return; } Index: releng/12.1/sys/dev/mps/mps_sas.c =================================================================== --- releng/12.1/sys/dev/mps/mps_sas.c (revision 352760) +++ releng/12.1/sys/dev/mps/mps_sas.c (revision 352761) @@ -1,3636 +1,3641 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2009 Yahoo! Inc. * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2015 Avago Technologies * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ #include __FBSDID("$FreeBSD$"); /* Communications core for Avago Technologies (LSI) MPT2 */ /* TODO Move headers to mpsvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #if __FreeBSD_version >= 900026 #include #endif #include #include #include #include #include #include #include #include #include #include #include #define MPSSAS_DISCOVERY_TIMEOUT 20 #define MPSSAS_MAX_DISCOVERY_TIMEOUTS 10 /* 200 seconds */ /* * static array to check SCSI OpCode for EEDP protection bits */ #define PRO_R MPI2_SCSIIO_EEDPFLAGS_CHECK_REMOVE_OP #define PRO_W MPI2_SCSIIO_EEDPFLAGS_INSERT_OP #define PRO_V MPI2_SCSIIO_EEDPFLAGS_INSERT_OP static uint8_t op_code_prot[256] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, PRO_R, 0, PRO_W, 0, 0, 0, PRO_W, PRO_V, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, PRO_W, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, PRO_R, 0, PRO_W, 0, 0, 0, PRO_W, PRO_V, 0, 0, 0, PRO_W, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, PRO_R, 0, PRO_W, 0, 0, 0, PRO_W, PRO_V, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; MALLOC_DEFINE(M_MPSSAS, "MPSSAS", "MPS SAS memory"); static void mpssas_remove_device(struct mps_softc *, struct mps_command *); static void mpssas_remove_complete(struct mps_softc *, struct 
mps_command *); static void mpssas_action(struct cam_sim *sim, union ccb *ccb); static void mpssas_poll(struct cam_sim *sim); static int mpssas_send_abort(struct mps_softc *sc, struct mps_command *tm, struct mps_command *cm); static void mpssas_scsiio_timeout(void *data); static void mpssas_abort_complete(struct mps_softc *sc, struct mps_command *cm); static void mpssas_direct_drive_io(struct mpssas_softc *sassc, struct mps_command *cm, union ccb *ccb); static void mpssas_action_scsiio(struct mpssas_softc *, union ccb *); static void mpssas_scsiio_complete(struct mps_softc *, struct mps_command *); static void mpssas_action_resetdev(struct mpssas_softc *, union ccb *); #if __FreeBSD_version >= 900026 static void mpssas_smpio_complete(struct mps_softc *sc, struct mps_command *cm); static void mpssas_send_smpcmd(struct mpssas_softc *sassc, union ccb *ccb, uint64_t sasaddr); static void mpssas_action_smpio(struct mpssas_softc *sassc, union ccb *ccb); #endif //FreeBSD_version >= 900026 static void mpssas_resetdev_complete(struct mps_softc *, struct mps_command *); static void mpssas_async(void *callback_arg, uint32_t code, struct cam_path *path, void *arg); #if (__FreeBSD_version < 901503) || \ ((__FreeBSD_version >= 1000000) && (__FreeBSD_version < 1000006)) static void mpssas_check_eedp(struct mps_softc *sc, struct cam_path *path, struct ccb_getdev *cgd); static void mpssas_read_cap_done(struct cam_periph *periph, union ccb *done_ccb); #endif static int mpssas_send_portenable(struct mps_softc *sc); static void mpssas_portenable_complete(struct mps_softc *sc, struct mps_command *cm); struct mpssas_target * mpssas_find_target_by_handle(struct mpssas_softc *sassc, int start, uint16_t handle) { struct mpssas_target *target; int i; for (i = start; i < sassc->maxtargets; i++) { target = &sassc->targets[i]; if (target->handle == handle) return (target); } return (NULL); } /* we need to freeze the simq during attach and diag reset, to avoid failing * commands before device 
handles have been found by discovery. Since * discovery involves reading config pages and possibly sending commands, * discovery actions may continue even after we receive the end of discovery * event, so refcount discovery actions instead of assuming we can unfreeze * the simq when we get the event. */ void mpssas_startup_increment(struct mpssas_softc *sassc) { MPS_FUNCTRACE(sassc->sc); if ((sassc->flags & MPSSAS_IN_STARTUP) != 0) { if (sassc->startup_refcount++ == 0) { /* just starting, freeze the simq */ mps_dprint(sassc->sc, MPS_INIT, "%s freezing simq\n", __func__); #if __FreeBSD_version >= 1000039 xpt_hold_boot(); #endif xpt_freeze_simq(sassc->sim, 1); } mps_dprint(sassc->sc, MPS_INIT, "%s refcount %u\n", __func__, sassc->startup_refcount); } } void mpssas_release_simq_reinit(struct mpssas_softc *sassc) { if (sassc->flags & MPSSAS_QUEUE_FROZEN) { sassc->flags &= ~MPSSAS_QUEUE_FROZEN; xpt_release_simq(sassc->sim, 1); mps_dprint(sassc->sc, MPS_INFO, "Unfreezing SIM queue\n"); } } void mpssas_startup_decrement(struct mpssas_softc *sassc) { MPS_FUNCTRACE(sassc->sc); if ((sassc->flags & MPSSAS_IN_STARTUP) != 0) { if (--sassc->startup_refcount == 0) { /* finished all discovery-related actions, release * the simq and rescan for the latest topology. */ mps_dprint(sassc->sc, MPS_INIT, "%s releasing simq\n", __func__); sassc->flags &= ~MPSSAS_IN_STARTUP; xpt_release_simq(sassc->sim, 1); #if __FreeBSD_version >= 1000039 xpt_release_boot(); #else mpssas_rescan_target(sassc->sc, NULL); #endif } mps_dprint(sassc->sc, MPS_INIT, "%s refcount %u\n", __func__, sassc->startup_refcount); } } -/* The firmware requires us to stop sending commands when we're doing task - * management, so refcount the TMs and keep the simq frozen when any are in - * use. +/* + * The firmware requires us to stop sending commands when we're doing task + * management. + * XXX The logic for serializing the device has been made lazy and moved to + * mpssas_prepare_for_tm(). 
*/ struct mps_command * mpssas_alloc_tm(struct mps_softc *sc) { + MPI2_SCSI_TASK_MANAGE_REQUEST *req; struct mps_command *tm; tm = mps_alloc_high_priority_command(sc); + if (tm == NULL) + return (NULL); + + req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; + req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT; return tm; } void mpssas_free_tm(struct mps_softc *sc, struct mps_command *tm) { int target_id = 0xFFFFFFFF; if (tm == NULL) return; /* * For TM's the devq is frozen for the device. Unfreeze it here and * free the resources used for freezing the devq. Must clear the * INRESET flag as well or scsi I/O will not work. */ if (tm->cm_targ != NULL) { tm->cm_targ->flags &= ~MPSSAS_TARGET_INRESET; target_id = tm->cm_targ->tid; } if (tm->cm_ccb) { mps_dprint(sc, MPS_INFO, "Unfreezing devq for target ID %d\n", target_id); xpt_release_devq(tm->cm_ccb->ccb_h.path, 1, TRUE); xpt_free_path(tm->cm_ccb->ccb_h.path); xpt_free_ccb(tm->cm_ccb); } mps_free_high_priority_command(sc, tm); } void mpssas_rescan_target(struct mps_softc *sc, struct mpssas_target *targ) { struct mpssas_softc *sassc = sc->sassc; path_id_t pathid; target_id_t targetid; union ccb *ccb; MPS_FUNCTRACE(sc); pathid = cam_sim_path(sassc->sim); if (targ == NULL) targetid = CAM_TARGET_WILDCARD; else targetid = targ - sassc->targets; /* * Allocate a CCB and schedule a rescan. */ ccb = xpt_alloc_ccb_nowait(); if (ccb == NULL) { mps_dprint(sc, MPS_ERROR, "unable to alloc CCB for rescan\n"); return; } if (xpt_create_path(&ccb->ccb_h.path, NULL, pathid, targetid, CAM_LUN_WILDCARD) != CAM_REQ_CMP) { mps_dprint(sc, MPS_ERROR, "unable to create path for rescan\n"); xpt_free_ccb(ccb); return; } if (targetid == CAM_TARGET_WILDCARD) ccb->ccb_h.func_code = XPT_SCAN_BUS; else ccb->ccb_h.func_code = XPT_SCAN_TGT; mps_dprint(sc, MPS_TRACE, "%s targetid %u\n", __func__, targetid); xpt_rescan(ccb); } static void mpssas_log_command(struct mps_command *cm, u_int level, const char *fmt, ...) 
{ struct sbuf sb; va_list ap; char str[192]; char path_str[64]; if (cm == NULL) return; /* No need to be in here if debugging isn't enabled */ if ((cm->cm_sc->mps_debug & level) == 0) return; sbuf_new(&sb, str, sizeof(str), 0); va_start(ap, fmt); if (cm->cm_ccb != NULL) { xpt_path_string(cm->cm_ccb->csio.ccb_h.path, path_str, sizeof(path_str)); sbuf_cat(&sb, path_str); if (cm->cm_ccb->ccb_h.func_code == XPT_SCSI_IO) { scsi_command_string(&cm->cm_ccb->csio, &sb); sbuf_printf(&sb, "length %d ", cm->cm_ccb->csio.dxfer_len); } } else { sbuf_printf(&sb, "(noperiph:%s%d:%u:%u:%u): ", cam_sim_name(cm->cm_sc->sassc->sim), cam_sim_unit(cm->cm_sc->sassc->sim), cam_sim_bus(cm->cm_sc->sassc->sim), cm->cm_targ ? cm->cm_targ->tid : 0xFFFFFFFF, cm->cm_lun); } sbuf_printf(&sb, "SMID %u ", cm->cm_desc.Default.SMID); sbuf_vprintf(&sb, fmt, ap); sbuf_finish(&sb); mps_print_field(cm->cm_sc, "%s", sbuf_data(&sb)); va_end(ap); } static void mpssas_remove_volume(struct mps_softc *sc, struct mps_command *tm) { MPI2_SCSI_TASK_MANAGE_REPLY *reply; struct mpssas_target *targ; uint16_t handle; MPS_FUNCTRACE(sc); reply = (MPI2_SCSI_TASK_MANAGE_REPLY *)tm->cm_reply; handle = (uint16_t)(uintptr_t)tm->cm_complete_data; targ = tm->cm_targ; if (reply == NULL) { /* XXX retry the remove after the diag reset completes? */ mps_dprint(sc, MPS_FAULT, "%s NULL reply resetting device 0x%04x\n", __func__, handle); mpssas_free_tm(sc, tm); return; } if ((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) { mps_dprint(sc, MPS_ERROR, "IOCStatus = 0x%x while resetting device 0x%x\n", le16toh(reply->IOCStatus), handle); } mps_dprint(sc, MPS_XINFO, "Reset aborted %u commands\n", reply->TerminationCount); mps_free_reply(sc, tm->cm_reply_data); tm->cm_reply = NULL; /* Ensures the reply won't get re-freed */ mps_dprint(sc, MPS_XINFO, "clearing target %u handle 0x%04x\n", targ->tid, handle); /* * Don't clear target if remove fails because things will get confusing. 
* Leave the devname and sasaddr intact so that we know to avoid reusing * this target id if possible, and so we can assign the same target id * to this device if it comes back in the future. */ if ((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) == MPI2_IOCSTATUS_SUCCESS) { targ = tm->cm_targ; targ->handle = 0x0; targ->encl_handle = 0x0; targ->encl_slot = 0x0; targ->exp_dev_handle = 0x0; targ->phy_num = 0x0; targ->linkrate = 0x0; targ->devinfo = 0x0; targ->flags = 0x0; } mpssas_free_tm(sc, tm); } /* * No Need to call "MPI2_SAS_OP_REMOVE_DEVICE" For Volume removal. * Otherwise Volume Delete is same as Bare Drive Removal. */ void mpssas_prepare_volume_remove(struct mpssas_softc *sassc, uint16_t handle) { MPI2_SCSI_TASK_MANAGE_REQUEST *req; struct mps_softc *sc; - struct mps_command *cm; + struct mps_command *tm; struct mpssas_target *targ = NULL; MPS_FUNCTRACE(sassc->sc); sc = sassc->sc; #ifdef WD_SUPPORT /* * If this is a WD controller, determine if the disk should be exposed * to the OS or not. If disk should be exposed, return from this * function without doing anything. */ if (sc->WD_available && (sc->WD_hide_expose == MPS_WD_EXPOSE_ALWAYS)) { return; } #endif //WD_SUPPORT targ = mpssas_find_target_by_handle(sassc, 0, handle); if (targ == NULL) { /* FIXME: what is the action? */ /* We don't know about this device? 
		 */
		mps_dprint(sc, MPS_ERROR,
		    "%s %d : invalid handle 0x%x \n", __func__,__LINE__, handle);
		return;
	}

	targ->flags |= MPSSAS_TARGET_INREMOVAL;

-	cm = mpssas_alloc_tm(sc);
-	if (cm == NULL) {
+	tm = mpssas_alloc_tm(sc);
+	if (tm == NULL) {
		mps_dprint(sc, MPS_ERROR,
		    "%s: command alloc failure\n", __func__);
		return;
	}

	mpssas_rescan_target(sc, targ);

-	req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)cm->cm_req;
+	req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req;
	req->DevHandle = targ->handle;
-	req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT;
	req->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET;

	/* SAS Hard Link Reset / SATA Link Reset */
	req->MsgFlags = MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET;

-	cm->cm_targ = targ;
-	cm->cm_data = NULL;
-	cm->cm_desc.HighPriority.RequestFlags =
-	    MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY;
-	cm->cm_complete = mpssas_remove_volume;
-	cm->cm_complete_data = (void *)(uintptr_t)handle;
+	tm->cm_targ = targ;
+	tm->cm_data = NULL;
+	tm->cm_complete = mpssas_remove_volume;
+	tm->cm_complete_data = (void *)(uintptr_t)handle;

	mps_dprint(sc, MPS_INFO, "%s: Sending reset for target ID %d\n",
	    __func__, targ->tid);
-	mpssas_prepare_for_tm(sc, cm, targ, CAM_LUN_WILDCARD);
+	mpssas_prepare_for_tm(sc, tm, targ, CAM_LUN_WILDCARD);

-	mps_map_command(sc, cm);
+	mps_map_command(sc, tm);
}

/*
 * The MPT2 firmware performs debounce on the link to avoid transient link
 * errors and false removals.  When it does decide that the link has been lost
 * and a device needs to go away, it expects that the host will perform a
 * target reset and then an op remove.  The reset has the side-effect of
 * aborting any outstanding requests for the device, which is required for
 * the op-remove to succeed.  It's not clear if the host should check for
 * the device coming back alive after the reset.
*/ void mpssas_prepare_remove(struct mpssas_softc *sassc, uint16_t handle) { MPI2_SCSI_TASK_MANAGE_REQUEST *req; struct mps_softc *sc; struct mps_command *cm; struct mpssas_target *targ = NULL; MPS_FUNCTRACE(sassc->sc); sc = sassc->sc; targ = mpssas_find_target_by_handle(sassc, 0, handle); if (targ == NULL) { /* FIXME: what is the action? */ /* We don't know about this device? */ mps_dprint(sc, MPS_ERROR, "%s : invalid handle 0x%x \n", __func__, handle); return; } targ->flags |= MPSSAS_TARGET_INREMOVAL; cm = mpssas_alloc_tm(sc); if (cm == NULL) { mps_dprint(sc, MPS_ERROR, "%s: command alloc failure\n", __func__); return; } mpssas_rescan_target(sc, targ); req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)cm->cm_req; memset(req, 0, sizeof(*req)); req->DevHandle = htole16(targ->handle); - req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT; req->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET; /* SAS Hard Link Reset / SATA Link Reset */ req->MsgFlags = MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET; cm->cm_targ = targ; cm->cm_data = NULL; - cm->cm_desc.HighPriority.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY; cm->cm_complete = mpssas_remove_device; cm->cm_complete_data = (void *)(uintptr_t)handle; mps_dprint(sc, MPS_INFO, "%s: Sending reset for target ID %d\n", __func__, targ->tid); mpssas_prepare_for_tm(sc, cm, targ, CAM_LUN_WILDCARD); mps_map_command(sc, cm); } static void mpssas_remove_device(struct mps_softc *sc, struct mps_command *tm) { MPI2_SCSI_TASK_MANAGE_REPLY *reply; MPI2_SAS_IOUNIT_CONTROL_REQUEST *req; struct mpssas_target *targ; struct mps_command *next_cm; uint16_t handle; MPS_FUNCTRACE(sc); reply = (MPI2_SCSI_TASK_MANAGE_REPLY *)tm->cm_reply; handle = (uint16_t)(uintptr_t)tm->cm_complete_data; targ = tm->cm_targ; /* * Currently there should be no way we can hit this case. It only * happens when we have a failure to allocate chain frames, and * task management commands don't have S/G lists. 
*/ if ((tm->cm_flags & MPS_CM_FLAGS_ERROR_MASK) != 0) { mps_dprint(sc, MPS_ERROR, "%s: cm_flags = %#x for remove of handle %#04x! " "This should not happen!\n", __func__, tm->cm_flags, handle); } if (reply == NULL) { /* XXX retry the remove after the diag reset completes? */ mps_dprint(sc, MPS_FAULT, "%s NULL reply resetting device 0x%04x\n", __func__, handle); mpssas_free_tm(sc, tm); return; } if ((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) { mps_dprint(sc, MPS_ERROR, "IOCStatus = 0x%x while resetting device 0x%x\n", le16toh(reply->IOCStatus), handle); } mps_dprint(sc, MPS_XINFO, "Reset aborted %u commands\n", le32toh(reply->TerminationCount)); mps_free_reply(sc, tm->cm_reply_data); tm->cm_reply = NULL; /* Ensures the reply won't get re-freed */ /* Reuse the existing command */ req = (MPI2_SAS_IOUNIT_CONTROL_REQUEST *)tm->cm_req; memset(req, 0, sizeof(*req)); req->Function = MPI2_FUNCTION_SAS_IO_UNIT_CONTROL; req->Operation = MPI2_SAS_OP_REMOVE_DEVICE; req->DevHandle = htole16(handle); tm->cm_data = NULL; tm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; tm->cm_complete = mpssas_remove_complete; tm->cm_complete_data = (void *)(uintptr_t)handle; mps_map_command(sc, tm); mps_dprint(sc, MPS_XINFO, "clearing target %u handle 0x%04x\n", targ->tid, handle); TAILQ_FOREACH_SAFE(tm, &targ->commands, cm_link, next_cm) { union ccb *ccb; mps_dprint(sc, MPS_XINFO, "Completing missed command %p\n", tm); ccb = tm->cm_complete_data; mpssas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); mpssas_scsiio_complete(sc, tm); } } static void mpssas_remove_complete(struct mps_softc *sc, struct mps_command *tm) { MPI2_SAS_IOUNIT_CONTROL_REPLY *reply; uint16_t handle; struct mpssas_target *targ; struct mpssas_lun *lun; MPS_FUNCTRACE(sc); reply = (MPI2_SAS_IOUNIT_CONTROL_REPLY *)tm->cm_reply; handle = (uint16_t)(uintptr_t)tm->cm_complete_data; /* * Currently there should be no way we can hit this case. 
It only * happens when we have a failure to allocate chain frames, and * task management commands don't have S/G lists. */ if ((tm->cm_flags & MPS_CM_FLAGS_ERROR_MASK) != 0) { mps_dprint(sc, MPS_XINFO, "%s: cm_flags = %#x for remove of handle %#04x! " "This should not happen!\n", __func__, tm->cm_flags, handle); mpssas_free_tm(sc, tm); return; } if (reply == NULL) { /* most likely a chip reset */ mps_dprint(sc, MPS_FAULT, "%s NULL reply removing device 0x%04x\n", __func__, handle); mpssas_free_tm(sc, tm); return; } mps_dprint(sc, MPS_XINFO, "%s on handle 0x%04x, IOCStatus= 0x%x\n", __func__, handle, le16toh(reply->IOCStatus)); /* * Don't clear target if remove fails because things will get confusing. * Leave the devname and sasaddr intact so that we know to avoid reusing * this target id if possible, and so we can assign the same target id * to this device if it comes back in the future. */ if ((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) == MPI2_IOCSTATUS_SUCCESS) { targ = tm->cm_targ; targ->handle = 0x0; targ->encl_handle = 0x0; targ->encl_slot = 0x0; targ->exp_dev_handle = 0x0; targ->phy_num = 0x0; targ->linkrate = 0x0; targ->devinfo = 0x0; targ->flags = 0x0; while(!SLIST_EMPTY(&targ->luns)) { lun = SLIST_FIRST(&targ->luns); SLIST_REMOVE_HEAD(&targ->luns, lun_link); free(lun, M_MPT2); } } mpssas_free_tm(sc, tm); } static int mpssas_register_events(struct mps_softc *sc) { u32 events[MPI2_EVENT_NOTIFY_EVENTMASK_WORDS]; bzero(events, 16); setbit(events, MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE); setbit(events, MPI2_EVENT_SAS_DISCOVERY); setbit(events, MPI2_EVENT_SAS_BROADCAST_PRIMITIVE); setbit(events, MPI2_EVENT_SAS_INIT_DEVICE_STATUS_CHANGE); setbit(events, MPI2_EVENT_SAS_INIT_TABLE_OVERFLOW); setbit(events, MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST); setbit(events, MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE); setbit(events, MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST); setbit(events, MPI2_EVENT_IR_VOLUME); setbit(events, MPI2_EVENT_IR_PHYSICAL_DISK); setbit(events, 
MPI2_EVENT_IR_OPERATION_STATUS); setbit(events, MPI2_EVENT_LOG_ENTRY_ADDED); mps_register_events(sc, events, mpssas_evt_handler, NULL, &sc->sassc->mpssas_eh); return (0); } int mps_attach_sas(struct mps_softc *sc) { struct mpssas_softc *sassc; cam_status status; int unit, error = 0, reqs; MPS_FUNCTRACE(sc); mps_dprint(sc, MPS_INIT, "%s entered\n", __func__); sassc = malloc(sizeof(struct mpssas_softc), M_MPT2, M_WAITOK|M_ZERO); if(!sassc) { mps_dprint(sc, MPS_INIT|MPS_ERROR, "Cannot allocate SAS controller memory\n"); return (ENOMEM); } /* * XXX MaxTargets could change during a reinit. Since we don't * resize the targets[] array during such an event, cache the value * of MaxTargets here so that we don't get into trouble later. This * should move into the reinit logic. */ sassc->maxtargets = sc->facts->MaxTargets + sc->facts->MaxVolumes; sassc->targets = malloc(sizeof(struct mpssas_target) * sassc->maxtargets, M_MPT2, M_WAITOK|M_ZERO); if(!sassc->targets) { mps_dprint(sc, MPS_INIT|MPS_ERROR, "Cannot allocate SAS target memory\n"); free(sassc, M_MPT2); return (ENOMEM); } sc->sassc = sassc; sassc->sc = sc; reqs = sc->num_reqs - sc->num_prireqs - 1; if ((sassc->devq = cam_simq_alloc(reqs)) == NULL) { mps_dprint(sc, MPS_ERROR, "Cannot allocate SIMQ\n"); error = ENOMEM; goto out; } unit = device_get_unit(sc->mps_dev); sassc->sim = cam_sim_alloc(mpssas_action, mpssas_poll, "mps", sassc, unit, &sc->mps_mtx, reqs, reqs, sassc->devq); if (sassc->sim == NULL) { mps_dprint(sc, MPS_INIT|MPS_ERROR, "Cannot allocate SIM\n"); error = EINVAL; goto out; } TAILQ_INIT(&sassc->ev_queue); /* Initialize taskqueue for Event Handling */ TASK_INIT(&sassc->ev_task, 0, mpssas_firmware_event_work, sc); sassc->ev_tq = taskqueue_create("mps_taskq", M_NOWAIT | M_ZERO, taskqueue_thread_enqueue, &sassc->ev_tq); taskqueue_start_threads(&sassc->ev_tq, 1, PRIBIO, "%s taskq", device_get_nameunit(sc->mps_dev)); mps_lock(sc); /* * XXX There should be a bus for every port on the adapter, but since * we're 
just going to fake the topology for now, we'll pretend that * everything is just a target on a single bus. */ if ((error = xpt_bus_register(sassc->sim, sc->mps_dev, 0)) != 0) { mps_dprint(sc, MPS_INIT|MPS_ERROR, "Error %d registering SCSI bus\n", error); mps_unlock(sc); goto out; } /* * Assume that discovery events will start right away. * * Hold off boot until discovery is complete. */ sassc->flags |= MPSSAS_IN_STARTUP | MPSSAS_IN_DISCOVERY; sc->sassc->startup_refcount = 0; mpssas_startup_increment(sassc); callout_init(&sassc->discovery_callout, 1 /*mpsafe*/); /* * Register for async events so we can determine the EEDP * capabilities of devices. */ status = xpt_create_path(&sassc->path, /*periph*/NULL, cam_sim_path(sc->sassc->sim), CAM_TARGET_WILDCARD, CAM_LUN_WILDCARD); if (status != CAM_REQ_CMP) { mps_dprint(sc, MPS_ERROR|MPS_INIT, "Error %#x creating sim path\n", status); sassc->path = NULL; } else { int event; #if (__FreeBSD_version >= 1000006) || \ ((__FreeBSD_version >= 901503) && (__FreeBSD_version < 1000000)) event = AC_ADVINFO_CHANGED; #else event = AC_FOUND_DEVICE; #endif status = xpt_register_async(event, mpssas_async, sc, sassc->path); if (status != CAM_REQ_CMP) { mps_dprint(sc, MPS_ERROR, "Error %#x registering async handler for " "AC_ADVINFO_CHANGED events\n", status); xpt_free_path(sassc->path); sassc->path = NULL; } } if (status != CAM_REQ_CMP) { /* * EEDP use is the exception, not the rule. * Warn the user, but do not fail to attach. 
*/ mps_printf(sc, "EEDP capabilities disabled.\n"); } mps_unlock(sc); mpssas_register_events(sc); out: if (error) mps_detach_sas(sc); mps_dprint(sc, MPS_INIT, "%s exit error= %d\n", __func__, error); return (error); } int mps_detach_sas(struct mps_softc *sc) { struct mpssas_softc *sassc; struct mpssas_lun *lun, *lun_tmp; struct mpssas_target *targ; int i; MPS_FUNCTRACE(sc); if (sc->sassc == NULL) return (0); sassc = sc->sassc; mps_deregister_events(sc, sassc->mpssas_eh); /* * Drain and free the event handling taskqueue with the lock * unheld so that any parallel processing tasks drain properly * without deadlocking. */ if (sassc->ev_tq != NULL) taskqueue_free(sassc->ev_tq); /* Make sure CAM doesn't wedge if we had to bail out early. */ mps_lock(sc); while (sassc->startup_refcount != 0) mpssas_startup_decrement(sassc); /* Deregister our async handler */ if (sassc->path != NULL) { xpt_register_async(0, mpssas_async, sc, sassc->path); xpt_free_path(sassc->path); sassc->path = NULL; } if (sassc->flags & MPSSAS_IN_STARTUP) xpt_release_simq(sassc->sim, 1); if (sassc->sim != NULL) { xpt_bus_deregister(cam_sim_path(sassc->sim)); cam_sim_free(sassc->sim, FALSE); } mps_unlock(sc); if (sassc->devq != NULL) cam_simq_free(sassc->devq); for(i=0; i< sassc->maxtargets ;i++) { targ = &sassc->targets[i]; SLIST_FOREACH_SAFE(lun, &targ->luns, lun_link, lun_tmp) { free(lun, M_MPT2); } } free(sassc->targets, M_MPT2); free(sassc, M_MPT2); sc->sassc = NULL; return (0); } void mpssas_discovery_end(struct mpssas_softc *sassc) { struct mps_softc *sc = sassc->sc; MPS_FUNCTRACE(sc); if (sassc->flags & MPSSAS_DISCOVERY_TIMEOUT_PENDING) callout_stop(&sassc->discovery_callout); /* * After discovery has completed, check the mapping table for any * missing devices and update their missing counts. Only do this once * whenever the driver is initialized so that missing counts aren't * updated unnecessarily. 
Note that just because discovery has * completed doesn't mean that events have been processed yet. The * check_devices function is a callout timer that checks if ALL devices * are missing. If so, it will wait a little longer for events to * complete and keep resetting itself until some device in the mapping * table is not missing, meaning that event processing has started. */ if (sc->track_mapping_events) { mps_dprint(sc, MPS_XINFO | MPS_MAPPING, "Discovery has " "completed. Check for missing devices in the mapping " "table.\n"); callout_reset(&sc->device_check_callout, MPS_MISSING_CHECK_DELAY * hz, mps_mapping_check_devices, sc); } } static void mpssas_action(struct cam_sim *sim, union ccb *ccb) { struct mpssas_softc *sassc; sassc = cam_sim_softc(sim); MPS_FUNCTRACE(sassc->sc); mps_dprint(sassc->sc, MPS_TRACE, "ccb func_code 0x%x\n", ccb->ccb_h.func_code); mtx_assert(&sassc->sc->mps_mtx, MA_OWNED); switch (ccb->ccb_h.func_code) { case XPT_PATH_INQ: { struct ccb_pathinq *cpi = &ccb->cpi; struct mps_softc *sc = sassc->sc; cpi->version_num = 1; cpi->hba_inquiry = PI_SDTR_ABLE|PI_TAG_ABLE|PI_WIDE_16; cpi->target_sprt = 0; #if __FreeBSD_version >= 1000039 cpi->hba_misc = PIM_NOBUSRESET | PIM_UNMAPPED | PIM_NOSCAN; #else cpi->hba_misc = PIM_NOBUSRESET | PIM_UNMAPPED; #endif cpi->hba_eng_cnt = 0; cpi->max_target = sassc->maxtargets - 1; cpi->max_lun = 255; /* * initiator_id is set here to an ID outside the set of valid * target IDs (including volumes). 
*/ cpi->initiator_id = sassc->maxtargets; strlcpy(cpi->sim_vid, "FreeBSD", SIM_IDLEN); strlcpy(cpi->hba_vid, "Avago Tech", HBA_IDLEN); strlcpy(cpi->dev_name, cam_sim_name(sim), DEV_IDLEN); cpi->unit_number = cam_sim_unit(sim); cpi->bus_id = cam_sim_bus(sim); cpi->base_transfer_speed = 150000; cpi->transport = XPORT_SAS; cpi->transport_version = 0; cpi->protocol = PROTO_SCSI; cpi->protocol_version = SCSI_REV_SPC; cpi->maxio = sc->maxio; mpssas_set_ccbstatus(ccb, CAM_REQ_CMP); break; } case XPT_GET_TRAN_SETTINGS: { struct ccb_trans_settings *cts; struct ccb_trans_settings_sas *sas; struct ccb_trans_settings_scsi *scsi; struct mpssas_target *targ; cts = &ccb->cts; sas = &cts->xport_specific.sas; scsi = &cts->proto_specific.scsi; KASSERT(cts->ccb_h.target_id < sassc->maxtargets, ("Target %d out of bounds in XPT_GET_TRANS_SETTINGS\n", cts->ccb_h.target_id)); targ = &sassc->targets[cts->ccb_h.target_id]; if (targ->handle == 0x0) { mpssas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); break; } cts->protocol_version = SCSI_REV_SPC2; cts->transport = XPORT_SAS; cts->transport_version = 0; sas->valid = CTS_SAS_VALID_SPEED; switch (targ->linkrate) { case 0x08: sas->bitrate = 150000; break; case 0x09: sas->bitrate = 300000; break; case 0x0a: sas->bitrate = 600000; break; default: sas->valid = 0; } cts->protocol = PROTO_SCSI; scsi->valid = CTS_SCSI_VALID_TQ; scsi->flags = CTS_SCSI_FLAGS_TAG_ENB; mpssas_set_ccbstatus(ccb, CAM_REQ_CMP); break; } case XPT_CALC_GEOMETRY: cam_calc_geometry(&ccb->ccg, /*extended*/1); mpssas_set_ccbstatus(ccb, CAM_REQ_CMP); break; case XPT_RESET_DEV: mps_dprint(sassc->sc, MPS_XINFO, "mpssas_action XPT_RESET_DEV\n"); mpssas_action_resetdev(sassc, ccb); return; case XPT_RESET_BUS: case XPT_ABORT: case XPT_TERM_IO: mps_dprint(sassc->sc, MPS_XINFO, "mpssas_action faking success for abort or reset\n"); mpssas_set_ccbstatus(ccb, CAM_REQ_CMP); break; case XPT_SCSI_IO: mpssas_action_scsiio(sassc, ccb); return; #if __FreeBSD_version >= 900026 case XPT_SMP_IO: 
		mpssas_action_smpio(sassc, ccb);
		return;
#endif
	default:
		mpssas_set_ccbstatus(ccb, CAM_FUNC_NOTAVAIL);
		break;
	}
	xpt_done(ccb);

}

static void
mpssas_announce_reset(struct mps_softc *sc, uint32_t ac_code,
    target_id_t target_id, lun_id_t lun_id)
{
	path_id_t path_id = cam_sim_path(sc->sassc->sim);
	struct cam_path *path;

	mps_dprint(sc, MPS_XINFO, "%s code %x target %d lun %jx\n", __func__,
	    ac_code, target_id, (uintmax_t)lun_id);

	if (xpt_create_path(&path, NULL,
		path_id, target_id, lun_id) != CAM_REQ_CMP) {
		mps_dprint(sc, MPS_ERROR, "unable to create path for reset "
			   "notification\n");
		return;
	}

	xpt_async(ac_code, path, NULL);
	xpt_free_path(path);
}

static void
mpssas_complete_all_commands(struct mps_softc *sc)
{
	struct mps_command *cm;
	int i;
	int completed;

	MPS_FUNCTRACE(sc);
	mtx_assert(&sc->mps_mtx, MA_OWNED);

	/* complete all commands with a NULL reply */
	for (i = 1; i < sc->num_reqs; i++) {
		cm = &sc->commands[i];
		if (cm->cm_state == MPS_CM_STATE_FREE)
			continue;

		cm->cm_state = MPS_CM_STATE_BUSY;
		cm->cm_reply = NULL;
		completed = 0;

+		if (cm->cm_flags & MPS_CM_FLAGS_SATA_ID_TIMEOUT) {
+			MPASS(cm->cm_data);
+			free(cm->cm_data, M_MPT2);
+			cm->cm_data = NULL;
+		}
+
		if (cm->cm_flags & MPS_CM_FLAGS_POLLED)
			cm->cm_flags |= MPS_CM_FLAGS_COMPLETE;

		if (cm->cm_complete != NULL) {
			mpssas_log_command(cm, MPS_RECOVERY,
			    "completing cm %p state %x ccb %p for diag reset\n",
			    cm, cm->cm_state, cm->cm_ccb);
			cm->cm_complete(sc, cm);
			completed = 1;
		} else if (cm->cm_flags & MPS_CM_FLAGS_WAKEUP) {
			mpssas_log_command(cm, MPS_RECOVERY,
			    "waking up cm %p state %x ccb %p for diag reset\n",
			    cm, cm->cm_state, cm->cm_ccb);
			wakeup(cm);
			completed = 1;
		}

		if ((completed == 0) && (cm->cm_state != MPS_CM_STATE_FREE)) {
			/* this should never happen, but if it does, log */
			mpssas_log_command(cm, MPS_RECOVERY,
			    "cm %p state %x flags 0x%x ccb %p during diag "
			    "reset\n", cm, cm->cm_state, cm->cm_flags,
			    cm->cm_ccb);
		}
	}

	sc->io_cmds_active = 0;
}

void
mpssas_handle_reinit(struct mps_softc *sc)
{
	int i;

	/* Go back
into startup mode and freeze the simq, so that CAM * doesn't send any commands until after we've rediscovered all * targets and found the proper device handles for them. * * After the reset, portenable will trigger discovery, and after all * discovery-related activities have finished, the simq will be * released. */ mps_dprint(sc, MPS_INIT, "%s startup\n", __func__); sc->sassc->flags |= MPSSAS_IN_STARTUP; sc->sassc->flags |= MPSSAS_IN_DISCOVERY; mpssas_startup_increment(sc->sassc); /* notify CAM of a bus reset */ mpssas_announce_reset(sc, AC_BUS_RESET, CAM_TARGET_WILDCARD, CAM_LUN_WILDCARD); /* complete and cleanup after all outstanding commands */ mpssas_complete_all_commands(sc); mps_dprint(sc, MPS_INIT, "%s startup %u after command completion\n", __func__, sc->sassc->startup_refcount); /* zero all the target handles, since they may change after the * reset, and we have to rediscover all the targets and use the new * handles. */ for (i = 0; i < sc->sassc->maxtargets; i++) { if (sc->sassc->targets[i].outstanding != 0) mps_dprint(sc, MPS_INIT, "target %u outstanding %u\n", i, sc->sassc->targets[i].outstanding); sc->sassc->targets[i].handle = 0x0; sc->sassc->targets[i].exp_dev_handle = 0x0; sc->sassc->targets[i].outstanding = 0; sc->sassc->targets[i].flags = MPSSAS_TARGET_INDIAGRESET; } } static void mpssas_tm_timeout(void *data) { struct mps_command *tm = data; struct mps_softc *sc = tm->cm_sc; mtx_assert(&sc->mps_mtx, MA_OWNED); mpssas_log_command(tm, MPS_INFO|MPS_RECOVERY, "task mgmt %p timed out\n", tm); KASSERT(tm->cm_state == MPS_CM_STATE_INQUEUE, ("command not inqueue\n")); tm->cm_state = MPS_CM_STATE_BUSY; mps_reinit(sc); } static void mpssas_logical_unit_reset_complete(struct mps_softc *sc, struct mps_command *tm) { MPI2_SCSI_TASK_MANAGE_REPLY *reply; MPI2_SCSI_TASK_MANAGE_REQUEST *req; unsigned int cm_count = 0; struct mps_command *cm; struct mpssas_target *targ; callout_stop(&tm->cm_callout); req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; reply = 
(MPI2_SCSI_TASK_MANAGE_REPLY *)tm->cm_reply; targ = tm->cm_targ; /* * Currently there should be no way we can hit this case. It only * happens when we have a failure to allocate chain frames, and * task management commands don't have S/G lists. * XXXSL So should it be an assertion? */ if ((tm->cm_flags & MPS_CM_FLAGS_ERROR_MASK) != 0) { mps_dprint(sc, MPS_RECOVERY|MPS_ERROR, "%s: cm_flags = %#x for LUN reset! " "This should not happen!\n", __func__, tm->cm_flags); mpssas_free_tm(sc, tm); return; } if (reply == NULL) { mps_dprint(sc, MPS_RECOVERY, "NULL reset reply for tm %p\n", tm); if ((sc->mps_flags & MPS_FLAGS_DIAGRESET) != 0) { /* this completion was due to a reset, just cleanup */ mps_dprint(sc, MPS_RECOVERY, "Hardware undergoing " "reset, ignoring NULL LUN reset reply\n"); targ->tm = NULL; mpssas_free_tm(sc, tm); } else { /* we should have gotten a reply. */ mps_dprint(sc, MPS_INFO|MPS_RECOVERY, "NULL reply on " "LUN reset attempt, resetting controller\n"); mps_reinit(sc); } return; } mps_dprint(sc, MPS_RECOVERY, "logical unit reset status 0x%x code 0x%x count %u\n", le16toh(reply->IOCStatus), le32toh(reply->ResponseCode), le32toh(reply->TerminationCount)); /* * See if there are any outstanding commands for this LUN. * This could be made more efficient by using a per-LU data * structure of some sort. */ TAILQ_FOREACH(cm, &targ->commands, cm_link) { if (cm->cm_lun == tm->cm_lun) cm_count++; } if (cm_count == 0) { mps_dprint(sc, MPS_RECOVERY|MPS_INFO, "Finished recovery after LUN reset for target %u\n", targ->tid); mpssas_announce_reset(sc, AC_SENT_BDR, targ->tid, tm->cm_lun); /* * We've finished recovery for this logical unit. check and * see if some other logical unit has a timedout command * that needs to be processed. 
		 */
		cm = TAILQ_FIRST(&targ->timedout_commands);
		if (cm) {
			mps_dprint(sc, MPS_INFO|MPS_RECOVERY,
			    "More commands to abort for target %u\n",
			    targ->tid);
			mpssas_send_abort(sc, tm, cm);
		} else {
			targ->tm = NULL;
			mpssas_free_tm(sc, tm);
		}
	} else {
		/*
		 * If we still have commands for this LUN, the reset
		 * effectively failed, regardless of the status reported.
		 * Escalate to a target reset.
		 */
		mps_dprint(sc, MPS_INFO|MPS_RECOVERY,
		    "logical unit reset complete for target %u, but still "
		    "have %u command(s), sending target reset\n", targ->tid,
		    cm_count);
		mpssas_send_reset(sc, tm,
		    MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET);
	}
}

static void
mpssas_target_reset_complete(struct mps_softc *sc, struct mps_command *tm)
{
	MPI2_SCSI_TASK_MANAGE_REPLY *reply;
	MPI2_SCSI_TASK_MANAGE_REQUEST *req;
	struct mpssas_target *targ;

	callout_stop(&tm->cm_callout);

	req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req;
	reply = (MPI2_SCSI_TASK_MANAGE_REPLY *)tm->cm_reply;
	targ = tm->cm_targ;

	/*
	 * Currently there should be no way we can hit this case.  It only
	 * happens when we have a failure to allocate chain frames, and
	 * task management commands don't have S/G lists.
	 */
	if ((tm->cm_flags & MPS_CM_FLAGS_ERROR_MASK) != 0) {
		mps_dprint(sc, MPS_ERROR,"%s: cm_flags = %#x for target reset! "
			   "This should not happen!\n", __func__, tm->cm_flags);
		mpssas_free_tm(sc, tm);
		return;
	}

	if (reply == NULL) {
		mps_dprint(sc, MPS_RECOVERY,
		    "NULL target reset reply for tm %p TaskMID %u\n",
		    tm, le16toh(req->TaskMID));
		if ((sc->mps_flags & MPS_FLAGS_DIAGRESET) != 0) {
			/* this completion was due to a reset, just cleanup */
			mps_dprint(sc, MPS_RECOVERY, "Hardware undergoing "
			    "reset, ignoring NULL target reset reply\n");
			targ->tm = NULL;
			mpssas_free_tm(sc, tm);
		} else {
			/* we should have gotten a reply.
			 */
			mps_dprint(sc, MPS_INFO|MPS_RECOVERY, "NULL reply on "
			    "target reset attempt, resetting controller\n");
			mps_reinit(sc);
		}
		return;
	}

	mps_dprint(sc, MPS_RECOVERY,
	    "target reset status 0x%x code 0x%x count %u\n",
	    le16toh(reply->IOCStatus), le32toh(reply->ResponseCode),
	    le32toh(reply->TerminationCount));

	if (targ->outstanding == 0) {
		/* we've finished recovery for this target and all
		 * of its logical units.
		 */
		mps_dprint(sc, MPS_RECOVERY|MPS_INFO,
		    "Finished reset recovery for target %u\n", targ->tid);

		mpssas_announce_reset(sc, AC_SENT_BDR, tm->cm_targ->tid,
		    CAM_LUN_WILDCARD);

		targ->tm = NULL;
		mpssas_free_tm(sc, tm);
	} else {
		/*
		 * After a target reset, if this target still has
		 * outstanding commands, the reset effectively failed,
		 * regardless of the status reported.  escalate.
		 */
		mps_dprint(sc, MPS_INFO|MPS_RECOVERY,
		    "Target reset complete for target %u, but still have %u "
		    "command(s), resetting controller\n", targ->tid,
		    targ->outstanding);
		mps_reinit(sc);
	}
}

#define MPS_RESET_TIMEOUT 30

int
mpssas_send_reset(struct mps_softc *sc, struct mps_command *tm, uint8_t type)
{
	MPI2_SCSI_TASK_MANAGE_REQUEST *req;
	struct mpssas_target *target;
	int err;

	target = tm->cm_targ;
	if (target->handle == 0) {
		mps_dprint(sc, MPS_ERROR,"%s null devhandle for target_id %d\n",
		    __func__, target->tid);
		return -1;
	}

	req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req;
	req->DevHandle = htole16(target->handle);
-	req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT;
	req->TaskType = type;

	if (type == MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET) {
		/* XXX Need to handle invalid LUNs */
		MPS_SET_LUN(req->LUN, tm->cm_lun);
		tm->cm_targ->logical_unit_resets++;
		mps_dprint(sc, MPS_RECOVERY|MPS_INFO,
		    "Sending logical unit reset to target %u lun %d\n",
		    target->tid, tm->cm_lun);
		tm->cm_complete = mpssas_logical_unit_reset_complete;
		mpssas_prepare_for_tm(sc, tm, target, tm->cm_lun);
	} else if (type == MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET) {
		/*
		 * Target reset method =
		 *	SAS Hard Link Reset / SATA Link Reset
		 */
		req->MsgFlags = MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET;
		tm->cm_targ->target_resets++;
		mps_dprint(sc, MPS_RECOVERY|MPS_INFO,
		    "Sending target reset to target %u\n", target->tid);
		tm->cm_complete = mpssas_target_reset_complete;
		mpssas_prepare_for_tm(sc, tm, target, CAM_LUN_WILDCARD);
	} else {
		mps_dprint(sc, MPS_ERROR, "unexpected reset type 0x%x\n", type);
		return -1;
	}

	tm->cm_data = NULL;
-	tm->cm_desc.HighPriority.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY;
	tm->cm_complete_data = (void *)tm;

	callout_reset(&tm->cm_callout, MPS_RESET_TIMEOUT * hz,
	    mpssas_tm_timeout, tm);

	err = mps_map_command(sc, tm);
	if (err)
		mps_dprint(sc, MPS_ERROR|MPS_RECOVERY,
		    "error %d sending reset type %u\n",
		    err, type);

	return err;
}

static void
mpssas_abort_complete(struct mps_softc *sc, struct mps_command *tm)
{
	struct mps_command *cm;
	MPI2_SCSI_TASK_MANAGE_REPLY *reply;
	MPI2_SCSI_TASK_MANAGE_REQUEST *req;
	struct mpssas_target *targ;

	callout_stop(&tm->cm_callout);

	req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req;
	reply = (MPI2_SCSI_TASK_MANAGE_REPLY *)tm->cm_reply;
	targ = tm->cm_targ;

	/*
	 * Currently there should be no way we can hit this case.  It only
	 * happens when we have a failure to allocate chain frames, and
	 * task management commands don't have S/G lists.
	 */
	if ((tm->cm_flags & MPS_CM_FLAGS_ERROR_MASK) != 0) {
		mps_dprint(sc, MPS_RECOVERY,
		    "cm_flags = %#x for abort %p TaskMID %u!\n",
		    tm->cm_flags, tm, le16toh(req->TaskMID));
		mpssas_free_tm(sc, tm);
		return;
	}

	if (reply == NULL) {
		mps_dprint(sc, MPS_RECOVERY,
		    "NULL abort reply for tm %p TaskMID %u\n",
		    tm, le16toh(req->TaskMID));
		if ((sc->mps_flags & MPS_FLAGS_DIAGRESET) != 0) {
			/* this completion was due to a reset, just cleanup */
			mps_dprint(sc, MPS_RECOVERY, "Hardware undergoing "
			    "reset, ignoring NULL abort reply\n");
			targ->tm = NULL;
			mpssas_free_tm(sc, tm);
		} else {
			/* we should have gotten a reply.
			 */
			mps_dprint(sc, MPS_INFO|MPS_RECOVERY, "NULL reply on "
			    "abort attempt, resetting controller\n");
			mps_reinit(sc);
		}
		return;
	}

	mps_dprint(sc, MPS_RECOVERY,
	    "abort TaskMID %u status 0x%x code 0x%x count %u\n",
	    le16toh(req->TaskMID), le16toh(reply->IOCStatus),
	    le32toh(reply->ResponseCode),
	    le32toh(reply->TerminationCount));

	cm = TAILQ_FIRST(&tm->cm_targ->timedout_commands);
	if (cm == NULL) {
		/*
		 * If there are no more timedout commands, we're done with
		 * error recovery for this target.
		 */
		mps_dprint(sc, MPS_INFO|MPS_RECOVERY,
		    "Finished abort recovery for target %u\n", targ->tid);

		targ->tm = NULL;
		mpssas_free_tm(sc, tm);
	} else if (le16toh(req->TaskMID) != cm->cm_desc.Default.SMID) {
		/* abort success, but we have more timedout commands to abort */
		mps_dprint(sc, MPS_INFO|MPS_RECOVERY,
		    "Continuing abort recovery for target %u\n", targ->tid);

		mpssas_send_abort(sc, tm, cm);
	} else {
		/* we didn't get a command completion, so the abort
		 * failed as far as we're concerned.  escalate.
		 */
		mps_dprint(sc, MPS_RECOVERY,
		    "Abort failed for target %u, sending logical unit reset\n",
		    targ->tid);

		mpssas_send_reset(sc, tm,
		    MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET);
	}
}

#define MPS_ABORT_TIMEOUT 5

static int
mpssas_send_abort(struct mps_softc *sc, struct mps_command *tm,
    struct mps_command *cm)
{
	MPI2_SCSI_TASK_MANAGE_REQUEST *req;
	struct mpssas_target *targ;
	int err;

	targ = cm->cm_targ;
	if (targ->handle == 0) {
		mps_dprint(sc, MPS_ERROR|MPS_RECOVERY,
		    "%s null devhandle for target_id %d\n",
		    __func__, cm->cm_ccb->ccb_h.target_id);
		return -1;
	}

	mpssas_log_command(cm, MPS_RECOVERY|MPS_INFO,
	    "Aborting command %p\n", cm);

	req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req;
	req->DevHandle = htole16(targ->handle);
-	req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT;
	req->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK;

	/* XXX Need to handle invalid LUNs */
	MPS_SET_LUN(req->LUN, cm->cm_ccb->ccb_h.target_lun);

	req->TaskMID = htole16(cm->cm_desc.Default.SMID);

	tm->cm_data = NULL;
-	tm->cm_desc.HighPriority.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY;
	tm->cm_complete = mpssas_abort_complete;
	tm->cm_complete_data = (void *)tm;
	tm->cm_targ = cm->cm_targ;
	tm->cm_lun = cm->cm_lun;

	callout_reset(&tm->cm_callout, MPS_ABORT_TIMEOUT * hz,
	    mpssas_tm_timeout, tm);

	targ->aborts++;

	mpssas_prepare_for_tm(sc, tm, targ, tm->cm_lun);

	err = mps_map_command(sc, tm);
	if (err)
		mps_dprint(sc, MPS_ERROR|MPS_RECOVERY,
		    "error %d sending abort for cm %p SMID %u\n",
		    err, cm, req->TaskMID);

	return err;
}

static void
mpssas_scsiio_timeout(void *data)
{
	sbintime_t elapsed, now;
	union ccb *ccb;
	struct mps_softc *sc;
	struct mps_command *cm;
	struct mpssas_target *targ;

	cm = (struct mps_command *)data;
	sc = cm->cm_sc;
	ccb = cm->cm_ccb;
	now = sbinuptime();

	MPS_FUNCTRACE(sc);
	mtx_assert(&sc->mps_mtx, MA_OWNED);

	mps_dprint(sc, MPS_XINFO|MPS_RECOVERY, "Timeout checking cm %p\n", sc);

	/*
	 * Run the interrupt handler to make sure it's not pending.  This
	 * isn't perfect because the command could have already completed
	 * and been re-used, though this is unlikely.
	 */
	mps_intr_locked(sc);
-	if (cm->cm_state != MPS_CM_STATE_INQUEUE) {
+	if (cm->cm_flags & MPS_CM_FLAGS_ON_RECOVERY) {
		mpssas_log_command(cm, MPS_XINFO,
		    "SCSI command %p almost timed out\n", cm);
		return;
	}

	if (cm->cm_ccb == NULL) {
		mps_dprint(sc, MPS_ERROR, "command timeout with NULL ccb\n");
		return;
	}

	targ = cm->cm_targ;
	targ->timeouts++;

	elapsed = now - ccb->ccb_h.qos.sim_data;
	mpssas_log_command(cm, MPS_INFO|MPS_RECOVERY,
	    "Command timeout on target %u(0x%04x) %d set, %d.%d elapsed\n",
	    targ->tid, targ->handle, ccb->ccb_h.timeout,
	    sbintime_getsec(elapsed), elapsed & 0xffffffff);

	/* XXX first, check the firmware state, to see if it's still
	 * operational.  if not, do a diag reset.
	 */
	mpssas_set_ccbstatus(cm->cm_ccb, CAM_CMD_TIMEOUT);
-	cm->cm_state = MPS_CM_STATE_TIMEDOUT;
+	cm->cm_flags |= MPS_CM_FLAGS_ON_RECOVERY | MPS_CM_FLAGS_TIMEDOUT;
	TAILQ_INSERT_TAIL(&targ->timedout_commands, cm, cm_recovery);

	if (targ->tm != NULL) {
		/* target already in recovery, just queue up another
		 * timedout command to be processed later.
		 */
		mps_dprint(sc, MPS_RECOVERY,
		    "queued timedout cm %p for processing by tm %p\n",
		    cm, targ->tm);
	} else if ((targ->tm = mpssas_alloc_tm(sc)) != NULL) {
		mps_dprint(sc, MPS_RECOVERY|MPS_INFO,
		    "Sending abort to target %u for SMID %d\n", targ->tid,
		    cm->cm_desc.Default.SMID);
		mps_dprint(sc, MPS_RECOVERY, "timedout cm %p allocated tm %p\n",
		    cm, targ->tm);

		/* start recovery by aborting the first timedout command */
		mpssas_send_abort(sc, targ->tm, cm);
	} else {
		/* XXX queue this target up for recovery once a TM becomes
		 * available.  The firmware only has a limited number of
		 * HighPriority credits for the high priority requests used
		 * for task management, and we ran out.
		 *
		 * Isilon: don't worry about this for now, since we have
		 * more credits than disks in an enclosure, and limit
		 * ourselves to one TM per target for recovery.
*/ mps_dprint(sc, MPS_ERROR|MPS_RECOVERY, "timedout cm %p failed to allocate a tm\n", cm); } } static void mpssas_action_scsiio(struct mpssas_softc *sassc, union ccb *ccb) { MPI2_SCSI_IO_REQUEST *req; struct ccb_scsiio *csio; struct mps_softc *sc; struct mpssas_target *targ; struct mpssas_lun *lun; struct mps_command *cm; uint8_t i, lba_byte, *ref_tag_addr; uint16_t eedp_flags; uint32_t mpi_control; sc = sassc->sc; MPS_FUNCTRACE(sc); mtx_assert(&sc->mps_mtx, MA_OWNED); csio = &ccb->csio; KASSERT(csio->ccb_h.target_id < sassc->maxtargets, ("Target %d out of bounds in XPT_SCSI_IO\n", csio->ccb_h.target_id)); targ = &sassc->targets[csio->ccb_h.target_id]; mps_dprint(sc, MPS_TRACE, "ccb %p target flag %x\n", ccb, targ->flags); if (targ->handle == 0x0) { mps_dprint(sc, MPS_ERROR, "%s NULL handle for target %u\n", __func__, csio->ccb_h.target_id); mpssas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); xpt_done(ccb); return; } if (targ->flags & MPS_TARGET_FLAGS_RAID_COMPONENT) { mps_dprint(sc, MPS_ERROR, "%s Raid component no SCSI IO " "supported %u\n", __func__, csio->ccb_h.target_id); mpssas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); xpt_done(ccb); return; } /* * Sometimes, it is possible to get a command that is not "In * Progress" and was actually aborted by the upper layer. Check for * this here and complete the command without error. */ if (mpssas_get_ccbstatus(ccb) != CAM_REQ_INPROG) { mps_dprint(sc, MPS_TRACE, "%s Command is not in progress for " "target %u\n", __func__, csio->ccb_h.target_id); xpt_done(ccb); return; } /* * If devinfo is 0 this will be a volume. In that case don't tell CAM * that the volume has timed out. We want volumes to be enumerated * until they are deleted/removed, not just failed. 
*/ if (targ->flags & MPSSAS_TARGET_INREMOVAL) { if (targ->devinfo == 0) mpssas_set_ccbstatus(ccb, CAM_REQ_CMP); else mpssas_set_ccbstatus(ccb, CAM_SEL_TIMEOUT); xpt_done(ccb); return; } if ((sc->mps_flags & MPS_FLAGS_SHUTDOWN) != 0) { mps_dprint(sc, MPS_INFO, "%s shutting down\n", __func__); mpssas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); xpt_done(ccb); return; } /* * If target has a reset in progress, freeze the devq and return. The * devq will be released when the TM reset is finished. */ if (targ->flags & MPSSAS_TARGET_INRESET) { ccb->ccb_h.status = CAM_BUSY | CAM_DEV_QFRZN; mps_dprint(sc, MPS_INFO, "%s: Freezing devq for target ID %d\n", __func__, targ->tid); xpt_freeze_devq(ccb->ccb_h.path, 1); xpt_done(ccb); return; } cm = mps_alloc_command(sc); if (cm == NULL || (sc->mps_flags & MPS_FLAGS_DIAGRESET)) { if (cm != NULL) { mps_free_command(sc, cm); } if ((sassc->flags & MPSSAS_QUEUE_FROZEN) == 0) { xpt_freeze_simq(sassc->sim, 1); sassc->flags |= MPSSAS_QUEUE_FROZEN; } ccb->ccb_h.status &= ~CAM_SIM_QUEUED; ccb->ccb_h.status |= CAM_REQUEUE_REQ; xpt_done(ccb); return; } req = (MPI2_SCSI_IO_REQUEST *)cm->cm_req; bzero(req, sizeof(*req)); req->DevHandle = htole16(targ->handle); req->Function = MPI2_FUNCTION_SCSI_IO_REQUEST; req->MsgFlags = 0; req->SenseBufferLowAddress = htole32(cm->cm_sense_busaddr); req->SenseBufferLength = MPS_SENSE_LEN; req->SGLFlags = 0; req->ChainOffset = 0; req->SGLOffset0 = 24; /* 32bit word offset to the SGL */ req->SGLOffset1= 0; req->SGLOffset2= 0; req->SGLOffset3= 0; req->SkipCount = 0; req->DataLength = htole32(csio->dxfer_len); req->BidirectionalDataLength = 0; req->IoFlags = htole16(csio->cdb_len); req->EEDPFlags = 0; /* Note: BiDirectional transfers are not supported */ switch (csio->ccb_h.flags & CAM_DIR_MASK) { case CAM_DIR_IN: mpi_control = MPI2_SCSIIO_CONTROL_READ; cm->cm_flags |= MPS_CM_FLAGS_DATAIN; break; case CAM_DIR_OUT: mpi_control = MPI2_SCSIIO_CONTROL_WRITE; cm->cm_flags |= MPS_CM_FLAGS_DATAOUT; break; case CAM_DIR_NONE: 
default: mpi_control = MPI2_SCSIIO_CONTROL_NODATATRANSFER; break; } if (csio->cdb_len == 32) mpi_control |= 4 << MPI2_SCSIIO_CONTROL_ADDCDBLEN_SHIFT; /* * It looks like the hardware doesn't require an explicit tag * number for each transaction. SAM Task Management not supported * at the moment. */ switch (csio->tag_action) { case MSG_HEAD_OF_Q_TAG: mpi_control |= MPI2_SCSIIO_CONTROL_HEADOFQ; break; case MSG_ORDERED_Q_TAG: mpi_control |= MPI2_SCSIIO_CONTROL_ORDEREDQ; break; case MSG_ACA_TASK: mpi_control |= MPI2_SCSIIO_CONTROL_ACAQ; break; case CAM_TAG_ACTION_NONE: case MSG_SIMPLE_Q_TAG: default: mpi_control |= MPI2_SCSIIO_CONTROL_SIMPLEQ; break; } mpi_control |= sc->mapping_table[csio->ccb_h.target_id].TLR_bits; req->Control = htole32(mpi_control); if (MPS_SET_LUN(req->LUN, csio->ccb_h.target_lun) != 0) { mps_free_command(sc, cm); mpssas_set_ccbstatus(ccb, CAM_LUN_INVALID); xpt_done(ccb); return; } if (csio->ccb_h.flags & CAM_CDB_POINTER) bcopy(csio->cdb_io.cdb_ptr, &req->CDB.CDB32[0], csio->cdb_len); else bcopy(csio->cdb_io.cdb_bytes, &req->CDB.CDB32[0],csio->cdb_len); req->IoFlags = htole16(csio->cdb_len); /* * Check if EEDP is supported and enabled. If it is then check if the * SCSI opcode could be using EEDP. If so, make sure the LUN exists and * is formatted for EEDP support. If all of this is true, set CDB up * for EEDP transfer. */ eedp_flags = op_code_prot[req->CDB.CDB32[0]]; if (sc->eedp_enabled && eedp_flags) { SLIST_FOREACH(lun, &targ->luns, lun_link) { if (lun->lun_id == csio->ccb_h.target_lun) { break; } } if ((lun != NULL) && (lun->eedp_formatted)) { req->EEDPBlockSize = htole16(lun->eedp_block_size); eedp_flags |= (MPI2_SCSIIO_EEDPFLAGS_INC_PRI_REFTAG | MPI2_SCSIIO_EEDPFLAGS_CHECK_REFTAG | MPI2_SCSIIO_EEDPFLAGS_CHECK_GUARD); req->EEDPFlags = htole16(eedp_flags); /* * If CDB less than 32, fill in Primary Ref Tag with * low 4 bytes of LBA. If CDB is 32, tag stuff is * already there. Also, set protection bit. 
FreeBSD * currently does not support CDBs bigger than 16, but * the code doesn't hurt, and will be here for the * future. */ if (csio->cdb_len != 32) { lba_byte = (csio->cdb_len == 16) ? 6 : 2; ref_tag_addr = (uint8_t *)&req->CDB.EEDP32. PrimaryReferenceTag; for (i = 0; i < 4; i++) { *ref_tag_addr = req->CDB.CDB32[lba_byte + i]; ref_tag_addr++; } req->CDB.EEDP32.PrimaryReferenceTag = htole32(req->CDB.EEDP32.PrimaryReferenceTag); req->CDB.EEDP32.PrimaryApplicationTagMask = 0xFFFF; req->CDB.CDB32[1] = (req->CDB.CDB32[1] & 0x1F) | 0x20; } else { eedp_flags |= MPI2_SCSIIO_EEDPFLAGS_INC_PRI_APPTAG; req->EEDPFlags = htole16(eedp_flags); req->CDB.CDB32[10] = (req->CDB.CDB32[10] & 0x1F) | 0x20; } } } cm->cm_length = csio->dxfer_len; if (cm->cm_length != 0) { cm->cm_data = ccb; cm->cm_flags |= MPS_CM_FLAGS_USE_CCB; } else { cm->cm_data = NULL; } cm->cm_sge = &req->SGL; cm->cm_sglsize = (32 - 24) * 4; cm->cm_desc.SCSIIO.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO; cm->cm_desc.SCSIIO.DevHandle = htole16(targ->handle); cm->cm_complete = mpssas_scsiio_complete; cm->cm_complete_data = ccb; cm->cm_targ = targ; cm->cm_lun = csio->ccb_h.target_lun; cm->cm_ccb = ccb; /* * If HBA is a WD and the command is not for a retry, try to build a * direct I/O message. If failed, or the command is for a retry, send * the I/O to the IR volume itself. 
 */
	if (sc->WD_valid_config) {
		if (ccb->ccb_h.sim_priv.entries[0].field == MPS_WD_RETRY) {
			mpssas_set_ccbstatus(ccb, CAM_REQ_INPROG);
		} else {
			mpssas_direct_drive_io(sassc, cm, ccb);
		}
	}

#if defined(BUF_TRACKING) || defined(FULL_BUF_TRACKING)
	if (csio->bio != NULL)
		biotrack(csio->bio, __func__);
#endif
	csio->ccb_h.qos.sim_data = sbinuptime();
	callout_reset_sbt(&cm->cm_callout, SBT_1MS * ccb->ccb_h.timeout, 0,
	    mpssas_scsiio_timeout, cm, 0);

	targ->issued++;
	targ->outstanding++;
	TAILQ_INSERT_TAIL(&targ->commands, cm, cm_link);
	ccb->ccb_h.status |= CAM_SIM_QUEUED;

	mpssas_log_command(cm, MPS_XINFO, "%s cm %p ccb %p outstanding %u\n",
	    __func__, cm, ccb, targ->outstanding);

	mps_map_command(sc, cm);
	return;
}

/**
 * mps_sc_failed_io_info - translate a non-successful SCSI_IO request
 */
static void
mps_sc_failed_io_info(struct mps_softc *sc, struct ccb_scsiio *csio,
    Mpi2SCSIIOReply_t *mpi_reply)
{
	u32 response_info;
	u8 *response_bytes;
	u16 ioc_status = le16toh(mpi_reply->IOCStatus) &
	    MPI2_IOCSTATUS_MASK;
	u8 scsi_state = mpi_reply->SCSIState;
	u8 scsi_status = mpi_reply->SCSIStatus;
	u32 log_info = le32toh(mpi_reply->IOCLogInfo);
	const char *desc_ioc_state, *desc_scsi_status;

	if (log_info == 0x31170000)
		return;

	desc_ioc_state = mps_describe_table(mps_iocstatus_string,
	    ioc_status);
	desc_scsi_status = mps_describe_table(mps_scsi_status_string,
	    scsi_status);

	mps_dprint(sc, MPS_XINFO, "\thandle(0x%04x), ioc_status(%s)(0x%04x)\n",
	    le16toh(mpi_reply->DevHandle), desc_ioc_state, ioc_status);

	/*
	 * TODO: we can add more detail about underflow data here.
	 */
	mps_dprint(sc, MPS_XINFO, "\tscsi_status(%s)(0x%02x), "
	    "scsi_state %b\n", desc_scsi_status, scsi_status,
	    scsi_state, "\20" "\1AutosenseValid" "\2AutosenseFailed"
	    "\3NoScsiStatus" "\4Terminated" "\5ResponseInfoValid");

	if (sc->mps_debug & MPS_XINFO &&
	    scsi_state & MPI2_SCSI_STATE_AUTOSENSE_VALID) {
		mps_dprint(sc, MPS_XINFO, "-> Sense Buffer Data : Start :\n");
		scsi_sense_print(csio);
		mps_dprint(sc, MPS_XINFO, "-> Sense Buffer Data : End
:\n"); } if (scsi_state & MPI2_SCSI_STATE_RESPONSE_INFO_VALID) { response_info = le32toh(mpi_reply->ResponseInfo); response_bytes = (u8 *)&response_info; mps_dprint(sc, MPS_XINFO, "response code(0x%1x): %s\n", response_bytes[0], mps_describe_table(mps_scsi_taskmgmt_string, response_bytes[0])); } } static void mpssas_scsiio_complete(struct mps_softc *sc, struct mps_command *cm) { MPI2_SCSI_IO_REPLY *rep; union ccb *ccb; struct ccb_scsiio *csio; struct mpssas_softc *sassc; struct scsi_vpd_supported_page_list *vpd_list = NULL; u8 *TLR_bits, TLR_on; int dir = 0, i; u16 alloc_len; struct mpssas_target *target; target_id_t target_id; MPS_FUNCTRACE(sc); mps_dprint(sc, MPS_TRACE, "cm %p SMID %u ccb %p reply %p outstanding %u\n", cm, cm->cm_desc.Default.SMID, cm->cm_ccb, cm->cm_reply, cm->cm_targ->outstanding); callout_stop(&cm->cm_callout); mtx_assert(&sc->mps_mtx, MA_OWNED); sassc = sc->sassc; ccb = cm->cm_complete_data; csio = &ccb->csio; target_id = csio->ccb_h.target_id; rep = (MPI2_SCSI_IO_REPLY *)cm->cm_reply; /* * XXX KDM if the chain allocation fails, does it matter if we do * the sync and unload here? It is simpler to do it in every case, * assuming it doesn't cause problems. 
*/ if (cm->cm_data != NULL) { if (cm->cm_flags & MPS_CM_FLAGS_DATAIN) dir = BUS_DMASYNC_POSTREAD; else if (cm->cm_flags & MPS_CM_FLAGS_DATAOUT) dir = BUS_DMASYNC_POSTWRITE; bus_dmamap_sync(sc->buffer_dmat, cm->cm_dmamap, dir); bus_dmamap_unload(sc->buffer_dmat, cm->cm_dmamap); } cm->cm_targ->completed++; cm->cm_targ->outstanding--; TAILQ_REMOVE(&cm->cm_targ->commands, cm, cm_link); ccb->ccb_h.status &= ~(CAM_STATUS_MASK | CAM_SIM_QUEUED); #if defined(BUF_TRACKING) || defined(FULL_BUF_TRACKING) if (ccb->csio.bio != NULL) biotrack(ccb->csio.bio, __func__); #endif - if (cm->cm_state == MPS_CM_STATE_TIMEDOUT) { + if (cm->cm_flags & MPS_CM_FLAGS_ON_RECOVERY) { TAILQ_REMOVE(&cm->cm_targ->timedout_commands, cm, cm_recovery); - cm->cm_state = MPS_CM_STATE_BUSY; + KASSERT(cm->cm_state == MPS_CM_STATE_BUSY, + ("Not busy for CM_FLAGS_TIMEDOUT: %d\n", cm->cm_state)); + cm->cm_flags &= ~MPS_CM_FLAGS_ON_RECOVERY; if (cm->cm_reply != NULL) mpssas_log_command(cm, MPS_RECOVERY, "completed timedout cm %p ccb %p during recovery " "ioc %x scsi %x state %x xfer %u\n", cm, cm->cm_ccb, le16toh(rep->IOCStatus), rep->SCSIStatus, rep->SCSIState, le32toh(rep->TransferCount)); else mpssas_log_command(cm, MPS_RECOVERY, "completed timedout cm %p ccb %p during recovery\n", cm, cm->cm_ccb); } else if (cm->cm_targ->tm != NULL) { if (cm->cm_reply != NULL) mpssas_log_command(cm, MPS_RECOVERY, "completed cm %p ccb %p during recovery " "ioc %x scsi %x state %x xfer %u\n", cm, cm->cm_ccb, le16toh(rep->IOCStatus), rep->SCSIStatus, rep->SCSIState, le32toh(rep->TransferCount)); else mpssas_log_command(cm, MPS_RECOVERY, "completed cm %p ccb %p during recovery\n", cm, cm->cm_ccb); } else if ((sc->mps_flags & MPS_FLAGS_DIAGRESET) != 0) { mpssas_log_command(cm, MPS_RECOVERY, "reset completed cm %p ccb %p\n", cm, cm->cm_ccb); } if ((cm->cm_flags & MPS_CM_FLAGS_ERROR_MASK) != 0) { /* * We ran into an error after we tried to map the command, * so we're getting a callback without queueing the command * to the 
hardware. So we set the status here, and it will * be retained below. We'll go through the "fast path", * because there can be no reply when we haven't actually * gone out to the hardware. */ mpssas_set_ccbstatus(ccb, CAM_REQUEUE_REQ); /* * Currently the only error included in the mask is * MPS_CM_FLAGS_CHAIN_FAILED, which means we're out of * chain frames. We need to freeze the queue until we get * a command that completed without this error, which will * hopefully have some chain frames attached that we can * use. If we wanted to get smarter about it, we would * only unfreeze the queue in this condition when we're * sure that we're getting some chain frames back. That's * probably unnecessary. */ if ((sassc->flags & MPSSAS_QUEUE_FROZEN) == 0) { xpt_freeze_simq(sassc->sim, 1); sassc->flags |= MPSSAS_QUEUE_FROZEN; mps_dprint(sc, MPS_XINFO, "Error sending command, " "freezing SIM queue\n"); } } /* * If this is a Start Stop Unit command and it was issued by the driver * during shutdown, decrement the refcount to account for all of the * commands that were sent. All SSU commands should be completed before * shutdown completes, meaning SSU_refcount will be 0 after SSU_started * is TRUE. */ if (sc->SSU_started && (csio->cdb_io.cdb_bytes[0] == START_STOP_UNIT)) { mps_dprint(sc, MPS_INFO, "Decrementing SSU count.\n"); sc->SSU_refcount--; } /* Take the fast path to completion */ if (cm->cm_reply == NULL) { if (mpssas_get_ccbstatus(ccb) == CAM_REQ_INPROG) { if ((sc->mps_flags & MPS_FLAGS_DIAGRESET) != 0) mpssas_set_ccbstatus(ccb, CAM_SCSI_BUS_RESET); else { mpssas_set_ccbstatus(ccb, CAM_REQ_CMP); ccb->csio.scsi_status = SCSI_STATUS_OK; } if (sassc->flags & MPSSAS_QUEUE_FROZEN) { ccb->ccb_h.status |= CAM_RELEASE_SIMQ; sassc->flags &= ~MPSSAS_QUEUE_FROZEN; mps_dprint(sc, MPS_XINFO, "Unfreezing SIM queue\n"); } } /* * There are two scenarios where the status won't be * CAM_REQ_CMP. 
The first is if MPS_CM_FLAGS_ERROR_MASK is * set, the second is in the MPS_FLAGS_DIAGRESET above. */ if (mpssas_get_ccbstatus(ccb) != CAM_REQ_CMP) { /* * Freeze the dev queue so that commands are * executed in the correct order after error * recovery. */ ccb->ccb_h.status |= CAM_DEV_QFRZN; xpt_freeze_devq(ccb->ccb_h.path, /*count*/ 1); } mps_free_command(sc, cm); xpt_done(ccb); return; } mpssas_log_command(cm, MPS_XINFO, "ioc %x scsi %x state %x xfer %u\n", le16toh(rep->IOCStatus), rep->SCSIStatus, rep->SCSIState, le32toh(rep->TransferCount)); /* * If this is a Direct Drive I/O, reissue the I/O to the original IR * Volume if an error occurred (normal I/O retry). Use the original * CCB, but set a flag that this will be a retry so that it's sent to * the original volume. Free the command but reuse the CCB. */ if (cm->cm_flags & MPS_CM_FLAGS_DD_IO) { mps_free_command(sc, cm); ccb->ccb_h.sim_priv.entries[0].field = MPS_WD_RETRY; mpssas_action_scsiio(sassc, ccb); return; } else ccb->ccb_h.sim_priv.entries[0].field = 0; switch (le16toh(rep->IOCStatus) & MPI2_IOCSTATUS_MASK) { case MPI2_IOCSTATUS_SCSI_DATA_UNDERRUN: csio->resid = cm->cm_length - le32toh(rep->TransferCount); /* FALLTHROUGH */ case MPI2_IOCSTATUS_SUCCESS: case MPI2_IOCSTATUS_SCSI_RECOVERED_ERROR: if ((le16toh(rep->IOCStatus) & MPI2_IOCSTATUS_MASK) == MPI2_IOCSTATUS_SCSI_RECOVERED_ERROR) mpssas_log_command(cm, MPS_XINFO, "recovered error\n"); /* Completion failed at the transport level. */ if (rep->SCSIState & (MPI2_SCSI_STATE_NO_SCSI_STATUS | MPI2_SCSI_STATE_TERMINATED)) { mpssas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); break; } /* In a modern packetized environment, an autosense failure * implies that there's not much else that can be done to * recover the command. */ if (rep->SCSIState & MPI2_SCSI_STATE_AUTOSENSE_FAILED) { mpssas_set_ccbstatus(ccb, CAM_AUTOSENSE_FAIL); break; } /* * CAM doesn't care about SAS Response Info data, but if this is * the state check if TLR should be done. 
If not, clear the * TLR_bits for the target. */ if ((rep->SCSIState & MPI2_SCSI_STATE_RESPONSE_INFO_VALID) && ((le32toh(rep->ResponseInfo) & MPI2_SCSI_RI_MASK_REASONCODE) == MPS_SCSI_RI_INVALID_FRAME)) { sc->mapping_table[target_id].TLR_bits = (u8)MPI2_SCSIIO_CONTROL_NO_TLR; } /* * Intentionally override the normal SCSI status reporting * for these two cases. These are likely to happen in a * multi-initiator environment, and we want to make sure that * CAM retries these commands rather than fail them. */ if ((rep->SCSIStatus == MPI2_SCSI_STATUS_COMMAND_TERMINATED) || (rep->SCSIStatus == MPI2_SCSI_STATUS_TASK_ABORTED)) { mpssas_set_ccbstatus(ccb, CAM_REQ_ABORTED); break; } /* Handle normal status and sense */ csio->scsi_status = rep->SCSIStatus; if (rep->SCSIStatus == MPI2_SCSI_STATUS_GOOD) mpssas_set_ccbstatus(ccb, CAM_REQ_CMP); else mpssas_set_ccbstatus(ccb, CAM_SCSI_STATUS_ERROR); if (rep->SCSIState & MPI2_SCSI_STATE_AUTOSENSE_VALID) { int sense_len, returned_sense_len; returned_sense_len = min(le32toh(rep->SenseCount), sizeof(struct scsi_sense_data)); if (returned_sense_len < ccb->csio.sense_len) ccb->csio.sense_resid = ccb->csio.sense_len - returned_sense_len; else ccb->csio.sense_resid = 0; sense_len = min(returned_sense_len, ccb->csio.sense_len - ccb->csio.sense_resid); bzero(&ccb->csio.sense_data, sizeof(ccb->csio.sense_data)); bcopy(cm->cm_sense, &ccb->csio.sense_data, sense_len); ccb->ccb_h.status |= CAM_AUTOSNS_VALID; } /* * Check if this is an INQUIRY command. If it's a VPD inquiry, * and it's page code 0 (Supported Page List), and there is * inquiry data, and this is for a sequential access device, and * the device is an SSP target, and TLR is supported by the * controller, turn the TLR_bits value ON if page 0x90 is * supported. 
*/ if ((csio->cdb_io.cdb_bytes[0] == INQUIRY) && (csio->cdb_io.cdb_bytes[1] & SI_EVPD) && (csio->cdb_io.cdb_bytes[2] == SVPD_SUPPORTED_PAGE_LIST) && ((csio->ccb_h.flags & CAM_DATA_MASK) == CAM_DATA_VADDR) && (csio->data_ptr != NULL) && ((csio->data_ptr[0] & 0x1f) == T_SEQUENTIAL) && (sc->control_TLR) && (sc->mapping_table[target_id].device_info & MPI2_SAS_DEVICE_INFO_SSP_TARGET)) { vpd_list = (struct scsi_vpd_supported_page_list *) csio->data_ptr; TLR_bits = &sc->mapping_table[target_id].TLR_bits; *TLR_bits = (u8)MPI2_SCSIIO_CONTROL_NO_TLR; TLR_on = (u8)MPI2_SCSIIO_CONTROL_TLR_ON; alloc_len = ((u16)csio->cdb_io.cdb_bytes[3] << 8) + csio->cdb_io.cdb_bytes[4]; alloc_len -= csio->resid; for (i = 0; i < MIN(vpd_list->length, alloc_len); i++) { if (vpd_list->list[i] == 0x90) { *TLR_bits = TLR_on; break; } } } /* * If this is a SATA direct-access end device, mark it so that * a SCSI StartStopUnit command will be sent to it when the * driver is being shutdown. */ if ((csio->cdb_io.cdb_bytes[0] == INQUIRY) && ((csio->data_ptr[0] & 0x1f) == T_DIRECT) && (sc->mapping_table[target_id].device_info & MPI2_SAS_DEVICE_INFO_SATA_DEVICE) && ((sc->mapping_table[target_id].device_info & MPI2_SAS_DEVICE_INFO_MASK_DEVICE_TYPE) == MPI2_SAS_DEVICE_INFO_END_DEVICE)) { target = &sassc->targets[target_id]; target->supports_SSU = TRUE; mps_dprint(sc, MPS_XINFO, "Target %d supports SSU\n", target_id); } break; case MPI2_IOCSTATUS_SCSI_INVALID_DEVHANDLE: case MPI2_IOCSTATUS_SCSI_DEVICE_NOT_THERE: /* * If devinfo is 0 this will be a volume. In that case don't * tell CAM that the volume is not there. We want volumes to * be enumerated until they are deleted/removed, not just * failed. 
*/ if (cm->cm_targ->devinfo == 0) mpssas_set_ccbstatus(ccb, CAM_REQ_CMP); else mpssas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); break; case MPI2_IOCSTATUS_INVALID_SGL: mps_print_scsiio_cmd(sc, cm); mpssas_set_ccbstatus(ccb, CAM_UNREC_HBA_ERROR); break; case MPI2_IOCSTATUS_SCSI_TASK_TERMINATED: /* * This is one of the responses that comes back when an I/O * has been aborted. If it is because of a timeout that we * initiated, just set the status to CAM_CMD_TIMEOUT. * Otherwise set it to CAM_REQ_ABORTED. The effect on the * command is the same (it gets retried, subject to the * retry counter), the only difference is what gets printed * on the console. */ - if (cm->cm_state == MPS_CM_STATE_TIMEDOUT) + if (cm->cm_flags & MPS_CM_FLAGS_TIMEDOUT) mpssas_set_ccbstatus(ccb, CAM_CMD_TIMEOUT); else mpssas_set_ccbstatus(ccb, CAM_REQ_ABORTED); break; case MPI2_IOCSTATUS_SCSI_DATA_OVERRUN: /* resid is ignored for this condition */ csio->resid = 0; mpssas_set_ccbstatus(ccb, CAM_DATA_RUN_ERR); break; case MPI2_IOCSTATUS_SCSI_IOC_TERMINATED: case MPI2_IOCSTATUS_SCSI_EXT_TERMINATED: /* * These can sometimes be transient transport-related * errors, and sometimes persistent drive-related errors. * We used to retry these without decrementing the retry * count by returning CAM_REQUEUE_REQ. Unfortunately, if * we hit a persistent drive problem that returns one of * these error codes, we would retry indefinitely. So, * return CAM_REQ_CMP_ERROR so that we decrement the retry * count and avoid infinite retries. We're taking the * potential risk of flagging false failures in the event * of a topology-related error (e.g. a SAS expander problem * causes a command addressed to a drive to fail), but * avoiding getting into an infinite retry loop. 
*/ mpssas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); mps_dprint(sc, MPS_INFO, "Controller reported %s tgt %u SMID %u loginfo %x\n", mps_describe_table(mps_iocstatus_string, le16toh(rep->IOCStatus) & MPI2_IOCSTATUS_MASK), target_id, cm->cm_desc.Default.SMID, le32toh(rep->IOCLogInfo)); mps_dprint(sc, MPS_XINFO, "SCSIStatus %x SCSIState %x xfercount %u\n", rep->SCSIStatus, rep->SCSIState, le32toh(rep->TransferCount)); break; case MPI2_IOCSTATUS_INVALID_FUNCTION: case MPI2_IOCSTATUS_INTERNAL_ERROR: case MPI2_IOCSTATUS_INVALID_VPID: case MPI2_IOCSTATUS_INVALID_FIELD: case MPI2_IOCSTATUS_INVALID_STATE: case MPI2_IOCSTATUS_OP_STATE_NOT_SUPPORTED: case MPI2_IOCSTATUS_SCSI_IO_DATA_ERROR: case MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR: case MPI2_IOCSTATUS_SCSI_RESIDUAL_MISMATCH: case MPI2_IOCSTATUS_SCSI_TASK_MGMT_FAILED: default: mpssas_log_command(cm, MPS_XINFO, "completed ioc %x loginfo %x scsi %x state %x xfer %u\n", le16toh(rep->IOCStatus), le32toh(rep->IOCLogInfo), rep->SCSIStatus, rep->SCSIState, le32toh(rep->TransferCount)); csio->resid = cm->cm_length; mpssas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); break; } mps_sc_failed_io_info(sc,csio,rep); if (sassc->flags & MPSSAS_QUEUE_FROZEN) { ccb->ccb_h.status |= CAM_RELEASE_SIMQ; sassc->flags &= ~MPSSAS_QUEUE_FROZEN; mps_dprint(sc, MPS_XINFO, "Command completed, " "unfreezing SIM queue\n"); } if (mpssas_get_ccbstatus(ccb) != CAM_REQ_CMP) { ccb->ccb_h.status |= CAM_DEV_QFRZN; xpt_freeze_devq(ccb->ccb_h.path, /*count*/ 1); } mps_free_command(sc, cm); xpt_done(ccb); } /* All Request reached here are Endian safe */ static void mpssas_direct_drive_io(struct mpssas_softc *sassc, struct mps_command *cm, union ccb *ccb) { pMpi2SCSIIORequest_t pIO_req; struct mps_softc *sc = sassc->sc; uint64_t virtLBA; uint32_t physLBA, stripe_offset, stripe_unit; uint32_t io_size, column; uint8_t *ptrLBA, lba_idx, physLBA_byte, *CDB; /* * If this is a valid SCSI command (Read6, Read10, Read16, Write6, * Write10, or Write16), build a direct I/O message. 
Otherwise, the I/O * will be sent to the IR volume itself. Since Read6 and Write6 are a * bit different than the 10/16 CDBs, handle them separately. */ pIO_req = (pMpi2SCSIIORequest_t)cm->cm_req; CDB = pIO_req->CDB.CDB32; /* * Handle 6 byte CDBs. */ if ((pIO_req->DevHandle == sc->DD_dev_handle) && ((CDB[0] == READ_6) || (CDB[0] == WRITE_6))) { /* * Get the transfer size in blocks. */ io_size = (cm->cm_length >> sc->DD_block_exponent); /* * Get virtual LBA given in the CDB. */ virtLBA = ((uint64_t)(CDB[1] & 0x1F) << 16) | ((uint64_t)CDB[2] << 8) | (uint64_t)CDB[3]; /* * Check that LBA range for I/O does not exceed volume's * MaxLBA. */ if ((virtLBA + (uint64_t)io_size - 1) <= sc->DD_max_lba) { /* * Check if the I/O crosses a stripe boundary. If not, * translate the virtual LBA to a physical LBA and set * the DevHandle for the PhysDisk to be used. If it * does cross a boundary, do normal I/O. To get the * right DevHandle to use, get the map number for the * column, then use that map number to look up the * DevHandle of the PhysDisk. */ stripe_offset = (uint32_t)virtLBA & (sc->DD_stripe_size - 1); if ((stripe_offset + io_size) <= sc->DD_stripe_size) { physLBA = (uint32_t)virtLBA >> sc->DD_stripe_exponent; stripe_unit = physLBA / sc->DD_num_phys_disks; column = physLBA % sc->DD_num_phys_disks; pIO_req->DevHandle = htole16(sc->DD_column_map[column].dev_handle); /* ???? Is this endian safe*/ cm->cm_desc.SCSIIO.DevHandle = pIO_req->DevHandle; physLBA = (stripe_unit << sc->DD_stripe_exponent) + stripe_offset; ptrLBA = &pIO_req->CDB.CDB32[1]; physLBA_byte = (uint8_t)(physLBA >> 16); *ptrLBA = physLBA_byte; ptrLBA = &pIO_req->CDB.CDB32[2]; physLBA_byte = (uint8_t)(physLBA >> 8); *ptrLBA = physLBA_byte; ptrLBA = &pIO_req->CDB.CDB32[3]; physLBA_byte = (uint8_t)physLBA; *ptrLBA = physLBA_byte; /* * Set flag that Direct Drive I/O is * being done. */ cm->cm_flags |= MPS_CM_FLAGS_DD_IO; } } return; } /* * Handle 10, 12 or 16 byte CDBs. 
*/ if ((pIO_req->DevHandle == sc->DD_dev_handle) && ((CDB[0] == READ_10) || (CDB[0] == WRITE_10) || (CDB[0] == READ_16) || (CDB[0] == WRITE_16) || (CDB[0] == READ_12) || (CDB[0] == WRITE_12))) { /* * For 16-byte CDB's, verify that the upper 4 bytes of the CDB * are 0. If not, this is accessing beyond 2TB so handle it in * the else section. 10-byte and 12-byte CDB's are OK. * FreeBSD sends very rare 12 byte READ/WRITE, but driver is * ready to accept 12byte CDB for Direct IOs. */ if ((CDB[0] == READ_10 || CDB[0] == WRITE_10) || (CDB[0] == READ_12 || CDB[0] == WRITE_12) || !(CDB[2] | CDB[3] | CDB[4] | CDB[5])) { /* * Get the transfer size in blocks. */ io_size = (cm->cm_length >> sc->DD_block_exponent); /* * Get virtual LBA. Point to correct lower 4 bytes of * LBA in the CDB depending on command. */ lba_idx = ((CDB[0] == READ_12) || (CDB[0] == WRITE_12) || (CDB[0] == READ_10) || (CDB[0] == WRITE_10))? 2 : 6; virtLBA = ((uint64_t)CDB[lba_idx] << 24) | ((uint64_t)CDB[lba_idx + 1] << 16) | ((uint64_t)CDB[lba_idx + 2] << 8) | (uint64_t)CDB[lba_idx + 3]; /* * Check that LBA range for I/O does not exceed volume's * MaxLBA. */ if ((virtLBA + (uint64_t)io_size - 1) <= sc->DD_max_lba) { /* * Check if the I/O crosses a stripe boundary. * If not, translate the virtual LBA to a * physical LBA and set the DevHandle for the * PhysDisk to be used. If it does cross a * boundary, do normal I/O. To get the right * DevHandle to use, get the map number for the * column, then use that map number to look up * the DevHandle of the PhysDisk. */ stripe_offset = (uint32_t)virtLBA & (sc->DD_stripe_size - 1); if ((stripe_offset + io_size) <= sc->DD_stripe_size) { physLBA = (uint32_t)virtLBA >> sc->DD_stripe_exponent; stripe_unit = physLBA / sc->DD_num_phys_disks; column = physLBA % sc->DD_num_phys_disks; pIO_req->DevHandle = htole16(sc->DD_column_map[column]. 
dev_handle);
				cm->cm_desc.SCSIIO.DevHandle =
				    pIO_req->DevHandle;

				physLBA = (stripe_unit <<
				    sc->DD_stripe_exponent) + stripe_offset;
				ptrLBA = &pIO_req->CDB.CDB32[lba_idx];
				physLBA_byte = (uint8_t)(physLBA >> 24);
				*ptrLBA = physLBA_byte;
				ptrLBA = &pIO_req->CDB.CDB32[lba_idx + 1];
				physLBA_byte = (uint8_t)(physLBA >> 16);
				*ptrLBA = physLBA_byte;
				ptrLBA = &pIO_req->CDB.CDB32[lba_idx + 2];
				physLBA_byte = (uint8_t)(physLBA >> 8);
				*ptrLBA = physLBA_byte;
				ptrLBA = &pIO_req->CDB.CDB32[lba_idx + 3];
				physLBA_byte = (uint8_t)physLBA;
				*ptrLBA = physLBA_byte;

				/*
				 * Set flag that Direct Drive I/O is
				 * being done.
				 */
				cm->cm_flags |= MPS_CM_FLAGS_DD_IO;
			}
		}
	} else {
		/*
		 * 16-byte CDB and the upper 4 bytes of the CDB are not
		 * 0.  Get the transfer size in blocks.
		 */
		io_size = (cm->cm_length >> sc->DD_block_exponent);

		/*
		 * Get virtual LBA.  The LBA occupies CDB bytes 2-9 in
		 * big-endian order, so the most significant byte is
		 * shifted by 56.
		 */
		virtLBA = ((uint64_t)CDB[2] << 56) |
		    ((uint64_t)CDB[3] << 48) |
		    ((uint64_t)CDB[4] << 40) |
		    ((uint64_t)CDB[5] << 32) |
		    ((uint64_t)CDB[6] << 24) |
		    ((uint64_t)CDB[7] << 16) |
		    ((uint64_t)CDB[8] << 8) |
		    (uint64_t)CDB[9];

		/*
		 * Check that LBA range for I/O does not exceed volume's
		 * MaxLBA.
		 */
		if ((virtLBA + (uint64_t)io_size - 1) <=
		    sc->DD_max_lba) {
			/*
			 * Check if the I/O crosses a stripe boundary.
			 * If not, translate the virtual LBA to a
			 * physical LBA and set the DevHandle for the
			 * PhysDisk to be used.  If it does cross a
			 * boundary, do normal I/O.  To get the right
			 * DevHandle to use, get the map number for the
			 * column, then use that map number to look up
			 * the DevHandle of the PhysDisk.
			 */
			stripe_offset = (uint32_t)virtLBA &
			    (sc->DD_stripe_size - 1);
			if ((stripe_offset + io_size) <=
			    sc->DD_stripe_size) {
				physLBA = (uint32_t)(virtLBA >>
				    sc->DD_stripe_exponent);
				stripe_unit = physLBA /
				    sc->DD_num_phys_disks;
				column = physLBA %
				    sc->DD_num_phys_disks;
				pIO_req->DevHandle =
				    htole16(sc->DD_column_map[column].
dev_handle); cm->cm_desc.SCSIIO.DevHandle = pIO_req->DevHandle; physLBA = (stripe_unit << sc->DD_stripe_exponent) + stripe_offset; /* * Set upper 4 bytes of LBA to 0. We * assume that the phys disks are less * than 2 TB's in size. Then, set the * lower 4 bytes. */ pIO_req->CDB.CDB32[2] = 0; pIO_req->CDB.CDB32[3] = 0; pIO_req->CDB.CDB32[4] = 0; pIO_req->CDB.CDB32[5] = 0; ptrLBA = &pIO_req->CDB.CDB32[6]; physLBA_byte = (uint8_t)(physLBA >> 24); *ptrLBA = physLBA_byte; ptrLBA = &pIO_req->CDB.CDB32[7]; physLBA_byte = (uint8_t)(physLBA >> 16); *ptrLBA = physLBA_byte; ptrLBA = &pIO_req->CDB.CDB32[8]; physLBA_byte = (uint8_t)(physLBA >> 8); *ptrLBA = physLBA_byte; ptrLBA = &pIO_req->CDB.CDB32[9]; physLBA_byte = (uint8_t)physLBA; *ptrLBA = physLBA_byte; /* * Set flag that Direct Drive I/O is * being done. */ cm->cm_flags |= MPS_CM_FLAGS_DD_IO; } } } } } #if __FreeBSD_version >= 900026 static void mpssas_smpio_complete(struct mps_softc *sc, struct mps_command *cm) { MPI2_SMP_PASSTHROUGH_REPLY *rpl; MPI2_SMP_PASSTHROUGH_REQUEST *req; uint64_t sasaddr; union ccb *ccb; ccb = cm->cm_complete_data; /* * Currently there should be no way we can hit this case. It only * happens when we have a failure to allocate chain frames, and SMP * commands require two S/G elements only. That should be handled * in the standard request size. 
*/ if ((cm->cm_flags & MPS_CM_FLAGS_ERROR_MASK) != 0) { mps_dprint(sc, MPS_ERROR,"%s: cm_flags = %#x on SMP request!\n", __func__, cm->cm_flags); mpssas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); goto bailout; } rpl = (MPI2_SMP_PASSTHROUGH_REPLY *)cm->cm_reply; if (rpl == NULL) { mps_dprint(sc, MPS_ERROR, "%s: NULL cm_reply!\n", __func__); mpssas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); goto bailout; } req = (MPI2_SMP_PASSTHROUGH_REQUEST *)cm->cm_req; sasaddr = le32toh(req->SASAddress.Low); sasaddr |= ((uint64_t)(le32toh(req->SASAddress.High))) << 32; if ((le16toh(rpl->IOCStatus) & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS || rpl->SASStatus != MPI2_SASSTATUS_SUCCESS) { mps_dprint(sc, MPS_XINFO, "%s: IOCStatus %04x SASStatus %02x\n", __func__, le16toh(rpl->IOCStatus), rpl->SASStatus); mpssas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); goto bailout; } mps_dprint(sc, MPS_XINFO, "%s: SMP request to SAS address " "%#jx completed successfully\n", __func__, (uintmax_t)sasaddr); if (ccb->smpio.smp_response[2] == SMP_FR_ACCEPTED) mpssas_set_ccbstatus(ccb, CAM_REQ_CMP); else mpssas_set_ccbstatus(ccb, CAM_SMP_STATUS_ERROR); bailout: /* * We sync in both directions because we had DMAs in the S/G list * in both directions. */ bus_dmamap_sync(sc->buffer_dmat, cm->cm_dmamap, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(sc->buffer_dmat, cm->cm_dmamap); mps_free_command(sc, cm); xpt_done(ccb); } static void mpssas_send_smpcmd(struct mpssas_softc *sassc, union ccb *ccb, uint64_t sasaddr) { struct mps_command *cm; uint8_t *request, *response; MPI2_SMP_PASSTHROUGH_REQUEST *req; struct mps_softc *sc; int error; sc = sassc->sc; error = 0; /* * XXX We don't yet support physical addresses here. 
*/ switch ((ccb->ccb_h.flags & CAM_DATA_MASK)) { case CAM_DATA_PADDR: case CAM_DATA_SG_PADDR: mps_dprint(sc, MPS_ERROR, "%s: physical addresses not supported\n", __func__); mpssas_set_ccbstatus(ccb, CAM_REQ_INVALID); xpt_done(ccb); return; case CAM_DATA_SG: /* * The chip does not support more than one buffer for the * request or response. */ if ((ccb->smpio.smp_request_sglist_cnt > 1) || (ccb->smpio.smp_response_sglist_cnt > 1)) { mps_dprint(sc, MPS_ERROR, "%s: multiple request or response " "buffer segments not supported for SMP\n", __func__); mpssas_set_ccbstatus(ccb, CAM_REQ_INVALID); xpt_done(ccb); return; } /* * The CAM_SCATTER_VALID flag was originally implemented * for the XPT_SCSI_IO CCB, which only has one data pointer. * We have two. So, just take that flag to mean that we * might have S/G lists, and look at the S/G segment count * to figure out whether that is the case for each individual * buffer. */ if (ccb->smpio.smp_request_sglist_cnt != 0) { bus_dma_segment_t *req_sg; req_sg = (bus_dma_segment_t *)ccb->smpio.smp_request; request = (uint8_t *)(uintptr_t)req_sg[0].ds_addr; } else request = ccb->smpio.smp_request; if (ccb->smpio.smp_response_sglist_cnt != 0) { bus_dma_segment_t *rsp_sg; rsp_sg = (bus_dma_segment_t *)ccb->smpio.smp_response; response = (uint8_t *)(uintptr_t)rsp_sg[0].ds_addr; } else response = ccb->smpio.smp_response; break; case CAM_DATA_VADDR: request = ccb->smpio.smp_request; response = ccb->smpio.smp_response; break; default: mpssas_set_ccbstatus(ccb, CAM_REQ_INVALID); xpt_done(ccb); return; } cm = mps_alloc_command(sc); if (cm == NULL) { mps_dprint(sc, MPS_ERROR, "%s: cannot allocate command\n", __func__); mpssas_set_ccbstatus(ccb, CAM_RESRC_UNAVAIL); xpt_done(ccb); return; } req = (MPI2_SMP_PASSTHROUGH_REQUEST *)cm->cm_req; bzero(req, sizeof(*req)); req->Function = MPI2_FUNCTION_SMP_PASSTHROUGH; /* Allow the chip to use any route to this SAS address. 
*/ req->PhysicalPort = 0xff; req->RequestDataLength = htole16(ccb->smpio.smp_request_len); req->SGLFlags = MPI2_SGLFLAGS_SYSTEM_ADDRESS_SPACE | MPI2_SGLFLAGS_SGL_TYPE_MPI; mps_dprint(sc, MPS_XINFO, "%s: sending SMP request to SAS " "address %#jx\n", __func__, (uintmax_t)sasaddr); mpi_init_sge(cm, req, &req->SGL); /* * Set up a uio to pass into mps_map_command(). This allows us to * do one map command, and one busdma call in there. */ cm->cm_uio.uio_iov = cm->cm_iovec; cm->cm_uio.uio_iovcnt = 2; cm->cm_uio.uio_segflg = UIO_SYSSPACE; /* * The read/write flag isn't used by busdma, but set it just in * case. This isn't exactly accurate, either, since we're going in * both directions. */ cm->cm_uio.uio_rw = UIO_WRITE; cm->cm_iovec[0].iov_base = request; cm->cm_iovec[0].iov_len = le16toh(req->RequestDataLength); cm->cm_iovec[1].iov_base = response; cm->cm_iovec[1].iov_len = ccb->smpio.smp_response_len; cm->cm_uio.uio_resid = cm->cm_iovec[0].iov_len + cm->cm_iovec[1].iov_len; /* * Trigger a warning message in mps_data_cb() for the user if we * wind up exceeding two S/G segments. The chip expects one * segment for the request and another for the response. */ cm->cm_max_segs = 2; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_complete = mpssas_smpio_complete; cm->cm_complete_data = ccb; /* * Tell the mapping code that we're using a uio, and that this is * an SMP passthrough request. There is a little special-case * logic there (in mps_data_cb()) to handle the bidirectional * transfer. */ cm->cm_flags |= MPS_CM_FLAGS_USE_UIO | MPS_CM_FLAGS_SMP_PASS | MPS_CM_FLAGS_DATAIN | MPS_CM_FLAGS_DATAOUT; /* The chip data format is little endian. */ req->SASAddress.High = htole32(sasaddr >> 32); req->SASAddress.Low = htole32(sasaddr); /* * XXX Note that we don't have a timeout/abort mechanism here. * From the manual, it looks like task management requests only * work for SCSI IO and SATA passthrough requests. 
We may need to
 * have a mechanism to retry requests in the event of a chip reset
 * at least.  Hopefully the chip will ensure that any errors short
 * of that are relayed back to the driver.
 */
	error = mps_map_command(sc, cm);
	if ((error != 0) && (error != EINPROGRESS)) {
		mps_dprint(sc, MPS_ERROR,
		    "%s: error %d returned from mps_map_command()\n",
		    __func__, error);
		goto bailout_error;
	}
	return;

bailout_error:
	mps_free_command(sc, cm);
	mpssas_set_ccbstatus(ccb, CAM_RESRC_UNAVAIL);
	xpt_done(ccb);
	return;
}

static void
mpssas_action_smpio(struct mpssas_softc *sassc, union ccb *ccb)
{
	struct mps_softc *sc;
	struct mpssas_target *targ;
	uint64_t sasaddr = 0;

	sc = sassc->sc;

	/*
	 * Make sure the target exists.
	 */
	KASSERT(ccb->ccb_h.target_id < sassc->maxtargets,
	    ("Target %d out of bounds in XPT_SMP_IO\n", ccb->ccb_h.target_id));
	targ = &sassc->targets[ccb->ccb_h.target_id];
	if (targ->handle == 0x0) {
		mps_dprint(sc, MPS_ERROR,
		    "%s: target %d does not exist!\n", __func__,
		    ccb->ccb_h.target_id);
		mpssas_set_ccbstatus(ccb, CAM_SEL_TIMEOUT);
		xpt_done(ccb);
		return;
	}

	/*
	 * If this device has an embedded SMP target, we'll talk to it
	 * directly.  Otherwise, figure out what the expander's address is.
	 */
	if ((targ->devinfo & MPI2_SAS_DEVICE_INFO_SMP_TARGET) != 0)
		sasaddr = targ->sasaddr;

	/*
	 * If we don't have a SAS address for the expander yet, try
	 * grabbing it from the page 0x83 information cached in the
	 * transport layer for this target.  LSI expanders report the
	 * expander SAS address as the port-associated SAS address in
	 * Inquiry VPD page 0x83.  Maxim expanders don't report it in page
	 * 0x83.
	 *
	 * XXX KDM disable this for now, but leave it commented out so that
	 * it is obvious that this is another possible way to get the SAS
	 * address.
	 *
	 * The parent handle method below is a little more reliable, and
	 * the other benefit is that it works for devices other than SES
	 * devices.  So you can send a SMP request to a da(4) device and it
	 * will get routed to the expander that device is attached to.
* (Assuming the da(4) device doesn't contain an SMP target...) */ #if 0 if (sasaddr == 0) sasaddr = xpt_path_sas_addr(ccb->ccb_h.path); #endif /* * If we still don't have a SAS address for the expander, look for * the parent device of this device, which is probably the expander. */ if (sasaddr == 0) { #ifdef OLD_MPS_PROBE struct mpssas_target *parent_target; #endif if (targ->parent_handle == 0x0) { mps_dprint(sc, MPS_ERROR, "%s: handle %d does not have a valid " "parent handle!\n", __func__, targ->handle); mpssas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); goto bailout; } #ifdef OLD_MPS_PROBE parent_target = mpssas_find_target_by_handle(sassc, 0, targ->parent_handle); if (parent_target == NULL) { mps_dprint(sc, MPS_ERROR, "%s: handle %d does not have a valid " "parent target!\n", __func__, targ->handle); mpssas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); goto bailout; } if ((parent_target->devinfo & MPI2_SAS_DEVICE_INFO_SMP_TARGET) == 0) { mps_dprint(sc, MPS_ERROR, "%s: handle %d parent %d does not " "have an SMP target!\n", __func__, targ->handle, parent_target->handle); mpssas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); goto bailout; } sasaddr = parent_target->sasaddr; #else /* OLD_MPS_PROBE */ if ((targ->parent_devinfo & MPI2_SAS_DEVICE_INFO_SMP_TARGET) == 0) { mps_dprint(sc, MPS_ERROR, "%s: handle %d parent %d does not " "have an SMP target!\n", __func__, targ->handle, targ->parent_handle); mpssas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); goto bailout; } if (targ->parent_sasaddr == 0x0) { mps_dprint(sc, MPS_ERROR, "%s: handle %d parent handle %d does " "not have a valid SAS address!\n", __func__, targ->handle, targ->parent_handle); mpssas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); goto bailout; } sasaddr = targ->parent_sasaddr; #endif /* OLD_MPS_PROBE */ } if (sasaddr == 0) { mps_dprint(sc, MPS_INFO, "%s: unable to find SAS address for handle %d\n", __func__, targ->handle); mpssas_set_ccbstatus(ccb, CAM_DEV_NOT_THERE); goto bailout; } mpssas_send_smpcmd(sassc, ccb, sasaddr); return; 
bailout: xpt_done(ccb); } #endif //__FreeBSD_version >= 900026 static void mpssas_action_resetdev(struct mpssas_softc *sassc, union ccb *ccb) { MPI2_SCSI_TASK_MANAGE_REQUEST *req; struct mps_softc *sc; struct mps_command *tm; struct mpssas_target *targ; MPS_FUNCTRACE(sassc->sc); mtx_assert(&sassc->sc->mps_mtx, MA_OWNED); KASSERT(ccb->ccb_h.target_id < sassc->maxtargets, ("Target %d out of bounds in XPT_RESET_DEV\n", ccb->ccb_h.target_id)); sc = sassc->sc; - tm = mps_alloc_command(sc); + tm = mpssas_alloc_tm(sc); if (tm == NULL) { mps_dprint(sc, MPS_ERROR, "command alloc failure in mpssas_action_resetdev\n"); mpssas_set_ccbstatus(ccb, CAM_RESRC_UNAVAIL); xpt_done(ccb); return; } targ = &sassc->targets[ccb->ccb_h.target_id]; req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; req->DevHandle = htole16(targ->handle); - req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT; req->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET; /* SAS Hard Link Reset / SATA Link Reset */ req->MsgFlags = MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET; tm->cm_data = NULL; - tm->cm_desc.HighPriority.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY; tm->cm_complete = mpssas_resetdev_complete; tm->cm_complete_data = ccb; tm->cm_targ = targ; - targ->flags |= MPSSAS_TARGET_INRESET; + mpssas_prepare_for_tm(sc, tm, targ, CAM_LUN_WILDCARD); mps_map_command(sc, tm); } static void mpssas_resetdev_complete(struct mps_softc *sc, struct mps_command *tm) { MPI2_SCSI_TASK_MANAGE_REPLY *resp; union ccb *ccb; MPS_FUNCTRACE(sc); mtx_assert(&sc->mps_mtx, MA_OWNED); resp = (MPI2_SCSI_TASK_MANAGE_REPLY *)tm->cm_reply; ccb = tm->cm_complete_data; /* * Currently there should be no way we can hit this case. It only * happens when we have a failure to allocate chain frames, and * task management commands don't have S/G lists. 
*/ if ((tm->cm_flags & MPS_CM_FLAGS_ERROR_MASK) != 0) { MPI2_SCSI_TASK_MANAGE_REQUEST *req; req = (MPI2_SCSI_TASK_MANAGE_REQUEST *)tm->cm_req; mps_dprint(sc, MPS_ERROR, "%s: cm_flags = %#x for reset of handle %#04x! " "This should not happen!\n", __func__, tm->cm_flags, req->DevHandle); mpssas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); goto bailout; } mps_dprint(sc, MPS_XINFO, "%s: IOCStatus = 0x%x ResponseCode = 0x%x\n", __func__, le16toh(resp->IOCStatus), le32toh(resp->ResponseCode)); if (le32toh(resp->ResponseCode) == MPI2_SCSITASKMGMT_RSP_TM_COMPLETE) { mpssas_set_ccbstatus(ccb, CAM_REQ_CMP); mpssas_announce_reset(sc, AC_SENT_BDR, tm->cm_targ->tid, CAM_LUN_WILDCARD); } else mpssas_set_ccbstatus(ccb, CAM_REQ_CMP_ERR); bailout: mpssas_free_tm(sc, tm); xpt_done(ccb); } static void mpssas_poll(struct cam_sim *sim) { struct mpssas_softc *sassc; sassc = cam_sim_softc(sim); if (sassc->sc->mps_debug & MPS_TRACE) { /* frequent debug messages during a panic just slow * everything down too much. */ mps_printf(sassc->sc, "%s clearing MPS_TRACE\n", __func__); sassc->sc->mps_debug &= ~MPS_TRACE; } mps_intr_locked(sassc->sc); } static void mpssas_async(void *callback_arg, uint32_t code, struct cam_path *path, void *arg) { struct mps_softc *sc; sc = (struct mps_softc *)callback_arg; switch (code) { #if (__FreeBSD_version >= 1000006) || \ ((__FreeBSD_version >= 901503) && (__FreeBSD_version < 1000000)) case AC_ADVINFO_CHANGED: { struct mpssas_target *target; struct mpssas_softc *sassc; struct scsi_read_capacity_data_long rcap_buf; struct ccb_dev_advinfo cdai; struct mpssas_lun *lun; lun_id_t lunid; int found_lun; uintptr_t buftype; buftype = (uintptr_t)arg; found_lun = 0; sassc = sc->sassc; /* * We're only interested in read capacity data changes. */ if (buftype != CDAI_TYPE_RCAPLONG) break; /* * We should have a handle for this, but check to make sure. 
*/ KASSERT(xpt_path_target_id(path) < sassc->maxtargets, ("Target %d out of bounds in mpssas_async\n", xpt_path_target_id(path))); target = &sassc->targets[xpt_path_target_id(path)]; if (target->handle == 0) break; lunid = xpt_path_lun_id(path); SLIST_FOREACH(lun, &target->luns, lun_link) { if (lun->lun_id == lunid) { found_lun = 1; break; } } if (found_lun == 0) { lun = malloc(sizeof(struct mpssas_lun), M_MPT2, M_NOWAIT | M_ZERO); if (lun == NULL) { mps_dprint(sc, MPS_ERROR, "Unable to alloc " "LUN for EEDP support.\n"); break; } lun->lun_id = lunid; SLIST_INSERT_HEAD(&target->luns, lun, lun_link); } bzero(&rcap_buf, sizeof(rcap_buf)); xpt_setup_ccb(&cdai.ccb_h, path, CAM_PRIORITY_NORMAL); cdai.ccb_h.func_code = XPT_DEV_ADVINFO; cdai.ccb_h.flags = CAM_DIR_IN; cdai.buftype = CDAI_TYPE_RCAPLONG; #if (__FreeBSD_version >= 1100061) || \ ((__FreeBSD_version >= 1001510) && (__FreeBSD_version < 1100000)) cdai.flags = CDAI_FLAG_NONE; #else cdai.flags = 0; #endif cdai.bufsiz = sizeof(rcap_buf); cdai.buf = (uint8_t *)&rcap_buf; xpt_action((union ccb *)&cdai); if ((cdai.ccb_h.status & CAM_DEV_QFRZN) != 0) cam_release_devq(cdai.ccb_h.path, 0, 0, 0, FALSE); if ((mpssas_get_ccbstatus((union ccb *)&cdai) == CAM_REQ_CMP) && (rcap_buf.prot & SRC16_PROT_EN)) { switch (rcap_buf.prot & SRC16_P_TYPE) { case SRC16_PTYPE_1: case SRC16_PTYPE_3: lun->eedp_formatted = TRUE; lun->eedp_block_size = scsi_4btoul(rcap_buf.length); break; case SRC16_PTYPE_2: default: lun->eedp_formatted = FALSE; lun->eedp_block_size = 0; break; } } else { lun->eedp_formatted = FALSE; lun->eedp_block_size = 0; } break; } #else case AC_FOUND_DEVICE: { struct ccb_getdev *cgd; cgd = arg; mpssas_check_eedp(sc, path, cgd); break; } #endif default: break; } } #if (__FreeBSD_version < 901503) || \ ((__FreeBSD_version >= 1000000) && (__FreeBSD_version < 1000006)) static void mpssas_check_eedp(struct mps_softc *sc, struct cam_path *path, struct ccb_getdev *cgd) { struct mpssas_softc *sassc = sc->sassc; struct ccb_scsiio 
*csio; struct scsi_read_capacity_16 *scsi_cmd; struct scsi_read_capacity_eedp *rcap_buf; path_id_t pathid; target_id_t targetid; lun_id_t lunid; union ccb *ccb; struct cam_path *local_path; struct mpssas_target *target; struct mpssas_lun *lun; uint8_t found_lun; char path_str[64]; sassc = sc->sassc; pathid = cam_sim_path(sassc->sim); targetid = xpt_path_target_id(path); lunid = xpt_path_lun_id(path); KASSERT(targetid < sassc->maxtargets, ("Target %d out of bounds in mpssas_check_eedp\n", targetid)); target = &sassc->targets[targetid]; if (target->handle == 0x0) return; /* * Determine if the device is EEDP capable. * * If this flag is set in the inquiry data, * the device supports protection information, * and must support the 16 byte read * capacity command, otherwise continue without * sending read cap 16 */ if ((cgd->inq_data.spc3_flags & SPC3_SID_PROTECT) == 0) return; /* * Issue a READ CAPACITY 16 command. This info * is used to determine if the LUN is formatted * for EEDP support. */ ccb = xpt_alloc_ccb_nowait(); if (ccb == NULL) { mps_dprint(sc, MPS_ERROR, "Unable to alloc CCB " "for EEDP support.\n"); return; } if (xpt_create_path(&local_path, xpt_periph, pathid, targetid, lunid) != CAM_REQ_CMP) { mps_dprint(sc, MPS_ERROR, "Unable to create " "path for EEDP support\n"); xpt_free_ccb(ccb); return; } /* * If LUN is already in list, don't create a new * one. 
*/ found_lun = FALSE; SLIST_FOREACH(lun, &target->luns, lun_link) { if (lun->lun_id == lunid) { found_lun = TRUE; break; } } if (!found_lun) { lun = malloc(sizeof(struct mpssas_lun), M_MPT2, M_NOWAIT | M_ZERO); if (lun == NULL) { mps_dprint(sc, MPS_ERROR, "Unable to alloc LUN for EEDP support.\n"); xpt_free_path(local_path); xpt_free_ccb(ccb); return; } lun->lun_id = lunid; SLIST_INSERT_HEAD(&target->luns, lun, lun_link); } xpt_path_string(local_path, path_str, sizeof(path_str)); mps_dprint(sc, MPS_INFO, "Sending read cap: path %s handle %d\n", path_str, target->handle); /* * Issue a READ CAPACITY 16 command for the LUN. * The mpssas_read_cap_done function will load * the read cap info into the LUN struct. */ rcap_buf = malloc(sizeof(struct scsi_read_capacity_eedp), M_MPT2, M_NOWAIT | M_ZERO); if (rcap_buf == NULL) { mps_dprint(sc, MPS_FAULT, "Unable to alloc read capacity buffer for EEDP support.\n"); xpt_free_path(ccb->ccb_h.path); xpt_free_ccb(ccb); return; } xpt_setup_ccb(&ccb->ccb_h, local_path, CAM_PRIORITY_XPT); csio = &ccb->csio; csio->ccb_h.func_code = XPT_SCSI_IO; csio->ccb_h.flags = CAM_DIR_IN; csio->ccb_h.retry_count = 4; csio->ccb_h.cbfcnp = mpssas_read_cap_done; csio->ccb_h.timeout = 60000; csio->data_ptr = (uint8_t *)rcap_buf; csio->dxfer_len = sizeof(struct scsi_read_capacity_eedp); csio->sense_len = MPS_SENSE_LEN; csio->cdb_len = sizeof(*scsi_cmd); csio->tag_action = MSG_SIMPLE_Q_TAG; scsi_cmd = (struct scsi_read_capacity_16 *)&csio->cdb_io.cdb_bytes; bzero(scsi_cmd, sizeof(*scsi_cmd)); scsi_cmd->opcode = 0x9E; scsi_cmd->service_action = SRC16_SERVICE_ACTION; ((uint8_t *)scsi_cmd)[13] = sizeof(struct scsi_read_capacity_eedp); ccb->ccb_h.ppriv_ptr1 = sassc; xpt_action(ccb); } static void mpssas_read_cap_done(struct cam_periph *periph, union ccb *done_ccb) { struct mpssas_softc *sassc; struct mpssas_target *target; struct mpssas_lun *lun; struct scsi_read_capacity_eedp *rcap_buf; if (done_ccb == NULL) return; /* The driver needs to release the devq here because this SCSI command is * generated internally by the driver. * Currently there is a single place where the driver * issues a SCSI command internally. If the driver * issues more internal SCSI commands in the future, it needs to release * the devq for those as well, since such commands will not go back to * cam_periph. */ if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN)) { done_ccb->ccb_h.status &= ~CAM_DEV_QFRZN; xpt_release_devq(done_ccb->ccb_h.path, /*count*/ 1, /*run_queue*/TRUE); } rcap_buf = (struct scsi_read_capacity_eedp *)done_ccb->csio.data_ptr; /* * Get the LUN ID for the path and look it up in the LUN list for the * target. */ sassc = (struct mpssas_softc *)done_ccb->ccb_h.ppriv_ptr1; KASSERT(done_ccb->ccb_h.target_id < sassc->maxtargets, ("Target %d out of bounds in mpssas_read_cap_done\n", done_ccb->ccb_h.target_id)); target = &sassc->targets[done_ccb->ccb_h.target_id]; SLIST_FOREACH(lun, &target->luns, lun_link) { if (lun->lun_id != done_ccb->ccb_h.target_lun) continue; /* * Got the LUN in the target's LUN list. Fill it in * with EEDP info. If the READ CAP 16 command had some * SCSI error (common if command is not supported), mark * the lun as not supporting EEDP and set the block size * to 0. */ if ((mpssas_get_ccbstatus(done_ccb) != CAM_REQ_CMP) || (done_ccb->csio.scsi_status != SCSI_STATUS_OK)) { lun->eedp_formatted = FALSE; lun->eedp_block_size = 0; break; } if (rcap_buf->protect & 0x01) { mps_dprint(sassc->sc, MPS_INFO, "LUN %d for " "target ID %d is formatted for EEDP " "support.\n", done_ccb->ccb_h.target_lun, done_ccb->ccb_h.target_id); lun->eedp_formatted = TRUE; lun->eedp_block_size = scsi_4btoul(rcap_buf->length); } break; } // Finished with this CCB and path. free(rcap_buf, M_MPT2); xpt_free_path(done_ccb->ccb_h.path); xpt_free_ccb(done_ccb); } #endif /* (__FreeBSD_version < 901503) || \ ((__FreeBSD_version >= 1000000) && (__FreeBSD_version < 1000006)) */ +/* + * Set the INRESET flag for this target so that no I/O will be sent to + * the target until the reset has completed.
If an I/O request does + * happen, the devq will be frozen. The CCB holds the path which is + * used to release the devq. The devq is released and the CCB is freed + * when the TM completes. + */ void mpssas_prepare_for_tm(struct mps_softc *sc, struct mps_command *tm, struct mpssas_target *target, lun_id_t lun_id) { union ccb *ccb; path_id_t path_id; - /* - * Set the INRESET flag for this target so that no I/O will be sent to - * the target until the reset has completed. If an I/O request does - * happen, the devq will be frozen. The CCB holds the path which is - * used to release the devq. The devq is released and the CCB is freed - * when the TM completes. - */ ccb = xpt_alloc_ccb_nowait(); if (ccb) { path_id = cam_sim_path(sc->sassc->sim); if (xpt_create_path(&ccb->ccb_h.path, xpt_periph, path_id, target->tid, lun_id) != CAM_REQ_CMP) { xpt_free_ccb(ccb); } else { tm->cm_ccb = ccb; tm->cm_targ = target; target->flags |= MPSSAS_TARGET_INRESET; } } } int mpssas_startup(struct mps_softc *sc) { /* * Send the port enable message and set the wait_for_port_enable flag. * This flag helps to keep the simq frozen until all discovery events * are processed. 
*/ sc->wait_for_port_enable = 1; mpssas_send_portenable(sc); return (0); } static int mpssas_send_portenable(struct mps_softc *sc) { MPI2_PORT_ENABLE_REQUEST *request; struct mps_command *cm; MPS_FUNCTRACE(sc); if ((cm = mps_alloc_command(sc)) == NULL) return (EBUSY); request = (MPI2_PORT_ENABLE_REQUEST *)cm->cm_req; request->Function = MPI2_FUNCTION_PORT_ENABLE; request->MsgFlags = 0; request->VP_ID = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_complete = mpssas_portenable_complete; cm->cm_data = NULL; cm->cm_sge = NULL; mps_map_command(sc, cm); mps_dprint(sc, MPS_XINFO, "mps_send_portenable finished cm %p req %p complete %p\n", cm, cm->cm_req, cm->cm_complete); return (0); } static void mpssas_portenable_complete(struct mps_softc *sc, struct mps_command *cm) { MPI2_PORT_ENABLE_REPLY *reply; struct mpssas_softc *sassc; MPS_FUNCTRACE(sc); sassc = sc->sassc; /* * Currently there should be no way we can hit this case. It only * happens when we have a failure to allocate chain frames, and * port enable commands don't have S/G lists. */ if ((cm->cm_flags & MPS_CM_FLAGS_ERROR_MASK) != 0) { mps_dprint(sc, MPS_ERROR, "%s: cm_flags = %#x for port enable! " "This should not happen!\n", __func__, cm->cm_flags); } reply = (MPI2_PORT_ENABLE_REPLY *)cm->cm_reply; if (reply == NULL) mps_dprint(sc, MPS_FAULT, "Portenable NULL reply\n"); else if (le16toh(reply->IOCStatus & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) mps_dprint(sc, MPS_FAULT, "Portenable failed\n"); mps_free_command(sc, cm); /* * Get WarpDrive info after discovery is complete but before the scan * starts. At this point, all devices are ready to be exposed to the * OS. If devices should be hidden instead, take them out of the * 'targets' array before the scan. The devinfo for a disk will have * some info and a volume's will be 0. Use that to remove disks. */ mps_wd_config_pages(sc); /* * Done waiting for port enable to complete. Decrement the refcount. 
* If refcount is 0, discovery is complete and a rescan of the bus can * take place. Since the simq was explicitly frozen before port * enable, it must be explicitly released here to keep the * freeze/release count in sync. */ sc->wait_for_port_enable = 0; sc->port_enable_complete = 1; wakeup(&sc->port_enable_complete); mpssas_startup_decrement(sassc); } int mpssas_check_id(struct mpssas_softc *sassc, int id) { struct mps_softc *sc = sassc->sc; char *ids; char *name; ids = &sc->exclude_ids[0]; while((name = strsep(&ids, ",")) != NULL) { if (name[0] == '\0') continue; if (strtol(name, NULL, 0) == (long)id) return (1); } return (0); } void mpssas_realloc_targets(struct mps_softc *sc, int maxtargets) { struct mpssas_softc *sassc; struct mpssas_lun *lun, *lun_tmp; struct mpssas_target *targ; int i; sassc = sc->sassc; /* * The number of targets is based on IOC Facts, so free all of * the allocated LUNs for each target and then the target buffer * itself. */ for (i=0; i< maxtargets; i++) { targ = &sassc->targets[i]; SLIST_FOREACH_SAFE(lun, &targ->luns, lun_link, lun_tmp) { free(lun, M_MPT2); } } free(sassc->targets, M_MPT2); sassc->targets = malloc(sizeof(struct mpssas_target) * maxtargets, M_MPT2, M_WAITOK|M_ZERO); if (!sassc->targets) { panic("%s failed to alloc targets with error %d\n", __func__, ENOMEM); } } Index: releng/12.1/sys/dev/mps/mps_sas_lsi.c =================================================================== --- releng/12.1/sys/dev/mps/mps_sas_lsi.c (revision 352760) +++ releng/12.1/sys/dev/mps/mps_sas_lsi.c (revision 352761) @@ -1,1366 +1,1331 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2015 Avago Technologies * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD */ #include __FBSDID("$FreeBSD$"); /* Communications core for Avago Technologies (LSI) MPT2 */ /* TODO Move headers to mpsvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* For Hashed SAS Address creation for SATA Drives */ #define MPT2SAS_SN_LEN 20 #define MPT2SAS_MN_LEN 40 struct mps_fw_event_work { u16 event; void *event_data; TAILQ_ENTRY(mps_fw_event_work) ev_link; }; union _sata_sas_address { u8 wwid[8]; struct { u32 high; u32 low; } word; }; /* * define the IDENTIFY DEVICE structure */ struct _ata_identify_device_data { u16 reserved1[10]; /* 0-9 */ u16 serial_number[10]; /* 10-19 */ u16 reserved2[7]; /* 20-26 */ u16 model_number[20]; /* 27-46*/ u16 reserved3[170]; /* 47-216 */ u16 rotational_speed; /* 217 */ u16 reserved4[38]; /* 218-255 */ }; static u32 event_count; static void mpssas_fw_work(struct mps_softc *sc, struct mps_fw_event_work *fw_event); static void mpssas_fw_event_free(struct mps_softc *, struct mps_fw_event_work *); static int mpssas_add_device(struct mps_softc *sc, u16 handle, u8 linkrate); static int mpssas_get_sata_identify(struct mps_softc *sc, u16 handle, Mpi2SataPassthroughReply_t *mpi_reply, char *id_buffer, int sz, u32 devinfo); -static void mpssas_ata_id_timeout(void *data); +static void mpssas_ata_id_timeout(struct mps_softc *, struct mps_command *); int mpssas_get_sas_address_for_sata_disk(struct mps_softc *sc, u64 *sas_address, u16 handle, u32 device_info, u8 *is_SATA_SSD); static int mpssas_volume_add(struct mps_softc *sc, u16 handle); static void mpssas_SSU_to_SATA_devices(struct mps_softc *sc, int howto); static void 
mpssas_stop_unit_done(struct cam_periph *periph, union ccb *done_ccb); void mpssas_evt_handler(struct mps_softc *sc, uintptr_t data, MPI2_EVENT_NOTIFICATION_REPLY *event) { struct mps_fw_event_work *fw_event; u16 sz; mps_dprint(sc, MPS_TRACE, "%s\n", __func__); MPS_DPRINT_EVENT(sc, sas, event); mpssas_record_event(sc, event); fw_event = malloc(sizeof(struct mps_fw_event_work), M_MPT2, M_ZERO|M_NOWAIT); if (!fw_event) { printf("%s: allocate failed for fw_event\n", __func__); return; } sz = le16toh(event->EventDataLength) * 4; fw_event->event_data = malloc(sz, M_MPT2, M_ZERO|M_NOWAIT); if (!fw_event->event_data) { printf("%s: allocate failed for event_data\n", __func__); free(fw_event, M_MPT2); return; } bcopy(event->EventData, fw_event->event_data, sz); fw_event->event = event->Event; if ((event->Event == MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST || event->Event == MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE || event->Event == MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST) && sc->track_mapping_events) sc->pending_map_events++; /* * When wait_for_port_enable flag is set, make sure that all the events * are processed. Increment the startup_refcount and decrement it after * events are processed. */ if ((event->Event == MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST || event->Event == MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST) && sc->wait_for_port_enable) mpssas_startup_increment(sc->sassc); TAILQ_INSERT_TAIL(&sc->sassc->ev_queue, fw_event, ev_link); taskqueue_enqueue(sc->sassc->ev_tq, &sc->sassc->ev_task); } static void mpssas_fw_event_free(struct mps_softc *sc, struct mps_fw_event_work *fw_event) { free(fw_event->event_data, M_MPT2); free(fw_event, M_MPT2); } /** * _mps_fw_work - delayed task for processing firmware events * @sc: per adapter object * @fw_event: The fw_event_work object * Context: user. * * Return nothing. 
*/ static void mpssas_fw_work(struct mps_softc *sc, struct mps_fw_event_work *fw_event) { struct mpssas_softc *sassc; sassc = sc->sassc; mps_dprint(sc, MPS_EVENT, "(%d)->(%s) Working on Event: [%x]\n", event_count++,__func__,fw_event->event); switch (fw_event->event) { case MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST: { MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST *data; MPI2_EVENT_SAS_TOPO_PHY_ENTRY *phy; int i; data = (MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST *) fw_event->event_data; mps_mapping_topology_change_event(sc, fw_event->event_data); for (i = 0; i < data->NumEntries; i++) { phy = &data->PHY[i]; switch (phy->PhyStatus & MPI2_EVENT_SAS_TOPO_RC_MASK) { case MPI2_EVENT_SAS_TOPO_RC_TARG_ADDED: if (mpssas_add_device(sc, le16toh(phy->AttachedDevHandle), phy->LinkRate)){ mps_dprint(sc, MPS_ERROR, "%s: " "failed to add device with handle " "0x%x\n", __func__, le16toh(phy->AttachedDevHandle)); mpssas_prepare_remove(sassc, le16toh( phy->AttachedDevHandle)); } break; case MPI2_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING: mpssas_prepare_remove(sassc,le16toh( phy->AttachedDevHandle)); break; case MPI2_EVENT_SAS_TOPO_RC_PHY_CHANGED: case MPI2_EVENT_SAS_TOPO_RC_NO_CHANGE: case MPI2_EVENT_SAS_TOPO_RC_DELAY_NOT_RESPONDING: default: break; } } /* * refcount was incremented for this event in * mpssas_evt_handler. Decrement it here because the event has * been processed. 
*/ mpssas_startup_decrement(sassc); break; } case MPI2_EVENT_SAS_DISCOVERY: { MPI2_EVENT_DATA_SAS_DISCOVERY *data; data = (MPI2_EVENT_DATA_SAS_DISCOVERY *)fw_event->event_data; if (data->ReasonCode & MPI2_EVENT_SAS_DISC_RC_STARTED) mps_dprint(sc, MPS_TRACE,"SAS discovery start event\n"); if (data->ReasonCode & MPI2_EVENT_SAS_DISC_RC_COMPLETED) { mps_dprint(sc, MPS_TRACE,"SAS discovery stop event\n"); sassc->flags &= ~MPSSAS_IN_DISCOVERY; mpssas_discovery_end(sassc); } break; } case MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE: { Mpi2EventDataSasEnclDevStatusChange_t *data; data = (Mpi2EventDataSasEnclDevStatusChange_t *) fw_event->event_data; mps_mapping_enclosure_dev_status_change_event(sc, fw_event->event_data); break; } case MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST: { Mpi2EventIrConfigElement_t *element; int i; u8 foreign_config; Mpi2EventDataIrConfigChangeList_t *event_data; struct mpssas_target *targ; unsigned int id; event_data = fw_event->event_data; foreign_config = (le32toh(event_data->Flags) & MPI2_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG) ? 1 : 0; element = (Mpi2EventIrConfigElement_t *)&event_data->ConfigElement[0]; id = mps_mapping_get_raid_tid_from_handle(sc, element->VolDevHandle); mps_mapping_ir_config_change_event(sc, event_data); for (i = 0; i < event_data->NumElements; i++, element++) { switch (element->ReasonCode) { case MPI2_EVENT_IR_CHANGE_RC_VOLUME_CREATED: case MPI2_EVENT_IR_CHANGE_RC_ADDED: if (!foreign_config) { if (mpssas_volume_add(sc, le16toh(element->VolDevHandle))){ printf("%s: failed to add RAID " "volume with handle 0x%x\n", __func__, le16toh(element-> VolDevHandle)); } } break; case MPI2_EVENT_IR_CHANGE_RC_VOLUME_DELETED: case MPI2_EVENT_IR_CHANGE_RC_REMOVED: /* * Rescan after volume is deleted or removed. 
*/ if (!foreign_config) { if (id == MPS_MAP_BAD_ID) { printf("%s: could not get ID " "for volume with handle " "0x%04x\n", __func__, le16toh(element->VolDevHandle)); break; } targ = &sassc->targets[id]; targ->handle = 0x0; targ->encl_slot = 0x0; targ->encl_handle = 0x0; targ->exp_dev_handle = 0x0; targ->phy_num = 0x0; targ->linkrate = 0x0; mpssas_rescan_target(sc, targ); printf("RAID target id 0x%x removed\n", targ->tid); } break; case MPI2_EVENT_IR_CHANGE_RC_PD_CREATED: case MPI2_EVENT_IR_CHANGE_RC_HIDE: /* * Phys Disk of a volume has been created. Hide * it from the OS. */ targ = mpssas_find_target_by_handle(sassc, 0, element->PhysDiskDevHandle); if (targ == NULL) break; /* * Set raid component flags only if it is not * WD. OR WrapDrive with * WD_HIDE_ALWAYS/WD_HIDE_IF_VOLUME is set in * NVRAM */ if((!sc->WD_available) || ((sc->WD_available && (sc->WD_hide_expose == MPS_WD_HIDE_ALWAYS)) || (sc->WD_valid_config && (sc->WD_hide_expose == MPS_WD_HIDE_IF_VOLUME)))) { targ->flags |= MPS_TARGET_FLAGS_RAID_COMPONENT; } mpssas_rescan_target(sc, targ); break; case MPI2_EVENT_IR_CHANGE_RC_PD_DELETED: /* * Phys Disk of a volume has been deleted. * Expose it to the OS. */ if (mpssas_add_device(sc, le16toh(element->PhysDiskDevHandle), 0)){ printf("%s: failed to add device with " "handle 0x%x\n", __func__, le16toh(element->PhysDiskDevHandle)); mpssas_prepare_remove(sassc, le16toh(element-> PhysDiskDevHandle)); } break; } } /* * refcount was incremented for this event in * mpssas_evt_handler. Decrement it here because the event has * been processed. */ mpssas_startup_decrement(sassc); break; } case MPI2_EVENT_IR_VOLUME: { Mpi2EventDataIrVolume_t *event_data = fw_event->event_data; /* * Informational only. 
*/ mps_dprint(sc, MPS_EVENT, "Received IR Volume event:\n"); switch (event_data->ReasonCode) { case MPI2_EVENT_IR_VOLUME_RC_SETTINGS_CHANGED: mps_dprint(sc, MPS_EVENT, " Volume Settings " "changed from 0x%x to 0x%x for Volume with " "handle 0x%x", le32toh(event_data->PreviousValue), le32toh(event_data->NewValue), le16toh(event_data->VolDevHandle)); break; case MPI2_EVENT_IR_VOLUME_RC_STATUS_FLAGS_CHANGED: mps_dprint(sc, MPS_EVENT, " Volume Status " "changed from 0x%x to 0x%x for Volume with " "handle 0x%x", le32toh(event_data->PreviousValue), le32toh(event_data->NewValue), le16toh(event_data->VolDevHandle)); break; case MPI2_EVENT_IR_VOLUME_RC_STATE_CHANGED: mps_dprint(sc, MPS_EVENT, " Volume State " "changed from 0x%x to 0x%x for Volume with " "handle 0x%x", le32toh(event_data->PreviousValue), le32toh(event_data->NewValue), le16toh(event_data->VolDevHandle)); u32 state; struct mpssas_target *targ; state = le32toh(event_data->NewValue); switch (state) { case MPI2_RAID_VOL_STATE_MISSING: case MPI2_RAID_VOL_STATE_FAILED: mpssas_prepare_volume_remove(sassc, event_data-> VolDevHandle); break; case MPI2_RAID_VOL_STATE_ONLINE: case MPI2_RAID_VOL_STATE_DEGRADED: case MPI2_RAID_VOL_STATE_OPTIMAL: targ = mpssas_find_target_by_handle(sassc, 0, event_data->VolDevHandle); if (targ) { printf("%s %d: Volume handle 0x%x is already added \n", __func__, __LINE__ , event_data->VolDevHandle); break; } if (mpssas_volume_add(sc, le16toh(event_data->VolDevHandle))) { printf("%s: failed to add RAID " "volume with handle 0x%x\n", __func__, le16toh(event_data-> VolDevHandle)); } break; default: break; } break; default: break; } break; } case MPI2_EVENT_IR_PHYSICAL_DISK: { Mpi2EventDataIrPhysicalDisk_t *event_data = fw_event->event_data; struct mpssas_target *targ; /* * Informational only.
*/ mps_dprint(sc, MPS_EVENT, "Received IR Phys Disk event:\n"); switch (event_data->ReasonCode) { case MPI2_EVENT_IR_PHYSDISK_RC_SETTINGS_CHANGED: mps_dprint(sc, MPS_EVENT, " Phys Disk Settings " "changed from 0x%x to 0x%x for Phys Disk Number " "%d and handle 0x%x at Enclosure handle 0x%x, Slot " "%d", le32toh(event_data->PreviousValue), le32toh(event_data->NewValue), event_data->PhysDiskNum, le16toh(event_data->PhysDiskDevHandle), le16toh(event_data->EnclosureHandle), le16toh(event_data->Slot)); break; case MPI2_EVENT_IR_PHYSDISK_RC_STATUS_FLAGS_CHANGED: mps_dprint(sc, MPS_EVENT, " Phys Disk Status changed " "from 0x%x to 0x%x for Phys Disk Number %d and " "handle 0x%x at Enclosure handle 0x%x, Slot %d", le32toh(event_data->PreviousValue), le32toh(event_data->NewValue), event_data->PhysDiskNum, le16toh(event_data->PhysDiskDevHandle), le16toh(event_data->EnclosureHandle), le16toh(event_data->Slot)); break; case MPI2_EVENT_IR_PHYSDISK_RC_STATE_CHANGED: mps_dprint(sc, MPS_EVENT, " Phys Disk State changed " "from 0x%x to 0x%x for Phys Disk Number %d and " "handle 0x%x at Enclosure handle 0x%x, Slot %d", le32toh(event_data->PreviousValue), le32toh(event_data->NewValue), event_data->PhysDiskNum, le16toh(event_data->PhysDiskDevHandle), le16toh(event_data->EnclosureHandle), le16toh(event_data->Slot)); switch (event_data->NewValue) { case MPI2_RAID_PD_STATE_ONLINE: case MPI2_RAID_PD_STATE_DEGRADED: case MPI2_RAID_PD_STATE_REBUILDING: case MPI2_RAID_PD_STATE_OPTIMAL: case MPI2_RAID_PD_STATE_HOT_SPARE: targ = mpssas_find_target_by_handle(sassc, 0, event_data->PhysDiskDevHandle); if (targ) { if(!sc->WD_available) { targ->flags |= MPS_TARGET_FLAGS_RAID_COMPONENT; printf("%s %d: Found Target for handle 0x%x. 
\n", __func__, __LINE__ , event_data->PhysDiskDevHandle); } else if ((sc->WD_available && (sc->WD_hide_expose == MPS_WD_HIDE_ALWAYS)) || (sc->WD_valid_config && (sc->WD_hide_expose == MPS_WD_HIDE_IF_VOLUME))) { targ->flags |= MPS_TARGET_FLAGS_RAID_COMPONENT; printf("%s %d: WD: Found Target for handle 0x%x. \n", __func__, __LINE__ , event_data->PhysDiskDevHandle); } } break; case MPI2_RAID_PD_STATE_OFFLINE: case MPI2_RAID_PD_STATE_NOT_CONFIGURED: case MPI2_RAID_PD_STATE_NOT_COMPATIBLE: default: targ = mpssas_find_target_by_handle(sassc, 0, event_data->PhysDiskDevHandle); if (targ) { targ->flags &= ~MPS_TARGET_FLAGS_RAID_COMPONENT; printf("%s %d: Found Target for handle 0x%x. \n", __func__, __LINE__ , event_data->PhysDiskDevHandle); } break; } default: break; } break; } case MPI2_EVENT_IR_OPERATION_STATUS: { Mpi2EventDataIrOperationStatus_t *event_data = fw_event->event_data; /* * Informational only. */ mps_dprint(sc, MPS_EVENT, "Received IR Op Status event:\n"); mps_dprint(sc, MPS_EVENT, " RAID Operation of %d is %d " "percent complete for Volume with handle 0x%x", event_data->RAIDOperation, event_data->PercentComplete, le16toh(event_data->VolDevHandle)); break; } case MPI2_EVENT_LOG_ENTRY_ADDED: { pMpi2EventDataLogEntryAdded_t logEntry; uint16_t logQualifier; uint8_t logCode; logEntry = (pMpi2EventDataLogEntryAdded_t)fw_event->event_data; logQualifier = logEntry->LogEntryQualifier; if (logQualifier == MPI2_WD_LOG_ENTRY) { logCode = logEntry->LogData[0]; switch (logCode) { case MPI2_WD_SSD_THROTTLING: printf("WarpDrive Warning: IO Throttling has " "occurred in the WarpDrive subsystem. " "Check WarpDrive documentation for " "additional details\n"); break; case MPI2_WD_DRIVE_LIFE_WARN: printf("WarpDrive Warning: Program/Erase " "Cycles for the WarpDrive subsystem in " "degraded range.
Check WarpDrive " "documentation for additional details\n"); break; case MPI2_WD_DRIVE_LIFE_DEAD: printf("WarpDrive Fatal Error: There are no " "Program/Erase Cycles for the WarpDrive " "subsystem. The storage device will be in " "read-only mode. Check WarpDrive " "documentation for additional details\n"); break; case MPI2_WD_RAIL_MON_FAIL: printf("WarpDrive Fatal Error: The Backup Rail " "Monitor has failed on the WarpDrive " "subsystem. Check WarpDrive documentation " "for additional details\n"); break; default: break; } } break; } case MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE: case MPI2_EVENT_SAS_BROADCAST_PRIMITIVE: default: mps_dprint(sc, MPS_TRACE,"Unhandled event 0x%0X\n", fw_event->event); break; } mps_dprint(sc, MPS_EVENT, "(%d)->(%s) Event Free: [%x]\n",event_count,__func__, fw_event->event); mpssas_fw_event_free(sc, fw_event); } void mpssas_firmware_event_work(void *arg, int pending) { struct mps_fw_event_work *fw_event; struct mps_softc *sc; sc = (struct mps_softc *)arg; mps_lock(sc); while ((fw_event = TAILQ_FIRST(&sc->sassc->ev_queue)) != NULL) { TAILQ_REMOVE(&sc->sassc->ev_queue, fw_event, ev_link); mpssas_fw_work(sc, fw_event); } mps_unlock(sc); } static int mpssas_add_device(struct mps_softc *sc, u16 handle, u8 linkrate){ char devstring[80]; struct mpssas_softc *sassc; struct mpssas_target *targ; Mpi2ConfigReply_t mpi_reply; Mpi2SasDevicePage0_t config_page; uint64_t sas_address; uint64_t parent_sas_address = 0; u32 device_info, parent_devinfo = 0; unsigned int id; int ret = 1, error = 0, i; struct mpssas_lun *lun; u8 is_SATA_SSD = 0; struct mps_command *cm; sassc = sc->sassc; mpssas_startup_increment(sassc); if (mps_config_get_sas_device_pg0(sc, &mpi_reply, &config_page, MPI2_SAS_DEVICE_PGAD_FORM_HANDLE, handle) != 0) { mps_dprint(sc, MPS_INFO|MPS_MAPPING|MPS_FAULT, "Error reading SAS device %#x page0, iocstatus= 0x%x\n", handle, mpi_reply.IOCStatus); error = ENXIO; goto out; } device_info = le32toh(config_page.DeviceInfo); if (((device_info & 
MPI2_SAS_DEVICE_INFO_SMP_TARGET) == 0) && (le16toh(config_page.ParentDevHandle) != 0)) { Mpi2ConfigReply_t tmp_mpi_reply; Mpi2SasDevicePage0_t parent_config_page; if (mps_config_get_sas_device_pg0(sc, &tmp_mpi_reply, &parent_config_page, MPI2_SAS_DEVICE_PGAD_FORM_HANDLE, le16toh(config_page.ParentDevHandle)) != 0) { mps_dprint(sc, MPS_MAPPING|MPS_FAULT, "Error reading parent SAS device %#x page0, " "iocstatus= 0x%x\n", le16toh(config_page.ParentDevHandle), tmp_mpi_reply.IOCStatus); } else { parent_sas_address = parent_config_page.SASAddress.High; parent_sas_address = (parent_sas_address << 32) | parent_config_page.SASAddress.Low; parent_devinfo = le32toh(parent_config_page.DeviceInfo); } } /* TODO Check proper endianness */ sas_address = config_page.SASAddress.High; sas_address = (sas_address << 32) | config_page.SASAddress.Low; mps_dprint(sc, MPS_MAPPING, "Handle 0x%04x SAS Address from SAS device " "page0 = %jx\n", handle, sas_address); /* * Always get SATA Identify information because this is used to * determine if Start/Stop Unit should be sent to the drive when the * system is shutdown. */ if (device_info & MPI2_SAS_DEVICE_INFO_SATA_DEVICE) { ret = mpssas_get_sas_address_for_sata_disk(sc, &sas_address, handle, device_info, &is_SATA_SSD); if (ret) { mps_dprint(sc, MPS_MAPPING|MPS_ERROR, "%s: failed to get disk type (SSD or HDD) for SATA " "device with handle 0x%04x\n", __func__, handle); } else { mps_dprint(sc, MPS_MAPPING, "Handle 0x%04x SAS Address " "from SATA device = %jx\n", handle, sas_address); } } /* * use_phynum: * 1 - use the PhyNum field as a fallback to the mapping logic * 0 - never use the PhyNum field * -1 - only use the PhyNum field * * Note that using the Phy number to map a device can cause device adds * to fail if multiple enclosures/expanders are in the topology. For * example, if two devices are in the same slot number in two different * enclosures within the topology, only one of those devices will be * added. 
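The `use_phynum` policy described in this comment can be sketched as a standalone selection function. This is a minimal illustration, not driver code: `pick_target_id`, its parameters, and `MAP_BAD_ID` are hypothetical stand-ins for the driver's mapping-table lookup (`mps_mapping_get_tid`), `config_page.PhyNum`, and `MPS_MAP_BAD_ID`.

```c
#include <stdint.h>

#define MAP_BAD_ID 0xFFFFFFFFu	/* hypothetical stand-in for MPS_MAP_BAD_ID */

/*
 * Sketch of the target-ID selection policy described in the driver
 * comment: use_phynum == 1 falls back to PhyNum when the mapping
 * lookup fails, 0 never uses PhyNum, -1 uses only PhyNum.
 */
static uint32_t
pick_target_id(int use_phynum, uint32_t mapped_id, uint32_t phynum,
    uint32_t maxtargets)
{
	uint32_t id = MAP_BAD_ID;

	if (use_phynum != -1)
		id = mapped_id;		/* result of the mapping-table lookup */
	if (id == MAP_BAD_ID) {
		if (use_phynum == 0 || phynum > maxtargets)
			return (MAP_BAD_ID);	/* device add fails */
		id = phynum;
	}
	return (id);
}
```

With `use_phynum == -1` the mapping lookup is skipped entirely, so the function returns PhyNum (or fails), matching the "only use the PhyNum field" case.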
PhyNum mapping should not be used if multiple enclosures are * in the topology. */ id = MPS_MAP_BAD_ID; if (sc->use_phynum != -1) id = mps_mapping_get_tid(sc, sas_address, handle); if (id == MPS_MAP_BAD_ID) { if ((sc->use_phynum == 0) || ((id = config_page.PhyNum) > sassc->maxtargets)) { mps_dprint(sc, MPS_INFO, "failure at %s:%d/%s()! " "Could not get ID for device with handle 0x%04x\n", __FILE__, __LINE__, __func__, handle); error = ENXIO; goto out; } } mps_dprint(sc, MPS_MAPPING, "%s: Target ID for added device is %d.\n", __func__, id); /* * Only do the ID check and reuse check if the target is not from a * RAID Component. For Physical Disks of a Volume, the ID will be reused * when a volume is deleted because the mapping entry for the PD will * still be in the mapping table. The ID check should not be done here * either since this PD is already being used. */ targ = &sassc->targets[id]; if (!(targ->flags & MPS_TARGET_FLAGS_RAID_COMPONENT)) { if (mpssas_check_id(sassc, id) != 0) { mps_dprint(sc, MPS_MAPPING|MPS_INFO, "Excluding target id %d\n", id); error = ENXIO; goto out; } if (targ->handle != 0x0) { mps_dprint(sc, MPS_MAPPING, "Attempting to reuse " "target id %d handle 0x%04x\n", id, targ->handle); error = ENXIO; goto out; } } targ->devinfo = device_info; targ->devname = le32toh(config_page.DeviceName.High); targ->devname = (targ->devname << 32) | le32toh(config_page.DeviceName.Low); targ->encl_handle = le16toh(config_page.EnclosureHandle); targ->encl_slot = le16toh(config_page.Slot); targ->handle = handle; targ->parent_handle = le16toh(config_page.ParentDevHandle); targ->sasaddr = mps_to_u64(&config_page.SASAddress); targ->parent_sasaddr = le64toh(parent_sas_address); targ->parent_devinfo = parent_devinfo; targ->tid = id; targ->linkrate = (linkrate>>4); targ->flags = 0; if (is_SATA_SSD) { targ->flags = MPS_TARGET_IS_SATA_SSD; } TAILQ_INIT(&targ->commands); TAILQ_INIT(&targ->timedout_commands); while(!SLIST_EMPTY(&targ->luns)) { lun = 
SLIST_FIRST(&targ->luns); SLIST_REMOVE_HEAD(&targ->luns, lun_link); free(lun, M_MPT2); } SLIST_INIT(&targ->luns); mps_describe_devinfo(targ->devinfo, devstring, 80); mps_dprint(sc, MPS_MAPPING, "Found device <%s> <%s> <0x%04x> <%d/%d>\n", devstring, mps_describe_table(mps_linkrate_names, targ->linkrate), targ->handle, targ->encl_handle, targ->encl_slot); #if __FreeBSD_version < 1000039 if ((sassc->flags & MPSSAS_IN_STARTUP) == 0) #endif mpssas_rescan_target(sc, targ); mps_dprint(sc, MPS_MAPPING, "Target id 0x%x added\n", targ->tid); /* * Check all commands to see if the SATA_ID_TIMEOUT flag has been set. * If so, send a Target Reset TM to the target that was just created. * An Abort Task TM should be used instead of a Target Reset, but that * would be much more difficult because targets have not been fully * discovered yet, and LUN's haven't been setup. So, just reset the * target instead of the LUN. */ for (i = 1; i < sc->num_reqs; i++) { cm = &sc->commands[i]; if (cm->cm_flags & MPS_CM_FLAGS_SATA_ID_TIMEOUT) { targ->timeouts++; - cm->cm_state = MPS_CM_STATE_TIMEDOUT; + cm->cm_flags |= MPS_CM_FLAGS_TIMEDOUT; if ((targ->tm = mpssas_alloc_tm(sc)) != NULL) { mps_dprint(sc, MPS_INFO, "%s: sending Target " "Reset for stuck SATA identify command " "(cm = %p)\n", __func__, cm); targ->tm->cm_targ = targ; mpssas_send_reset(sc, targ->tm, MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET); } else { mps_dprint(sc, MPS_ERROR, "Failed to allocate " "tm for Target Reset after SATA ID command " "timed out (cm %p)\n", cm); } /* * No need to check for more since the target is * already being reset. 
*/ break; } } out: /* * Free the commands that may not have been freed from the SATA ID call */ for (i = 1; i < sc->num_reqs; i++) { cm = &sc->commands[i]; if (cm->cm_flags & MPS_CM_FLAGS_SATA_ID_TIMEOUT) { + free(cm->cm_data, M_MPT2); mps_free_command(sc, cm); } } mpssas_startup_decrement(sassc); return (error); } int mpssas_get_sas_address_for_sata_disk(struct mps_softc *sc, u64 *sas_address, u16 handle, u32 device_info, u8 *is_SATA_SSD) { Mpi2SataPassthroughReply_t mpi_reply; int i, rc, try_count; u32 *bufferptr; union _sata_sas_address hash_address; struct _ata_identify_device_data ata_identify; u8 buffer[MPT2SAS_MN_LEN + MPT2SAS_SN_LEN]; u32 ioc_status; u8 sas_status; memset(&ata_identify, 0, sizeof(ata_identify)); try_count = 0; do { rc = mpssas_get_sata_identify(sc, handle, &mpi_reply, (char *)&ata_identify, sizeof(ata_identify), device_info); try_count++; ioc_status = le16toh(mpi_reply.IOCStatus) & MPI2_IOCSTATUS_MASK; sas_status = mpi_reply.SASStatus; switch (ioc_status) { case MPI2_IOCSTATUS_SUCCESS: break; case MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR: /* No sense sleeping. 
this error won't get better */ break; default: if (sc->spinup_wait_time > 0) { mps_dprint(sc, MPS_INFO, "Sleeping %d seconds " "after SATA ID error to wait for spinup\n", sc->spinup_wait_time); msleep(&sc->msleep_fake_chan, &sc->mps_mtx, 0, "mpsid", sc->spinup_wait_time * hz); } } } while (((rc && (rc != EWOULDBLOCK)) || (ioc_status && (ioc_status != MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR)) || sas_status) && (try_count < 5)); if (rc == 0 && !ioc_status && !sas_status) { mps_dprint(sc, MPS_MAPPING, "%s: got SATA identify " "successfully for handle = 0x%x with try_count = %d\n", __func__, handle, try_count); } else { mps_dprint(sc, MPS_MAPPING, "%s: handle = 0x%x failed\n", __func__, handle); return -1; } /* Copy & byteswap the 40 byte model number to a buffer */ for (i = 0; i < MPT2SAS_MN_LEN; i += 2) { buffer[i] = ((u8 *)ata_identify.model_number)[i + 1]; buffer[i + 1] = ((u8 *)ata_identify.model_number)[i]; } /* Copy & byteswap the 20 byte serial number to a buffer */ for (i = 0; i < MPT2SAS_SN_LEN; i += 2) { buffer[MPT2SAS_MN_LEN + i] = ((u8 *)ata_identify.serial_number)[i + 1]; buffer[MPT2SAS_MN_LEN + i + 1] = ((u8 *)ata_identify.serial_number)[i]; } bufferptr = (u32 *)buffer; /* There are 60 bytes to hash down to 8. 60 isn't divisible by 8, * so loop through the first 56 bytes (7*8), * and then add in the last dword. */ hash_address.word.low = 0; hash_address.word.high = 0; for (i = 0; (i < ((MPT2SAS_MN_LEN+MPT2SAS_SN_LEN)/8)); i++) { hash_address.word.low += *bufferptr; bufferptr++; hash_address.word.high += *bufferptr; bufferptr++; } /* Add the last dword */ hash_address.word.low += *bufferptr; /* Make sure the hash doesn't start with 5, because it could clash * with a SAS address. Change 5 to a D. 
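The hashing scheme described in the comments above can be written as a standalone function. This is a sketch for illustration only: the names `sata_wwid_hash`, `MN_LEN`, and `SN_LEN` are stand-ins for the driver's `MPT2SAS_MN_LEN`/`MPT2SAS_SN_LEN`, and the result is assembled by simply combining the two 32-bit words (the driver reassembles the same 8 bytes through its `wwid[]` union view instead, so byte ordering may differ on a given host).

```c
#include <stdint.h>
#include <string.h>

#define MN_LEN 40	/* ATA IDENTIFY model number bytes */
#define SN_LEN 20	/* ATA IDENTIFY serial number bytes */

/*
 * Sketch of the WWID hash: byte-swap the 60 bytes of model+serial
 * data, fold 7 pairs of dwords (56 bytes) into two 32-bit
 * accumulators, add the trailing dword into the low word, then turn
 * a leading 5 nibble into D so the result cannot collide with a
 * real SAS address (0x50 | 0x80 == 0xD0).
 */
static uint64_t
sata_wwid_hash(const uint8_t model[MN_LEN], const uint8_t serial[SN_LEN])
{
	uint8_t buf[MN_LEN + SN_LEN];
	uint32_t lo = 0, hi = 0, w;
	int i;

	/* ATA IDENTIFY strings are stored byte-swapped per 16-bit word. */
	for (i = 0; i < MN_LEN; i += 2) {
		buf[i] = model[i + 1];
		buf[i + 1] = model[i];
	}
	for (i = 0; i < SN_LEN; i += 2) {
		buf[MN_LEN + i] = serial[i + 1];
		buf[MN_LEN + i + 1] = serial[i];
	}

	for (i = 0; i + 8 <= MN_LEN + SN_LEN; i += 8) {
		memcpy(&w, buf + i, 4);
		lo += w;
		memcpy(&w, buf + i + 4, 4);
		hi += w;
	}
	/* 60 isn't divisible by 8; add in the last dword. */
	memcpy(&w, buf + (MN_LEN + SN_LEN) - 4, 4);
	lo += w;

	if ((hi & 0x000000F0) == 0x00000050)
		hi |= 0x00000080;
	return ((uint64_t)hi << 32 | lo);
}
```

The `memcpy` loads avoid the unaligned/aliasing cast the driver performs with `(u32 *)buffer`; the arithmetic is otherwise the same.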
*/ if ((hash_address.word.high & 0x000000F0) == (0x00000050)) hash_address.word.high |= 0x00000080; *sas_address = (u64)hash_address.wwid[0] << 56 | (u64)hash_address.wwid[1] << 48 | (u64)hash_address.wwid[2] << 40 | (u64)hash_address.wwid[3] << 32 | (u64)hash_address.wwid[4] << 24 | (u64)hash_address.wwid[5] << 16 | (u64)hash_address.wwid[6] << 8 | (u64)hash_address.wwid[7]; if (ata_identify.rotational_speed == 1) { *is_SATA_SSD = 1; } return 0; } static int mpssas_get_sata_identify(struct mps_softc *sc, u16 handle, Mpi2SataPassthroughReply_t *mpi_reply, char *id_buffer, int sz, u32 devinfo) { Mpi2SataPassthroughRequest_t *mpi_request; Mpi2SataPassthroughReply_t *reply = NULL; struct mps_command *cm; char *buffer; int error = 0; buffer = malloc( sz, M_MPT2, M_NOWAIT | M_ZERO); if (!buffer) return ENOMEM; if ((cm = mps_alloc_command(sc)) == NULL) { free(buffer, M_MPT2); return (EBUSY); } mpi_request = (MPI2_SATA_PASSTHROUGH_REQUEST *)cm->cm_req; bzero(mpi_request,sizeof(MPI2_SATA_PASSTHROUGH_REQUEST)); mpi_request->Function = MPI2_FUNCTION_SATA_PASSTHROUGH; mpi_request->VF_ID = 0; mpi_request->DevHandle = htole16(handle); mpi_request->PassthroughFlags = (MPI2_SATA_PT_REQ_PT_FLAGS_PIO | MPI2_SATA_PT_REQ_PT_FLAGS_READ); mpi_request->DataLength = htole32(sz); mpi_request->CommandFIS[0] = 0x27; mpi_request->CommandFIS[1] = 0x80; mpi_request->CommandFIS[2] = (devinfo & MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE) ? 0xA1 : 0xEC; cm->cm_sge = &mpi_request->SGL; cm->cm_sglsize = sizeof(MPI2_SGE_IO_UNION); cm->cm_flags = MPS_CM_FLAGS_SGE_SIMPLE | MPS_CM_FLAGS_DATAIN; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_data = buffer; cm->cm_length = htole32(sz); /* - * Start a timeout counter specifically for the SATA ID command. This - * is used to fix a problem where the FW does not send a reply sometimes + * Use a custom handler to avoid reinit'ing the controller on timeout. 
+ * This fixes a problem where the FW does not send a reply sometimes * when a bad disk is in the topology. So, this is used to timeout the * command so that processing can continue normally. */ - mps_dprint(sc, MPS_XINFO, "%s start timeout counter for SATA ID " - "command\n", __func__); - callout_reset(&cm->cm_callout, MPS_ATA_ID_TIMEOUT * hz, - mpssas_ata_id_timeout, cm); - error = mps_wait_command(sc, &cm, 60, CAN_SLEEP); - mps_dprint(sc, MPS_XINFO, "%s stop timeout counter for SATA ID " - "command\n", __func__); - /* XXX KDM need to fix the case where this command is destroyed */ - callout_stop(&cm->cm_callout); + cm->cm_timeout_handler = mpssas_ata_id_timeout; - if (cm != NULL) - reply = (Mpi2SataPassthroughReply_t *)cm->cm_reply; + error = mps_wait_command(sc, &cm, MPS_ATA_ID_TIMEOUT, CAN_SLEEP); + + /* mpssas_ata_id_timeout does not reset controller */ + KASSERT(cm != NULL, ("%s: surprise command freed", __func__)); + + reply = (Mpi2SataPassthroughReply_t *)cm->cm_reply; if (error || (reply == NULL)) { /* FIXME */ /* * If the request returns an error then we need to do a diag * reset */ mps_dprint(sc, MPS_INFO|MPS_FAULT|MPS_MAPPING, - "Request for SATA PASSTHROUGH page completed with error %d", + "Request for SATA PASSTHROUGH page completed with error %d\n", error); error = ENXIO; goto out; } bcopy(buffer, id_buffer, sz); bcopy(reply, mpi_reply, sizeof(Mpi2SataPassthroughReply_t)); if ((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) { mps_dprint(sc, MPS_INFO|MPS_MAPPING|MPS_FAULT, "Error reading device %#x SATA PASSTHRU; iocstatus= 0x%x\n", handle, reply->IOCStatus); error = ENXIO; goto out; } out: /* * If the SATA_ID_TIMEOUT flag has been set for this command, don't free - * it. The command will be freed after sending a target reset TM. If - * the command did timeout, use EWOULDBLOCK. + * it. The command and buffer will be freed after sending an Abort + * Task TM. 
*/ - if ((cm != NULL) - && (cm->cm_flags & MPS_CM_FLAGS_SATA_ID_TIMEOUT) == 0) + if ((cm->cm_flags & MPS_CM_FLAGS_SATA_ID_TIMEOUT) == 0) { mps_free_command(sc, cm); - else if (error == 0) - error = EWOULDBLOCK; - free(buffer, M_MPT2); + free(buffer, M_MPT2); + } return (error); } static void -mpssas_ata_id_timeout(void *data) +mpssas_ata_id_timeout(struct mps_softc *sc, struct mps_command *cm) { - struct mps_softc *sc; - struct mps_command *cm; - cm = (struct mps_command *)data; - sc = cm->cm_sc; - mtx_assert(&sc->mps_mtx, MA_OWNED); - - mps_dprint(sc, MPS_INFO, "%s checking ATA ID command %p sc %p\n", + mps_dprint(sc, MPS_INFO, "%s ATA ID command timeout cm %p sc %p\n", __func__, cm, sc); - if ((callout_pending(&cm->cm_callout)) || - (!callout_active(&cm->cm_callout))) { - mps_dprint(sc, MPS_INFO, "%s ATA ID command almost timed out\n", - __func__); - return; - } - callout_deactivate(&cm->cm_callout); /* - * Run the interrupt handler to make sure it's not pending. This - * isn't perfect because the command could have already completed - * and been re-used, though this is unlikely. + * The Abort Task cannot be sent from here because the driver has not + * completed setting up targets. Instead, the command is flagged so + * that special handling will be used to send the abort. Now that + * this command has timed out, it's no longer in the queue. */ - mps_intr_locked(sc); - if (cm->cm_state == MPS_CM_STATE_FREE) { - mps_dprint(sc, MPS_INFO, "%s ATA ID command almost timed out\n", - __func__); - return; - } - - mps_dprint(sc, MPS_INFO, "ATA ID command timeout cm %p\n", cm); - - /* - * Send wakeup() to the sleeping thread that issued this ATA ID command. - * wakeup() will cause msleep to return a 0 (not EWOULDBLOCK), and this - * will keep reinit() from being called. This way, an Abort Task TM can - * be issued so that the timed out command can be cleared. The Abort - * Task cannot be sent from here because the driver has not completed - * setting up targets. 
Instead, the command is flagged so that special - handling will be used to send the abort. - */ cm->cm_flags |= MPS_CM_FLAGS_SATA_ID_TIMEOUT; - wakeup(cm); + cm->cm_state = MPS_CM_STATE_BUSY; } static int mpssas_volume_add(struct mps_softc *sc, u16 handle) { struct mpssas_softc *sassc; struct mpssas_target *targ; u64 wwid; unsigned int id; int error = 0; struct mpssas_lun *lun; sassc = sc->sassc; mpssas_startup_increment(sassc); /* wwid is endian safe */ mps_config_get_volume_wwid(sc, handle, &wwid); if (!wwid) { printf("%s: invalid WWID; cannot add volume to mapping table\n", __func__); error = ENXIO; goto out; } id = mps_mapping_get_raid_tid(sc, wwid, handle); if (id == MPS_MAP_BAD_ID) { printf("%s: could not get ID for volume with handle 0x%04x and " "WWID 0x%016llx\n", __func__, handle, (unsigned long long)wwid); error = ENXIO; goto out; } targ = &sassc->targets[id]; targ->tid = id; targ->handle = handle; targ->devname = wwid; TAILQ_INIT(&targ->commands); TAILQ_INIT(&targ->timedout_commands); while(!SLIST_EMPTY(&targ->luns)) { lun = SLIST_FIRST(&targ->luns); SLIST_REMOVE_HEAD(&targ->luns, lun_link); free(lun, M_MPT2); } SLIST_INIT(&targ->luns); #if __FreeBSD_version < 1000039 if ((sassc->flags & MPSSAS_IN_STARTUP) == 0) #endif mpssas_rescan_target(sc, targ); mps_dprint(sc, MPS_MAPPING, "RAID target id %d added (WWID = 0x%jx)\n", targ->tid, wwid); out: mpssas_startup_decrement(sassc); return (error); } /** * mpssas_SSU_to_SATA_devices * @sc: per adapter object * @howto: mask of RB_* bits for how we're rebooting * * Looks through the target list and issues a StartStopUnit SCSI command to each * SATA direct-access device. This helps to ensure that data corruption is * avoided when the system is being shut down. This must be called after the IR * System Shutdown RAID Action is sent if in IR mode. * * Return nothing. 
*/ static void mpssas_SSU_to_SATA_devices(struct mps_softc *sc, int howto) { struct mpssas_softc *sassc = sc->sassc; union ccb *ccb; path_id_t pathid = cam_sim_path(sassc->sim); target_id_t targetid; struct mpssas_target *target; char path_str[64]; int timeout; /* * For each target, issue a StartStopUnit command to stop the device. */ sc->SSU_started = TRUE; sc->SSU_refcount = 0; for (targetid = 0; targetid < sc->max_devices; targetid++) { target = &sassc->targets[targetid]; if (target->handle == 0x0) { continue; } ccb = xpt_alloc_ccb_nowait(); if (ccb == NULL) { mps_dprint(sc, MPS_FAULT, "Unable to alloc CCB to stop " "unit.\n"); return; } /* * The stop_at_shutdown flag will be set if this device is * a SATA direct-access end device. */ if (target->stop_at_shutdown) { if (xpt_create_path(&ccb->ccb_h.path, xpt_periph, pathid, targetid, CAM_LUN_WILDCARD) != CAM_REQ_CMP) { mps_dprint(sc, MPS_FAULT, "Unable to create " "LUN path to stop unit.\n"); xpt_free_ccb(ccb); return; } xpt_path_string(ccb->ccb_h.path, path_str, sizeof(path_str)); mps_dprint(sc, MPS_INFO, "Sending StopUnit: path %s " "handle %d\n", path_str, target->handle); /* * Issue a START STOP UNIT command for the target. * Increment the SSU counter to be used to count the * number of required replies. */ mps_dprint(sc, MPS_INFO, "Incrementing SSU count\n"); sc->SSU_refcount++; ccb->ccb_h.target_id = xpt_path_target_id(ccb->ccb_h.path); ccb->ccb_h.ppriv_ptr1 = sassc; scsi_start_stop(&ccb->csio, /*retries*/0, mpssas_stop_unit_done, MSG_SIMPLE_Q_TAG, /*start*/FALSE, /*load/eject*/0, /*immediate*/FALSE, MPS_SENSE_LEN, /*timeout*/10000); xpt_action(ccb); } } /* * Timeout after 60 seconds by default or 10 seconds if howto has * RB_NOSYNC set which indicates we're likely handling a panic. */ timeout = 600; if (howto & RB_NOSYNC) timeout = 100; /* * Wait until all of the SSU commands have completed or timeout has * expired. Pause for 100ms each time through. 
If any command * times out, the target will be reset in the SCSI command timeout * routine. */ while (sc->SSU_refcount > 0) { pause("mpswait", hz/10); if (SCHEDULER_STOPPED()) xpt_sim_poll(sassc->sim); if (--timeout == 0) { mps_dprint(sc, MPS_FAULT, "Time has expired waiting " "for SSU commands to complete.\n"); break; } } } static void mpssas_stop_unit_done(struct cam_periph *periph, union ccb *done_ccb) { struct mpssas_softc *sassc; char path_str[64]; if (done_ccb == NULL) return; sassc = (struct mpssas_softc *)done_ccb->ccb_h.ppriv_ptr1; xpt_path_string(done_ccb->ccb_h.path, path_str, sizeof(path_str)); mps_dprint(sassc->sc, MPS_INFO, "Completing stop unit for %s\n", path_str); /* * Nothing more to do except free the CCB and path. If the command * timed out, an abort reset, then target reset will be issued during * the SCSI Command process. */ xpt_free_path(done_ccb->ccb_h.path); xpt_free_ccb(done_ccb); } /** * mpssas_ir_shutdown - IR shutdown notification * @sc: per adapter object * @howto: mask of RB_* bits for how we're rebooting * * Sending RAID Action to alert the Integrated RAID subsystem of the IOC that * the host system is shutting down. * * Return nothing. */ void mpssas_ir_shutdown(struct mps_softc *sc, int howto) { u16 volume_mapping_flags; u16 ioc_pg8_flags = le16toh(sc->ioc_pg8.Flags); struct dev_mapping_table *mt_entry; u32 start_idx, end_idx; unsigned int id, found_volume = 0; struct mps_command *cm; Mpi2RaidActionRequest_t *action; target_id_t targetid; struct mpssas_target *target; mps_dprint(sc, MPS_TRACE, "%s\n", __func__); /* is IR firmware build loaded? */ if (!sc->ir_firmware) goto out; /* are there any volumes? Look at IR target IDs. */ // TODO-later, this should be looked up in the RAID config structure // when it is implemented. 
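The IR volume target-ID window computed just below can be sketched as a pure function. This is an illustration only: `LOW_VOLUME_MAPPING` and `RESERVED_TARGETID_0` are hypothetical stand-ins for the `MPI2_IOCPAGE8_*` defines, and the real driver reads the mode and flags out of IOC Page 8.

```c
#include <stdint.h>

/* Hypothetical stand-ins for the MPI2_IOCPAGE8_* defines. */
#define LOW_VOLUME_MAPPING	0x0000
#define RESERVED_TARGETID_0	0x0010

/*
 * Sketch of the volume ID window: volumes either occupy the low
 * target IDs (optionally skipping reserved ID 0) or the last
 * max_volumes slots of the device space.
 */
static void
volume_id_window(uint16_t mapping_mode, uint16_t ioc_pg8_flags,
    uint32_t max_devices, uint32_t max_volumes,
    uint32_t *start_idx, uint32_t *end_idx)
{
	if (mapping_mode == LOW_VOLUME_MAPPING) {
		*start_idx = 0;
		if (ioc_pg8_flags & RESERVED_TARGETID_0)
			*start_idx = 1;
	} else
		*start_idx = max_devices - max_volumes;
	*end_idx = *start_idx + max_volumes - 1;
}
```

The driver then walks `mapping_table[start_idx..end_idx]` looking for an entry with a nonzero `physical_id` and a zero `missing_count` to decide whether any volume exists.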
volume_mapping_flags = le16toh(sc->ioc_pg8.IRVolumeMappingFlags) & MPI2_IOCPAGE8_IRFLAGS_MASK_VOLUME_MAPPING_MODE; if (volume_mapping_flags == MPI2_IOCPAGE8_IRFLAGS_LOW_VOLUME_MAPPING) { start_idx = 0; if (ioc_pg8_flags & MPI2_IOCPAGE8_FLAGS_RESERVED_TARGETID_0) start_idx = 1; } else start_idx = sc->max_devices - sc->max_volumes; end_idx = start_idx + sc->max_volumes - 1; for (id = start_idx; id < end_idx; id++) { mt_entry = &sc->mapping_table[id]; if ((mt_entry->physical_id != 0) && (mt_entry->missing_count == 0)) { found_volume = 1; break; } } if (!found_volume) goto out; if ((cm = mps_alloc_command(sc)) == NULL) { printf("%s: command alloc failed\n", __func__); goto out; } action = (MPI2_RAID_ACTION_REQUEST *)cm->cm_req; action->Function = MPI2_FUNCTION_RAID_ACTION; action->Action = MPI2_RAID_ACTION_SYSTEM_SHUTDOWN_INITIATED; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; mps_lock(sc); mps_wait_command(sc, &cm, 5, CAN_SLEEP); mps_unlock(sc); /* * Don't check for reply, just leave. */ if (cm) mps_free_command(sc, cm); out: /* * All of the targets must have the correct value set for * 'stop_at_shutdown' for the current 'enable_ssu' sysctl variable. * * The possible values for the 'enable_ssu' variable are: * 0: disable to SSD and HDD * 1: disable only to HDD (default) * 2: disable only to SSD * 3: enable to SSD and HDD * anything else will default to 1. 
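The `enable_ssu` policy enumerated in this comment and applied in the switch just below can be condensed into a single predicate. This is a sketch under assumptions: the enum values here are hypothetical stand-ins for the driver's `MPS_SSU_*` constants, and `stop_at_shutdown` mirrors the switch (whose `default` case, as written, behaves like "enable SSD only").

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the MPS_SSU_* sysctl values. */
enum {
	SSU_DISABLE_SSD_DISABLE_HDD = 0,
	SSU_DISABLE_SSD_ENABLE_HDD = 1,
	SSU_ENABLE_SSD_DISABLE_HDD = 2,
	SSU_ENABLE_SSD_ENABLE_HDD = 3
};

/*
 * Sketch of the stop_at_shutdown decision: whether a target that
 * supports START STOP UNIT receives one at shutdown, given the
 * enable_ssu setting and whether the target is a SATA SSD.
 */
static bool
stop_at_shutdown(int enable_ssu, bool is_sata_ssd)
{
	switch (enable_ssu) {
	case SSU_DISABLE_SSD_DISABLE_HDD:
		return (false);
	case SSU_DISABLE_SSD_ENABLE_HDD:
		return (!is_sata_ssd);
	case SSU_ENABLE_SSD_ENABLE_HDD:
		return (true);
	case SSU_ENABLE_SSD_DISABLE_HDD:
	default:
		return (is_sata_ssd);
	}
}
```

Targets that do not support SSU (`supports_SSU` clear) never get the command, regardless of this predicate.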
*/ for (targetid = 0; targetid < sc->max_devices; targetid++) { target = &sc->sassc->targets[targetid]; if (target->handle == 0x0) { continue; } if (target->supports_SSU) { switch (sc->enable_ssu) { case MPS_SSU_DISABLE_SSD_DISABLE_HDD: target->stop_at_shutdown = FALSE; break; case MPS_SSU_DISABLE_SSD_ENABLE_HDD: target->stop_at_shutdown = TRUE; if (target->flags & MPS_TARGET_IS_SATA_SSD) { target->stop_at_shutdown = FALSE; } break; case MPS_SSU_ENABLE_SSD_ENABLE_HDD: target->stop_at_shutdown = TRUE; break; case MPS_SSU_ENABLE_SSD_DISABLE_HDD: default: target->stop_at_shutdown = TRUE; if ((target->flags & MPS_TARGET_IS_SATA_SSD) == 0) { target->stop_at_shutdown = FALSE; } break; } } } mpssas_SSU_to_SATA_devices(sc, howto); } Index: releng/12.1/sys/dev/mps/mps_table.c =================================================================== --- releng/12.1/sys/dev/mps/mps_table.c (revision 352760) +++ releng/12.1/sys/dev/mps/mps_table.c (revision 352761) @@ -1,555 +1,567 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2009 Yahoo! Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* Debugging tables for MPT2 */ /* TODO Move headers to mpsvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include char * mps_describe_table(struct mps_table_lookup *table, u_int code) { int i; for (i = 0; table[i].string != NULL; i++) { if (table[i].code == code) return(table[i].string); } return(table[i+1].string); } struct mps_table_lookup mps_event_names[] = { {"LogData", 0x01}, {"StateChange", 0x02}, {"HardResetReceived", 0x05}, {"EventChange", 0x0a}, {"TaskSetFull", 0x0e}, {"SasDeviceStatusChange", 0x0f}, {"IrOperationStatus", 0x14}, {"SasDiscovery", 0x16}, {"SasBroadcastPrimitive", 0x17}, {"SasInitDeviceStatusChange", 0x18}, {"SasInitTableOverflow", 0x19}, {"SasTopologyChangeList", 0x1c}, {"SasEnclDeviceStatusChange", 0x1d}, {"IrVolume", 0x1e}, {"IrPhysicalDisk", 0x1f}, {"IrConfigurationChangeList", 0x20}, {"LogEntryAdded", 0x21}, {"SasPhyCounter", 0x22}, {"GpioInterrupt", 0x23}, {"HbdPhyEvent", 0x24}, {NULL, 0}, {"Unknown Event", 0} }; struct mps_table_lookup mps_phystatus_names[] = { {"NewTargetAdded", 0x01}, {"TargetGone", 0x02}, {"PHYLinkStatusChange", 0x03}, {"PHYLinkStatusUnchanged", 0x04}, {"TargetMissing", 0x05}, {NULL, 0}, {"Unknown Status", 0} }; struct mps_table_lookup mps_linkrate_names[] = { {"PHY 
disabled", 0x01}, {"Speed Negotiation Failed", 0x02}, {"SATA OOB Complete", 0x03}, {"SATA Port Selector", 0x04}, {"SMP Reset in Progress", 0x05}, {"1.5Gbps", 0x08}, {"3.0Gbps", 0x09}, {"6.0Gbps", 0x0a}, {NULL, 0}, {"LinkRate Unknown", 0x00} }; struct mps_table_lookup mps_sasdev0_devtype[] = { {"End Device", 0x01}, {"Edge Expander", 0x02}, {"Fanout Expander", 0x03}, {NULL, 0}, {"No Device", 0x00} }; struct mps_table_lookup mps_phyinfo_reason_names[] = { {"Power On", 0x01}, {"Hard Reset", 0x02}, {"SMP Phy Control Link Reset", 0x03}, {"Loss DWORD Sync", 0x04}, {"Multiplex Sequence", 0x05}, {"I-T Nexus Loss Timer", 0x06}, {"Break Timeout Timer", 0x07}, {"PHY Test Function", 0x08}, {NULL, 0}, {"Unknown Reason", 0x00} }; struct mps_table_lookup mps_whoinit_names[] = { {"System BIOS", 0x01}, {"ROM BIOS", 0x02}, {"PCI Peer", 0x03}, {"Host Driver", 0x04}, {"Manufacturing", 0x05}, {NULL, 0}, {"Not Initialized", 0x00} }; struct mps_table_lookup mps_sasdisc_reason[] = { {"Discovery Started", 0x01}, {"Discovery Complete", 0x02}, {NULL, 0}, {"Unknown", 0x00} }; struct mps_table_lookup mps_sastopo_exp[] = { {"Added", 0x01}, {"Not Responding", 0x02}, {"Responding", 0x03}, {"Delay Not Responding", 0x04}, {NULL, 0}, {"Unknown", 0x00} }; struct mps_table_lookup mps_sasdev_reason[] = { {"SMART Data", 0x05}, {"Unsupported", 0x07}, {"Internal Device Reset", 0x08}, {"Task Abort Internal", 0x09}, {"Abort Task Set Internal", 0x0a}, {"Clear Task Set Internal", 0x0b}, {"Query Task Internal", 0x0c}, {"Async Notification", 0x0d}, {"Cmp Internal Device Reset", 0x0e}, {"Cmp Task Abort Internal", 0x0f}, {"Sata Init Failure", 0x10}, {NULL, 0}, {"Unknown", 0x00} }; struct mps_table_lookup mps_iocstatus_string[] = { {"success", MPI2_IOCSTATUS_SUCCESS}, {"invalid function", MPI2_IOCSTATUS_INVALID_FUNCTION}, {"scsi recovered error", MPI2_IOCSTATUS_SCSI_RECOVERED_ERROR}, {"scsi invalid dev handle", MPI2_IOCSTATUS_SCSI_INVALID_DEVHANDLE}, {"scsi device not there", MPI2_IOCSTATUS_SCSI_DEVICE_NOT_THERE}, 
{"scsi data overrun", MPI2_IOCSTATUS_SCSI_DATA_OVERRUN}, {"scsi data underrun", MPI2_IOCSTATUS_SCSI_DATA_UNDERRUN}, {"scsi io data error", MPI2_IOCSTATUS_SCSI_IO_DATA_ERROR}, {"scsi protocol error", MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR}, {"scsi task terminated", MPI2_IOCSTATUS_SCSI_TASK_TERMINATED}, {"scsi residual mismatch", MPI2_IOCSTATUS_SCSI_RESIDUAL_MISMATCH}, {"scsi task mgmt failed", MPI2_IOCSTATUS_SCSI_TASK_MGMT_FAILED}, {"scsi ioc terminated", MPI2_IOCSTATUS_SCSI_IOC_TERMINATED}, {"scsi ext terminated", MPI2_IOCSTATUS_SCSI_EXT_TERMINATED}, {"eedp guard error", MPI2_IOCSTATUS_EEDP_GUARD_ERROR}, {"eedp ref tag error", MPI2_IOCSTATUS_EEDP_REF_TAG_ERROR}, {"eedp app tag error", MPI2_IOCSTATUS_EEDP_APP_TAG_ERROR}, {NULL, 0}, {"unknown", 0x00} }; struct mps_table_lookup mps_scsi_status_string[] = { {"good", MPI2_SCSI_STATUS_GOOD}, {"check condition", MPI2_SCSI_STATUS_CHECK_CONDITION}, {"condition met", MPI2_SCSI_STATUS_CONDITION_MET}, {"busy", MPI2_SCSI_STATUS_BUSY}, {"intermediate", MPI2_SCSI_STATUS_INTERMEDIATE}, {"intermediate condmet", MPI2_SCSI_STATUS_INTERMEDIATE_CONDMET}, {"reservation conflict", MPI2_SCSI_STATUS_RESERVATION_CONFLICT}, {"command terminated", MPI2_SCSI_STATUS_COMMAND_TERMINATED}, {"task set full", MPI2_SCSI_STATUS_TASK_SET_FULL}, {"aca active", MPI2_SCSI_STATUS_ACA_ACTIVE}, {"task aborted", MPI2_SCSI_STATUS_TASK_ABORTED}, {NULL, 0}, {"unknown", 0x00} }; struct mps_table_lookup mps_scsi_taskmgmt_string[] = { {"task mgmt request completed", MPI2_SCSITASKMGMT_RSP_TM_COMPLETE}, {"invalid frame", MPI2_SCSITASKMGMT_RSP_INVALID_FRAME}, {"task mgmt request not supp", MPI2_SCSITASKMGMT_RSP_TM_NOT_SUPPORTED}, {"task mgmt request failed", MPI2_SCSITASKMGMT_RSP_TM_FAILED}, {"task mgmt request_succeeded", MPI2_SCSITASKMGMT_RSP_TM_SUCCEEDED}, {"invalid lun", MPI2_SCSITASKMGMT_RSP_TM_INVALID_LUN}, {"overlapped tag attempt", 0xA}, {"task queued on IOC", MPI2_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC}, {NULL, 0}, {"unknown", 0x00} }; void 
mps_describe_devinfo(uint32_t devinfo, char *string, int len) { snprintf(string, len, "%b,%s", devinfo, "\20" "\4SataHost" "\5SmpInit" "\6StpInit" "\7SspInit" "\10SataDev" "\11SmpTarg" "\12StpTarg" "\13SspTarg" "\14Direct" "\15LsiDev" "\16AtapiDev" "\17SepDev", mps_describe_table(mps_sasdev0_devtype, devinfo & 0x03)); } void mps_print_iocfacts(struct mps_softc *sc, MPI2_IOC_FACTS_REPLY *facts) { MPS_PRINTFIELD_START(sc, "IOCFacts"); MPS_PRINTFIELD(sc, facts, MsgVersion, 0x%x); MPS_PRINTFIELD(sc, facts, HeaderVersion, 0x%x); MPS_PRINTFIELD(sc, facts, IOCNumber, %d); MPS_PRINTFIELD(sc, facts, IOCExceptions, 0x%x); MPS_PRINTFIELD(sc, facts, MaxChainDepth, %d); mps_print_field(sc, "WhoInit: %s\n", mps_describe_table(mps_whoinit_names, facts->WhoInit)); MPS_PRINTFIELD(sc, facts, NumberOfPorts, %d); MPS_PRINTFIELD(sc, facts, MaxMSIxVectors, %d); MPS_PRINTFIELD(sc, facts, RequestCredit, %d); MPS_PRINTFIELD(sc, facts, ProductID, 0x%x); mps_print_field(sc, "IOCCapabilities: %b\n", facts->IOCCapabilities, "\20" "\3ScsiTaskFull" "\4DiagTrace" "\5SnapBuf" "\6ExtBuf" "\7EEDP" "\10BiDirTarg" "\11Multicast" "\14TransRetry" "\15IR" "\16EventReplay" "\17RaidAccel" "\20MSIXIndex" "\21HostDisc"); mps_print_field(sc, "FWVersion= %d-%d-%d-%d\n", facts->FWVersion.Struct.Major, facts->FWVersion.Struct.Minor, facts->FWVersion.Struct.Unit, facts->FWVersion.Struct.Dev); MPS_PRINTFIELD(sc, facts, IOCRequestFrameSize, %d); MPS_PRINTFIELD(sc, facts, MaxInitiators, %d); MPS_PRINTFIELD(sc, facts, MaxTargets, %d); MPS_PRINTFIELD(sc, facts, MaxSasExpanders, %d); MPS_PRINTFIELD(sc, facts, MaxEnclosures, %d); mps_print_field(sc, "ProtocolFlags: %b\n", facts->ProtocolFlags, "\20" "\1ScsiTarg" "\2ScsiInit"); MPS_PRINTFIELD(sc, facts, HighPriorityCredit, %d); MPS_PRINTFIELD(sc, facts, MaxReplyDescriptorPostQueueDepth, %d); MPS_PRINTFIELD(sc, facts, ReplyFrameSize, %d); MPS_PRINTFIELD(sc, facts, MaxVolumes, %d); MPS_PRINTFIELD(sc, facts, MaxDevHandle, %d); MPS_PRINTFIELD(sc, facts, MaxPersistentEntries, 
%d); } void mps_print_portfacts(struct mps_softc *sc, MPI2_PORT_FACTS_REPLY *facts) { MPS_PRINTFIELD_START(sc, "PortFacts"); MPS_PRINTFIELD(sc, facts, PortNumber, %d); MPS_PRINTFIELD(sc, facts, PortType, 0x%x); MPS_PRINTFIELD(sc, facts, MaxPostedCmdBuffers, %d); } void mps_print_evt_generic(struct mps_softc *sc, MPI2_EVENT_NOTIFICATION_REPLY *event) { MPS_PRINTFIELD_START(sc, "EventReply"); MPS_PRINTFIELD(sc, event, EventDataLength, %d); MPS_PRINTFIELD(sc, event, AckRequired, %d); mps_print_field(sc, "Event: %s (0x%x)\n", mps_describe_table(mps_event_names, event->Event), event->Event); MPS_PRINTFIELD(sc, event, EventContext, 0x%x); } void mps_print_sasdev0(struct mps_softc *sc, MPI2_CONFIG_PAGE_SAS_DEV_0 *buf) { MPS_PRINTFIELD_START(sc, "SAS Device Page 0"); MPS_PRINTFIELD(sc, buf, Slot, %d); MPS_PRINTFIELD(sc, buf, EnclosureHandle, 0x%x); mps_print_field(sc, "SASAddress: 0x%jx\n", mps_to_u64(&buf->SASAddress)); MPS_PRINTFIELD(sc, buf, ParentDevHandle, 0x%x); MPS_PRINTFIELD(sc, buf, PhyNum, %d); MPS_PRINTFIELD(sc, buf, AccessStatus, 0x%x); MPS_PRINTFIELD(sc, buf, DevHandle, 0x%x); MPS_PRINTFIELD(sc, buf, AttachedPhyIdentifier, 0x%x); MPS_PRINTFIELD(sc, buf, ZoneGroup, %d); mps_print_field(sc, "DeviceInfo: %b,%s\n", buf->DeviceInfo, "\20" "\4SataHost" "\5SmpInit" "\6StpInit" "\7SspInit" "\10SataDev" "\11SmpTarg" "\12StpTarg" "\13SspTarg" "\14Direct" "\15LsiDev" "\16AtapiDev" "\17SepDev", mps_describe_table(mps_sasdev0_devtype, buf->DeviceInfo & 0x03)); MPS_PRINTFIELD(sc, buf, Flags, 0x%x); MPS_PRINTFIELD(sc, buf, PhysicalPort, %d); MPS_PRINTFIELD(sc, buf, MaxPortConnections, %d); mps_print_field(sc, "DeviceName: 0x%jx\n", mps_to_u64(&buf->DeviceName)); MPS_PRINTFIELD(sc, buf, PortGroups, %d); MPS_PRINTFIELD(sc, buf, DmaGroup, %d); MPS_PRINTFIELD(sc, buf, ControlGroup, %d); } void mps_print_evt_sas(struct mps_softc *sc, MPI2_EVENT_NOTIFICATION_REPLY *event) { mps_print_evt_generic(sc, event); switch(event->Event) { case MPI2_EVENT_SAS_DISCOVERY: { 
MPI2_EVENT_DATA_SAS_DISCOVERY *data; data = (MPI2_EVENT_DATA_SAS_DISCOVERY *)&event->EventData; mps_print_field(sc, "Flags: %b\n", data->Flags, "\20" "\1InProgress" "\2DeviceChange"); mps_print_field(sc, "ReasonCode: %s\n", mps_describe_table(mps_sasdisc_reason, data->ReasonCode)); MPS_PRINTFIELD(sc, data, PhysicalPort, %d); mps_print_field(sc, "DiscoveryStatus: %b\n", data->DiscoveryStatus, "\20" "\1Loop" "\2UnaddressableDev" "\3DupSasAddr" "\5SmpTimeout" "\6ExpRouteFull" "\7RouteIndexError" "\10SmpFailed" "\11SmpCrcError" "\12SubSubLink" "\13TableTableLink" "\14UnsupDevice" "\15TableSubLink" "\16MultiDomain" "\17MultiSub" "\20MultiSubSub" "\34DownstreamInit" "\35MaxPhys" "\36MaxTargs" "\37MaxExpanders" "\40MaxEnclosures"); break; } case MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST: { MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST *data; MPI2_EVENT_SAS_TOPO_PHY_ENTRY *phy; int i, phynum; data = (MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST *) &event->EventData; MPS_PRINTFIELD(sc, data, EnclosureHandle, 0x%x); MPS_PRINTFIELD(sc, data, ExpanderDevHandle, 0x%x); MPS_PRINTFIELD(sc, data, NumPhys, %d); MPS_PRINTFIELD(sc, data, NumEntries, %d); MPS_PRINTFIELD(sc, data, StartPhyNum, %d); mps_print_field(sc, "ExpStatus: %s (0x%x)\n", mps_describe_table(mps_sastopo_exp, data->ExpStatus), data->ExpStatus); MPS_PRINTFIELD(sc, data, PhysicalPort, %d); for (i = 0; i < data->NumEntries; i++) { phy = &data->PHY[i]; phynum = data->StartPhyNum + i; mps_print_field(sc, "PHY[%d].AttachedDevHandle: 0x%04x\n", phynum, phy->AttachedDevHandle); mps_print_field(sc, "PHY[%d].LinkRate: %s (0x%x)\n", phynum, mps_describe_table(mps_linkrate_names, (phy->LinkRate >> 4) & 0xf), phy->LinkRate); mps_print_field(sc, "PHY[%d].PhyStatus: %s\n", phynum, mps_describe_table(mps_phystatus_names, phy->PhyStatus)); } break; } case MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE: { MPI2_EVENT_DATA_SAS_ENCL_DEV_STATUS_CHANGE *data; data = (MPI2_EVENT_DATA_SAS_ENCL_DEV_STATUS_CHANGE *) &event->EventData; MPS_PRINTFIELD(sc, data, 
EnclosureHandle, 0x%x); mps_print_field(sc, "ReasonCode: %s\n", mps_describe_table(mps_sastopo_exp, data->ReasonCode)); MPS_PRINTFIELD(sc, data, PhysicalPort, %d); MPS_PRINTFIELD(sc, data, NumSlots, %d); MPS_PRINTFIELD(sc, data, StartSlot, %d); MPS_PRINTFIELD(sc, data, PhyBits, 0x%x); break; } case MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE: { MPI2_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE *data; data = (MPI2_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE *) &event->EventData; MPS_PRINTFIELD(sc, data, TaskTag, 0x%x); mps_print_field(sc, "ReasonCode: %s\n", mps_describe_table(mps_sasdev_reason, data->ReasonCode)); MPS_PRINTFIELD(sc, data, ASC, 0x%x); MPS_PRINTFIELD(sc, data, ASCQ, 0x%x); MPS_PRINTFIELD(sc, data, DevHandle, 0x%x); mps_print_field(sc, "SASAddress: 0x%jx\n", mps_to_u64(&data->SASAddress)); + break; + } + case MPI2_EVENT_SAS_BROADCAST_PRIMITIVE: + { + MPI2_EVENT_DATA_SAS_BROADCAST_PRIMITIVE *data; + + data = (MPI2_EVENT_DATA_SAS_BROADCAST_PRIMITIVE *)&event->EventData; + MPS_PRINTFIELD(sc, data, PhyNum, %d); + MPS_PRINTFIELD(sc, data, Port, %d); + MPS_PRINTFIELD(sc, data, PortWidth, %d); + MPS_PRINTFIELD(sc, data, Primitive, 0x%x); + break; } default: break; } } void mps_print_expander1(struct mps_softc *sc, MPI2_CONFIG_PAGE_EXPANDER_1 *buf) { MPS_PRINTFIELD_START(sc, "SAS Expander Page 1 #%d", buf->Phy); MPS_PRINTFIELD(sc, buf, PhysicalPort, %d); MPS_PRINTFIELD(sc, buf, NumPhys, %d); MPS_PRINTFIELD(sc, buf, Phy, %d); MPS_PRINTFIELD(sc, buf, NumTableEntriesProgrammed, %d); mps_print_field(sc, "ProgrammedLinkRate: %s (0x%x)\n", mps_describe_table(mps_linkrate_names, (buf->ProgrammedLinkRate >> 4) & 0xf), buf->ProgrammedLinkRate); mps_print_field(sc, "HwLinkRate: %s (0x%x)\n", mps_describe_table(mps_linkrate_names, (buf->HwLinkRate >> 4) & 0xf), buf->HwLinkRate); MPS_PRINTFIELD(sc, buf, AttachedDevHandle, 0x%04x); mps_print_field(sc, "PhyInfo Reason: %s (0x%x)\n", mps_describe_table(mps_phyinfo_reason_names, (buf->PhyInfo >> 16) & 0xf), buf->PhyInfo); mps_print_field(sc, 
"AttachedDeviceInfo: %b,%s\n", buf->AttachedDeviceInfo, "\20" "\4SATAhost" "\5SMPinit" "\6STPinit" "\7SSPinit" "\10SATAdev" "\11SMPtarg" "\12STPtarg" "\13SSPtarg" "\14Direct" "\15LSIdev" "\16ATAPIdev" "\17SEPdev", mps_describe_table(mps_sasdev0_devtype, buf->AttachedDeviceInfo & 0x03)); MPS_PRINTFIELD(sc, buf, ExpanderDevHandle, 0x%04x); MPS_PRINTFIELD(sc, buf, ChangeCount, %d); mps_print_field(sc, "NegotiatedLinkRate: %s (0x%x)\n", mps_describe_table(mps_linkrate_names, buf->NegotiatedLinkRate & 0xf), buf->NegotiatedLinkRate); MPS_PRINTFIELD(sc, buf, PhyIdentifier, %d); MPS_PRINTFIELD(sc, buf, AttachedPhyIdentifier, %d); MPS_PRINTFIELD(sc, buf, DiscoveryInfo, 0x%x); MPS_PRINTFIELD(sc, buf, AttachedPhyInfo, 0x%x); mps_print_field(sc, "AttachedPhyInfo Reason: %s (0x%x)\n", mps_describe_table(mps_phyinfo_reason_names, buf->AttachedPhyInfo & 0xf), buf->AttachedPhyInfo); MPS_PRINTFIELD(sc, buf, ZoneGroup, %d); MPS_PRINTFIELD(sc, buf, SelfConfigStatus, 0x%x); } void mps_print_sasphy0(struct mps_softc *sc, MPI2_CONFIG_PAGE_SAS_PHY_0 *buf) { MPS_PRINTFIELD_START(sc, "SAS PHY Page 0"); MPS_PRINTFIELD(sc, buf, OwnerDevHandle, 0x%04x); MPS_PRINTFIELD(sc, buf, AttachedDevHandle, 0x%04x); MPS_PRINTFIELD(sc, buf, AttachedPhyIdentifier, %d); mps_print_field(sc, "AttachedPhyInfo Reason: %s (0x%x)\n", mps_describe_table(mps_phyinfo_reason_names, buf->AttachedPhyInfo & 0xf), buf->AttachedPhyInfo); mps_print_field(sc, "ProgrammedLinkRate: %s (0x%x)\n", mps_describe_table(mps_linkrate_names, (buf->ProgrammedLinkRate >> 4) & 0xf), buf->ProgrammedLinkRate); mps_print_field(sc, "HwLinkRate: %s (0x%x)\n", mps_describe_table(mps_linkrate_names, (buf->HwLinkRate >> 4) & 0xf), buf->HwLinkRate); MPS_PRINTFIELD(sc, buf, ChangeCount, %d); MPS_PRINTFIELD(sc, buf, Flags, 0x%x); mps_print_field(sc, "PhyInfo Reason: %s (0x%x)\n", mps_describe_table(mps_phyinfo_reason_names, (buf->PhyInfo >> 16) & 0xf), buf->PhyInfo); mps_print_field(sc, "NegotiatedLinkRate: %s (0x%x)\n", 
mps_describe_table(mps_linkrate_names, buf->NegotiatedLinkRate & 0xf), buf->NegotiatedLinkRate); } void mps_print_sgl(struct mps_softc *sc, struct mps_command *cm, int offset) { MPI2_SGE_SIMPLE64 *sge; MPI2_SGE_CHAIN32 *sgc; MPI2_REQUEST_HEADER *req; struct mps_chain *chain = NULL; char *frame; u_int i = 0, flags; req = (MPI2_REQUEST_HEADER *)cm->cm_req; frame = (char *)cm->cm_req; sge = (MPI2_SGE_SIMPLE64 *)&frame[offset * 4]; printf("SGL for command %p\n", cm); hexdump(frame, 128, NULL, 0); while (frame != NULL) { flags = le32toh(sge->FlagsLength) >> MPI2_SGE_FLAGS_SHIFT; printf("seg%d flags=0x%02x len=0x%06x addr=0x%016jx\n", i, flags, le32toh(sge->FlagsLength) & 0xffffff, mps_to_u64(&sge->Address)); if (flags & (MPI2_SGE_FLAGS_END_OF_LIST | MPI2_SGE_FLAGS_END_OF_BUFFER)) break; sge++; i++; if (flags & MPI2_SGE_FLAGS_LAST_ELEMENT) { sgc = (MPI2_SGE_CHAIN32 *)sge; printf("chain flags=0x%x len=0x%x Offset=0x%x " "Address=0x%x\n", sgc->Flags, le16toh(sgc->Length), sgc->NextChainOffset, le32toh(sgc->Address)); if (chain == NULL) chain = TAILQ_FIRST(&cm->cm_chain_list); else chain = TAILQ_NEXT(chain, chain_link); frame = (char *)chain->chain; sge = (MPI2_SGE_SIMPLE64 *)frame; hexdump(frame, 128, NULL, 0); } } } void mps_print_scsiio_cmd(struct mps_softc *sc, struct mps_command *cm) { MPI2_SCSI_IO_REQUEST *req; req = (MPI2_SCSI_IO_REQUEST *)cm->cm_req; mps_print_sgl(sc, cm, req->SGLOffset0); } Index: releng/12.1/sys/dev/mps/mps_user.c =================================================================== --- releng/12.1/sys/dev/mps/mps_user.c (revision 352760) +++ releng/12.1/sys/dev/mps/mps_user.c (revision 352761) @@ -1,2529 +1,2527 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2008 Yahoo!, Inc. * All rights reserved. * Written by: John Baldwin * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of any co-contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD userland interface */ /*- * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2015 Avago Technologies * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ #include __FBSDID("$FreeBSD$"); /* TODO Move headers to mpsvar */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include static d_open_t mps_open; static d_close_t mps_close; static d_ioctl_t mps_ioctl_devsw; static struct cdevsw mps_cdevsw = { .d_version = D_VERSION, .d_flags = 0, .d_open = mps_open, .d_close = mps_close, .d_ioctl = mps_ioctl_devsw, .d_name = "mps", }; typedef int (mps_user_f)(struct mps_command *, struct mps_usr_command *); static mps_user_f mpi_pre_ioc_facts; static mps_user_f mpi_pre_port_facts; static mps_user_f mpi_pre_fw_download; static mps_user_f mpi_pre_fw_upload; static mps_user_f mpi_pre_sata_passthrough; static mps_user_f mpi_pre_smp_passthrough; static mps_user_f mpi_pre_config; static mps_user_f mpi_pre_sas_io_unit_control; static int mps_user_read_cfg_header(struct mps_softc *, struct mps_cfg_page_req *); static int 
mps_user_read_cfg_page(struct mps_softc *, struct mps_cfg_page_req *, void *); static int mps_user_read_extcfg_header(struct mps_softc *, struct mps_ext_cfg_page_req *); static int mps_user_read_extcfg_page(struct mps_softc *, struct mps_ext_cfg_page_req *, void *); static int mps_user_write_cfg_page(struct mps_softc *, struct mps_cfg_page_req *, void *); static int mps_user_setup_request(struct mps_command *, struct mps_usr_command *); static int mps_user_command(struct mps_softc *, struct mps_usr_command *); static int mps_user_pass_thru(struct mps_softc *sc, mps_pass_thru_t *data); static void mps_user_get_adapter_data(struct mps_softc *sc, mps_adapter_data_t *data); static void mps_user_read_pci_info(struct mps_softc *sc, mps_pci_info_t *data); static uint8_t mps_get_fw_diag_buffer_number(struct mps_softc *sc, uint32_t unique_id); static int mps_post_fw_diag_buffer(struct mps_softc *sc, mps_fw_diagnostic_buffer_t *pBuffer, uint32_t *return_code); static int mps_release_fw_diag_buffer(struct mps_softc *sc, mps_fw_diagnostic_buffer_t *pBuffer, uint32_t *return_code, uint32_t diag_type); static int mps_diag_register(struct mps_softc *sc, mps_fw_diag_register_t *diag_register, uint32_t *return_code); static int mps_diag_unregister(struct mps_softc *sc, mps_fw_diag_unregister_t *diag_unregister, uint32_t *return_code); static int mps_diag_query(struct mps_softc *sc, mps_fw_diag_query_t *diag_query, uint32_t *return_code); static int mps_diag_read_buffer(struct mps_softc *sc, mps_diag_read_buffer_t *diag_read_buffer, uint8_t *ioctl_buf, uint32_t *return_code); static int mps_diag_release(struct mps_softc *sc, mps_fw_diag_release_t *diag_release, uint32_t *return_code); static int mps_do_diag_action(struct mps_softc *sc, uint32_t action, uint8_t *diag_action, uint32_t length, uint32_t *return_code); static int mps_user_diag_action(struct mps_softc *sc, mps_diag_action_t *data); static void mps_user_event_query(struct mps_softc *sc, mps_event_query_t *data); static 
void mps_user_event_enable(struct mps_softc *sc, mps_event_enable_t *data); static int mps_user_event_report(struct mps_softc *sc, mps_event_report_t *data); static int mps_user_reg_access(struct mps_softc *sc, mps_reg_access_t *data); static int mps_user_btdh(struct mps_softc *sc, mps_btdh_mapping_t *data); MALLOC_DEFINE(M_MPSUSER, "mps_user", "Buffers for mps(4) ioctls"); /* Macros from compat/freebsd32/freebsd32.h */ #define PTRIN(v) (void *)(uintptr_t)(v) #define PTROUT(v) (uint32_t)(uintptr_t)(v) #define CP(src,dst,fld) do { (dst).fld = (src).fld; } while (0) #define PTRIN_CP(src,dst,fld) \ do { (dst).fld = PTRIN((src).fld); } while (0) #define PTROUT_CP(src,dst,fld) \ do { (dst).fld = PTROUT((src).fld); } while (0) int mps_attach_user(struct mps_softc *sc) { int unit; unit = device_get_unit(sc->mps_dev); sc->mps_cdev = make_dev(&mps_cdevsw, unit, UID_ROOT, GID_OPERATOR, 0640, "mps%d", unit); if (sc->mps_cdev == NULL) { return (ENOMEM); } sc->mps_cdev->si_drv1 = sc; return (0); } void mps_detach_user(struct mps_softc *sc) { /* XXX: do a purge of pending requests? */ if (sc->mps_cdev != NULL) destroy_dev(sc->mps_cdev); } static int mps_open(struct cdev *dev, int flags, int fmt, struct thread *td) { return (0); } static int mps_close(struct cdev *dev, int flags, int fmt, struct thread *td) { return (0); } static int mps_user_read_cfg_header(struct mps_softc *sc, struct mps_cfg_page_req *page_req) { MPI2_CONFIG_PAGE_HEADER *hdr; struct mps_config_params params; int error; hdr = &params.hdr.Struct; params.action = MPI2_CONFIG_ACTION_PAGE_HEADER; params.page_address = le32toh(page_req->page_address); hdr->PageVersion = 0; hdr->PageLength = 0; hdr->PageNumber = page_req->header.PageNumber; hdr->PageType = page_req->header.PageType; params.buffer = NULL; params.length = 0; params.callback = NULL; if ((error = mps_read_config_page(sc, &params)) != 0) { /* * Leave the request. 
Without resetting the chip, it's * still owned by it and we'll just get into trouble * freeing it now. Mark it as abandoned so that if it * shows up later it can be freed. */ mps_printf(sc, "read_cfg_header timed out\n"); return (ETIMEDOUT); } page_req->ioc_status = htole16(params.status); if ((page_req->ioc_status & MPI2_IOCSTATUS_MASK) == MPI2_IOCSTATUS_SUCCESS) { bcopy(hdr, &page_req->header, sizeof(page_req->header)); } return (0); } static int mps_user_read_cfg_page(struct mps_softc *sc, struct mps_cfg_page_req *page_req, void *buf) { MPI2_CONFIG_PAGE_HEADER *reqhdr, *hdr; struct mps_config_params params; int error; reqhdr = buf; hdr = &params.hdr.Struct; hdr->PageVersion = reqhdr->PageVersion; hdr->PageLength = reqhdr->PageLength; hdr->PageNumber = reqhdr->PageNumber; hdr->PageType = reqhdr->PageType & MPI2_CONFIG_PAGETYPE_MASK; params.action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; params.page_address = le32toh(page_req->page_address); params.buffer = buf; params.length = le32toh(page_req->len); params.callback = NULL; if ((error = mps_read_config_page(sc, &params)) != 0) { mps_printf(sc, "mps_user_read_cfg_page timed out\n"); return (ETIMEDOUT); } page_req->ioc_status = htole16(params.status); return (0); } static int mps_user_read_extcfg_header(struct mps_softc *sc, struct mps_ext_cfg_page_req *ext_page_req) { MPI2_CONFIG_EXTENDED_PAGE_HEADER *hdr; struct mps_config_params params; int error; hdr = &params.hdr.Ext; params.action = MPI2_CONFIG_ACTION_PAGE_HEADER; hdr->PageVersion = ext_page_req->header.PageVersion; hdr->PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; hdr->ExtPageLength = 0; hdr->PageNumber = ext_page_req->header.PageNumber; hdr->ExtPageType = ext_page_req->header.ExtPageType; params.page_address = le32toh(ext_page_req->page_address); params.buffer = NULL; params.length = 0; params.callback = NULL; if ((error = mps_read_config_page(sc, &params)) != 0) { /* * Leave the request. 
Without resetting the chip, it's * still owned by it and we'll just get into trouble * freeing it now. Mark it as abandoned so that if it * shows up later it can be freed. */ mps_printf(sc, "mps_user_read_extcfg_header timed out\n"); return (ETIMEDOUT); } ext_page_req->ioc_status = htole16(params.status); if ((ext_page_req->ioc_status & MPI2_IOCSTATUS_MASK) == MPI2_IOCSTATUS_SUCCESS) { ext_page_req->header.PageVersion = hdr->PageVersion; ext_page_req->header.PageNumber = hdr->PageNumber; ext_page_req->header.PageType = hdr->PageType; ext_page_req->header.ExtPageLength = hdr->ExtPageLength; ext_page_req->header.ExtPageType = hdr->ExtPageType; } return (0); } static int mps_user_read_extcfg_page(struct mps_softc *sc, struct mps_ext_cfg_page_req *ext_page_req, void *buf) { MPI2_CONFIG_EXTENDED_PAGE_HEADER *reqhdr, *hdr; struct mps_config_params params; int error; reqhdr = buf; hdr = &params.hdr.Ext; params.action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; params.page_address = le32toh(ext_page_req->page_address); hdr->PageVersion = reqhdr->PageVersion; hdr->PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; hdr->PageNumber = reqhdr->PageNumber; hdr->ExtPageType = reqhdr->ExtPageType; hdr->ExtPageLength = reqhdr->ExtPageLength; params.buffer = buf; params.length = le32toh(ext_page_req->len); params.callback = NULL; if ((error = mps_read_config_page(sc, &params)) != 0) { mps_printf(sc, "mps_user_read_extcfg_page timed out\n"); return (ETIMEDOUT); } ext_page_req->ioc_status = htole16(params.status); return (0); } static int mps_user_write_cfg_page(struct mps_softc *sc, struct mps_cfg_page_req *page_req, void *buf) { MPI2_CONFIG_PAGE_HEADER *reqhdr, *hdr; struct mps_config_params params; u_int hdr_attr; int error; reqhdr = buf; hdr = &params.hdr.Struct; hdr_attr = reqhdr->PageType & MPI2_CONFIG_PAGEATTR_MASK; if (hdr_attr != MPI2_CONFIG_PAGEATTR_CHANGEABLE && hdr_attr != MPI2_CONFIG_PAGEATTR_PERSISTENT) { mps_printf(sc, "page type 0x%x not changeable\n", reqhdr->PageType & 
MPI2_CONFIG_PAGETYPE_MASK); return (EINVAL); } /* * There isn't any point in restoring stripped out attributes * if you then mask them going down to issue the request. */ hdr->PageVersion = reqhdr->PageVersion; hdr->PageLength = reqhdr->PageLength; hdr->PageNumber = reqhdr->PageNumber; hdr->PageType = reqhdr->PageType; params.action = MPI2_CONFIG_ACTION_PAGE_WRITE_CURRENT; params.page_address = le32toh(page_req->page_address); params.buffer = buf; params.length = le32toh(page_req->len); params.callback = NULL; if ((error = mps_write_config_page(sc, &params)) != 0) { mps_printf(sc, "mps_write_cfg_page timed out\n"); return (ETIMEDOUT); } page_req->ioc_status = htole16(params.status); return (0); } void mpi_init_sge(struct mps_command *cm, void *req, void *sge) { int off, space; space = (int)cm->cm_sc->reqframesz; off = (uintptr_t)sge - (uintptr_t)req; KASSERT(off < space, ("bad pointers %p %p, off %d, space %d", req, sge, off, space)); cm->cm_sge = sge; cm->cm_sglsize = space - off; } /* * Prepare the mps_command for an IOC_FACTS request. */ static int mpi_pre_ioc_facts(struct mps_command *cm, struct mps_usr_command *cmd) { MPI2_IOC_FACTS_REQUEST *req = (void *)cm->cm_req; MPI2_IOC_FACTS_REPLY *rpl; if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); cm->cm_sge = NULL; cm->cm_sglsize = 0; return (0); } /* * Prepare the mps_command for a PORT_FACTS request. */ static int mpi_pre_port_facts(struct mps_command *cm, struct mps_usr_command *cmd) { MPI2_PORT_FACTS_REQUEST *req = (void *)cm->cm_req; MPI2_PORT_FACTS_REPLY *rpl; if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); cm->cm_sge = NULL; cm->cm_sglsize = 0; return (0); } /* * Prepare the mps_command for a FW_DOWNLOAD request. 
*/ static int mpi_pre_fw_download(struct mps_command *cm, struct mps_usr_command *cmd) { MPI2_FW_DOWNLOAD_REQUEST *req = (void *)cm->cm_req; MPI2_FW_DOWNLOAD_REPLY *rpl; MPI2_FW_DOWNLOAD_TCSGE tc; int error; /* * This code assumes there is room in the request's SGL for * the TransactionContext plus at least a SGL chain element. */ CTASSERT(sizeof req->SGL >= sizeof tc + MPS_SGC_SIZE); if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); if (cmd->len == 0) return (EINVAL); error = copyin(cmd->buf, cm->cm_data, cmd->len); if (error != 0) return (error); mpi_init_sge(cm, req, &req->SGL); bzero(&tc, sizeof tc); /* * For now, the F/W image must be provided in a single request. */ if ((req->MsgFlags & MPI2_FW_DOWNLOAD_MSGFLGS_LAST_SEGMENT) == 0) return (EINVAL); if (req->TotalImageSize != cmd->len) return (EINVAL); /* * The value of the first two elements is specified in the * Fusion-MPT Message Passing Interface document. */ tc.ContextSize = 0; tc.DetailsLength = 12; tc.ImageOffset = 0; tc.ImageSize = cmd->len; cm->cm_flags |= MPS_CM_FLAGS_DATAOUT; return (mps_push_sge(cm, &tc, sizeof tc, 0)); } /* * Prepare the mps_command for a FW_UPLOAD request. */ static int mpi_pre_fw_upload(struct mps_command *cm, struct mps_usr_command *cmd) { MPI2_FW_UPLOAD_REQUEST *req = (void *)cm->cm_req; MPI2_FW_UPLOAD_REPLY *rpl; MPI2_FW_UPLOAD_TCSGE tc; /* * This code assumes there is room in the request's SGL for * the TransactionContext plus at least a SGL chain element. */ CTASSERT(sizeof req->SGL >= sizeof tc + MPS_SGC_SIZE); if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); mpi_init_sge(cm, req, &req->SGL); bzero(&tc, sizeof tc); /* * The value of the first two elements is specified in the * Fusion-MPT Message Passing Interface document. */ tc.ContextSize = 0; tc.DetailsLength = 12; /* * XXX Is there any reason to fetch a partial image? I.e. to * set ImageOffset to something other than 0? 
*/ tc.ImageOffset = 0; tc.ImageSize = cmd->len; cm->cm_flags |= MPS_CM_FLAGS_DATAIN; return (mps_push_sge(cm, &tc, sizeof tc, 0)); } /* * Prepare the mps_command for a SATA_PASSTHROUGH request. */ static int mpi_pre_sata_passthrough(struct mps_command *cm, struct mps_usr_command *cmd) { MPI2_SATA_PASSTHROUGH_REQUEST *req = (void *)cm->cm_req; MPI2_SATA_PASSTHROUGH_REPLY *rpl; if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); mpi_init_sge(cm, req, &req->SGL); return (0); } /* * Prepare the mps_command for a SMP_PASSTHROUGH request. */ static int mpi_pre_smp_passthrough(struct mps_command *cm, struct mps_usr_command *cmd) { MPI2_SMP_PASSTHROUGH_REQUEST *req = (void *)cm->cm_req; MPI2_SMP_PASSTHROUGH_REPLY *rpl; if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); mpi_init_sge(cm, req, &req->SGL); return (0); } /* * Prepare the mps_command for a CONFIG request. */ static int mpi_pre_config(struct mps_command *cm, struct mps_usr_command *cmd) { MPI2_CONFIG_REQUEST *req = (void *)cm->cm_req; MPI2_CONFIG_REPLY *rpl; if (cmd->req_len != sizeof *req) return (EINVAL); if (cmd->rpl_len != sizeof *rpl) return (EINVAL); mpi_init_sge(cm, req, &req->PageBufferSGE); return (0); } /* * Prepare the mps_command for a SAS_IO_UNIT_CONTROL request. */ static int mpi_pre_sas_io_unit_control(struct mps_command *cm, struct mps_usr_command *cmd) { cm->cm_sge = NULL; cm->cm_sglsize = 0; return (0); } /* * A set of functions to prepare an mps_command for the various * supported requests. 
*/ struct mps_user_func { U8 Function; mps_user_f *f_pre; } mps_user_func_list[] = { { MPI2_FUNCTION_IOC_FACTS, mpi_pre_ioc_facts }, { MPI2_FUNCTION_PORT_FACTS, mpi_pre_port_facts }, { MPI2_FUNCTION_FW_DOWNLOAD, mpi_pre_fw_download }, { MPI2_FUNCTION_FW_UPLOAD, mpi_pre_fw_upload }, { MPI2_FUNCTION_SATA_PASSTHROUGH, mpi_pre_sata_passthrough }, { MPI2_FUNCTION_SMP_PASSTHROUGH, mpi_pre_smp_passthrough}, { MPI2_FUNCTION_CONFIG, mpi_pre_config}, { MPI2_FUNCTION_SAS_IO_UNIT_CONTROL, mpi_pre_sas_io_unit_control }, { 0xFF, NULL } /* list end */ }; static int mps_user_setup_request(struct mps_command *cm, struct mps_usr_command *cmd) { MPI2_REQUEST_HEADER *hdr = (MPI2_REQUEST_HEADER *)cm->cm_req; struct mps_user_func *f; for (f = mps_user_func_list; f->f_pre != NULL; f++) { if (hdr->Function == f->Function) return (f->f_pre(cm, cmd)); } return (EINVAL); } static int mps_user_command(struct mps_softc *sc, struct mps_usr_command *cmd) { MPI2_REQUEST_HEADER *hdr; MPI2_DEFAULT_REPLY *rpl; void *buf = NULL; struct mps_command *cm = NULL; int err = 0; int sz; mps_lock(sc); cm = mps_alloc_command(sc); if (cm == NULL) { mps_printf(sc, "%s: no mps requests\n", __func__); err = ENOMEM; goto RetFree; } mps_unlock(sc); hdr = (MPI2_REQUEST_HEADER *)cm->cm_req; mps_dprint(sc, MPS_USER, "%s: req %p %d rpl %p %d\n", __func__, cmd->req, cmd->req_len, cmd->rpl, cmd->rpl_len); if (cmd->req_len > (int)sc->reqframesz) { err = EINVAL; goto RetFreeUnlocked; } err = copyin(cmd->req, hdr, cmd->req_len); if (err != 0) goto RetFreeUnlocked; mps_dprint(sc, MPS_USER, "%s: Function %02X MsgFlags %02X\n", __func__, hdr->Function, hdr->MsgFlags); if (cmd->len > 0) { buf = malloc(cmd->len, M_MPSUSER, M_WAITOK|M_ZERO); cm->cm_data = buf; cm->cm_length = cmd->len; } else { cm->cm_data = NULL; cm->cm_length = 0; } cm->cm_flags = MPS_CM_FLAGS_SGE_SIMPLE; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; err = mps_user_setup_request(cm, cmd); if (err == EINVAL) { mps_printf(sc, "%s: 
unsupported parameter or unsupported " "function in request (function = 0x%X)\n", __func__, hdr->Function); } if (err != 0) goto RetFreeUnlocked; mps_lock(sc); err = mps_wait_command(sc, &cm, 60, CAN_SLEEP); if (err || (cm == NULL)) { mps_printf(sc, "%s: invalid request: error %d\n", __func__, err); goto RetFree; } rpl = (MPI2_DEFAULT_REPLY *)cm->cm_reply; if (rpl != NULL) sz = rpl->MsgLength * 4; else sz = 0; if (sz > cmd->rpl_len) { mps_printf(sc, "%s: user reply buffer (%d) smaller than " "returned buffer (%d)\n", __func__, cmd->rpl_len, sz); sz = cmd->rpl_len; } mps_unlock(sc); copyout(rpl, cmd->rpl, sz); if (buf != NULL) copyout(buf, cmd->buf, cmd->len); mps_dprint(sc, MPS_USER, "%s: reply size %d\n", __func__, sz); RetFreeUnlocked: mps_lock(sc); RetFree: if (cm != NULL) mps_free_command(sc, cm); mps_unlock(sc); if (buf != NULL) free(buf, M_MPSUSER); return (err); } static int mps_user_pass_thru(struct mps_softc *sc, mps_pass_thru_t *data) { MPI2_REQUEST_HEADER *hdr, tmphdr; MPI2_DEFAULT_REPLY *rpl = NULL; struct mps_command *cm = NULL; int err = 0, dir = 0, sz; uint8_t function = 0; u_int sense_len; struct mpssas_target *targ = NULL; /* * Only allow one passthru command at a time. Use the MPS_FLAGS_BUSY * bit to denote that a passthru is being processed. */ mps_lock(sc); if (sc->mps_flags & MPS_FLAGS_BUSY) { mps_dprint(sc, MPS_USER, "%s: Only one passthru command " "allowed at a single time.", __func__); mps_unlock(sc); return (EBUSY); } sc->mps_flags |= MPS_FLAGS_BUSY; mps_unlock(sc); /* * Do some validation on data direction. Valid cases are: * 1) DataSize is 0 and direction is NONE * 2) DataSize is non-zero and one of: * a) direction is READ or * b) direction is WRITE or * c) direction is BOTH and DataOutSize is non-zero * If valid and the direction is BOTH, change the direction to READ. * if valid and the direction is not BOTH, make sure DataOutSize is 0. 
*/ if (((data->DataSize == 0) && (data->DataDirection == MPS_PASS_THRU_DIRECTION_NONE)) || ((data->DataSize != 0) && ((data->DataDirection == MPS_PASS_THRU_DIRECTION_READ) || (data->DataDirection == MPS_PASS_THRU_DIRECTION_WRITE) || ((data->DataDirection == MPS_PASS_THRU_DIRECTION_BOTH) && (data->DataOutSize != 0))))) { if (data->DataDirection == MPS_PASS_THRU_DIRECTION_BOTH) data->DataDirection = MPS_PASS_THRU_DIRECTION_READ; else data->DataOutSize = 0; } else { err = EINVAL; goto RetFreeUnlocked; } mps_dprint(sc, MPS_USER, "%s: req 0x%jx %d rpl 0x%jx %d " "data in 0x%jx %d data out 0x%jx %d data dir %d\n", __func__, data->PtrRequest, data->RequestSize, data->PtrReply, data->ReplySize, data->PtrData, data->DataSize, data->PtrDataOut, data->DataOutSize, data->DataDirection); /* * copy in the header so we know what we're dealing with before we * commit to allocating a command for it. */ err = copyin(PTRIN(data->PtrRequest), &tmphdr, data->RequestSize); if (err != 0) goto RetFreeUnlocked; if (data->RequestSize > (int)sc->reqframesz) { err = EINVAL; goto RetFreeUnlocked; } function = tmphdr.Function; mps_dprint(sc, MPS_USER, "%s: Function %02X MsgFlags %02X\n", __func__, function, tmphdr.MsgFlags); /* * Handle a passthru TM request. */ if (function == MPI2_FUNCTION_SCSI_TASK_MGMT) { MPI2_SCSI_TASK_MANAGE_REQUEST *task; mps_lock(sc); cm = mpssas_alloc_tm(sc); if (cm == NULL) { err = EINVAL; goto Ret; } /* Copy the header in. Only a small fixup is needed. 
*/ task = (MPI2_SCSI_TASK_MANAGE_REQUEST *)cm->cm_req; bcopy(&tmphdr, task, data->RequestSize); task->TaskMID = cm->cm_desc.Default.SMID; cm->cm_data = NULL; cm->cm_complete = NULL; cm->cm_complete_data = NULL; targ = mpssas_find_target_by_handle(sc->sassc, 0, task->DevHandle); if (targ == NULL) { mps_dprint(sc, MPS_INFO, "%s %d : invalid handle for requested TM 0x%x \n", __func__, __LINE__, task->DevHandle); err = 1; } else { mpssas_prepare_for_tm(sc, cm, targ, CAM_LUN_WILDCARD); err = mps_wait_command(sc, &cm, 30, CAN_SLEEP); } if (err != 0) { err = EIO; mps_dprint(sc, MPS_FAULT, "%s: task management failed", __func__); } /* * Copy the reply data and sense data to user space. */ if ((cm != NULL) && (cm->cm_reply != NULL)) { rpl = (MPI2_DEFAULT_REPLY *)cm->cm_reply; sz = rpl->MsgLength * 4; if (sz > data->ReplySize) { mps_printf(sc, "%s: user reply buffer (%d) " "smaller than returned buffer (%d)\n", __func__, data->ReplySize, sz); } mps_unlock(sc); copyout(cm->cm_reply, PTRIN(data->PtrReply), data->ReplySize); mps_lock(sc); } mpssas_free_tm(sc, cm); goto Ret; } mps_lock(sc); cm = mps_alloc_command(sc); if (cm == NULL) { mps_printf(sc, "%s: no mps requests\n", __func__); err = ENOMEM; goto Ret; } mps_unlock(sc); hdr = (MPI2_REQUEST_HEADER *)cm->cm_req; bcopy(&tmphdr, hdr, data->RequestSize); /* * Do some checking to make sure the IOCTL request contains a valid * request. Then set the SGL info. */ mpi_init_sge(cm, hdr, (void *)((uint8_t *)hdr + data->RequestSize)); /* * Set up for read, write or both. From check above, DataOutSize will * be 0 if direction is READ or WRITE, but it will have some non-zero * value if the direction is BOTH. So, just use the biggest size to get * the cm_data buffer size. If direction is BOTH, 2 SGLs need to be set * up; the first is for the request and the second will contain the * response data.
cm_out_len needs to be set here and this will be used * when the SGLs are set up. */ cm->cm_data = NULL; cm->cm_length = MAX(data->DataSize, data->DataOutSize); cm->cm_out_len = data->DataOutSize; cm->cm_flags = 0; if (cm->cm_length != 0) { cm->cm_data = malloc(cm->cm_length, M_MPSUSER, M_WAITOK | M_ZERO); cm->cm_flags = MPS_CM_FLAGS_DATAIN; if (data->DataOutSize) { cm->cm_flags |= MPS_CM_FLAGS_DATAOUT; err = copyin(PTRIN(data->PtrDataOut), cm->cm_data, data->DataOutSize); } else if (data->DataDirection == MPS_PASS_THRU_DIRECTION_WRITE) { cm->cm_flags = MPS_CM_FLAGS_DATAOUT; err = copyin(PTRIN(data->PtrData), cm->cm_data, data->DataSize); } if (err != 0) mps_dprint(sc, MPS_FAULT, "%s: failed to copy " "IOCTL data from user space\n", __func__); } cm->cm_flags |= MPS_CM_FLAGS_SGE_SIMPLE; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; /* * Set up Sense buffer and SGL offset for IO passthru. SCSI IO request * uses SCSI IO descriptor. */ if ((function == MPI2_FUNCTION_SCSI_IO_REQUEST) || (function == MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH)) { MPI2_SCSI_IO_REQUEST *scsi_io_req; scsi_io_req = (MPI2_SCSI_IO_REQUEST *)hdr; /* * Put SGE for data and data_out buffer at the end of * scsi_io_request message header (64 bytes in total). * Following above SGEs, the residual space will be used by * sense data. */ scsi_io_req->SenseBufferLength = (uint8_t)(data->RequestSize - 64); scsi_io_req->SenseBufferLowAddress = htole32(cm->cm_sense_busaddr); /* * Set SGLOffset0 value. This is the number of dwords that SGL * is offset from the beginning of MPI2_SCSI_IO_REQUEST struct. */ scsi_io_req->SGLOffset0 = 24; /* * Setup descriptor info. RAID passthrough must use the * default request descriptor which is already set, so if this * is a SCSI IO request, change the descriptor to SCSI IO. * Also, if this is a SCSI IO request, handle the reply in the * mpssas_scsio_complete function. 
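 * (For reference: SGLOffset0 is expressed in 32-bit words, so the value
 * 24 set above places the SGL at byte offset 24 * 4 = 96, the offset of
 * the SGL field within MPI2_SCSI_IO_REQUEST.)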
*/ if (function == MPI2_FUNCTION_SCSI_IO_REQUEST) { cm->cm_desc.SCSIIO.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO; cm->cm_desc.SCSIIO.DevHandle = scsi_io_req->DevHandle; /* * Make sure the DevHandle is not 0 because this is a * likely error. */ if (scsi_io_req->DevHandle == 0) { err = EINVAL; goto RetFreeUnlocked; } } } mps_lock(sc); err = mps_wait_command(sc, &cm, 30, CAN_SLEEP); if (err || (cm == NULL)) { mps_printf(sc, "%s: invalid request: error %d\n", __func__, err); mps_unlock(sc); goto RetFreeUnlocked; } /* * Sync the DMA data, if any. Then copy the data to user space. */ if (cm->cm_data != NULL) { if (cm->cm_flags & MPS_CM_FLAGS_DATAIN) dir = BUS_DMASYNC_POSTREAD; else if (cm->cm_flags & MPS_CM_FLAGS_DATAOUT) dir = BUS_DMASYNC_POSTWRITE; bus_dmamap_sync(sc->buffer_dmat, cm->cm_dmamap, dir); bus_dmamap_unload(sc->buffer_dmat, cm->cm_dmamap); if (cm->cm_flags & MPS_CM_FLAGS_DATAIN) { mps_unlock(sc); err = copyout(cm->cm_data, PTRIN(data->PtrData), data->DataSize); mps_lock(sc); if (err != 0) mps_dprint(sc, MPS_FAULT, "%s: failed to copy " "IOCTL data to user space\n", __func__); } } /* * Copy the reply data and sense data to user space. 
*/ if (cm->cm_reply != NULL) { rpl = (MPI2_DEFAULT_REPLY *)cm->cm_reply; sz = rpl->MsgLength * 4; if (sz > data->ReplySize) { mps_printf(sc, "%s: user reply buffer (%d) smaller " "than returned buffer (%d)\n", __func__, data->ReplySize, sz); } mps_unlock(sc); copyout(cm->cm_reply, PTRIN(data->PtrReply), data->ReplySize); mps_lock(sc); if ((function == MPI2_FUNCTION_SCSI_IO_REQUEST) || (function == MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH)) { if (((MPI2_SCSI_IO_REPLY *)rpl)->SCSIState & MPI2_SCSI_STATE_AUTOSENSE_VALID) { sense_len = MIN((le32toh(((MPI2_SCSI_IO_REPLY *)rpl)->SenseCount)), sizeof(struct scsi_sense_data)); mps_unlock(sc); /* * Copy the sense data to the user buffer, just past the * SCSI IO reply.  The previous destination, * cm->cm_req + 64, was a kernel address and not a valid * copyout(9) target. */ copyout(cm->cm_sense, PTRIN(data->PtrReply + sizeof(MPI2_SCSI_IO_REPLY)), sense_len); mps_lock(sc); } } } mps_unlock(sc); RetFreeUnlocked: mps_lock(sc); if (cm != NULL) { if (cm->cm_data) free(cm->cm_data, M_MPSUSER); mps_free_command(sc, cm); } Ret: sc->mps_flags &= ~MPS_FLAGS_BUSY; mps_unlock(sc); return (err); } static void mps_user_get_adapter_data(struct mps_softc *sc, mps_adapter_data_t *data) { Mpi2ConfigReply_t mpi_reply; Mpi2BiosPage3_t config_page; /* * Use the PCI interface functions to get the Bus, Device, and Function * information. */ data->PciInformation.u.bits.BusNumber = pci_get_bus(sc->mps_dev); data->PciInformation.u.bits.DeviceNumber = pci_get_slot(sc->mps_dev); data->PciInformation.u.bits.FunctionNumber = pci_get_function(sc->mps_dev); /* * Get the FW version that should already be saved in IOC Facts. */ data->MpiFirmwareVersion = sc->facts->FWVersion.Word; /* * General device info. */ data->AdapterType = MPSIOCTL_ADAPTER_TYPE_SAS2; if (sc->mps_flags & MPS_FLAGS_WD_AVAILABLE) data->AdapterType = MPSIOCTL_ADAPTER_TYPE_SAS2_SSS6200; data->PCIDeviceHwId = pci_get_device(sc->mps_dev); data->PCIDeviceHwRev = pci_read_config(sc->mps_dev, PCIR_REVID, 1); data->SubSystemId = pci_get_subdevice(sc->mps_dev); data->SubsystemVendorId = pci_get_subvendor(sc->mps_dev); /* * Get the driver version.
*/ strcpy((char *)&data->DriverVersion[0], MPS_DRIVER_VERSION); /* * Need to get BIOS Config Page 3 for the BIOS Version. */ data->BiosVersion = 0; mps_lock(sc); if (mps_config_get_bios_pg3(sc, &mpi_reply, &config_page)) printf("%s: Error while retrieving BIOS Version\n", __func__); else data->BiosVersion = config_page.BiosVersion; mps_unlock(sc); } static void mps_user_read_pci_info(struct mps_softc *sc, mps_pci_info_t *data) { int i; /* * Use the PCI interface functions to get the Bus, Device, and Function * information. */ data->BusNumber = pci_get_bus(sc->mps_dev); data->DeviceNumber = pci_get_slot(sc->mps_dev); data->FunctionNumber = pci_get_function(sc->mps_dev); /* * Now get the interrupt vector and the pci header. The vector can * only be 0 right now. The header is the first 256 bytes of config * space. */ data->InterruptVector = 0; for (i = 0; i < sizeof (data->PciHeader); i++) { data->PciHeader[i] = pci_read_config(sc->mps_dev, i, 1); } } static uint8_t mps_get_fw_diag_buffer_number(struct mps_softc *sc, uint32_t unique_id) { uint8_t index; for (index = 0; index < MPI2_DIAG_BUF_TYPE_COUNT; index++) { if (sc->fw_diag_buffer_list[index].unique_id == unique_id) { return (index); } } return (MPS_FW_DIAGNOSTIC_UID_NOT_FOUND); } static int mps_post_fw_diag_buffer(struct mps_softc *sc, mps_fw_diagnostic_buffer_t *pBuffer, uint32_t *return_code) { MPI2_DIAG_BUFFER_POST_REQUEST *req; MPI2_DIAG_BUFFER_POST_REPLY *reply = NULL; struct mps_command *cm = NULL; int i, status; /* * If buffer is not enabled, just leave. */ *return_code = MPS_FW_DIAG_ERROR_POST_FAILED; if (!pBuffer->enabled) { return (MPS_DIAG_FAILURE); } /* * Clear some flags initially. */ pBuffer->force_release = FALSE; pBuffer->valid_data = FALSE; pBuffer->owned_by_firmware = FALSE; /* * Get a command. */ cm = mps_alloc_command(sc); if (cm == NULL) { mps_printf(sc, "%s: no mps requests\n", __func__); return (MPS_DIAG_FAILURE); } /* * Build the request for releasing the FW Diag Buffer and send it. 
*/ req = (MPI2_DIAG_BUFFER_POST_REQUEST *)cm->cm_req; req->Function = MPI2_FUNCTION_DIAG_BUFFER_POST; req->BufferType = pBuffer->buffer_type; req->ExtendedType = pBuffer->extended_type; req->BufferLength = pBuffer->size; for (i = 0; i < (sizeof(req->ProductSpecific) / 4); i++) req->ProductSpecific[i] = pBuffer->product_specific[i]; mps_from_u64(sc->fw_diag_busaddr, &req->BufferAddress); cm->cm_data = NULL; cm->cm_length = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_complete_data = NULL; /* * Send command synchronously. */ status = mps_wait_command(sc, &cm, 30, CAN_SLEEP); if (status || (cm == NULL)) { mps_printf(sc, "%s: invalid request: error %d\n", __func__, status); status = MPS_DIAG_FAILURE; goto done; } /* * Process POST reply. */ reply = (MPI2_DIAG_BUFFER_POST_REPLY *)cm->cm_reply; if (reply == NULL) { mps_printf(sc, "%s: reply is NULL, probably due to " "reinitialization\n", __func__); status = MPS_DIAG_FAILURE; goto done; } if ((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) { status = MPS_DIAG_FAILURE; mps_dprint(sc, MPS_FAULT, "%s: post of FW Diag Buffer failed " "with IOCStatus = 0x%x, IOCLogInfo = 0x%x and " "TransferLength = 0x%x\n", __func__, le16toh(reply->IOCStatus), le32toh(reply->IOCLogInfo), le32toh(reply->TransferLength)); goto done; } /* * Post was successful. */ pBuffer->valid_data = TRUE; pBuffer->owned_by_firmware = TRUE; *return_code = MPS_FW_DIAG_ERROR_SUCCESS; status = MPS_DIAG_SUCCESS; done: if (cm != NULL) mps_free_command(sc, cm); return (status); } static int mps_release_fw_diag_buffer(struct mps_softc *sc, mps_fw_diagnostic_buffer_t *pBuffer, uint32_t *return_code, uint32_t diag_type) { MPI2_DIAG_RELEASE_REQUEST *req; MPI2_DIAG_RELEASE_REPLY *reply = NULL; struct mps_command *cm = NULL; int status; /* * If buffer is not enabled, just leave. 
*/ *return_code = MPS_FW_DIAG_ERROR_RELEASE_FAILED; if (!pBuffer->enabled) { mps_dprint(sc, MPS_USER, "%s: This buffer type is not " "supported by the IOC", __func__); return (MPS_DIAG_FAILURE); } /* * Clear some flags initially. */ pBuffer->force_release = FALSE; pBuffer->valid_data = FALSE; pBuffer->owned_by_firmware = FALSE; /* * Get a command. */ cm = mps_alloc_command(sc); if (cm == NULL) { mps_printf(sc, "%s: no mps requests\n", __func__); return (MPS_DIAG_FAILURE); } /* * Build the request for releasing the FW Diag Buffer and send it. */ req = (MPI2_DIAG_RELEASE_REQUEST *)cm->cm_req; req->Function = MPI2_FUNCTION_DIAG_RELEASE; req->BufferType = pBuffer->buffer_type; cm->cm_data = NULL; cm->cm_length = 0; cm->cm_desc.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; cm->cm_complete_data = NULL; /* * Send command synchronously. */ status = mps_wait_command(sc, &cm, 30, CAN_SLEEP); if (status || (cm == NULL)) { mps_printf(sc, "%s: invalid request: error %d\n", __func__, status); status = MPS_DIAG_FAILURE; goto done; } /* * Process RELEASE reply. */ reply = (MPI2_DIAG_RELEASE_REPLY *)cm->cm_reply; if (reply == NULL) { mps_printf(sc, "%s: reply is NULL, probably due to " "reinitialization\n", __func__); status = MPS_DIAG_FAILURE; goto done; } if (((le16toh(reply->IOCStatus) & MPI2_IOCSTATUS_MASK) != MPI2_IOCSTATUS_SUCCESS) || pBuffer->owned_by_firmware) { status = MPS_DIAG_FAILURE; mps_dprint(sc, MPS_FAULT, "%s: release of FW Diag Buffer " "failed with IOCStatus = 0x%x and IOCLogInfo = 0x%x\n", __func__, le16toh(reply->IOCStatus), le32toh(reply->IOCLogInfo)); goto done; } /* * Release was successful. */ *return_code = MPS_FW_DIAG_ERROR_SUCCESS; status = MPS_DIAG_SUCCESS; /* * If this was for an UNREGISTER diag type command, clear the unique ID. 
*/ if (diag_type == MPS_FW_DIAG_TYPE_UNREGISTER) { pBuffer->unique_id = MPS_FW_DIAG_INVALID_UID; } done: if (cm != NULL) mps_free_command(sc, cm); return (status); } static int mps_diag_register(struct mps_softc *sc, mps_fw_diag_register_t *diag_register, uint32_t *return_code) { mps_fw_diagnostic_buffer_t *pBuffer; struct mps_busdma_context *ctx; uint8_t extended_type, buffer_type, i; uint32_t buffer_size; uint32_t unique_id; int status; int error; extended_type = diag_register->ExtendedType; buffer_type = diag_register->BufferType; buffer_size = diag_register->RequestedBufferSize; unique_id = diag_register->UniqueId; ctx = NULL; error = 0; /* * Check for valid buffer type */ if (buffer_type >= MPI2_DIAG_BUF_TYPE_COUNT) { *return_code = MPS_FW_DIAG_ERROR_INVALID_PARAMETER; return (MPS_DIAG_FAILURE); } /* * Get the current buffer and look up the unique ID. The unique ID * should not be found. If it is, the ID is already in use. */ i = mps_get_fw_diag_buffer_number(sc, unique_id); pBuffer = &sc->fw_diag_buffer_list[buffer_type]; if (i != MPS_FW_DIAGNOSTIC_UID_NOT_FOUND) { *return_code = MPS_FW_DIAG_ERROR_INVALID_UID; return (MPS_DIAG_FAILURE); } /* * The buffer's unique ID should not be registered yet, and the given * unique ID cannot be 0. */ if ((pBuffer->unique_id != MPS_FW_DIAG_INVALID_UID) || (unique_id == MPS_FW_DIAG_INVALID_UID)) { *return_code = MPS_FW_DIAG_ERROR_INVALID_UID; return (MPS_DIAG_FAILURE); } /* * If this buffer is already posted as immediate, just change owner. */ if (pBuffer->immediate && pBuffer->owned_by_firmware && (pBuffer->unique_id == MPS_FW_DIAG_INVALID_UID)) { pBuffer->immediate = FALSE; pBuffer->unique_id = unique_id; return (MPS_DIAG_SUCCESS); } /* * Post a new buffer after checking if it's enabled. The DMA buffer * that is allocated will be contiguous (nsegments = 1). 
*/ if (!pBuffer->enabled) { *return_code = MPS_FW_DIAG_ERROR_NO_BUFFER; return (MPS_DIAG_FAILURE); } if (bus_dma_tag_create( sc->mps_parent_dmat, /* parent */ 1, 0, /* algnmnt, boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ buffer_size, /* maxsize */ 1, /* nsegments */ buffer_size, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->fw_diag_dmat)) { mps_dprint(sc, MPS_ERROR, "Cannot allocate FW diag buffer DMA tag\n"); *return_code = MPS_FW_DIAG_ERROR_NO_BUFFER; status = MPS_DIAG_FAILURE; goto bailout; } if (bus_dmamem_alloc(sc->fw_diag_dmat, (void **)&sc->fw_diag_buffer, BUS_DMA_NOWAIT, &sc->fw_diag_map)) { mps_dprint(sc, MPS_ERROR, "Cannot allocate FW diag buffer memory\n"); *return_code = MPS_FW_DIAG_ERROR_NO_BUFFER; status = MPS_DIAG_FAILURE; goto bailout; } bzero(sc->fw_diag_buffer, buffer_size); ctx = malloc(sizeof(*ctx), M_MPSUSER, M_WAITOK | M_ZERO); if (ctx == NULL) { device_printf(sc->mps_dev, "%s: context malloc failed\n", __func__); *return_code = MPS_FW_DIAG_ERROR_NO_BUFFER; status = MPS_DIAG_FAILURE; goto bailout; } ctx->addr = &sc->fw_diag_busaddr; ctx->buffer_dmat = sc->fw_diag_dmat; ctx->buffer_dmamap = sc->fw_diag_map; ctx->softc = sc; error = bus_dmamap_load(sc->fw_diag_dmat, sc->fw_diag_map, sc->fw_diag_buffer, buffer_size, mps_memaddr_wait_cb, ctx, 0); if (error == EINPROGRESS) { /* XXX KDM */ device_printf(sc->mps_dev, "%s: Deferred bus_dmamap_load\n", __func__); /* * Wait for the load to complete. If we're interrupted, * bail out. */ mps_lock(sc); if (ctx->completed == 0) { error = msleep(ctx, &sc->mps_mtx, PCATCH, "mpswait", 0); if (error != 0) { /* * We got an error from msleep(9). This is * most likely due to a signal. Tell * mpr_memaddr_wait_cb() that we've abandoned * the context, so it needs to clean up when * it is called. 
*/ ctx->abandoned = 1; /* The callback will free this memory */ ctx = NULL; mps_unlock(sc); device_printf(sc->mps_dev, "Cannot " "bus_dmamap_load FW diag buffer, error = " "%d returned from msleep\n", error); *return_code = MPS_FW_DIAG_ERROR_NO_BUFFER; status = MPS_DIAG_FAILURE; goto bailout; } } mps_unlock(sc); } if ((error != 0) || (ctx->error != 0)) { device_printf(sc->mps_dev, "Cannot bus_dmamap_load FW diag " "buffer, %serror = %d\n", error ? "" : "callback ", error ? error : ctx->error); *return_code = MPS_FW_DIAG_ERROR_NO_BUFFER; status = MPS_DIAG_FAILURE; goto bailout; } bus_dmamap_sync(sc->fw_diag_dmat, sc->fw_diag_map, BUS_DMASYNC_PREREAD); pBuffer->size = buffer_size; /* * Copy the given info to the diag buffer and post the buffer. */ pBuffer->buffer_type = buffer_type; pBuffer->immediate = FALSE; if (buffer_type == MPI2_DIAG_BUF_TYPE_TRACE) { for (i = 0; i < (sizeof (pBuffer->product_specific) / 4); i++) { pBuffer->product_specific[i] = diag_register->ProductSpecific[i]; } } pBuffer->extended_type = extended_type; pBuffer->unique_id = unique_id; status = mps_post_fw_diag_buffer(sc, pBuffer, return_code); bailout: /* * In case there was a failure, free the DMA buffer. */ if (status == MPS_DIAG_FAILURE) { if (sc->fw_diag_busaddr != 0) { bus_dmamap_unload(sc->fw_diag_dmat, sc->fw_diag_map); sc->fw_diag_busaddr = 0; } if (sc->fw_diag_buffer != NULL) { bus_dmamem_free(sc->fw_diag_dmat, sc->fw_diag_buffer, sc->fw_diag_map); sc->fw_diag_buffer = NULL; } if (sc->fw_diag_dmat != NULL) { bus_dma_tag_destroy(sc->fw_diag_dmat); sc->fw_diag_dmat = NULL; } } if (ctx != NULL) free(ctx, M_MPSUSER); return (status); } static int mps_diag_unregister(struct mps_softc *sc, mps_fw_diag_unregister_t *diag_unregister, uint32_t *return_code) { mps_fw_diagnostic_buffer_t *pBuffer; uint8_t i; uint32_t unique_id; int status; unique_id = diag_unregister->UniqueId; /* * Get the current buffer and look up the unique ID. The unique ID * should be there. 
*/ i = mps_get_fw_diag_buffer_number(sc, unique_id); if (i == MPS_FW_DIAGNOSTIC_UID_NOT_FOUND) { *return_code = MPS_FW_DIAG_ERROR_INVALID_UID; return (MPS_DIAG_FAILURE); } pBuffer = &sc->fw_diag_buffer_list[i]; /* * Try to release the buffer from FW before freeing it. If release * fails, don't free the DMA buffer in case FW tries to access it * later. If buffer is not owned by firmware, can't release it. */ if (!pBuffer->owned_by_firmware) { status = MPS_DIAG_SUCCESS; } else { status = mps_release_fw_diag_buffer(sc, pBuffer, return_code, MPS_FW_DIAG_TYPE_UNREGISTER); } /* * At this point, return the current status no matter what happens with * the DMA buffer. */ pBuffer->unique_id = MPS_FW_DIAG_INVALID_UID; if (status == MPS_DIAG_SUCCESS) { if (sc->fw_diag_busaddr != 0) { bus_dmamap_unload(sc->fw_diag_dmat, sc->fw_diag_map); sc->fw_diag_busaddr = 0; } if (sc->fw_diag_buffer != NULL) { bus_dmamem_free(sc->fw_diag_dmat, sc->fw_diag_buffer, sc->fw_diag_map); sc->fw_diag_buffer = NULL; } if (sc->fw_diag_dmat != NULL) { bus_dma_tag_destroy(sc->fw_diag_dmat); sc->fw_diag_dmat = NULL; } } return (status); } static int mps_diag_query(struct mps_softc *sc, mps_fw_diag_query_t *diag_query, uint32_t *return_code) { mps_fw_diagnostic_buffer_t *pBuffer; uint8_t i; uint32_t unique_id; unique_id = diag_query->UniqueId; /* * If ID is valid, query on ID. * If ID is invalid, query on buffer type. */ if (unique_id == MPS_FW_DIAG_INVALID_UID) { i = diag_query->BufferType; if (i >= MPI2_DIAG_BUF_TYPE_COUNT) { *return_code = MPS_FW_DIAG_ERROR_INVALID_UID; return (MPS_DIAG_FAILURE); } } else { i = mps_get_fw_diag_buffer_number(sc, unique_id); if (i == MPS_FW_DIAGNOSTIC_UID_NOT_FOUND) { *return_code = MPS_FW_DIAG_ERROR_INVALID_UID; return (MPS_DIAG_FAILURE); } } /* * Fill query structure with the diag buffer info. 
*/ pBuffer = &sc->fw_diag_buffer_list[i]; diag_query->BufferType = pBuffer->buffer_type; diag_query->ExtendedType = pBuffer->extended_type; if (diag_query->BufferType == MPI2_DIAG_BUF_TYPE_TRACE) { for (i = 0; i < (sizeof(diag_query->ProductSpecific) / 4); i++) { diag_query->ProductSpecific[i] = pBuffer->product_specific[i]; } } diag_query->TotalBufferSize = pBuffer->size; diag_query->DriverAddedBufferSize = 0; diag_query->UniqueId = pBuffer->unique_id; diag_query->ApplicationFlags = 0; diag_query->DiagnosticFlags = 0; /* * Set/Clear application flags */ if (pBuffer->immediate) { diag_query->ApplicationFlags &= ~MPS_FW_DIAG_FLAG_APP_OWNED; } else { diag_query->ApplicationFlags |= MPS_FW_DIAG_FLAG_APP_OWNED; } if (pBuffer->valid_data || pBuffer->owned_by_firmware) { diag_query->ApplicationFlags |= MPS_FW_DIAG_FLAG_BUFFER_VALID; } else { diag_query->ApplicationFlags &= ~MPS_FW_DIAG_FLAG_BUFFER_VALID; } if (pBuffer->owned_by_firmware) { diag_query->ApplicationFlags |= MPS_FW_DIAG_FLAG_FW_BUFFER_ACCESS; } else { diag_query->ApplicationFlags &= ~MPS_FW_DIAG_FLAG_FW_BUFFER_ACCESS; } return (MPS_DIAG_SUCCESS); } static int mps_diag_read_buffer(struct mps_softc *sc, mps_diag_read_buffer_t *diag_read_buffer, uint8_t *ioctl_buf, uint32_t *return_code) { mps_fw_diagnostic_buffer_t *pBuffer; uint8_t i, *pData; uint32_t unique_id; int status; unique_id = diag_read_buffer->UniqueId; /* * Get the current buffer and look up the unique ID. The unique ID * should be there. */ i = mps_get_fw_diag_buffer_number(sc, unique_id); if (i == MPS_FW_DIAGNOSTIC_UID_NOT_FOUND) { *return_code = MPS_FW_DIAG_ERROR_INVALID_UID; return (MPS_DIAG_FAILURE); } pBuffer = &sc->fw_diag_buffer_list[i]; /* * Make sure requested read is within limits */ if (diag_read_buffer->StartingOffset + diag_read_buffer->BytesToRead > pBuffer->size) { *return_code = MPS_FW_DIAG_ERROR_INVALID_PARAMETER; return (MPS_DIAG_FAILURE); } /* Sync the DMA map before we copy to userland. 
*/ bus_dmamap_sync(sc->fw_diag_dmat, sc->fw_diag_map, BUS_DMASYNC_POSTREAD); /* * Copy the requested data from DMA to the diag_read_buffer. The DMA * buffer that was allocated is one contiguous buffer. */ pData = (uint8_t *)(sc->fw_diag_buffer + diag_read_buffer->StartingOffset); if (copyout(pData, ioctl_buf, diag_read_buffer->BytesToRead) != 0) return (MPS_DIAG_FAILURE); diag_read_buffer->Status = 0; /* * Set or clear the Force Release flag. */ if (pBuffer->force_release) { diag_read_buffer->Flags |= MPS_FW_DIAG_FLAG_FORCE_RELEASE; } else { diag_read_buffer->Flags &= ~MPS_FW_DIAG_FLAG_FORCE_RELEASE; } /* * If buffer is to be reregistered, make sure it's not already owned by * firmware first. */ status = MPS_DIAG_SUCCESS; if (!pBuffer->owned_by_firmware) { if (diag_read_buffer->Flags & MPS_FW_DIAG_FLAG_REREGISTER) { status = mps_post_fw_diag_buffer(sc, pBuffer, return_code); } } return (status); } static int mps_diag_release(struct mps_softc *sc, mps_fw_diag_release_t *diag_release, uint32_t *return_code) { mps_fw_diagnostic_buffer_t *pBuffer; uint8_t i; uint32_t unique_id; int status; unique_id = diag_release->UniqueId; /* * Get the current buffer and look up the unique ID. The unique ID * should be there. */ i = mps_get_fw_diag_buffer_number(sc, unique_id); if (i == MPS_FW_DIAGNOSTIC_UID_NOT_FOUND) { *return_code = MPS_FW_DIAG_ERROR_INVALID_UID; return (MPS_DIAG_FAILURE); } pBuffer = &sc->fw_diag_buffer_list[i]; /* * If buffer is not owned by firmware, it's already been released. */ if (!pBuffer->owned_by_firmware) { *return_code = MPS_FW_DIAG_ERROR_ALREADY_RELEASED; return (MPS_DIAG_FAILURE); } /* * Release the buffer. 
*/ status = mps_release_fw_diag_buffer(sc, pBuffer, return_code, MPS_FW_DIAG_TYPE_RELEASE); return (status); } static int mps_do_diag_action(struct mps_softc *sc, uint32_t action, uint8_t *diag_action, uint32_t length, uint32_t *return_code) { mps_fw_diag_register_t diag_register; mps_fw_diag_unregister_t diag_unregister; mps_fw_diag_query_t diag_query; mps_diag_read_buffer_t diag_read_buffer; mps_fw_diag_release_t diag_release; int status = MPS_DIAG_SUCCESS; uint32_t original_return_code; original_return_code = *return_code; *return_code = MPS_FW_DIAG_ERROR_SUCCESS; switch (action) { case MPS_FW_DIAG_TYPE_REGISTER: if (!length) { *return_code = MPS_FW_DIAG_ERROR_INVALID_PARAMETER; status = MPS_DIAG_FAILURE; break; } if (copyin(diag_action, &diag_register, sizeof(diag_register)) != 0) return (MPS_DIAG_FAILURE); status = mps_diag_register(sc, &diag_register, return_code); break; case MPS_FW_DIAG_TYPE_UNREGISTER: if (length < sizeof(diag_unregister)) { *return_code = MPS_FW_DIAG_ERROR_INVALID_PARAMETER; status = MPS_DIAG_FAILURE; break; } if (copyin(diag_action, &diag_unregister, sizeof(diag_unregister)) != 0) return (MPS_DIAG_FAILURE); status = mps_diag_unregister(sc, &diag_unregister, return_code); break; case MPS_FW_DIAG_TYPE_QUERY: if (length < sizeof (diag_query)) { *return_code = MPS_FW_DIAG_ERROR_INVALID_PARAMETER; status = MPS_DIAG_FAILURE; break; } if (copyin(diag_action, &diag_query, sizeof(diag_query)) != 0) return (MPS_DIAG_FAILURE); status = mps_diag_query(sc, &diag_query, return_code); if (status == MPS_DIAG_SUCCESS) if (copyout(&diag_query, diag_action, sizeof (diag_query)) != 0) return (MPS_DIAG_FAILURE); break; case MPS_FW_DIAG_TYPE_READ_BUFFER: if (copyin(diag_action, &diag_read_buffer, sizeof(diag_read_buffer)) != 0) return (MPS_DIAG_FAILURE); if (length < diag_read_buffer.BytesToRead) { *return_code = MPS_FW_DIAG_ERROR_INVALID_PARAMETER; status = MPS_DIAG_FAILURE; break; } status = mps_diag_read_buffer(sc, &diag_read_buffer, 
PTRIN(diag_read_buffer.PtrDataBuffer), return_code); if (status == MPS_DIAG_SUCCESS) { if (copyout(&diag_read_buffer, diag_action, sizeof(diag_read_buffer) - sizeof(diag_read_buffer.PtrDataBuffer)) != 0) return (MPS_DIAG_FAILURE); } break; case MPS_FW_DIAG_TYPE_RELEASE: if (length < sizeof(diag_release)) { *return_code = MPS_FW_DIAG_ERROR_INVALID_PARAMETER; status = MPS_DIAG_FAILURE; break; } if (copyin(diag_action, &diag_release, sizeof(diag_release)) != 0) return (MPS_DIAG_FAILURE); status = mps_diag_release(sc, &diag_release, return_code); break; default: *return_code = MPS_FW_DIAG_ERROR_INVALID_PARAMETER; status = MPS_DIAG_FAILURE; break; } if ((status == MPS_DIAG_FAILURE) && (original_return_code == MPS_FW_DIAG_NEW) && (*return_code != MPS_FW_DIAG_ERROR_SUCCESS)) status = MPS_DIAG_SUCCESS; return (status); } static int mps_user_diag_action(struct mps_softc *sc, mps_diag_action_t *data) { int status; /* * Only allow one diag action at one time. */ if (sc->mps_flags & MPS_FLAGS_BUSY) { mps_dprint(sc, MPS_USER, "%s: Only one FW diag command " "allowed at a single time.", __func__); return (EBUSY); } sc->mps_flags |= MPS_FLAGS_BUSY; /* * Send diag action request */ if (data->Action == MPS_FW_DIAG_TYPE_REGISTER || data->Action == MPS_FW_DIAG_TYPE_UNREGISTER || data->Action == MPS_FW_DIAG_TYPE_QUERY || data->Action == MPS_FW_DIAG_TYPE_READ_BUFFER || data->Action == MPS_FW_DIAG_TYPE_RELEASE) { status = mps_do_diag_action(sc, data->Action, PTRIN(data->PtrDiagAction), data->Length, &data->ReturnCode); } else status = EINVAL; sc->mps_flags &= ~MPS_FLAGS_BUSY; return (status); } /* * Copy the event recording mask and the event queue size out. For * clarification, the event recording mask (events_to_record) is not the same * thing as the event mask (event_mask). events_to_record has a bit set for * every event type that is to be recorded by the driver, and event_mask has a * bit cleared for every event that is allowed into the driver from the IOC. 
* They really have nothing to do with each other. */ static void mps_user_event_query(struct mps_softc *sc, mps_event_query_t *data) { uint8_t i; mps_lock(sc); data->Entries = MPS_EVENT_QUEUE_SIZE; for (i = 0; i < 4; i++) { data->Types[i] = sc->events_to_record[i]; } mps_unlock(sc); } /* * Set the driver's event mask according to what's been given. See * mps_user_event_query for explanation of the event recording mask and the IOC * event mask. It's the app's responsibility to enable event logging by setting * the bits in events_to_record. Initially, no events will be logged. */ static void mps_user_event_enable(struct mps_softc *sc, mps_event_enable_t *data) { uint8_t i; mps_lock(sc); for (i = 0; i < 4; i++) { sc->events_to_record[i] = data->Types[i]; } mps_unlock(sc); } /* * Copy out the events that have been recorded, up to the max events allowed. */ static int mps_user_event_report(struct mps_softc *sc, mps_event_report_t *data) { int status = 0; uint32_t size; mps_lock(sc); size = data->Size; if ((size >= sizeof(sc->recorded_events)) && (status == 0)) { mps_unlock(sc); if (copyout((void *)sc->recorded_events, PTRIN(data->PtrEvents), size) != 0) status = EFAULT; mps_lock(sc); } else { /* * data->Size value is not large enough to copy event data. */ status = EFAULT; } /* * Change size value to match the number of bytes that were copied. */ if (status == 0) data->Size = sizeof(sc->recorded_events); mps_unlock(sc); return (status); } /* * Record events into the driver from the IOC if they are not masked. */ void mpssas_record_event(struct mps_softc *sc, MPI2_EVENT_NOTIFICATION_REPLY *event_reply) { uint32_t event; int i, j; uint16_t event_data_len; boolean_t sendAEN = FALSE; event = event_reply->Event; /* * Generate a system event to let anyone who cares know that a * LOG_ENTRY_ADDED event has occurred. This is sent no matter what the * event mask is set to. 
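 * As a worked example of the recording check below: an event code of
 * 0x21 gives i = 0x21 / 32 = 1 and j = 0x21 % 32 = 1, so that event is
 * recorded only when bit 1 of sc->events_to_record[1] is set.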
*/ if (event == MPI2_EVENT_LOG_ENTRY_ADDED) { sendAEN = TRUE; } /* * Record the event only if its corresponding bit is set in * events_to_record. event_index is the index into recorded_events and * event_number is the overall number of an event being recorded since * start-of-day. event_index will roll over; event_number will never * roll over. */ i = (uint8_t)(event / 32); j = (uint8_t)(event % 32); if ((i < 4) && ((1 << j) & sc->events_to_record[i])) { i = sc->event_index; sc->recorded_events[i].Type = event; sc->recorded_events[i].Number = ++sc->event_number; bzero(sc->recorded_events[i].Data, MPS_MAX_EVENT_DATA_LENGTH * 4); event_data_len = event_reply->EventDataLength; if (event_data_len > 0) { /* * Limit data to size in m_event entry */ if (event_data_len > MPS_MAX_EVENT_DATA_LENGTH) { event_data_len = MPS_MAX_EVENT_DATA_LENGTH; } for (j = 0; j < event_data_len; j++) { sc->recorded_events[i].Data[j] = event_reply->EventData[j]; } /* * check for index wrap-around */ if (++i == MPS_EVENT_QUEUE_SIZE) { i = 0; } sc->event_index = (uint8_t)i; /* * Set flag to send the event. */ sendAEN = TRUE; } } /* * Generate a system event if flag is set to let anyone who cares know * that an event has occurred. */ if (sendAEN) { //SLM-how to send a system event (see kqueue, kevent) // (void) ddi_log_sysevent(mpt->m_dip, DDI_VENDOR_LSI, "MPT_SAS", // "SAS", NULL, NULL, DDI_NOSLEEP); } } static int mps_user_reg_access(struct mps_softc *sc, mps_reg_access_t *data) { int status = 0; switch (data->Command) { /* * IO access is not supported. */ case REG_IO_READ: case REG_IO_WRITE: mps_dprint(sc, MPS_USER, "IO access is not supported. 
" "Use memory access."); status = EINVAL; break; case REG_MEM_READ: data->RegData = mps_regread(sc, data->RegOffset); break; case REG_MEM_WRITE: mps_regwrite(sc, data->RegOffset, data->RegData); break; default: status = EINVAL; break; } return (status); } static int mps_user_btdh(struct mps_softc *sc, mps_btdh_mapping_t *data) { uint8_t bt2dh = FALSE; uint8_t dh2bt = FALSE; uint16_t dev_handle, bus, target; bus = data->Bus; target = data->TargetID; dev_handle = data->DevHandle; /* * When DevHandle is 0xFFFF and Bus/Target are not 0xFFFF, use Bus/ * Target to get DevHandle. When Bus/Target are 0xFFFF and DevHandle is * not 0xFFFF, use DevHandle to get Bus/Target. Anything else is * invalid. */ if ((bus == 0xFFFF) && (target == 0xFFFF) && (dev_handle != 0xFFFF)) dh2bt = TRUE; if ((dev_handle == 0xFFFF) && (bus != 0xFFFF) && (target != 0xFFFF)) bt2dh = TRUE; if (!dh2bt && !bt2dh) return (EINVAL); /* * Only handle bus of 0. Make sure target is within range. */ if (bt2dh) { if (bus != 0) return (EINVAL); if (target > sc->max_devices) { mps_dprint(sc, MPS_FAULT, "Target ID is out of range " "for Bus/Target to DevHandle mapping."); return (EINVAL); } dev_handle = sc->mapping_table[target].dev_handle; if (dev_handle) data->DevHandle = dev_handle; } else { bus = 0; target = mps_mapping_get_tid_from_handle(sc, dev_handle); data->Bus = bus; data->TargetID = target; } return (0); } static int mps_ioctl(struct cdev *dev, u_long cmd, void *arg, int flag, struct thread *td) { struct mps_softc *sc; struct mps_cfg_page_req *page_req; struct mps_ext_cfg_page_req *ext_page_req; void *mps_page; int error, msleep_ret; mps_page = NULL; sc = dev->si_drv1; page_req = (void *)arg; ext_page_req = (void *)arg; switch (cmd) { case MPSIO_READ_CFG_HEADER: mps_lock(sc); error = mps_user_read_cfg_header(sc, page_req); mps_unlock(sc); break; case MPSIO_READ_CFG_PAGE: mps_page = malloc(page_req->len, M_MPSUSER, M_WAITOK | M_ZERO); error = copyin(page_req->buf, mps_page, 
sizeof(MPI2_CONFIG_PAGE_HEADER)); if (error) break; mps_lock(sc); error = mps_user_read_cfg_page(sc, page_req, mps_page); mps_unlock(sc); if (error) break; error = copyout(mps_page, page_req->buf, page_req->len); break; case MPSIO_READ_EXT_CFG_HEADER: mps_lock(sc); error = mps_user_read_extcfg_header(sc, ext_page_req); mps_unlock(sc); break; case MPSIO_READ_EXT_CFG_PAGE: mps_page = malloc(ext_page_req->len, M_MPSUSER, M_WAITOK|M_ZERO); error = copyin(ext_page_req->buf, mps_page, sizeof(MPI2_CONFIG_EXTENDED_PAGE_HEADER)); if (error) break; mps_lock(sc); error = mps_user_read_extcfg_page(sc, ext_page_req, mps_page); mps_unlock(sc); if (error) break; error = copyout(mps_page, ext_page_req->buf, ext_page_req->len); break; case MPSIO_WRITE_CFG_PAGE: mps_page = malloc(page_req->len, M_MPSUSER, M_WAITOK|M_ZERO); error = copyin(page_req->buf, mps_page, page_req->len); if (error) break; mps_lock(sc); error = mps_user_write_cfg_page(sc, page_req, mps_page); mps_unlock(sc); break; case MPSIO_MPS_COMMAND: error = mps_user_command(sc, (struct mps_usr_command *)arg); break; case MPTIOCTL_PASS_THRU: /* * The user has requested to pass through a command to be * executed by the MPT firmware. Call our routine which does * this. Only allow one passthru IOCTL at one time. */ error = mps_user_pass_thru(sc, (mps_pass_thru_t *)arg); break; case MPTIOCTL_GET_ADAPTER_DATA: /* * The user has requested to read adapter data. Call our * routine which does this. */ error = 0; mps_user_get_adapter_data(sc, (mps_adapter_data_t *)arg); break; case MPTIOCTL_GET_PCI_INFO: /* * The user has requested to read pci info. Call * our routine which does this. */ mps_lock(sc); error = 0; mps_user_read_pci_info(sc, (mps_pci_info_t *)arg); mps_unlock(sc); break; case MPTIOCTL_RESET_ADAPTER: mps_lock(sc); sc->port_enable_complete = 0; uint32_t reinit_start = time_uptime; error = mps_reinit(sc); /* Sleep for 300 second. 
*/ msleep_ret = msleep(&sc->port_enable_complete, &sc->mps_mtx, PRIBIO, "mps_porten", 300 * hz); mps_unlock(sc); if (msleep_ret) printf("Port Enable did not complete after Diag " "Reset msleep error %d.\n", msleep_ret); else mps_dprint(sc, MPS_USER, "Hard Reset with Port Enable completed in %d seconds.\n", (uint32_t) (time_uptime - reinit_start)); break; case MPTIOCTL_DIAG_ACTION: /* * The user has done a diag buffer action. Call our routine * which does this. Only allow one diag action at one time. */ mps_lock(sc); error = mps_user_diag_action(sc, (mps_diag_action_t *)arg); mps_unlock(sc); break; case MPTIOCTL_EVENT_QUERY: /* * The user has done an event query. Call our routine which does * this. */ error = 0; mps_user_event_query(sc, (mps_event_query_t *)arg); break; case MPTIOCTL_EVENT_ENABLE: /* * The user has done an event enable. Call our routine which * does this. */ error = 0; mps_user_event_enable(sc, (mps_event_enable_t *)arg); break; case MPTIOCTL_EVENT_REPORT: /* * The user has done an event report. Call our routine which * does this. */ error = mps_user_event_report(sc, (mps_event_report_t *)arg); break; case MPTIOCTL_REG_ACCESS: /* * The user has requested register access. Call our routine * which does this. */ mps_lock(sc); error = mps_user_reg_access(sc, (mps_reg_access_t *)arg); mps_unlock(sc); break; case MPTIOCTL_BTDH_MAPPING: /* * The user has requested to translate a bus/target to a * DevHandle or a DevHandle to a bus/target. Call our routine * which does this. 
*/ error = mps_user_btdh(sc, (mps_btdh_mapping_t *)arg); break; default: error = ENOIOCTL; break; } if (mps_page != NULL) free(mps_page, M_MPSUSER); return (error); } #ifdef COMPAT_FREEBSD32 struct mps_cfg_page_req32 { MPI2_CONFIG_PAGE_HEADER header; uint32_t page_address; uint32_t buf; int len; uint16_t ioc_status; }; struct mps_ext_cfg_page_req32 { MPI2_CONFIG_EXTENDED_PAGE_HEADER header; uint32_t page_address; uint32_t buf; int len; uint16_t ioc_status; }; struct mps_raid_action32 { uint8_t action; uint8_t volume_bus; uint8_t volume_id; uint8_t phys_disk_num; uint32_t action_data_word; uint32_t buf; int len; uint32_t volume_status; uint32_t action_data[4]; uint16_t action_status; uint16_t ioc_status; uint8_t write; }; struct mps_usr_command32 { uint32_t req; uint32_t req_len; uint32_t rpl; uint32_t rpl_len; uint32_t buf; int len; uint32_t flags; }; #define MPSIO_READ_CFG_HEADER32 _IOWR('M', 200, struct mps_cfg_page_req32) #define MPSIO_READ_CFG_PAGE32 _IOWR('M', 201, struct mps_cfg_page_req32) #define MPSIO_READ_EXT_CFG_HEADER32 _IOWR('M', 202, struct mps_ext_cfg_page_req32) #define MPSIO_READ_EXT_CFG_PAGE32 _IOWR('M', 203, struct mps_ext_cfg_page_req32) #define MPSIO_WRITE_CFG_PAGE32 _IOWR('M', 204, struct mps_cfg_page_req32) #define MPSIO_RAID_ACTION32 _IOWR('M', 205, struct mps_raid_action32) #define MPSIO_MPS_COMMAND32 _IOWR('M', 210, struct mps_usr_command32) static int mps_ioctl32(struct cdev *dev, u_long cmd32, void *_arg, int flag, struct thread *td) { struct mps_cfg_page_req32 *page32 = _arg; struct mps_ext_cfg_page_req32 *ext32 = _arg; struct mps_raid_action32 *raid32 = _arg; struct mps_usr_command32 *user32 = _arg; union { struct mps_cfg_page_req page; struct mps_ext_cfg_page_req ext; struct mps_raid_action raid; struct mps_usr_command user; } arg; u_long cmd; int error; switch (cmd32) { case MPSIO_READ_CFG_HEADER32: case MPSIO_READ_CFG_PAGE32: case MPSIO_WRITE_CFG_PAGE32: if (cmd32 == MPSIO_READ_CFG_HEADER32) cmd = MPSIO_READ_CFG_HEADER; else if 
(cmd32 == MPSIO_READ_CFG_PAGE32) cmd = MPSIO_READ_CFG_PAGE; else cmd = MPSIO_WRITE_CFG_PAGE; CP(*page32, arg.page, header); CP(*page32, arg.page, page_address); PTRIN_CP(*page32, arg.page, buf); CP(*page32, arg.page, len); CP(*page32, arg.page, ioc_status); break; case MPSIO_READ_EXT_CFG_HEADER32: case MPSIO_READ_EXT_CFG_PAGE32: if (cmd32 == MPSIO_READ_EXT_CFG_HEADER32) cmd = MPSIO_READ_EXT_CFG_HEADER; else cmd = MPSIO_READ_EXT_CFG_PAGE; CP(*ext32, arg.ext, header); CP(*ext32, arg.ext, page_address); PTRIN_CP(*ext32, arg.ext, buf); CP(*ext32, arg.ext, len); CP(*ext32, arg.ext, ioc_status); break; case MPSIO_RAID_ACTION32: cmd = MPSIO_RAID_ACTION; CP(*raid32, arg.raid, action); CP(*raid32, arg.raid, volume_bus); CP(*raid32, arg.raid, volume_id); CP(*raid32, arg.raid, phys_disk_num); CP(*raid32, arg.raid, action_data_word); PTRIN_CP(*raid32, arg.raid, buf); CP(*raid32, arg.raid, len); CP(*raid32, arg.raid, volume_status); bcopy(raid32->action_data, arg.raid.action_data, sizeof arg.raid.action_data); CP(*raid32, arg.raid, ioc_status); CP(*raid32, arg.raid, write); break; case MPSIO_MPS_COMMAND32: cmd = MPSIO_MPS_COMMAND; PTRIN_CP(*user32, arg.user, req); CP(*user32, arg.user, req_len); PTRIN_CP(*user32, arg.user, rpl); CP(*user32, arg.user, rpl_len); PTRIN_CP(*user32, arg.user, buf); CP(*user32, arg.user, len); CP(*user32, arg.user, flags); break; default: return (ENOIOCTL); } error = mps_ioctl(dev, cmd, &arg, flag, td); if (error == 0 && (cmd32 & IOC_OUT) != 0) { switch (cmd32) { case MPSIO_READ_CFG_HEADER32: case MPSIO_READ_CFG_PAGE32: case MPSIO_WRITE_CFG_PAGE32: CP(arg.page, *page32, header); CP(arg.page, *page32, page_address); PTROUT_CP(arg.page, *page32, buf); CP(arg.page, *page32, len); CP(arg.page, *page32, ioc_status); break; case MPSIO_READ_EXT_CFG_HEADER32: case MPSIO_READ_EXT_CFG_PAGE32: CP(arg.ext, *ext32, header); CP(arg.ext, *ext32, page_address); PTROUT_CP(arg.ext, *ext32, buf); CP(arg.ext, *ext32, len); CP(arg.ext, *ext32, ioc_status); break; case 
MPSIO_RAID_ACTION32: CP(arg.raid, *raid32, action); CP(arg.raid, *raid32, volume_bus); CP(arg.raid, *raid32, volume_id); CP(arg.raid, *raid32, phys_disk_num); CP(arg.raid, *raid32, action_data_word); PTROUT_CP(arg.raid, *raid32, buf); CP(arg.raid, *raid32, len); CP(arg.raid, *raid32, volume_status); bcopy(arg.raid.action_data, raid32->action_data, sizeof arg.raid.action_data); CP(arg.raid, *raid32, ioc_status); CP(arg.raid, *raid32, write); break; case MPSIO_MPS_COMMAND32: PTROUT_CP(arg.user, *user32, req); CP(arg.user, *user32, req_len); PTROUT_CP(arg.user, *user32, rpl); CP(arg.user, *user32, rpl_len); PTROUT_CP(arg.user, *user32, buf); CP(arg.user, *user32, len); CP(arg.user, *user32, flags); break; } } return (error); } #endif /* COMPAT_FREEBSD32 */ static int mps_ioctl_devsw(struct cdev *dev, u_long com, caddr_t arg, int flag, struct thread *td) { #ifdef COMPAT_FREEBSD32 if (SV_CURPROC_FLAG(SV_ILP32)) return (mps_ioctl32(dev, com, arg, flag, td)); #endif return (mps_ioctl(dev, com, arg, flag, td)); } Index: releng/12.1/sys/dev/mps/mpsvar.h =================================================================== --- releng/12.1/sys/dev/mps/mpsvar.h (revision 352760) +++ releng/12.1/sys/dev/mps/mpsvar.h (revision 352761) @@ -1,854 +1,862 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2009 Yahoo! Inc. * Copyright (c) 2011-2015 LSI Corp. * Copyright (c) 2013-2015 Avago Technologies * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Avago Technologies (LSI) MPT-Fusion Host Adapter FreeBSD * * $FreeBSD$ */ #ifndef _MPSVAR_H #define _MPSVAR_H #define MPS_DRIVER_VERSION "21.02.00.00-fbsd" #define MPS_DB_MAX_WAIT 2500 #define MPS_REQ_FRAMES 2048 #define MPS_PRI_REQ_FRAMES 128 #define MPS_EVT_REPLY_FRAMES 32 #define MPS_REPLY_FRAMES MPS_REQ_FRAMES #define MPS_CHAIN_FRAMES 16384 #define MPS_MAXIO_PAGES (-1) #define MPS_SENSE_LEN SSD_FULL_SIZE #define MPS_MSI_MAX 1 #define MPS_MSIX_MAX 16 #define MPS_SGE64_SIZE 12 #define MPS_SGE32_SIZE 8 #define MPS_SGC_SIZE 8 #define CAN_SLEEP 1 #define NO_SLEEP 0 #define MPS_PERIODIC_DELAY 1 /* 1 second heartbeat/watchdog check */ #define MPS_ATA_ID_TIMEOUT 5 /* 5 second timeout for SATA ID cmd */ #define MPS_MISSING_CHECK_DELAY 10 /* 10 seconds between missing check */ #define MPS_SCSI_RI_INVALID_FRAME (0x00000002) #define DEFAULT_SPINUP_WAIT 3 /* seconds to wait for spinup */ #include /* * host mapping related macro definitions */ #define MPS_MAPTABLE_BAD_IDX 0xFFFFFFFF #define MPS_DPM_BAD_IDX 0xFFFF #define MPS_ENCTABLE_BAD_IDX 0xFF #define MPS_MAX_MISSING_COUNT 0x0F #define MPS_DEV_RESERVED 0x20000000 #define MPS_MAP_IN_USE 0x10000000 #define MPS_MAP_BAD_ID 0xFFFFFFFF /* * WarpDrive controller */ #define 
MPS_CHIP_WD_DEVICE_ID 0x007E #define MPS_WD_LSI_OEM 0x80 #define MPS_WD_HIDE_EXPOSE_MASK 0x03 #define MPS_WD_HIDE_ALWAYS 0x00 #define MPS_WD_EXPOSE_ALWAYS 0x01 #define MPS_WD_HIDE_IF_VOLUME 0x02 #define MPS_WD_RETRY 0x01 #define MPS_MAN_PAGE10_SIZE 0x5C /* Hardcode for now */ #define MPS_MAX_DISKS_IN_VOL 10 /* * WarpDrive Event Logging */ #define MPI2_WD_LOG_ENTRY 0x8002 #define MPI2_WD_SSD_THROTTLING 0x0041 #define MPI2_WD_DRIVE_LIFE_WARN 0x0043 #define MPI2_WD_DRIVE_LIFE_DEAD 0x0044 #define MPI2_WD_RAIL_MON_FAIL 0x004D typedef uint8_t u8; typedef uint16_t u16; typedef uint32_t u32; typedef uint64_t u64; /** * struct dev_mapping_table - device mapping information * @physical_id: SAS address for drives or WWID for RAID volumes * @device_info: bitfield provides detailed info about the device * @phy_bits: bitfields indicating controller phys * @dpm_entry_num: index of this device in device persistent map table * @dev_handle: device handle for the device pointed by this entry * @id: target id * @missing_count: number of times the device not detected by driver * @hide_flag: Hide this physical disk/not (foreign configuration) * @init_complete: Whether the start of the day checks completed or not */ struct dev_mapping_table { u64 physical_id; u32 device_info; u32 phy_bits; u16 dpm_entry_num; u16 dev_handle; u16 reserved1; u16 id; u8 missing_count; u8 init_complete; u8 TLR_bits; u8 reserved2; }; /** * struct enc_mapping_table - mapping information about an enclosure * @enclosure_id: Logical ID of this enclosure * @start_index: index to the entry in dev_mapping_table * @phy_bits: bitfields indicating controller phys * @dpm_entry_num: index of this enclosure in device persistent map table * @enc_handle: device handle for the enclosure pointed by this entry * @num_slots: number of slots in the enclosure * @start_slot: Starting slot id * @missing_count: number of times the device not detected by driver * @removal_flag: used to mark the device for removal * @skip_search: used 
as a flag to include/exclude enclosure for search * @init_complete: Whether the start of the day checks completed or not */ struct enc_mapping_table { u64 enclosure_id; u32 start_index; u32 phy_bits; u16 dpm_entry_num; u16 enc_handle; u16 num_slots; u16 start_slot; u8 missing_count; u8 removal_flag; u8 skip_search; u8 init_complete; }; /** * struct map_removal_table - entries to be removed from mapping table * @dpm_entry_num: index of this device in device persistent map table * @dev_handle: device handle for the device pointed by this entry */ struct map_removal_table{ u16 dpm_entry_num; u16 dev_handle; }; typedef struct mps_fw_diagnostic_buffer { size_t size; uint8_t extended_type; uint8_t buffer_type; uint8_t force_release; uint32_t product_specific[23]; uint8_t immediate; uint8_t enabled; uint8_t valid_data; uint8_t owned_by_firmware; uint32_t unique_id; } mps_fw_diagnostic_buffer_t; struct mps_softc; struct mps_command; struct mpssas_softc; union ccb; struct mpssas_target; struct mps_column_map; MALLOC_DECLARE(M_MPT2); typedef void mps_evt_callback_t(struct mps_softc *, uintptr_t, MPI2_EVENT_NOTIFICATION_REPLY *reply); typedef void mps_command_callback_t(struct mps_softc *, struct mps_command *cm); struct mps_chain { TAILQ_ENTRY(mps_chain) chain_link; MPI2_SGE_IO_UNION *chain; uint32_t chain_busaddr; }; /* * This needs to be at least 2 to support SMP passthrough. 
*/ #define MPS_IOVEC_COUNT 2 struct mps_command { TAILQ_ENTRY(mps_command) cm_link; TAILQ_ENTRY(mps_command) cm_recovery; struct mps_softc *cm_sc; union ccb *cm_ccb; void *cm_data; u_int cm_length; u_int cm_out_len; struct uio cm_uio; struct iovec cm_iovec[MPS_IOVEC_COUNT]; u_int cm_max_segs; u_int cm_sglsize; MPI2_SGE_IO_UNION *cm_sge; uint8_t *cm_req; uint8_t *cm_reply; uint32_t cm_reply_data; mps_command_callback_t *cm_complete; void *cm_complete_data; struct mpssas_target *cm_targ; MPI2_REQUEST_DESCRIPTOR_UNION cm_desc; u_int cm_lun; u_int cm_flags; #define MPS_CM_FLAGS_POLLED (1 << 0) #define MPS_CM_FLAGS_COMPLETE (1 << 1) #define MPS_CM_FLAGS_SGE_SIMPLE (1 << 2) #define MPS_CM_FLAGS_DATAOUT (1 << 3) #define MPS_CM_FLAGS_DATAIN (1 << 4) #define MPS_CM_FLAGS_WAKEUP (1 << 5) #define MPS_CM_FLAGS_DD_IO (1 << 6) #define MPS_CM_FLAGS_USE_UIO (1 << 7) #define MPS_CM_FLAGS_SMP_PASS (1 << 8) #define MPS_CM_FLAGS_CHAIN_FAILED (1 << 9) #define MPS_CM_FLAGS_ERROR_MASK MPS_CM_FLAGS_CHAIN_FAILED #define MPS_CM_FLAGS_USE_CCB (1 << 10) #define MPS_CM_FLAGS_SATA_ID_TIMEOUT (1 << 11) +#define MPS_CM_FLAGS_ON_RECOVERY (1 << 12) +#define MPS_CM_FLAGS_TIMEDOUT (1 << 13) u_int cm_state; #define MPS_CM_STATE_FREE 0 #define MPS_CM_STATE_BUSY 1 -#define MPS_CM_STATE_TIMEDOUT 2 -#define MPS_CM_STATE_INQUEUE 3 +#define MPS_CM_STATE_INQUEUE 2 bus_dmamap_t cm_dmamap; struct scsi_sense_data *cm_sense; TAILQ_HEAD(, mps_chain) cm_chain_list; uint32_t cm_req_busaddr; uint32_t cm_sense_busaddr; struct callout cm_callout; + mps_command_callback_t *cm_timeout_handler; }; struct mps_column_map { uint16_t dev_handle; uint8_t phys_disk_num; }; struct mps_event_handle { TAILQ_ENTRY(mps_event_handle) eh_list; mps_evt_callback_t *callback; void *data; u32 mask[MPI2_EVENT_NOTIFY_EVENTMASK_WORDS]; }; struct mps_busdma_context { int completed; int abandoned; int error; bus_addr_t *addr; struct mps_softc *softc; bus_dmamap_t buffer_dmamap; bus_dma_tag_t buffer_dmat; }; struct mps_queue { struct mps_softc 
*sc; int qnum; MPI2_REPLY_DESCRIPTORS_UNION *post_queue; int replypostindex; #ifdef notyet ck_ring_buffer_t *ringmem; ck_ring_buffer_t *chainmem; ck_ring_t req_ring; ck_ring_t chain_ring; #endif bus_dma_tag_t buffer_dmat; int io_cmds_highwater; int chain_free_lowwater; int chain_alloc_fail; struct resource *irq; void *intrhand; int irq_rid; }; struct mps_softc { device_t mps_dev; struct cdev *mps_cdev; u_int mps_flags; #define MPS_FLAGS_INTX (1 << 0) #define MPS_FLAGS_MSI (1 << 1) #define MPS_FLAGS_BUSY (1 << 2) #define MPS_FLAGS_SHUTDOWN (1 << 3) #define MPS_FLAGS_DIAGRESET (1 << 4) #define MPS_FLAGS_ATTACH_DONE (1 << 5) #define MPS_FLAGS_WD_AVAILABLE (1 << 6) #define MPS_FLAGS_REALLOCATED (1 << 7) u_int mps_debug; u_int msi_msgs; u_int reqframesz; u_int replyframesz; int tm_cmds_active; int io_cmds_active; int io_cmds_highwater; int chain_free; int max_chains; int max_io_pages; u_int maxio; int chain_free_lowwater; u_int enable_ssu; int spinup_wait_time; int use_phynum; uint64_t chain_alloc_fail; struct sysctl_ctx_list sysctl_ctx; struct sysctl_oid *sysctl_tree; char fw_version[16]; struct mps_command *commands; struct mps_chain *chains; struct callout periodic; struct callout device_check_callout; struct mps_queue *queues; struct mpssas_softc *sassc; TAILQ_HEAD(, mps_command) req_list; TAILQ_HEAD(, mps_command) high_priority_req_list; TAILQ_HEAD(, mps_chain) chain_list; TAILQ_HEAD(, mps_command) tm_list; int replypostindex; int replyfreeindex; struct resource *mps_regs_resource; bus_space_handle_t mps_bhandle; bus_space_tag_t mps_btag; int mps_regs_rid; bus_dma_tag_t mps_parent_dmat; bus_dma_tag_t buffer_dmat; MPI2_IOC_FACTS_REPLY *facts; int num_reqs; int num_prireqs; int num_replies; int num_chains; int fqdepth; /* Free queue */ int pqdepth; /* Post queue */ u32 event_mask[MPI2_EVENT_NOTIFY_EVENTMASK_WORDS]; TAILQ_HEAD(, mps_event_handle) event_list; struct mps_event_handle *mps_log_eh; struct mtx mps_mtx; struct intr_config_hook mps_ich; uint8_t *req_frames; 
bus_addr_t req_busaddr; bus_dma_tag_t req_dmat; bus_dmamap_t req_map; uint8_t *reply_frames; bus_addr_t reply_busaddr; bus_dma_tag_t reply_dmat; bus_dmamap_t reply_map; struct scsi_sense_data *sense_frames; bus_addr_t sense_busaddr; bus_dma_tag_t sense_dmat; bus_dmamap_t sense_map; uint8_t *chain_frames; bus_dma_tag_t chain_dmat; bus_dmamap_t chain_map; MPI2_REPLY_DESCRIPTORS_UNION *post_queue; bus_addr_t post_busaddr; uint32_t *free_queue; bus_addr_t free_busaddr; bus_dma_tag_t queues_dmat; bus_dmamap_t queues_map; uint8_t *fw_diag_buffer; bus_addr_t fw_diag_busaddr; bus_dma_tag_t fw_diag_dmat; bus_dmamap_t fw_diag_map; uint8_t ir_firmware; /* static config pages */ Mpi2IOCPage8_t ioc_pg8; /* host mapping support */ struct dev_mapping_table *mapping_table; struct enc_mapping_table *enclosure_table; struct map_removal_table *removal_table; uint8_t *dpm_entry_used; uint8_t *dpm_flush_entry; Mpi2DriverMappingPage0_t *dpm_pg0; uint16_t max_devices; uint16_t max_enclosures; uint16_t max_expanders; uint8_t max_volumes; uint8_t num_enc_table_entries; uint8_t num_rsvd_entries; uint16_t max_dpm_entries; uint8_t is_dpm_enable; uint8_t track_mapping_events; uint32_t pending_map_events; /* FW diag Buffer List */ mps_fw_diagnostic_buffer_t fw_diag_buffer_list[MPI2_DIAG_BUF_TYPE_COUNT]; /* Event Recording IOCTL support */ uint32_t events_to_record[4]; mps_event_entry_t recorded_events[MPS_EVENT_QUEUE_SIZE]; uint8_t event_index; uint32_t event_number; /* EEDP and TLR support */ uint8_t eedp_enabled; uint8_t control_TLR; /* Shutdown Event Handler */ eventhandler_tag shutdown_eh; /* To track topo events during reset */ #define MPS_DIAG_RESET_TIMEOUT 300000 uint8_t wait_for_port_enable; uint8_t port_enable_complete; uint8_t msleep_fake_chan; /* WD controller */ uint8_t WD_available; uint8_t WD_valid_config; uint8_t WD_hide_expose; /* Direct Drive for WarpDrive */ uint8_t DD_num_phys_disks; uint16_t DD_dev_handle; uint32_t DD_stripe_size; uint32_t DD_stripe_exponent; uint32_t 
DD_block_size; uint16_t DD_block_exponent; uint64_t DD_max_lba; struct mps_column_map DD_column_map[MPS_MAX_DISKS_IN_VOL]; /* StartStopUnit command handling at shutdown */ uint32_t SSU_refcount; uint8_t SSU_started; /* Configuration tunables */ u_int disable_msix; u_int disable_msi; u_int max_msix; u_int max_reqframes; u_int max_prireqframes; u_int max_replyframes; u_int max_evtframes; char exclude_ids[80]; struct timeval lastfail; }; struct mps_config_params { MPI2_CONFIG_EXT_PAGE_HEADER_UNION hdr; u_int action; u_int page_address; /* Attributes, not a phys address */ u_int status; void *buffer; u_int length; int timeout; void (*callback)(struct mps_softc *, struct mps_config_params *); void *cbdata; }; struct scsi_read_capacity_eedp { uint8_t addr[8]; uint8_t length[4]; uint8_t protect; }; static __inline uint32_t mps_regread(struct mps_softc *sc, uint32_t offset) { return (bus_space_read_4(sc->mps_btag, sc->mps_bhandle, offset)); } static __inline void mps_regwrite(struct mps_softc *sc, uint32_t offset, uint32_t val) { bus_space_write_4(sc->mps_btag, sc->mps_bhandle, offset, val); } /* free_queue must have Little Endian address * TODO- cm_reply_data is unwanted. We can remove it. 
* */ static __inline void mps_free_reply(struct mps_softc *sc, uint32_t busaddr) { if (++sc->replyfreeindex >= sc->fqdepth) sc->replyfreeindex = 0; sc->free_queue[sc->replyfreeindex] = htole32(busaddr); mps_regwrite(sc, MPI2_REPLY_FREE_HOST_INDEX_OFFSET, sc->replyfreeindex); } static __inline struct mps_chain * mps_alloc_chain(struct mps_softc *sc) { struct mps_chain *chain; if ((chain = TAILQ_FIRST(&sc->chain_list)) != NULL) { TAILQ_REMOVE(&sc->chain_list, chain, chain_link); sc->chain_free--; if (sc->chain_free < sc->chain_free_lowwater) sc->chain_free_lowwater = sc->chain_free; } else sc->chain_alloc_fail++; return (chain); } static __inline void mps_free_chain(struct mps_softc *sc, struct mps_chain *chain) { sc->chain_free++; TAILQ_INSERT_TAIL(&sc->chain_list, chain, chain_link); } static __inline void mps_free_command(struct mps_softc *sc, struct mps_command *cm) { struct mps_chain *chain, *chain_temp; - KASSERT(cm->cm_state == MPS_CM_STATE_BUSY, ("state not busy\n")); + KASSERT(cm->cm_state == MPS_CM_STATE_BUSY, + ("state not busy: %d\n", cm->cm_state)); if (cm->cm_reply != NULL) mps_free_reply(sc, cm->cm_reply_data); cm->cm_reply = NULL; cm->cm_flags = 0; cm->cm_complete = NULL; cm->cm_complete_data = NULL; cm->cm_ccb = NULL; cm->cm_targ = NULL; cm->cm_max_segs = 0; cm->cm_lun = 0; cm->cm_state = MPS_CM_STATE_FREE; cm->cm_data = NULL; cm->cm_length = 0; cm->cm_out_len = 0; cm->cm_sglsize = 0; cm->cm_sge = NULL; TAILQ_FOREACH_SAFE(chain, &cm->cm_chain_list, chain_link, chain_temp) { TAILQ_REMOVE(&cm->cm_chain_list, chain, chain_link); mps_free_chain(sc, chain); } TAILQ_INSERT_TAIL(&sc->req_list, cm, cm_link); } static __inline struct mps_command * mps_alloc_command(struct mps_softc *sc) { struct mps_command *cm; cm = TAILQ_FIRST(&sc->req_list); if (cm == NULL) return (NULL); KASSERT(cm->cm_state == MPS_CM_STATE_FREE, - ("mps: Allocating busy command\n")); + ("mps: Allocating busy command: %d\n", cm->cm_state)); TAILQ_REMOVE(&sc->req_list, cm, cm_link); 
cm->cm_state = MPS_CM_STATE_BUSY; + cm->cm_timeout_handler = NULL; return (cm); } static __inline void mps_free_high_priority_command(struct mps_softc *sc, struct mps_command *cm) { struct mps_chain *chain, *chain_temp; - KASSERT(cm->cm_state == MPS_CM_STATE_BUSY, ("state not busy\n")); + KASSERT(cm->cm_state == MPS_CM_STATE_BUSY, + ("state not busy: %d\n", cm->cm_state)); if (cm->cm_reply != NULL) mps_free_reply(sc, cm->cm_reply_data); cm->cm_reply = NULL; cm->cm_flags = 0; cm->cm_complete = NULL; cm->cm_complete_data = NULL; cm->cm_ccb = NULL; cm->cm_targ = NULL; cm->cm_lun = 0; cm->cm_state = MPS_CM_STATE_FREE; TAILQ_FOREACH_SAFE(chain, &cm->cm_chain_list, chain_link, chain_temp) { TAILQ_REMOVE(&cm->cm_chain_list, chain, chain_link); mps_free_chain(sc, chain); } TAILQ_INSERT_TAIL(&sc->high_priority_req_list, cm, cm_link); } static __inline struct mps_command * mps_alloc_high_priority_command(struct mps_softc *sc) { struct mps_command *cm; cm = TAILQ_FIRST(&sc->high_priority_req_list); if (cm == NULL) return (NULL); KASSERT(cm->cm_state == MPS_CM_STATE_FREE, - ("mps: Allocating busy command\n")); + ("mps: Allocating high priority busy command: %d\n", cm->cm_state)); TAILQ_REMOVE(&sc->high_priority_req_list, cm, cm_link); cm->cm_state = MPS_CM_STATE_BUSY; + cm->cm_timeout_handler = NULL; + cm->cm_desc.HighPriority.RequestFlags = + MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY; return (cm); } static __inline void mps_lock(struct mps_softc *sc) { mtx_lock(&sc->mps_mtx); } static __inline void mps_unlock(struct mps_softc *sc) { mtx_unlock(&sc->mps_mtx); } #define MPS_INFO (1 << 0) /* Basic info */ #define MPS_FAULT (1 << 1) /* Hardware faults */ #define MPS_EVENT (1 << 2) /* Event data from the controller */ #define MPS_LOG (1 << 3) /* Log data from the controller */ #define MPS_RECOVERY (1 << 4) /* Command error recovery tracing */ #define MPS_ERROR (1 << 5) /* Parameter errors, programming bugs */ #define MPS_INIT (1 << 6) /* Things related to system init */ #define 
MPS_XINFO (1 << 7) /* More detailed/noisy info */ #define MPS_USER (1 << 8) /* Trace user-generated commands */ #define MPS_MAPPING (1 << 9) /* Trace device mappings */ #define MPS_TRACE (1 << 10) /* Function-by-function trace */ #define MPS_SSU_DISABLE_SSD_DISABLE_HDD 0 #define MPS_SSU_ENABLE_SSD_DISABLE_HDD 1 #define MPS_SSU_DISABLE_SSD_ENABLE_HDD 2 #define MPS_SSU_ENABLE_SSD_ENABLE_HDD 3 #define mps_printf(sc, args...) \ device_printf((sc)->mps_dev, ##args) #define mps_print_field(sc, msg, args...) \ printf("\t" msg, ##args) #define mps_vprintf(sc, args...) \ do { \ if (bootverbose) \ mps_printf(sc, ##args); \ } while (0) #define mps_dprint(sc, level, msg, args...) \ do { \ if ((sc)->mps_debug & (level)) \ device_printf((sc)->mps_dev, msg, ##args); \ } while (0) #define MPS_PRINTFIELD_START(sc, tag...) \ mps_printf((sc), ##tag); \ mps_print_field((sc), ":\n") #define MPS_PRINTFIELD_END(sc, tag) \ mps_printf((sc), tag "\n") #define MPS_PRINTFIELD(sc, facts, attr, fmt) \ mps_print_field((sc), #attr ": " #fmt "\n", (facts)->attr) #define MPS_FUNCTRACE(sc) \ mps_dprint((sc), MPS_TRACE, "%s\n", __func__) #define CAN_SLEEP 1 #define NO_SLEEP 0 static __inline void mps_from_u64(uint64_t data, U64 *mps) { (mps)->High = htole32((uint32_t)((data) >> 32)); (mps)->Low = htole32((uint32_t)((data) & 0xffffffff)); } static __inline uint64_t mps_to_u64(U64 *data) { return (((uint64_t)le32toh(data->High) << 32) | le32toh(data->Low)); } static __inline void mps_mask_intr(struct mps_softc *sc) { uint32_t mask; mask = mps_regread(sc, MPI2_HOST_INTERRUPT_MASK_OFFSET); mask |= MPI2_HIM_REPLY_INT_MASK; mps_regwrite(sc, MPI2_HOST_INTERRUPT_MASK_OFFSET, mask); } static __inline void mps_unmask_intr(struct mps_softc *sc) { uint32_t mask; mask = mps_regread(sc, MPI2_HOST_INTERRUPT_MASK_OFFSET); mask &= ~MPI2_HIM_REPLY_INT_MASK; mps_regwrite(sc, MPI2_HOST_INTERRUPT_MASK_OFFSET, mask); } int mps_pci_setup_interrupts(struct mps_softc *sc); void mps_pci_free_interrupts(struct mps_softc *sc); 
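The mps_from_u64()/mps_to_u64() inlines above split a 64-bit value into the MPI U64 pair of 32-bit words and join it back. A minimal userspace sketch of the same split/join (hypothetical names; it keeps host byte order instead of the driver's htole32()/le32toh() step, which is a no-op on little-endian machines):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Userspace model of mps_from_u64()/mps_to_u64(): the MPI U64 type
 * carries a 64-bit quantity as two 32-bit words (Low/High) because the
 * IOC's registers and descriptors are 32 bits wide.  The real driver
 * additionally byte-swaps each word with htole32()/le32toh(); this
 * sketch omits that step.
 */
struct u64_words {
	uint32_t Low;
	uint32_t High;
};

static struct u64_words
from_u64(uint64_t data)
{
	struct u64_words w;

	/* High word is the upper 32 bits, Low word the lower 32 bits. */
	w.High = (uint32_t)(data >> 32);
	w.Low = (uint32_t)(data & 0xffffffff);
	return (w);
}

static uint64_t
to_u64(const struct u64_words *w)
{
	/* Recombine the two words; inverse of from_u64(). */
	return (((uint64_t)w->High << 32) | w->Low);
}
```

A round trip through the pair is lossless, which is what lets the driver hand SAS addresses and bus addresses to the firmware word by word.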
int mps_pci_restore(struct mps_softc *sc); void mps_get_tunables(struct mps_softc *sc); int mps_attach(struct mps_softc *sc); int mps_free(struct mps_softc *sc); void mps_intr(void *); void mps_intr_msi(void *); void mps_intr_locked(void *); int mps_register_events(struct mps_softc *, u32 *, mps_evt_callback_t *, void *, struct mps_event_handle **); int mps_restart(struct mps_softc *); int mps_update_events(struct mps_softc *, struct mps_event_handle *, u32 *); void mps_deregister_events(struct mps_softc *, struct mps_event_handle *); int mps_push_sge(struct mps_command *, void *, size_t, int); int mps_add_dmaseg(struct mps_command *, vm_paddr_t, size_t, u_int, int); int mps_attach_sas(struct mps_softc *sc); int mps_detach_sas(struct mps_softc *sc); int mps_read_config_page(struct mps_softc *, struct mps_config_params *); int mps_write_config_page(struct mps_softc *, struct mps_config_params *); void mps_memaddr_cb(void *, bus_dma_segment_t *, int , int ); void mps_memaddr_wait_cb(void *, bus_dma_segment_t *, int , int ); void mpi_init_sge(struct mps_command *cm, void *req, void *sge); int mps_attach_user(struct mps_softc *); void mps_detach_user(struct mps_softc *); void mpssas_record_event(struct mps_softc *sc, MPI2_EVENT_NOTIFICATION_REPLY *event_reply); int mps_map_command(struct mps_softc *sc, struct mps_command *cm); int mps_wait_command(struct mps_softc *sc, struct mps_command **cm, int timeout, int sleep_flag); int mps_config_get_bios_pg3(struct mps_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2BiosPage3_t *config_page); int mps_config_get_raid_volume_pg0(struct mps_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2RaidVolPage0_t *config_page, u32 page_address); int mps_config_get_ioc_pg8(struct mps_softc *sc, Mpi2ConfigReply_t *, Mpi2IOCPage8_t *); int mps_config_get_man_pg10(struct mps_softc *sc, Mpi2ConfigReply_t *mpi_reply); int mps_config_get_sas_device_pg0(struct mps_softc *, Mpi2ConfigReply_t *, Mpi2SasDevicePage0_t *, u32 , u16 ); int 
mps_config_get_dpm_pg0(struct mps_softc *, Mpi2ConfigReply_t *, Mpi2DriverMappingPage0_t *, u16 ); int mps_config_get_raid_volume_pg1(struct mps_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2RaidVolPage1_t *config_page, u32 form, u16 handle); int mps_config_get_volume_wwid(struct mps_softc *sc, u16 volume_handle, u64 *wwid); int mps_config_get_raid_pd_pg0(struct mps_softc *sc, Mpi2ConfigReply_t *mpi_reply, Mpi2RaidPhysDiskPage0_t *config_page, u32 page_address); void mpssas_ir_shutdown(struct mps_softc *sc, int howto); int mps_reinit(struct mps_softc *sc); void mpssas_handle_reinit(struct mps_softc *sc); void mps_base_static_config_pages(struct mps_softc *sc); void mps_wd_config_pages(struct mps_softc *sc); int mps_mapping_initialize(struct mps_softc *); void mps_mapping_topology_change_event(struct mps_softc *, Mpi2EventDataSasTopologyChangeList_t *); void mps_mapping_free_memory(struct mps_softc *sc); int mps_config_set_dpm_pg0(struct mps_softc *, Mpi2ConfigReply_t *, Mpi2DriverMappingPage0_t *, u16 ); void mps_mapping_exit(struct mps_softc *); void mps_mapping_check_devices(void *); int mps_mapping_allocate_memory(struct mps_softc *sc); unsigned int mps_mapping_get_tid(struct mps_softc *, uint64_t , u16); unsigned int mps_mapping_get_tid_from_handle(struct mps_softc *sc, u16 handle); unsigned int mps_mapping_get_raid_tid(struct mps_softc *sc, u64 wwid, u16 volHandle); unsigned int mps_mapping_get_raid_tid_from_handle(struct mps_softc *sc, u16 volHandle); void mps_mapping_enclosure_dev_status_change_event(struct mps_softc *, Mpi2EventDataSasEnclDevStatusChange_t *event_data); void mps_mapping_ir_config_change_event(struct mps_softc *sc, Mpi2EventDataIrConfigChangeList_t *event_data); int mps_mapping_dump(SYSCTL_HANDLER_ARGS); int mps_mapping_encl_dump(SYSCTL_HANDLER_ARGS); void mpssas_evt_handler(struct mps_softc *sc, uintptr_t data, MPI2_EVENT_NOTIFICATION_REPLY *event); void mpssas_prepare_remove(struct mpssas_softc *sassc, uint16_t handle); void 
mpssas_prepare_volume_remove(struct mpssas_softc *sassc, uint16_t handle); int mpssas_startup(struct mps_softc *sc); struct mpssas_target * mpssas_find_target_by_handle(struct mpssas_softc *, int, uint16_t); void mpssas_realloc_targets(struct mps_softc *sc, int maxtargets); struct mps_command * mpssas_alloc_tm(struct mps_softc *sc); void mpssas_free_tm(struct mps_softc *sc, struct mps_command *tm); void mpssas_release_simq_reinit(struct mpssas_softc *sassc); int mpssas_send_reset(struct mps_softc *sc, struct mps_command *tm, uint8_t type); SYSCTL_DECL(_hw_mps); /* Compatibility shims for different OS versions */ #if __FreeBSD_version >= 800001 #define mps_kproc_create(func, farg, proc_ptr, flags, stackpgs, fmtstr, arg) \ kproc_create(func, farg, proc_ptr, flags, stackpgs, fmtstr, arg) #define mps_kproc_exit(arg) kproc_exit(arg) #else #define mps_kproc_create(func, farg, proc_ptr, flags, stackpgs, fmtstr, arg) \ kthread_create(func, farg, proc_ptr, flags, stackpgs, fmtstr, arg) #define mps_kproc_exit(arg) kthread_exit(arg) #endif #if defined(CAM_PRIORITY_XPT) #define MPS_PRIORITY_XPT CAM_PRIORITY_XPT #else #define MPS_PRIORITY_XPT 5 #endif #if __FreeBSD_version < 800107 // Prior to FreeBSD-8.0 scp3_flags was not defined. #define spc3_flags reserved #define SPC3_SID_PROTECT 0x01 #define SPC3_SID_3PC 0x08 #define SPC3_SID_TPGS_MASK 0x30 #define SPC3_SID_TPGS_IMPLICIT 0x10 #define SPC3_SID_TPGS_EXPLICIT 0x20 #define SPC3_SID_ACC 0x40 #define SPC3_SID_SCCS 0x80 #define CAM_PRIORITY_NORMAL CAM_PRIORITY_NONE #endif #endif Index: releng/12.1/sys/modules/Makefile =================================================================== --- releng/12.1/sys/modules/Makefile (revision 352760) +++ releng/12.1/sys/modules/Makefile (revision 352761) @@ -1,862 +1,868 @@ # $FreeBSD$ SYSDIR?=${SRCTOP}/sys .include "${SYSDIR}/conf/kern.opts.mk" SUBDIR_PARALLEL= # Modules that include binary-only blobs of microcode should be selectable by # MK_SOURCELESS_UCODE option (see below). 
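The module Makefile below honors MODULES_OVERRIDE when ALL_MODULES is unset. A hedged /etc/make.conf sketch (assuming a stock releng/12.1 source tree) that restricts the kernel-module build to the mps/mpr drivers touched by this change:

```make
# /etc/make.conf (sketch): with MODULES_OVERRIDE set and ALL_MODULES
# unset, sys/modules/Makefile builds only these subdirectories instead
# of the full SUBDIR list.
MODULES_OVERRIDE=	mps mpr
```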
.if defined(MODULES_OVERRIDE) && !defined(ALL_MODULES) SUBDIR=${MODULES_OVERRIDE} .else SUBDIR= \ ${_3dfx} \ ${_3dfx_linux} \ ${_aac} \ ${_aacraid} \ accf_data \ accf_dns \ accf_http \ acl_nfs4 \ acl_posix1e \ ${_acpi} \ ae \ ${_aesni} \ age \ ${_agp} \ aha \ ahci \ ${_aic} \ aic7xxx \ alc \ ale \ alq \ ${_amd_ecc_inject} \ ${_amdgpio} \ ${_amdsbwd} \ ${_amdsmn} \ ${_amdtemp} \ amr \ ${_an} \ ${_aout} \ ${_apm} \ ${_arcmsr} \ ${_allwinner} \ ${_armv8crypto} \ ${_asmc} \ ata \ ath \ ath_dfs \ ath_hal \ ath_hal_ar5210 \ ath_hal_ar5211 \ ath_hal_ar5212 \ ath_hal_ar5416 \ ath_hal_ar9300 \ ath_main \ ath_rate \ ath_pci \ ${_autofs} \ ${_auxio} \ ${_bce} \ ${_bcm283x_clkman} \ ${_bcm283x_pwm} \ bfe \ bge \ bhnd \ ${_bxe} \ ${_bios} \ ${_bktr} \ ${_blake2} \ ${_bm} \ bnxt \ bridgestp \ bwi \ bwn \ ${_bytgpio} \ ${_chvgpio} \ cam \ ${_cardbus} \ ${_carp} \ cas \ ${_cbb} \ cc \ ${_ccp} \ cd9660 \ cd9660_iconv \ ${_ce} \ ${_cfi} \ ${_chromebook_platform} \ ${_ciss} \ cloudabi \ ${_cloudabi32} \ ${_cloudabi64} \ ${_cmx} \ ${_coff} \ ${_coretemp} \ ${_cp} \ ${_cpsw} \ ${_cpuctl} \ ${_cpufreq} \ ${_crypto} \ ${_cryptodev} \ ${_cs} \ ${_ctau} \ ctl \ ${_cxgb} \ ${_cxgbe} \ dc \ dcons \ dcons_crom \ de \ ${_dpms} \ ${_dpt} \ ${_drm} \ ${_drm2} \ dummynet \ ${_ed} \ ${_efirt} \ ${_em} \ ${_ena} \ ${_ep} \ ${_epic} \ esp \ ${_et} \ evdev \ ${_ex} \ ${_exca} \ ext2fs \ fdc \ fdescfs \ ${_fe} \ ${_ffec} \ filemon \ firewire \ firmware \ fusefs \ ${_fxp} \ gem \ geom \ ${_glxiic} \ ${_glxsb} \ gpio \ hifn \ hme \ ${_hpt27xx} \ ${_hptiop} \ ${_hptmv} \ ${_hptnr} \ ${_hptrr} \ hwpmc \ ${_hwpmc_mips24k} \ ${_hwpmc_mips74k} \ ${_hyperv} \ i2c \ ${_iavf} \ ${_ibcore} \ ${_ibcs2} \ ${_ichwd} \ ${_ida} \ if_bridge \ if_disc \ if_edsc \ ${_if_enc} \ if_epair \ ${_if_gif} \ ${_if_gre} \ ${_if_me} \ if_lagg \ ${_if_ndis} \ ${_if_stf} \ if_tap \ if_tun \ if_vlan \ if_vxlan \ iflib \ ${_iir} \ imgact_binmisc \ ${_intelspi} \ ${_io} \ ${_ioat} \ ${_ipoib} \ ${_ipdivert} \ ${_ipfilter} \ ${_ipfw} \ 
	ipfw_nat \
	${_ipfw_nat64} \
	${_ipfw_nptv6} \
	${_ipfw_pmod} \
	${_ipmi} \
	ip6_mroute_mod \
	ip_mroute_mod \
	${_ips} \
	${_ipsec} \
	${_ipw} \
	${_ipwfw} \
	${_isci} \
	${_iser} \
	isp \
	${_ispfw} \
	${_iwi} \
	${_iwifw} \
	${_iwm} \
	${_iwmfw} \
	${_iwn} \
	${_iwnfw} \
	${_ix} \
	${_ixv} \
	${_ixl} \
	jme \
	joy \
	kbdmux \
	kgssapi \
	kgssapi_krb5 \
	khelp \
	krpc \
	ksyms \
	le \
	lge \
	libalias \
	libiconv \
	libmchain \
	${_linux} \
	${_linux_common} \
	${_linux64} \
	linuxkpi \
	${_lio} \
	lpt \
	mac_biba \
	mac_bsdextended \
	mac_ifoff \
	mac_lomac \
	mac_mls \
	mac_none \
	mac_ntpd \
	mac_partition \
	mac_portacl \
	mac_seeotheruids \
	mac_stub \
	mac_test \
	malo \
	md \
	mdio \
	mem \
	mfi \
	mii \
	mlx \
	mlxfw \
	${_mlx4} \
	${_mlx4ib} \
	${_mlx4en} \
	${_mlx5} \
	${_mlx5en} \
	${_mlx5ib} \
	${_mly} \
	mmc \
	mmcsd \
-	mpr \
-	mps \
+	${_mpr} \
+	${_mps} \
	mpt \
	mqueue \
	mrsas \
	msdosfs \
	msdosfs_iconv \
	${_mse} \
	msk \
	${_mthca} \
	mvs \
	mwl \
	${_mwlfw} \
	mxge \
	my \
	${_nandfs} \
	${_nandsim} \
	${_ncr} \
	${_nctgpio} \
	${_ncv} \
	${_ndis} \
	${_netgraph} \
	${_nfe} \
	nfscl \
	nfscommon \
	nfsd \
	nfslock \
	nfslockd \
	nfssvc \
	nge \
	nmdm \
	${_nsp} \
	nullfs \
	${_ntb} \
	${_nvd} \
	${_nvdimm} \
	${_nvme} \
	${_nvram} \
	oce \
	${_ocs_fc} \
	otus \
	${_otusfw} \
	ow \
	${_padlock} \
	${_padlock_rng} \
	${_pccard} \
	${_pcfclock} \
	pcn \
	${_pf} \
	${_pflog} \
	${_pfsync} \
	plip \
	${_pms} \
	ppbus \
	ppc \
	ppi \
	pps \
	procfs \
	proto \
	pseudofs \
	${_pst} \
	pty \
	puc \
	pwm \
	${_qlxge} \
	${_qlxgb} \
	${_qlxgbe} \
	${_qlnx} \
	ral \
	${_ralfw} \
	${_random_fortuna} \
	${_random_other} \
	rc4 \
	${_rdma} \
	${_rdrand_rng} \
	re \
	rl \
	${_rockchip} \
	rtwn \
	rtwn_pci \
	rtwn_usb \
	${_rtwnfw} \
	${_s3} \
	${_safe} \
	${_sbni} \
	scc \
	${_scsi_low} \
	sdhci \
	${_sdhci_acpi} \
	sdhci_pci \
	sem \
	send \
	${_sf} \
	${_sfxge} \
	sge \
	${_sgx} \
	${_sgx_linux} \
	siftr \
	siis \
	sis \
	sk \
	${_smartpqi} \
	smbfs \
	sn \
	snp \
	sound \
	${_speaker} \
	spi \
	${_splash} \
	${_sppp} \
	ste \
	${_stg} \
	stge \
	${_sym} \
	${_syscons} \
	sysvipc \
	tcp \
	${_ti} \
	tl \
	tmpfs \
	${_toecore} \
	${_tpm} \
	trm
\ ${_twa} \ twe \ tws \ tx \ ${_txp} \ uart \ ubsec \ udf \ udf_iconv \ ufs \ uinput \ unionfs \ usb \ ${_vesa} \ ${_virtio} \ vge \ ${_viawd} \ videomode \ vkbd \ ${_vmm} \ ${_vmware} \ ${_vpo} \ vr \ vte \ vx \ wb \ ${_wbwd} \ ${_wi} \ wlan \ wlan_acl \ wlan_amrr \ wlan_ccmp \ wlan_rssadapt \ wlan_tkip \ wlan_wep \ wlan_xauth \ ${_wpi} \ ${_wpifw} \ ${_x86bios} \ ${_xe} \ xl \ xz \ zlib .if ${MK_AUTOFS} != "no" || defined(ALL_MODULES) _autofs= autofs .endif .if ${MK_CDDL} != "no" || defined(ALL_MODULES) .if (${MACHINE_CPUARCH} != "arm" || ${MACHINE_ARCH:Marmv[67]*} != "") && \ ${MACHINE_CPUARCH} != "mips" && \ ${MACHINE_CPUARCH} != "sparc64" SUBDIR+= dtrace .endif SUBDIR+= opensolaris .endif .if ${MK_CRYPT} != "no" || defined(ALL_MODULES) .if exists(${SRCTOP}/sys/opencrypto) _crypto= crypto _cryptodev= cryptodev _random_fortuna=random_fortuna _random_other= random_other .endif .endif .if ${MK_CUSE} != "no" || defined(ALL_MODULES) SUBDIR+= cuse .endif .if (${MK_INET_SUPPORT} != "no" || ${MK_INET6_SUPPORT} != "no") || \ defined(ALL_MODULES) _carp= carp _toecore= toecore _if_enc= if_enc _if_gif= if_gif _if_gre= if_gre _ipfw_pmod= ipfw_pmod .if ${MK_IPSEC_SUPPORT} != "no" _ipsec= ipsec .endif .endif .if (${MK_INET_SUPPORT} != "no" && ${MK_INET6_SUPPORT} != "no") || \ defined(ALL_MODULES) _if_stf= if_stf .endif .if ${MK_INET_SUPPORT} != "no" || defined(ALL_MODULES) _if_me= if_me _ipdivert= ipdivert _ipfw= ipfw .if ${MK_INET6_SUPPORT} != "no" || defined(ALL_MODULES) _ipfw_nat64= ipfw_nat64 .endif .endif .if ${MK_INET6_SUPPORT} != "no" || defined(ALL_MODULES) _ipfw_nptv6= ipfw_nptv6 .endif .if ${MK_IPFILTER} != "no" || defined(ALL_MODULES) _ipfilter= ipfilter .endif .if ${MK_ISCSI} != "no" || defined(ALL_MODULES) SUBDIR+= cfiscsi SUBDIR+= iscsi SUBDIR+= iscsi_initiator .endif .if !empty(OPT_FDT) SUBDIR+= fdt .endif .if ${MACHINE_CPUARCH} == "aarch64" || ${MACHINE_CPUARCH} == "amd64" || \ ${MACHINE_CPUARCH} == "i386" SUBDIR+= linprocfs SUBDIR+= linsysfs _ena= ena .if 
${MK_OFED} != "no" || defined(ALL_MODULES)
_ibcore=	ibcore
_ipoib=		ipoib
_iser=		iser
.endif
_mlx4=		mlx4
_mlx5=		mlx5
.if (${MK_INET_SUPPORT} != "no" && ${MK_INET6_SUPPORT} != "no") || \
	defined(ALL_MODULES)
_mlx4en=	mlx4en
_mlx5en=	mlx5en
.endif
.if ${MK_OFED} != "no" || defined(ALL_MODULES)
_mthca=		mthca
_mlx4ib=	mlx4ib
_mlx5ib=	mlx5ib
.endif
.endif

.if ${MK_NAND} != "no" || defined(ALL_MODULES)
_nandfs=	nandfs
_nandsim=	nandsim
.endif

.if ${MK_NETGRAPH} != "no" || defined(ALL_MODULES)
_netgraph=	netgraph
.endif

.if (${MK_PF} != "no" && (${MK_INET_SUPPORT} != "no" || \
	${MK_INET6_SUPPORT} != "no")) || defined(ALL_MODULES)
_pf=		pf
_pflog=		pflog
.if ${MK_INET_SUPPORT} != "no"
_pfsync=	pfsync
.endif
.endif

.if ${MK_SOURCELESS_UCODE} != "no"
_bce=		bce
_fxp=		fxp
_ispfw=		ispfw
_sf=		sf
_ti=		ti
_txp=		txp
.if ${MACHINE_CPUARCH} != "mips"
_mwlfw=		mwlfw
_otusfw=	otusfw
_ralfw=		ralfw
_rtwnfw=	rtwnfw
.endif
.endif

.if ${MK_SOURCELESS_UCODE} != "no" && ${MACHINE_CPUARCH} != "arm" && \
	${MACHINE_CPUARCH} != "mips" && \
	${MACHINE_ARCH} != "powerpc" && ${MACHINE_ARCH} != "powerpcspe" && \
	${MACHINE_CPUARCH} != "riscv"
_cxgbe=		cxgbe
+.endif
+
+# These rely on 64bit atomics
+.if ${MACHINE_ARCH} != "powerpc" && ${MACHINE_CPUARCH} != "mips"
+_mps=		mps
+_mpr=		mpr
.endif

.if ${MK_TESTS} != "no" || defined(ALL_MODULES)
SUBDIR+=	tests
.endif

.if ${MK_ZFS} != "no" || defined(ALL_MODULES)
SUBDIR+=	zfs
.endif

.if (${MACHINE_CPUARCH} == "mips" && ${MACHINE_ARCH:Mmips64} == "")
_hwpmc_mips24k=	hwpmc_mips24k
_hwpmc_mips74k=	hwpmc_mips74k
.endif

.if ${MACHINE_CPUARCH} != "aarch64" && ${MACHINE_CPUARCH} != "arm" && \
	${MACHINE_CPUARCH} != "mips" && ${MACHINE_CPUARCH} != "powerpc" && \
	${MACHINE_CPUARCH} != "riscv"
_syscons=	syscons
_vpo=		vpo
.endif

.if ${MACHINE_CPUARCH} != "mips"
# no BUS_SPACE_UNSPECIFIED
# No barrier instruction support (specific to this driver)
_sym=		sym
# intr_disable() is a macro, causes problems
.if ${MK_SOURCELESS_UCODE} != "no"
_cxgb=		cxgb
.endif
.endif

.if
${MACHINE_CPUARCH} == "aarch64" _allwinner= allwinner _armv8crypto= armv8crypto _efirt= efirt _em= em _rockchip= rockchip .endif .if ${MACHINE_CPUARCH} == "i386" || ${MACHINE_CPUARCH} == "amd64" _agp= agp _an= an _aout= aout _bios= bios _bktr= bktr .if ${MK_SOURCELESS_UCODE} != "no" _bxe= bxe .endif _cardbus= cardbus _cbb= cbb _cpuctl= cpuctl _cpufreq= cpufreq _cs= cs _dpms= dpms .if ${MK_MODULE_DRM} != "no" _drm= drm .endif .if ${MK_MODULE_DRM2} != "no" _drm2= drm2 .endif _ed= ed _em= em _ep= ep _et= et _exca= exca _fe= fe _if_ndis= if_ndis _io= io _ix= ix _ixv= ixv _linux= linux .if ${MK_SOURCELESS_UCODE} != "no" _lio= lio .endif _nctgpio= nctgpio _ndis= ndis _ocs_fc= ocs_fc _pccard= pccard .if ${MK_OFED} != "no" || defined(ALL_MODULES) _rdma= rdma .endif _safe= safe _scsi_low= scsi_low _speaker= speaker _splash= splash _sppp= sppp _vmware= vmware _wbwd= wbwd _wi= wi _xe= xe _aac= aac _aacraid= aacraid _acpi= acpi .if ${MK_CRYPT} != "no" || defined(ALL_MODULES) .if ${COMPILER_TYPE} != "gcc" || ${COMPILER_VERSION} > 40201 _aesni= aesni .endif .endif _amd_ecc_inject=amd_ecc_inject _amdsbwd= amdsbwd _amdsmn= amdsmn _amdtemp= amdtemp _arcmsr= arcmsr _asmc= asmc .if ${MK_CRYPT} != "no" || defined(ALL_MODULES) _blake2= blake2 .endif _bytgpio= bytgpio _chvgpio= chvgpio _ciss= ciss _chromebook_platform= chromebook_platform _cmx= cmx _coretemp= coretemp .if ${MK_SOURCELESS_HOST} != "no" _hpt27xx= hpt27xx .endif _hptiop= hptiop .if ${MK_SOURCELESS_HOST} != "no" _hptmv= hptmv _hptnr= hptnr _hptrr= hptrr .endif _hyperv= hyperv _ichwd= ichwd _ida= ida _iir= iir _intelspi= intelspi _ipmi= ipmi _ips= ips _isci= isci _ipw= ipw _iwi= iwi _iwm= iwm _iwn= iwn .if ${MK_SOURCELESS_UCODE} != "no" _ipwfw= ipwfw _iwifw= iwifw _iwmfw= iwmfw _iwnfw= iwnfw .endif _mly= mly _nfe= nfe _nvd= nvd _nvme= nvme _nvram= nvram .if ${MK_CRYPT} != "no" || defined(ALL_MODULES) _padlock= padlock _padlock_rng= padlock_rng _rdrand_rng= rdrand_rng .endif _s3= s3 _sdhci_acpi= sdhci_acpi _tpm= tpm _twa= twa 
_vesa= vesa _viawd= viawd _virtio= virtio _wpi= wpi .if ${MK_SOURCELESS_UCODE} != "no" _wpifw= wpifw .endif _x86bios= x86bios .endif .if ${MACHINE_CPUARCH} == "amd64" _amdgpio= amdgpio _ccp= ccp _efirt= efirt _iavf= iavf _ioat= ioat _ixl= ixl _linux64= linux64 _linux_common= linux_common _ntb= ntb _nvdimm= nvdimm _pms= pms _qlxge= qlxge _qlxgb= qlxgb .if ${MK_SOURCELESS_UCODE} != "no" _qlxgbe= qlxgbe _qlnx= qlnx .endif _sfxge= sfxge _sgx= sgx _sgx_linux= sgx_linux _smartpqi= smartpqi .if ${MK_BHYVE} != "no" || defined(ALL_MODULES) _vmm= vmm .endif .endif .if ${MACHINE_CPUARCH} == "i386" # XXX some of these can move to the general case when de-i386'ed # XXX some of these can move now, but are untested on other architectures. _3dfx= 3dfx _3dfx_linux= 3dfx_linux _aic= aic _apm= apm .if ${MK_SOURCELESS_UCODE} != "no" _ce= ce .endif _coff= coff .if ${MK_SOURCELESS_UCODE} != "no" _cp= cp .endif _glxiic= glxiic _glxsb= glxsb #_ibcs2= ibcs2 _mse= mse _ncr= ncr _ncv= ncv _nsp= nsp _pcfclock= pcfclock _pst= pst _sbni= sbni _stg= stg .if ${MK_SOURCELESS_UCODE} != "no" _ctau= ctau .endif _dpt= dpt _ex= ex .endif .if ${MACHINE_CPUARCH} == "arm" _cfi= cfi _cpsw= cpsw .endif .if ${MACHINE_CPUARCH} == "powerpc" _agp= agp _an= an _bm= bm _cardbus= cardbus _cbb= cbb _cfi= cfi _cpufreq= cpufreq .if ${MK_MODULE_DRM} != "no" _drm= drm .endif _exca= exca _ffec= ffec _nvd= nvd _nvme= nvme _pccard= pccard _wi= wi .endif .if ${MACHINE_ARCH} == "powerpc64" .if ${MK_MODULE_DRM2} != "no" _drm2= drm2 .endif _ipmi= ipmi .endif .if ${MACHINE_ARCH} == "powerpc64" || ${MACHINE_ARCH} == "powerpc" # Don't build powermac_nvram for powerpcspe, it's never supported. 
_nvram= powermac_nvram .endif .if ${MACHINE_CPUARCH} == "sparc64" _auxio= auxio _em= em _epic= epic .endif .if (${MACHINE_CPUARCH} == "aarch64" || ${MACHINE_CPUARCH} == "amd64" || \ ${MACHINE_ARCH:Marmv[67]*} != "" || ${MACHINE_CPUARCH} == "i386") _cloudabi32= cloudabi32 .endif .if ${MACHINE_CPUARCH} == "aarch64" || ${MACHINE_CPUARCH} == "amd64" _cloudabi64= cloudabi64 .endif .endif .if ${MACHINE_ARCH:Marmv[67]*} != "" || ${MACHINE_CPUARCH} == "aarch64" _bcm283x_clkman= bcm283x_clkman _bcm283x_pwm= bcm283x_pwm .endif SUBDIR+=${MODULES_EXTRA} .for reject in ${WITHOUT_MODULES} SUBDIR:= ${SUBDIR:N${reject}} .endfor # Calling kldxref(8) for each module is expensive. .if !defined(NO_XREF) .MAKEFLAGS+= -DNO_XREF afterinstall: .PHONY @if type kldxref >/dev/null 2>&1; then \ ${ECHO} kldxref ${DESTDIR}${KMODDIR}; \ kldxref ${DESTDIR}${KMODDIR}; \ fi .endif .include "${SYSDIR}/conf/config.mk" SUBDIR:= ${SUBDIR:u:O} .include Index: releng/12.1/sys/powerpc/conf/GENERIC =================================================================== --- releng/12.1/sys/powerpc/conf/GENERIC (revision 352760) +++ releng/12.1/sys/powerpc/conf/GENERIC (revision 352761) @@ -1,226 +1,225 @@ # # GENERIC -- Generic kernel configuration file for FreeBSD/powerpc # # For more information on this file, please read the handbook section on # Kernel Configuration Files: # # https://www.FreeBSD.org/doc/en_US.ISO8859-1/books/handbook/kernelconfig-config.html # # The handbook is also available locally in /usr/share/doc/handbook # if you've installed the doc distribution, otherwise always see the # FreeBSD World Wide Web server (https://www.FreeBSD.org/) for the # latest information. # # An exhaustive list of options and more detailed explanations of the # device lines is also present in the ../../conf/NOTES and NOTES files. # If you are in doubt as to the purpose or necessity of a line, check first # in NOTES. 
# # $FreeBSD$ cpu AIM ident GENERIC machine powerpc powerpc makeoptions DEBUG=-g #Build kernel with gdb(1) debug symbols makeoptions WITH_CTF=1 # Platform support options POWERMAC #NewWorld Apple PowerMacs options PSIM #GDB PSIM ppc simulator options MAMBO #IBM Mambo Full System Simulator options PSERIES #PAPR-compliant systems options FDT options SCHED_ULE #ULE scheduler options PREEMPTION #Enable kernel thread preemption options VIMAGE # Subsystem virtualization, e.g. VNET options INET #InterNETworking options INET6 #IPv6 communications protocols options IPSEC # IP (v4/v6) security options IPSEC_SUPPORT # Allow kldload of ipsec and tcpmd5 options TCP_HHOOK # hhook(9) framework for TCP options TCP_RFC7413 # TCP Fast Open options SCTP #Stream Control Transmission Protocol options FFS #Berkeley Fast Filesystem options SOFTUPDATES #Enable FFS soft updates support options UFS_ACL #Support for access control lists options UFS_DIRHASH #Improve performance on big directories options UFS_GJOURNAL #Enable gjournal-based UFS journaling options QUOTA #Enable disk quotas for UFS options MD_ROOT #MD is a potential root device options NFSCL #Network Filesystem Client options NFSD #Network Filesystem Server options NFSLOCKD #Network Lock Manager options NFS_ROOT #NFS usable as root device options MSDOSFS #MSDOS Filesystem options CD9660 #ISO 9660 Filesystem options PROCFS #Process filesystem (requires PSEUDOFS) options PSEUDOFS #Pseudo-filesystem framework options GEOM_PART_APM #Apple Partition Maps. options GEOM_PART_GPT #GUID Partition Tables. 
options GEOM_LABEL #Provides labelization options COMPAT_FREEBSD4 #Keep this for a while options COMPAT_FREEBSD5 #Compatible with FreeBSD5 options COMPAT_FREEBSD6 #Compatible with FreeBSD6 options COMPAT_FREEBSD7 #Compatible with FreeBSD7 options COMPAT_FREEBSD9 # Compatible with FreeBSD9 options COMPAT_FREEBSD10 # Compatible with FreeBSD10 options COMPAT_FREEBSD11 # Compatible with FreeBSD11 options SCSI_DELAY=5000 #Delay (in ms) before probing SCSI options KTRACE #ktrace(1) syscall trace support options STACK #stack(9) support options SYSVSHM #SYSV-style shared memory options SYSVMSG #SYSV-style message queues options SYSVSEM #SYSV-style semaphores options _KPOSIX_PRIORITY_SCHEDULING #Posix P1003_1B real-time extensions options HWPMC_HOOKS # Necessary kernel hooks for hwpmc(4) options AUDIT # Security event auditing options CAPABILITY_MODE # Capsicum capability mode options CAPABILITIES # Capsicum capabilities options MAC # TrustedBSD MAC Framework options KDTRACE_HOOKS # Kernel DTrace hooks options DDB_CTF # Kernel ELF linker loads CTF data options INCLUDE_CONFIG_FILE # Include this file in kernel options RACCT # Resource accounting framework options RACCT_DEFAULT_TO_DISABLED # Set kern.racct.enable=0 by default options RCTL # Resource limits # Debugging support. Always need this: options KDB # Enable kernel debugger support. options KDB_TRACE # Print a stack trace for a panic. # Kernel dump features. 
options 	EKCD			# Support for encrypted kernel dumps
options 	GZIO			# gzip-compressed kernel and user dumps
options 	ZSTDIO			# zstd-compressed kernel and user dumps
options 	NETDUMP			# netdump(4) client support

# Make an SMP-capable kernel by default
options 	SMP			# Symmetric MultiProcessor Kernel

# CPU frequency control
device		cpufreq

# Standard busses
device		pci
options 	PCI_HP			# PCI-Express native HotPlug
device		agp

# ATA controllers
device		ahci		# AHCI-compatible SATA controllers
device		ata		# Legacy ATA/SATA controllers
device		mvs		# Marvell 88SX50XX/88SX60XX/88SX70XX/SoC SATA
device		siis		# SiliconImage SiI3124/SiI3132/SiI3531 SATA

# SCSI Controllers
device		ahc		# AHA2940 and onboard AIC7xxx devices
options 	AHC_ALLOW_MEMIO	# Attempt to use memory mapped I/O
device		isp		# Qlogic family
device		ispfw		# Firmware module for Qlogic host adapters
device		mpt		# LSI-Logic MPT-Fusion
-device		mps		# LSI-Logic MPT-Fusion 2
device		sym		# NCR/Symbios/LSI Logic 53C8XX/53C1010/53C1510D

# ATA/SCSI peripherals
device		scbus		# SCSI bus (required for ATA/SCSI)
device		da		# Direct Access (disks)
device		sa		# Sequential Access (tape etc)
device		cd		# CD
device		pass		# Passthrough device (direct ATA/SCSI access)

# vt is the default console driver, resembling an SCO console
device		vt		# Generic console driver (pulls in OF FB)
device		kbdmux

# Serial (COM) ports
device		scc
device		uart
device		uart_z8530

# FireWire support
device		firewire	# FireWire bus code
device		sbp		# SCSI over FireWire (Requires scbus and da)
device		fwe		# Ethernet over FireWire (non-standard!)

# PCI Ethernet NICs that use the common MII bus controller code.
device		miibus		# MII bus support
device		bge		# Broadcom BCM570xx Gigabit Ethernet
device		bm		# Apple BMAC Ethernet
device		gem		# Sun GEM/Sun ERI/Apple GMAC
device		dc		# DEC/Intel 21143 and various workalikes
device		fxp		# Intel EtherExpress PRO/100B (82557, 82558)

# Pseudo devices.
device crypto # core crypto support device loop # Network loopback device random # Entropy device device ether # Ethernet support device vlan # 802.1Q VLAN support device tun # Packet tunnel. device md # Memory "disks" device ofwd # Open Firmware disks device gif # IPv6 and IPv4 tunneling device firmware # firmware assist module # The `bpf' device enables the Berkeley Packet Filter. # Be aware of the administrative consequences of enabling this! # Note that 'bpf' is required for DHCP. device bpf #Berkeley packet filter # USB support options USB_DEBUG # enable debug msgs device uhci # UHCI PCI->USB interface device ohci # OHCI PCI->USB interface device ehci # EHCI PCI->USB interface device usb # USB Bus (required) device uhid # "Human Interface Devices" device ukbd # Keyboard options KBD_INSTALL_CDEV # install a CDEV entry in /dev device ulpt # Printer device umass # Disks/Mass storage - Requires scbus and da0 device ums # Mouse device atp # Apple USB touchpad device urio # Diamond Rio 500 MP3 player # USB Ethernet device aue # ADMtek USB Ethernet device axe # ASIX Electronics USB Ethernet device cdce # Generic USB over Ethernet device cue # CATC USB Ethernet device kue # Kawasaki LSI USB Ethernet # Wireless NIC cards options IEEE80211_SUPPORT_MESH options AH_SUPPORT_AR5416 # Misc device iicbus # I2C bus code device kiic # Keywest I2C device ad7417 # PowerMac7,2 temperature sensor device adt746x # PowerBook5,8 temperature sensor device ds1631 # PowerMac11,2 temperature sensor device ds1775 # PowerMac7,2 temperature sensor device fcu # Apple Fan Control Unit device max6690 # PowerMac7,2 temperature sensor device powermac_nvram # Open Firmware configuration NVRAM device smu # Apple System Management Unit device adm1030 # Apple G4 MDD fan controller device atibl # ATI-based backlight driver for PowerBooks/iBooks device nvbl # nVidia-based backlight driver for PowerBooks/iBooks # ADB support device adb device cuda device pmu # Sound support device sound # Generic sound 
driver (required)
device		snd_ai2s	# Apple I2S audio
device		snd_davbus	# Apple DAVBUS audio
device		snd_uaudio	# USB Audio

# evdev interface
options 	EVDEV_SUPPORT		# evdev support in legacy drivers
device		evdev			# input event device support
device		uinput			# install /dev/uinput cdev
Index: releng/12.1
===================================================================
--- releng/12.1	(revision 352760)
+++ releng/12.1	(revision 352761)

Property changes on: releng/12.1
___________________________________________________________________
Modified: svn:mergeinfo
## -0,0 +0,2 ##
   Merged /stable/12:r352735,352741
   Merged /head:r341754-341755,342354-342355,342386-342388,342526,342528,342530-342531,342533-342536,342659,345479,345485,345573,347237,349849,349909