diff --git a/share/man/man4/amr.4 b/share/man/man4/amr.4 index 1a38ee98cad8..145ab0251e8a 100644 --- a/share/man/man4/amr.4 +++ b/share/man/man4/amr.4 @@ -1,241 +1,246 @@ .\" .\" Copyright (c) 2000 Jeroen Ruigrok van der Werven .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. The name of the author may not be used to endorse or promote products .\" derived from this software without specific prior written permission .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR .\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES .\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. .\" IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, .\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT .\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF .\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. .\" .\" $FreeBSD$ .\" .Dd March 29, 2006 .Dt AMR 4 .Os .Sh NAME .Nm amr .Nd MegaRAID SCSI/ATA/SATA RAID driver .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device pci" .Cd "device scbus" .Cd "device amr" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent amr_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 14.0 . .Sh DESCRIPTION The .Nm driver provides support for LSI Logic MegaRAID SCSI, ATA and SATA RAID controllers and legacy American Megatrends MegaRAID SCSI RAID controllers, including models relabeled and sold by Dell and Hewlett-Packard. .Pp LSI MegaRAID SAS controllers are supported by .Xr mfi 4 and will not work with this driver. 
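The DEPRECATION NOTICE above is paired, later in this same change, with a boot-time warning: amr_pci_attach() and ncr53c9x_attach() gain calls to gone_in_dev(9) so that users of the affected hardware also see the notice at runtime. A minimal sketch of that pattern for a hypothetical foo(4) driver (only the foo names are invented; gone_in_dev() is the interface this diff actually uses):

#include <sys/param.h>
#include <sys/systm.h>		/* gone_in_dev(9) */
#include <sys/bus.h>

/* Hypothetical attach routine for a deprecated foo(4) driver. */
static int
foo_attach(device_t dev)
{
	/* ... normal resource allocation and setup would go here ... */

	/*
	 * Log a deprecation warning against this device: the driver
	 * is scheduled for removal in FreeBSD 14.
	 */
	gone_in_dev(dev, 14, "foo(4) driver");
	return (0);
}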
.Sh HARDWARE Controllers supported by the .Nm driver include: .Pp .Bl -bullet -compact .It MegaRAID SATA 150-4 .It MegaRAID SATA 150-6 .It MegaRAID SATA 300-4X .It MegaRAID SATA 300-8X .It MegaRAID SCSI 320-1E .It MegaRAID SCSI 320-2E .It MegaRAID SCSI 320-4E .It MegaRAID SCSI 320-0X .It MegaRAID SCSI 320-2X .It MegaRAID SCSI 320-4X .It MegaRAID SCSI 320-0 .It MegaRAID SCSI 320-1 .It MegaRAID SCSI 320-2 .It MegaRAID SCSI 320-4 .It MegaRAID Series 418 .It MegaRAID i4 133 RAID .It MegaRAID Elite 1500 (Series 467) .It MegaRAID Elite 1600 (Series 493) .It MegaRAID Elite 1650 (Series 4xx) .It MegaRAID Enterprise 1200 (Series 428) .It MegaRAID Enterprise 1300 (Series 434) .It MegaRAID Enterprise 1400 (Series 438) .It MegaRAID Enterprise 1500 (Series 467) .It MegaRAID Enterprise 1600 (Series 471) .It MegaRAID Express 100 (Series 466WS) .It MegaRAID Express 200 (Series 466) .It MegaRAID Express 300 (Series 490) .It MegaRAID Express 500 (Series 475) .It Dell PERC .It Dell PERC 2/SC .It Dell PERC 2/DC .It Dell PERC 3/DCL .It Dell PERC 3/QC .It Dell PERC 4/DC .It Dell PERC 4/IM .It Dell PERC 4/SC .It Dell PERC 4/Di .It Dell PERC 4e/DC .It Dell PERC 4e/Di .It Dell PERC 4e/Si .It Dell PERC 4ei .It HP NetRAID-1/Si .It HP NetRAID-3/Si (D4943A) .It HP Embedded NetRAID .It Intel RAID Controller SRCS16 .It Intel RAID Controller SRCU42X .El .Sh DIAGNOSTICS .Ss Driver initialisation/shutdown phase .Bl -diag .It amr%d: memory window not available .It amr%d: I/O window not available .Pp The PCI BIOS did not allocate resources necessary for the correct operation of the controller. The driver cannot attach to this controller. .It amr%d: busmaster bit not set, enabling .Pp The PCI BIOS did not enable busmaster DMA, which is required for the correct operation of the controller. The driver has enabled this bit and initialisation will proceed. .It amr%d: can't allocate register window .It amr%d: can't allocate interrupt .It amr%d: can't set up interrupt .It amr%d: can't allocate parent DMA tag .It amr%d: can't allocate buffer DMA tag .It amr%d: can't allocate scatter/gather DMA tag .It amr%d: can't allocate s/g table .It amr%d: can't allocate mailbox tag .It amr%d: can't allocate mailbox memory .Pp A resource allocation error occurred while initialising the driver; initialisation has failed and the driver will not attach to this controller. .It amr%d: can't obtain configuration data from controller .It amr%d: can't obtain product data from controller .Pp The driver was unable to obtain vital configuration data from the controller. Initialisation has failed and the driver will not attach to this controller. .It amr%d: can't establish configuration hook .It amr%d: can't scan controller for drives .Pp The scan for logical drives managed by the controller failed. No drives will be attached. .It amr%d: device_add_child failed .It amr%d: bus_generic_attach returned %d .Pp Creation of the logical drive instances failed; attachment of one or more logical drives may have been aborted. .It amr%d: flushing cache... .Pp The controller cache is being flushed prior to shutdown or detach. .El .Ss Operational diagnostics .Bl -diag .It amr%d: I/O beyond end of unit (%u,%d > %u) .Pp A partitioning error or disk corruption has caused an I/O request beyond the end of the logical drive. This may also occur if FlexRAID Virtual Sizing is enabled and an I/O operation is attempted on a portion of the virtual drive beyond the actual capacity available. .It amr%d: polled command timeout .Pp An initialisation command timed out. 
The initialisation process may fail as a result. .It amr%d: bad slot %d completed .Pp The controller reported completion of a command that the driver did not issue. This may result in data corruption, and suggests a hardware or firmware problem with the system or controller. .It amr%d: I/O error - %x .Pp An I/O error has occurred. .El .Sh SEE ALSO .Xr cd 4 , .Xr da 4 , .Xr mfi 4 , .Xr sa 4 , .Xr scsi 4 .Sh AUTHORS .An -nosplit The .Nm driver was written by .An Mike Smith Aq Mt msmith@FreeBSD.org . .Pp This manual page was written by .An Mike Smith Aq Mt msmith@FreeBSD.org and .An Jeroen Ruigrok van der Werven Aq Mt asmodai@FreeBSD.org . diff --git a/share/man/man4/esp.4 b/share/man/man4/esp.4 index 74676f8f03f9..2bbc12c31329 100644 --- a/share/man/man4/esp.4 +++ b/share/man/man4/esp.4 @@ -1,111 +1,116 @@ .\" .\" Copyright (c) 2011 Marius Strobl .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" $FreeBSD$ .\" -.Dd November 1, 2011 +.Dd March 10, 2020 .Dt ESP 4 .Os .Sh NAME .Nm esp .Nd Emulex ESP, NCR 53C9x and QLogic FAS family SCSI controller driver .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device scbus" .Cd "device esp" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent esp_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 14.0 . .Sh DESCRIPTION The .Nm driver provides support for the .Tn AMD Am53C974, the .Tn Emulex ESP100, ESP100A, ESP200 and ESP406, the .Tn NCR 53C90, 53C94 and 53C96 as well as the .Tn QLogic FAS100A, FAS216, FAS366 and FAS408 .Tn SCSI controller chips found in a wide variety of systems and peripheral boards. .Sh HARDWARE Controllers supported by the .Nm driver include: .Pp .Bl -bullet -compact .It Sun ESP family .It Sun FAS family .It Tekram DC390 .It Tekram DC390T .El .Sh SEE ALSO .Xr cd 4 , .Xr ch 4 , .Xr da 4 , .Xr intro 4 , .Xr pci 4 , .Xr sa 4 , .Xr sbus 4 , .Xr scsi 4 , .Xr camcontrol 8 .Sh HISTORY The .Nm driver first appeared in .Nx 1.3 . The first .Fx version to include it was .Fx 5.3 .
.Sh AUTHORS .An -nosplit The .Nm driver was ported to .Fx by .An Scott Long Aq Mt scottl@FreeBSD.org and later considerably improved by .An Marius Strobl Aq Mt marius@FreeBSD.org . .Sh BUGS The .Nm driver should read the EEPROM settings of .Tn Tekram controllers. diff --git a/share/man/man4/iir.4 b/share/man/man4/iir.4 index d9e4309771cd..eba9b88eb50c 100644 --- a/share/man/man4/iir.4 +++ b/share/man/man4/iir.4 @@ -1,77 +1,82 @@ .\" $FreeBSD$ .\" Written by Tom Rhodes .\" This file is in the public domain. .\" .Dd August 8, 2004 .Dt IIR 4 .Os .Sh NAME .Nm iir .Nd Intel Integrated RAID controller and ICP Vortex driver +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 14.0 . .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device pci" .Cd "device scbus" .Cd "device iir" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent iir_load="YES" .Ed .Sh DESCRIPTION The .Nm driver interfaces with the Intel integrated RAID controller cards and all versions of the ICP Vortex controllers (including FC). .Sh HARDWARE Controllers supported by the .Nm driver include: .Pp .Bl -bullet -compact .It Intel RAID Controller SRCMR .It Intel Server RAID Controller U3-l (SRCU31a) .It Intel Server RAID Controller U3-1L (SRCU31La) .It Intel Server RAID Controller U3-2 (SRCU32) .It All past and future releases of Intel and ICP RAID Controllers. .El .Pp The following controllers are not supported: .Pp .Bl -bullet -compact .It Intel RAID Controller SRCU21 (discontinued) .It Intel RAID Controller SRCU31 (older revision, not compatible) .It Intel RAID Controller SRCU31L (older revision, not compatible) .El .Pp The SRCU31 and SRCU31L can be made compatible via a firmware update available from Intel. .Sh SEE ALSO .Xr cam 4 , .Xr pass 4 , .Xr xpt 4 , .Xr camcontrol 8 .Sh AUTHORS The .Nm driver is supported and maintained by .An -nosplit .An Achim Leubner Aq Mt Achim_Leubner@adaptec.com . .Pp This manual page was written by .An Tom Rhodes Aq Mt trhodes@FreeBSD.org and is based on information supplied by the driver authors and the website of .An Mike Smith Aq Mt msmith@FreeBSD.org . diff --git a/share/man/man4/mly.4 b/share/man/man4/mly.4 index 0a701762d415..f96844113542 100644 --- a/share/man/man4/mly.4 +++ b/share/man/man4/mly.4 @@ -1,270 +1,275 @@ .\" .\" Copyright (c) 2000 Michael Smith .\" Copyright (c) 2000 BSDi .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. The name of the author may not be used to endorse or promote products .\" derived from this software without specific prior written permission .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR .\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES .\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
.\" IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, .\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT .\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF .\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. .\" .\" $FreeBSD$ .\" .Dd August 10, 2004 .Dt MLY 4 .Os .Sh NAME .Nm mly .Nd Mylex AcceleRAID/eXtremeRAID family driver .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device pci" .Cd "device scbus" .Cd "device da" .Cd "device mly" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent mly_load="YES" .Ed +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 14.0 . .Sh DESCRIPTION The .Nm driver provides support for Mylex AcceleRAID and eXtremeRAID-family PCI to SCSI RAID controllers with version 6.00 and later firmware. .Pp Logical devices (array drives) attached to the controller are presented to the SCSI subsystem as though they were direct-access devices on a virtual SCSI bus. Physical devices which are not claimed by a logical device are presented on SCSI channels which match the physical channels on the controller. .Sh HARDWARE Controllers supported by the .Nm driver include: .Pp .Bl -bullet -compact .It Mylex AcceleRAID 160 .It Mylex AcceleRAID 170 .It Mylex AcceleRAID 352 .It Mylex eXtremeRAID 2000 .It Mylex eXtremeRAID 3000 .El .Pp Compatible Mylex controllers not listed should work, but have not been verified. .Sh DIAGNOSTICS .Ss Controller initialisation phase .Bl -diag .It "mly%d: controller initialisation started" .It "mly%d: initialisation complete" .Pp The controller firmware has started initialisation. Normally this process is performed by the controller BIOS, but the driver may need to do this in cases where the BIOS has failed, or is not compatible (e.g.\& on non-x86 systems). .It "mly%d: drive spinup in progress" .Pp Drive startup is in progress; this may take several minutes. .It "mly%d: mirror race recovery failed, one or more drives offline" .It "mly%d: mirror race recovery in progress" .It "mly%d: mirror race recovery on a critical drive" .Pp These error codes are undocumented. .It "mly%d: FATAL MEMORY PARITY ERROR" .Pp Firmware detected a fatal memory error; the driver will not attempt to attach to this controller. .It "mly%d: unknown initialisation code %x" .Pp An unknown error occurred during initialisation; it will be ignored. .El .Ss Driver initialisation/shutdown phase .Bl -diag .It "mly%d: can't enable busmaster feature" .It "mly%d: memory window not available" .It "mly%d: can't allocate register window" .It "mly%d: can't allocate interrupt" .It "mly%d: can't set up interrupt" .Pp The system's PCI BIOS has not correctly configured the controller's PCI interface; initialisation has failed and the driver will not attach to this controller. 
.It "mly%d: can't allocate parent DMA tag" .It "mly%d: can't allocate buffer DMA tag" .It "mly%d: can't allocate command packet DMA tag" .It "mly%d: can't allocate scatter/gather DMA tag" .It "mly%d: can't allocate s/g table" .It "mly%d: can't allocate memory mailbox DMA tag" .It "mly%d: can't allocate memory mailbox" .Pp A resource allocation error occurred while initialising the driver; initialisation has failed and the driver will not attach to this controller. .It "mly%d: BTL rescan result corrupted" .Pp The results of a scan for an attached device were corrupted. One or more devices may not be correctly reported. .It "mly%d: flushing cache..." .Pp The controller cache is being flushed prior to detach or shutdown. .El .Ss Operational diagnostics .Bl -diag .It "mly%d: physical device %d:%d online" .It "mly%d: physical device %d:%d standby" .It "mly%d: physical device %d:%d automatic rebuild started" .It "mly%d: physical device %d:%d manual rebuild started" .It "mly%d: physical device %d:%d rebuild completed" .It "mly%d: physical device %d:%d rebuild cancelled" .It "mly%d: physical device %d:%d rebuild failed for unknown reasons" .It "mly%d: physical device %d:%d rebuild failed due to new physical device" .It "mly%d: physical device %d:%d rebuild failed due to logical drive failure" .It "mly%d: physical device %d:%d found" .It "mly%d: physical device %d:%d gone" .It "mly%d: physical device %d:%d unconfigured" .It "mly%d: physical device %d:%d expand capacity started" .It "mly%d: physical device %d:%d expand capacity completed" .It "mly%d: physical device %d:%d expand capacity failed" .It "mly%d: physical device %d:%d parity error" .It "mly%d: physical device %d:%d soft error" .It "mly%d: physical device %d:%d miscellaneous error" .It "mly%d: physical device %d:%d reset" .It "mly%d: physical device %d:%d active spare found" .It "mly%d: physical device %d:%d warm spare found" .It "mly%d: physical device %d:%d initialization started" .It "mly%d: physical device %d:%d initialization completed" .It "mly%d: physical device %d:%d initialization failed" .It "mly%d: physical device %d:%d initialization cancelled" .It "mly%d: physical device %d:%d write recovery failed" .It "mly%d: physical device %d:%d scsi bus reset failed" .It "mly%d: physical device %d:%d double check condition" .It "mly%d: physical device %d:%d device cannot be accessed" .It "mly%d: physical device %d:%d gross error on scsi processor" .It "mly%d: physical device %d:%d bad tag from device" .It "mly%d: physical device %d:%d command timeout" .It "mly%d: physical device %d:%d system reset" .It "mly%d: physical device %d:%d busy status or parity error" .It "mly%d: physical device %d:%d host set device to failed state" .It "mly%d: physical device %d:%d selection timeout" .It "mly%d: physical device %d:%d scsi bus phase error" .It "mly%d: physical device %d:%d device returned unknown status" .It "mly%d: physical device %d:%d device not ready" .It "mly%d: physical device %d:%d device not found at startup" .It "mly%d: physical device %d:%d COD write operation failed" .It "mly%d: physical device %d:%d BDT write operation failed" .It "mly%d: physical device %d:%d missing at startup" .It "mly%d: physical device %d:%d start rebuild failed due to physical drive too small" .It "mly%d: physical device %d:%d sense data received" .It "mly%d: sense key %d asc %02x ascq %02x" .It "mly%d: info %4D csi %4D" .It "mly%d: physical device %d:%d offline" .It "mly%d: sense key %d asc %02x ascq %02x" .It "mly%d: info %4D csi %4D" .Pp The reported event 
refers to the physical device at the given channel:target address. .It "mly%d: logical device %d (%s) consistency check started" .It "mly%d: logical device %d (%s) consistency check completed" .It "mly%d: logical device %d (%s) consistency check cancelled" .It "mly%d: logical device %d (%s) consistency check completed with errors" .It "mly%d: logical device %d (%s) consistency check failed due to logical drive failure" .It "mly%d: logical device %d (%s) consistency check failed due to physical device failure" .It "mly%d: logical device %d (%s) automatic rebuild started" .It "mly%d: logical device %d (%s) manual rebuild started" .It "mly%d: logical device %d (%s) rebuild completed" .It "mly%d: logical device %d (%s) rebuild cancelled" .It "mly%d: logical device %d (%s) rebuild failed for unknown reasons" .It "mly%d: logical device %d (%s) rebuild failed due to new physical device" .It "mly%d: logical device %d (%s) rebuild failed due to logical drive failure" .It "mly%d: logical device %d (%s) offline" .It "mly%d: logical device %d (%s) critical" .It "mly%d: logical device %d (%s) online" .It "mly%d: logical device %d (%s) initialization started" .It "mly%d: logical device %d (%s) initialization completed" .It "mly%d: logical device %d (%s) initialization cancelled" .It "mly%d: logical device %d (%s) initialization failed" .It "mly%d: logical device %d (%s) found" .It "mly%d: logical device %d (%s) gone" .It "mly%d: logical device %d (%s) expand capacity started" .It "mly%d: logical device %d (%s) expand capacity completed" .It "mly%d: logical device %d (%s) expand capacity failed" .It "mly%d: logical device %d (%s) bad block found" .It "mly%d: logical device %d (%s) size changed" .It "mly%d: logical device %d (%s) type changed" .It "mly%d: logical device %d (%s) bad data block found" .It "mly%d: logical device %d (%s) read of data block in bdt" .It "mly%d: logical device %d (%s) write back data for disk block lost" .Pp The event report will include the name of the SCSI device which has attached to the device if possible. .It "mly%d: enclosure %d fan %d failed" .It "mly%d: enclosure %d fan %d ok" .It "mly%d: enclosure %d fan %d not present" .It "mly%d: enclosure %d power supply %d failed" .It "mly%d: enclosure %d power supply %d ok" .It "mly%d: enclosure %d power supply %d not present" .It "mly%d: enclosure %d temperature sensor %d failed" .It "mly%d: enclosure %d temperature sensor %d critical" .It "mly%d: enclosure %d temperature sensor %d ok" .It "mly%d: enclosure %d temperature sensor %d not present" .It "mly%d: enclosure %d unit %d access critical" .It "mly%d: enclosure %d unit %d access ok" .It "mly%d: enclosure %d unit %d access offline" .Pp These events refer to external enclosures by number. The driver does not attempt to name the enclosures. .It "mly%d: controller cache write back error" .It "mly%d: controller battery backup unit found" .It "mly%d: controller battery backup unit charge level low" .It "mly%d: controller battery backup unit charge level ok" .It "mly%d: controller installation aborted" .It "mly%d: controller mirror race recovery in progress" .It "mly%d: controller mirror race on critical drive" .It "mly%d: controller memory soft ecc error" .It "mly%d: controller memory hard ecc error" .It "mly%d: controller battery backup unit failed" .Pp These events report controller status changes. .El .Sh AUTHORS .An -nosplit The .Nm driver was written by .An Michael Smith Aq Mt msmith@FreeBSD.org . .Pp This manual page was written by .An Michael Smith Aq Mt msmith@FreeBSD.org . 
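The controller reports the conditions in the DIAGNOSTICS list above as numeric event codes; the driver translates each code into one of these fixed strings via a table lookup before printing it. A minimal sketch of that kind of translation (the table contents and names here are hypothetical, not the actual mly(4) tables):

#include <stddef.h>	/* NULL */

/* Hypothetical event-code-to-message table; real mly(4) tables differ. */
struct event_msg {
	int		code;	/* controller event code */
	const char	*text;	/* message the driver prints */
};

static const struct event_msg event_table[] = {
	{ 0x0001, "physical device %d:%d online" },
	{ 0x0002, "physical device %d:%d standby" },
	{ 0x0000, NULL }	/* terminator */
};

static const char *
event_to_text(int code)
{
	const struct event_msg *e;

	/* Linear scan; the tables are small and events are rare. */
	for (e = event_table; e->text != NULL; e++)
		if (e->code == code)
			return (e->text);
	return ("unknown event 0x%x");
}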
.Sh BUGS The driver does not yet provide an external management interface. .Pp Enclosures are not named or otherwise identified in event messages. diff --git a/share/man/man4/twa.4 b/share/man/man4/twa.4 index dbafe91885cc..bdc3935d8079 100644 --- a/share/man/man4/twa.4 +++ b/share/man/man4/twa.4 @@ -1,135 +1,140 @@ .\" .\" Copyright (c) 2004 3ware, Inc. .\" Copyright (c) 2000 BSDi .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR .\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES .\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. .\" IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, .\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT .\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF .\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. .\" .\" $FreeBSD$ .\" .Dd May 9, 2007 .Dt TWA 4 .Os .Sh NAME .Nm twa .Nd 3ware 9000/9500/9550/9650 series SATA RAID controller driver +.Sh DEPRECATION NOTICE +The +.Nm +driver is not present in +.Fx 14.0 . .Sh SYNOPSIS To compile this driver into the kernel, place the following lines in your kernel configuration file: .Bd -ragged -offset indent .Cd "device scbus" .Cd "device twa" .Ed .Pp Alternatively, to load the driver as a module at boot time, place the following line in .Xr loader.conf 5 : .Bd -literal -offset indent twa_load="YES" .Ed .Sh DESCRIPTION The .Nm driver provides support for AMCC's 3ware 9000/9500/9550/9650 series SATA controllers. .Pp These controllers are available in 4, 8, 12 or 16-port configurations, and support the following RAID levels: 0, 1, 10, 5, 50. The device nodes for the controllers are of the form .Pa /dev/twa Ns Ar X , where .Ar X is the controller number. The driver is implemented as a SCSI SIM under CAM, and, as such, the logical units that it controls are accessible via the device nodes, .Pa /dev/da Ns Ar Y , where .Ar Y is the logical unit number.
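The DESCRIPTION above notes that twa(4) is implemented as a SCSI SIM under CAM, which is why its logical units surface as /dev/daY nodes. A condensed sketch of the registration sequence behind that statement, modeled on the calls ncr53c9x_attach() makes later in this diff (the foo softc layout, names and queue depth are hypothetical; most error paths trimmed):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <cam/cam.h>
#include <cam/cam_ccb.h>
#include <cam/cam_sim.h>
#include <cam/cam_xpt_sim.h>

struct foo_softc {			/* hypothetical softc */
	device_t	dev;
	struct mtx	lock;
	struct cam_sim	*sim;
};

#define	FOO_MAX_IO	32		/* hypothetical queue depth */

static void	foo_action(struct cam_sim *sim, union ccb *ccb);	/* CCB handler, body omitted */
static void	foo_poll(struct cam_sim *sim);				/* poll handler, body omitted */

static int
foo_cam_attach(struct foo_softc *sc)
{
	struct cam_devq *devq;

	/* Allocate the request queue and the SIM ("foo" is the SIM name). */
	devq = cam_simq_alloc(FOO_MAX_IO);
	if (devq == NULL)
		return (ENOMEM);
	sc->sim = cam_sim_alloc(foo_action, foo_poll, "foo", sc,
	    device_get_unit(sc->dev), &sc->lock, 1, FOO_MAX_IO, devq);
	if (sc->sim == NULL) {
		cam_simq_free(devq);
		return (ENOMEM);
	}

	/* Register the virtual SCSI bus; CAM probes it and attaches da(4). */
	mtx_lock(&sc->lock);
	if (xpt_bus_register(sc->sim, sc->dev, 0) != CAM_SUCCESS) {
		mtx_unlock(&sc->lock);
		cam_sim_free(sc->sim, TRUE);	/* TRUE also frees devq */
		return (EIO);
	}
	mtx_unlock(&sc->lock);
	return (0);
}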
.Sh HARDWARE The .Nm driver supports the following SATA RAID controllers: .Pp .Bl -bullet -compact .It AMCC's 3ware 9500S-4LP .It AMCC's 3ware 9500S-8 .It AMCC's 3ware 9500S-8MI .It AMCC's 3ware 9500S-12 .It AMCC's 3ware 9500S-12MI .It AMCC's 3ware 9500SX-4LP .It AMCC's 3ware 9500SX-8LP .It AMCC's 3ware 9500SX-12 .It AMCC's 3ware 9500SX-12MI .It AMCC's 3ware 9500SX-16ML .It AMCC's 3ware 9550SX-4LP .It AMCC's 3ware 9550SX-8LP .It AMCC's 3ware 9550SX-12 .It AMCC's 3ware 9550SX-12MI .It AMCC's 3ware 9550SX-16ML .It AMCC's 3ware 9650SE-2LP .It AMCC's 3ware 9650SE-4LPML .It AMCC's 3ware 9650SE-8LPML .It AMCC's 3ware 9650SE-12ML .It AMCC's 3ware 9650SE-16ML .It AMCC's 3ware 9650SE-24M8 .El .Sh DIAGNOSTICS Whenever the driver encounters a command failure, it prints an error code in the format: .Qq Li "ERROR: (<error source>: <error code>):" , followed by a text description of the error. There are other error messages and warnings that the driver prints, depending on the kinds of errors that it encounters. If the driver is compiled with .Dv TWA_DEBUG defined, it prints verbose debug messages, the quantity of which varies depending on the value assigned to .Dv TWA_DEBUG (0 to 10). .Sh AUTHORS The .Nm driver and manual page were written by .An Vinod Kashyap Aq Mt vkashyap@FreeBSD.org . diff --git a/sys/dev/amr/amr_pci.c b/sys/dev/amr/amr_pci.c index 25b37eda3895..11941768d879 100644 --- a/sys/dev/amr/amr_pci.c +++ b/sys/dev/amr/amr_pci.c @@ -1,707 +1,709 @@ /*- * Copyright (c) 1999,2000 Michael Smith * Copyright (c) 2000 BSDi * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2002 Eric Moore * Copyright (c) 2002, 2004 LSI Logic Corporation * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3.
The party using or redistributing the source code and binary forms * agrees to the disclaimer below and the terms and conditions set forth * herein. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include static int amr_pci_probe(device_t dev); static int amr_pci_attach(device_t dev); static int amr_pci_detach(device_t dev); static int amr_pci_shutdown(device_t dev); static int amr_pci_suspend(device_t dev); static int amr_pci_resume(device_t dev); static void amr_pci_intr(void *arg); static void amr_pci_free(struct amr_softc *sc); static void amr_sglist_helper(void *arg, bus_dma_segment_t *segs, int nseg, int error); static int amr_sglist_map(struct amr_softc *sc); static int amr_setup_mbox(struct amr_softc *sc); static int amr_ccb_map(struct amr_softc *sc); static u_int amr_force_sg32 = 0; SYSCTL_DECL(_hw_amr); SYSCTL_UINT(_hw_amr, OID_AUTO, force_sg32, CTLFLAG_RDTUN, &amr_force_sg32, 0, "Force the AMR driver to use 32bit scatter gather"); static device_method_t amr_methods[] = { /* Device interface */ DEVMETHOD(device_probe, amr_pci_probe), DEVMETHOD(device_attach, amr_pci_attach), DEVMETHOD(device_detach, amr_pci_detach), DEVMETHOD(device_shutdown, amr_pci_shutdown), DEVMETHOD(device_suspend, amr_pci_suspend), DEVMETHOD(device_resume, amr_pci_resume), DEVMETHOD_END }; static driver_t amr_pci_driver = { "amr", amr_methods, sizeof(struct amr_softc) }; static struct amr_ident { uint16_t vendor; uint16_t device; int flags; #define AMR_ID_PROBE_SIG (1<<0) /* generic i960RD, check signature */ #define AMR_ID_DO_SG64 (1<<1) #define AMR_ID_QUARTZ (1<<2) } amr_device_ids[] = { {0x101e, 0x9010, 0}, {0x101e, 0x9060, 0}, {0x8086, 0x1960, AMR_ID_QUARTZ | AMR_ID_PROBE_SIG}, {0x101e, 0x1960, AMR_ID_QUARTZ}, {0x1000, 0x1960, AMR_ID_QUARTZ | AMR_ID_DO_SG64 | AMR_ID_PROBE_SIG}, {0x1000, 0x0407, AMR_ID_QUARTZ | AMR_ID_DO_SG64}, {0x1000, 0x0408, AMR_ID_QUARTZ | AMR_ID_DO_SG64}, {0x1000, 0x0409, AMR_ID_QUARTZ | AMR_ID_DO_SG64}, {0x1028, 0x000e, AMR_ID_QUARTZ | AMR_ID_DO_SG64 | AMR_ID_PROBE_SIG}, /* perc4/di i960 */ {0x1028, 0x000f, AMR_ID_QUARTZ | AMR_ID_DO_SG64}, /* perc4/di Verde*/ {0x1028, 0x0013, AMR_ID_QUARTZ | AMR_ID_DO_SG64}, /* perc4/di */ {0, 0, 0} }; static devclass_t amr_devclass; DRIVER_MODULE(amr, pci, amr_pci_driver, amr_devclass, 0, 0); MODULE_PNP_INFO("U16:vendor;U16:device", pci, amr, amr_device_ids, nitems(amr_device_ids) - 1); MODULE_DEPEND(amr, pci, 1, 1, 1); MODULE_DEPEND(amr, cam, 1, 1, 1); static struct amr_ident * amr_find_ident(device_t dev) { struct amr_ident *id; int sig; for (id = amr_device_ids; id->vendor != 0; id++) { if ((pci_get_vendor(dev) == id->vendor) && (pci_get_device(dev) == id->device)) { /* do we 
need to test for a signature? */ if (id->flags & AMR_ID_PROBE_SIG) { sig = pci_read_config(dev, AMR_CFG_SIG, 2); if ((sig != AMR_SIGNATURE_1) && (sig != AMR_SIGNATURE_2)) continue; } return (id); } } return (NULL); } static int amr_pci_probe(device_t dev) { debug_called(1); if (amr_find_ident(dev) != NULL) { device_set_desc(dev, LSI_DESC_PCI); return(BUS_PROBE_DEFAULT); } return(ENXIO); } static int amr_pci_attach(device_t dev) { struct amr_softc *sc; struct amr_ident *id; int rid, rtype, error; debug_called(1); /* * Initialise softc. */ sc = device_get_softc(dev); bzero(sc, sizeof(*sc)); sc->amr_dev = dev; /* assume failure is 'not configured' */ error = ENXIO; /* * Determine board type. */ if ((id = amr_find_ident(dev)) == NULL) return (ENXIO); if (id->flags & AMR_ID_QUARTZ) { sc->amr_type |= AMR_TYPE_QUARTZ; } if ((amr_force_sg32 == 0) && (id->flags & AMR_ID_DO_SG64) && (sizeof(vm_paddr_t) > 4)) { device_printf(dev, "Using 64-bit DMA\n"); sc->amr_type |= AMR_TYPE_SG64; } /* force the busmaster enable bit on */ pci_enable_busmaster(dev); /* * Allocate the PCI register window. */ rid = PCIR_BAR(0); rtype = AMR_IS_QUARTZ(sc) ? SYS_RES_MEMORY : SYS_RES_IOPORT; sc->amr_reg = bus_alloc_resource_any(dev, rtype, &rid, RF_ACTIVE); if (sc->amr_reg == NULL) { device_printf(sc->amr_dev, "can't allocate register window\n"); goto out; } sc->amr_btag = rman_get_bustag(sc->amr_reg); sc->amr_bhandle = rman_get_bushandle(sc->amr_reg); /* * Allocate and connect our interrupt. */ rid = 0; sc->amr_irq = bus_alloc_resource_any(sc->amr_dev, SYS_RES_IRQ, &rid, RF_SHAREABLE | RF_ACTIVE); if (sc->amr_irq == NULL) { device_printf(sc->amr_dev, "can't allocate interrupt\n"); goto out; } if (bus_setup_intr(sc->amr_dev, sc->amr_irq, INTR_TYPE_BIO | INTR_ENTROPY | INTR_MPSAFE, NULL, amr_pci_intr, sc, &sc->amr_intr)) { device_printf(sc->amr_dev, "can't set up interrupt\n"); goto out; } debug(2, "interrupt attached"); /* assume failure is 'out of memory' */ error = ENOMEM; /* * Allocate the parent bus DMA tag appropriate for PCI. */ if (bus_dma_tag_create(bus_get_dma_tag(dev), /* PCI parent */ 1, 0, /* alignment,boundary */ AMR_IS_SG64(sc) ? BUS_SPACE_MAXADDR : BUS_SPACE_MAXADDR_32BIT, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ BUS_SPACE_MAXSIZE, /* maxsize */ BUS_SPACE_UNRESTRICTED, /* nsegments */ BUS_SPACE_MAXSIZE_32BIT, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->amr_parent_dmat)) { device_printf(dev, "can't allocate parent DMA tag\n"); goto out; } /* * Create DMA tag for mapping buffers into controller-addressable space. 
*/ if (bus_dma_tag_create(sc->amr_parent_dmat, /* parent */ 1, 0, /* alignment,boundary */ BUS_SPACE_MAXADDR_32BIT, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ DFLTPHYS, /* maxsize */ AMR_NSEG, /* nsegments */ BUS_SPACE_MAXSIZE_32BIT, /* maxsegsize */ 0, /* flags */ busdma_lock_mutex, /* lockfunc */ &sc->amr_list_lock, /* lockarg */ &sc->amr_buffer_dmat)) { device_printf(sc->amr_dev, "can't allocate buffer DMA tag\n"); goto out; } if (bus_dma_tag_create(sc->amr_parent_dmat, /* parent */ 1, 0, /* alignment,boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ DFLTPHYS, /* maxsize */ AMR_NSEG, /* nsegments */ BUS_SPACE_MAXSIZE_32BIT, /* maxsegsize */ 0, /* flags */ busdma_lock_mutex, /* lockfunc */ &sc->amr_list_lock, /* lockarg */ &sc->amr_buffer64_dmat)) { device_printf(sc->amr_dev, "can't allocate buffer DMA tag\n"); goto out; } debug(2, "dma tag done"); /* * Allocate and set up mailbox in a bus-visible fashion. */ mtx_init(&sc->amr_list_lock, "AMR List Lock", NULL, MTX_DEF); mtx_init(&sc->amr_hw_lock, "AMR HW Lock", NULL, MTX_DEF); if ((error = amr_setup_mbox(sc)) != 0) goto out; debug(2, "mailbox setup"); /* * Build the scatter/gather buffers. */ if ((error = amr_sglist_map(sc)) != 0) goto out; debug(2, "s/g list mapped"); if ((error = amr_ccb_map(sc)) != 0) goto out; debug(2, "ccb mapped"); /* * Do bus-independent initialisation, bring controller online. */ error = amr_attach(sc); out: if (error) amr_pci_free(sc); + else + gone_in_dev(dev, 14, "amr(4) driver"); return(error); } /******************************************************************************** * Disconnect from the controller completely, in preparation for unload. */ static int amr_pci_detach(device_t dev) { struct amr_softc *sc = device_get_softc(dev); int error; debug_called(1); if (sc->amr_state & AMR_STATE_OPEN) return(EBUSY); if ((error = amr_pci_shutdown(dev))) return(error); amr_pci_free(sc); return(0); } /******************************************************************************** * Bring the controller down to a dormant state and detach all child devices. * * This function is called before detach, system shutdown, or before performing * an operation which may add or delete system disks. (Call amr_startup to * resume normal operation.) * * Note that we can assume that the bioq on the controller is empty, as we won't * allow shutdown if any device is open. */ static int amr_pci_shutdown(device_t dev) { struct amr_softc *sc = device_get_softc(dev); int i,error; debug_called(1); /* mark ourselves as in-shutdown */ sc->amr_state |= AMR_STATE_SHUTDOWN; /* flush controller */ device_printf(sc->amr_dev, "flushing cache..."); printf("%s\n", amr_flush(sc) ? "failed" : "done"); error = 0; /* delete all our child devices */ for(i = 0 ; i < AMR_MAXLD; i++) { if( sc->amr_drive[i].al_disk != 0) { if((error = device_delete_child(sc->amr_dev,sc->amr_drive[i].al_disk)) != 0) goto shutdown_out; sc->amr_drive[i].al_disk = 0; } } /* XXX disable interrupts? */ shutdown_out: return(error); } /******************************************************************************** * Bring the controller to a quiescent state, ready for system suspend. */ static int amr_pci_suspend(device_t dev) { struct amr_softc *sc = device_get_softc(dev); debug_called(1); sc->amr_state |= AMR_STATE_SUSPEND; /* flush controller */ device_printf(sc->amr_dev, "flushing cache..."); printf("%s\n", amr_flush(sc) ? "failed" : "done"); /* XXX disable interrupts?
*/ return(0); } /******************************************************************************** * Bring the controller back to a state ready for operation. */ static int amr_pci_resume(device_t dev) { struct amr_softc *sc = device_get_softc(dev); debug_called(1); sc->amr_state &= ~AMR_STATE_SUSPEND; /* XXX enable interrupts? */ return(0); } /******************************************************************************* * Take an interrupt, or be poked by other code to look for interrupt-worthy * status. */ static void amr_pci_intr(void *arg) { struct amr_softc *sc = (struct amr_softc *)arg; debug_called(3); /* collect finished commands, queue anything waiting */ amr_done(sc); } /******************************************************************************** * Free all of the resources associated with (sc) * * Should not be called if the controller is active. */ static void amr_pci_free(struct amr_softc *sc) { void *p; debug_called(1); amr_free(sc); /* destroy data-transfer DMA tag */ if (sc->amr_buffer_dmat) bus_dma_tag_destroy(sc->amr_buffer_dmat); if (sc->amr_buffer64_dmat) bus_dma_tag_destroy(sc->amr_buffer64_dmat); /* free and destroy DMA memory and tag for passthrough pool */ if (sc->amr_ccb) { bus_dmamap_unload(sc->amr_ccb_dmat, sc->amr_ccb_dmamap); bus_dmamem_free(sc->amr_ccb_dmat, sc->amr_ccb, sc->amr_ccb_dmamap); } if (sc->amr_ccb_dmat) bus_dma_tag_destroy(sc->amr_ccb_dmat); /* free and destroy DMA memory and tag for s/g lists */ if (sc->amr_sgtable) { bus_dmamap_unload(sc->amr_sg_dmat, sc->amr_sg_dmamap); bus_dmamem_free(sc->amr_sg_dmat, sc->amr_sgtable, sc->amr_sg_dmamap); } if (sc->amr_sg_dmat) bus_dma_tag_destroy(sc->amr_sg_dmat); /* free and destroy DMA memory and tag for mailbox */ p = (void *)(uintptr_t)(volatile void *)sc->amr_mailbox64; if (sc->amr_mailbox) { bus_dmamap_unload(sc->amr_mailbox_dmat, sc->amr_mailbox_dmamap); bus_dmamem_free(sc->amr_mailbox_dmat, p, sc->amr_mailbox_dmamap); } if (sc->amr_mailbox_dmat) bus_dma_tag_destroy(sc->amr_mailbox_dmat); /* disconnect the interrupt handler */ if (sc->amr_intr) bus_teardown_intr(sc->amr_dev, sc->amr_irq, sc->amr_intr); if (sc->amr_irq != NULL) bus_release_resource(sc->amr_dev, SYS_RES_IRQ, 0, sc->amr_irq); /* destroy the parent DMA tag */ if (sc->amr_parent_dmat) bus_dma_tag_destroy(sc->amr_parent_dmat); /* release the register window mapping */ if (sc->amr_reg != NULL) bus_release_resource(sc->amr_dev, AMR_IS_QUARTZ(sc) ? SYS_RES_MEMORY : SYS_RES_IOPORT, PCIR_BAR(0), sc->amr_reg); } /******************************************************************************** * Allocate and map the scatter/gather table in bus space. */ static void amr_sglist_helper(void *arg, bus_dma_segment_t *segs, int nseg, int error) { uint32_t *addr; debug_called(1); addr = arg; *addr = segs[0].ds_addr; } static int amr_sglist_map(struct amr_softc *sc) { size_t segsize; void *p; int error; debug_called(1); /* * Create a single tag describing a region large enough to hold all of * the s/g lists we will need. * * Note that we could probably use AMR_LIMITCMD here, but that may become * tunable. 
*/ if (AMR_IS_SG64(sc)) segsize = sizeof(struct amr_sg64entry) * AMR_NSEG * AMR_MAXCMD; else segsize = sizeof(struct amr_sgentry) * AMR_NSEG * AMR_MAXCMD; error = bus_dma_tag_create(sc->amr_parent_dmat, /* parent */ 512, 0, /* alignment,boundary */ BUS_SPACE_MAXADDR_32BIT, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ segsize, 1, /* maxsize, nsegments */ BUS_SPACE_MAXSIZE_32BIT, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->amr_sg_dmat); if (error != 0) { device_printf(sc->amr_dev, "can't allocate scatter/gather DMA tag\n"); return(ENOMEM); } /* * Allocate enough s/g maps for all commands and permanently map them into * controller-visible space. * * XXX this assumes we can get enough space for all the s/g maps in one * contiguous slab. We may need to switch to a more complex arrangement * where we allocate in smaller chunks and keep a lookup table from slot * to bus address. * * XXX HACK ALERT: at least some controllers don't like the s/g memory * being allocated below 0x2000. We leak some memory if * we get some below this mark and allocate again. We * should be able to avoid this with the tag setup, but * that doesn't seem to work. */ retry: error = bus_dmamem_alloc(sc->amr_sg_dmat, (void **)&p, BUS_DMA_NOWAIT, &sc->amr_sg_dmamap); if (error) { device_printf(sc->amr_dev, "can't allocate s/g table\n"); return(ENOMEM); } bus_dmamap_load(sc->amr_sg_dmat, sc->amr_sg_dmamap, p, segsize, amr_sglist_helper, &sc->amr_sgbusaddr, 0); if (sc->amr_sgbusaddr < 0x2000) { debug(1, "s/g table too low (0x%x), reallocating\n", sc->amr_sgbusaddr); goto retry; } if (AMR_IS_SG64(sc)) sc->amr_sg64table = (struct amr_sg64entry *)p; sc->amr_sgtable = (struct amr_sgentry *)p; return(0); } /******************************************************************************** * Allocate and set up mailbox areas for the controller (sc) * * The basic mailbox structure should be 16-byte aligned. */ static int amr_setup_mbox(struct amr_softc *sc) { int error; void *p; uint32_t baddr; debug_called(1); /* * Create a single tag describing a region large enough to hold the entire * mailbox. */ error = bus_dma_tag_create(sc->amr_parent_dmat, /* parent */ 16, 0, /* alignment,boundary */ BUS_SPACE_MAXADDR_32BIT, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ sizeof(struct amr_mailbox64), /* maxsize */ 1, /* nsegments */ BUS_SPACE_MAXSIZE_32BIT, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->amr_mailbox_dmat); if (error != 0) { device_printf(sc->amr_dev, "can't allocate mailbox tag\n"); return(ENOMEM); } /* * Allocate the mailbox structure and permanently map it into * controller-visible space. */ error = bus_dmamem_alloc(sc->amr_mailbox_dmat, (void **)&p, BUS_DMA_NOWAIT, &sc->amr_mailbox_dmamap); if (error) { device_printf(sc->amr_dev, "can't allocate mailbox memory\n"); return(ENOMEM); } bus_dmamap_load(sc->amr_mailbox_dmat, sc->amr_mailbox_dmamap, p, sizeof(struct amr_mailbox64), amr_sglist_helper, &baddr, 0); /* * Conventional mailbox is inside the mailbox64 region. */ /* save physical base of the basic mailbox structure */ sc->amr_mailboxphys = baddr + offsetof(struct amr_mailbox64, mb); bzero(p, sizeof(struct amr_mailbox64)); sc->amr_mailbox64 = (struct amr_mailbox64 *)p; sc->amr_mailbox = &sc->amr_mailbox64->mb; return(0); } static int amr_ccb_map(struct amr_softc *sc) { int ccbsize, error; /* * Passthrough and Extended passthrough structures will share the same * memory.
*/ ccbsize = sizeof(union amr_ccb) * AMR_MAXCMD; error = bus_dma_tag_create(sc->amr_parent_dmat, /* parent */ 128, 0, /* alignment,boundary */ BUS_SPACE_MAXADDR_32BIT,/* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ ccbsize, /* maxsize */ 1, /* nsegments */ ccbsize, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->amr_ccb_dmat); if (error != 0) { device_printf(sc->amr_dev, "can't allocate ccb tag\n"); return (ENOMEM); } error = bus_dmamem_alloc(sc->amr_ccb_dmat, (void **)&sc->amr_ccb, BUS_DMA_NOWAIT, &sc->amr_ccb_dmamap); if (error) { device_printf(sc->amr_dev, "can't allocate ccb memory\n"); return (ENOMEM); } bus_dmamap_load(sc->amr_ccb_dmat, sc->amr_ccb_dmamap, sc->amr_ccb, ccbsize, amr_sglist_helper, &sc->amr_ccb_busaddr, 0); bzero(sc->amr_ccb, ccbsize); return (0); } diff --git a/sys/dev/esp/ncr53c9x.c b/sys/dev/esp/ncr53c9x.c index 98d40ce70697..6068497ce76b 100644 --- a/sys/dev/esp/ncr53c9x.c +++ b/sys/dev/esp/ncr53c9x.c @@ -1,3259 +1,3260 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD AND BSD-2-Clause NetBSD * * Copyright (c) 2004 Scott Long * Copyright (c) 2005, 2008 Marius Strobl * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ /* $NetBSD: ncr53c9x.c,v 1.145 2012/06/18 21:23:56 martin Exp $ */ /*- * Copyright (c) 1998, 2002 The NetBSD Foundation, Inc. * All rights reserved. * * This code is derived from software contributed to The NetBSD Foundation * by Charles M. Hannum. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. */ /*- * Copyright (c) 1994 Peter Galbavy * Copyright (c) 1995 Paul Kranenburg * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by Peter Galbavy * 4. The name of the author may not be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. */ /* * Based on aic6360 by Jarle Greipsland * * Acknowledgements: Many of the algorithms used in this driver are * inspired by the work of Julian Elischer (julian@FreeBSD.org) and * Charles Hannum (mycroft@duality.gnu.ai.mit.edu). Thanks a million! 
*/ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include devclass_t esp_devclass; MODULE_DEPEND(esp, cam, 1, 1, 1); #ifdef NCR53C9X_DEBUG int ncr53c9x_debug = NCR_SHOWMISC /* | NCR_SHOWPHASE | NCR_SHOWTRAC | NCR_SHOWCMDS */; #endif static void ncr53c9x_abort(struct ncr53c9x_softc *sc, struct ncr53c9x_ecb *ecb); static void ncr53c9x_action(struct cam_sim *sim, union ccb *ccb); static void ncr53c9x_async(void *cbarg, uint32_t code, struct cam_path *path, void *arg); static void ncr53c9x_callout(void *arg); static void ncr53c9x_clear(struct ncr53c9x_softc *sc, cam_status result); static void ncr53c9x_clear_target(struct ncr53c9x_softc *sc, int target, cam_status result); static void ncr53c9x_dequeue(struct ncr53c9x_softc *sc, struct ncr53c9x_ecb *ecb); static void ncr53c9x_done(struct ncr53c9x_softc *sc, struct ncr53c9x_ecb *ecb); static void ncr53c9x_free_ecb(struct ncr53c9x_softc *sc, struct ncr53c9x_ecb *ecb); static void ncr53c9x_msgin(struct ncr53c9x_softc *sc); static void ncr53c9x_msgout(struct ncr53c9x_softc *sc); static void ncr53c9x_init(struct ncr53c9x_softc *sc, int doreset); static void ncr53c9x_intr1(struct ncr53c9x_softc *sc); static void ncr53c9x_poll(struct cam_sim *sim); static int ncr53c9x_rdfifo(struct ncr53c9x_softc *sc, int how); static int ncr53c9x_reselect(struct ncr53c9x_softc *sc, int message, int tagtype, int tagid); static void ncr53c9x_reset(struct ncr53c9x_softc *sc); static void ncr53c9x_sense(struct ncr53c9x_softc *sc, struct ncr53c9x_ecb *ecb); static void ncr53c9x_sched(struct ncr53c9x_softc *sc); static void ncr53c9x_select(struct ncr53c9x_softc *sc, struct ncr53c9x_ecb *ecb); static void ncr53c9x_watch(void *arg); static void ncr53c9x_wrfifo(struct ncr53c9x_softc *sc, uint8_t *p, int len); static struct ncr53c9x_ecb *ncr53c9x_get_ecb(struct ncr53c9x_softc *sc); static struct ncr53c9x_linfo *ncr53c9x_lunsearch(struct ncr53c9x_tinfo *sc, int64_t lun); static inline void ncr53c9x_readregs(struct ncr53c9x_softc *sc); static inline void ncr53c9x_setsync(struct ncr53c9x_softc *sc, struct ncr53c9x_tinfo *ti); static inline int ncr53c9x_stp2cpb(struct ncr53c9x_softc *sc, int period); #define NCR_RDFIFO_START 0 #define NCR_RDFIFO_CONTINUE 1 #define NCR_SET_COUNT(sc, size) do { \ NCR_WRITE_REG((sc), NCR_TCL, (size)); \ NCR_WRITE_REG((sc), NCR_TCM, (size) >> 8); \ if ((sc->sc_features & NCR_F_LARGEXFER) != 0) \ NCR_WRITE_REG((sc), NCR_TCH, (size) >> 16); \ if (sc->sc_rev == NCR_VARIANT_FAS366) \ NCR_WRITE_REG(sc, NCR_RCH, 0); \ } while (/* CONSTCOND */0) #ifndef mstohz #define mstohz(ms) \ (((ms) < 0x20000) ? \ ((ms +0u) / 1000u) * hz : \ ((ms +0u) * hz) /1000u) #endif /* * Names for the NCR53c9x variants, corresponding to the variant tags * in ncr53c9xvar.h. */ static const char *ncr53c9x_variant_names[] = { "ESP100", "ESP100A", "ESP200", "NCR53C94", "NCR53C96", "ESP406", "FAS408", "FAS216", "AM53C974", "FAS366/HME", "NCR53C90 (86C01)", "FAS100A", "FAS236", }; /* * Search linked list for LUN info by LUN id. */ static struct ncr53c9x_linfo * ncr53c9x_lunsearch(struct ncr53c9x_tinfo *ti, int64_t lun) { struct ncr53c9x_linfo *li; LIST_FOREACH(li, &ti->luns, link) if (li->lun == lun) return (li); return (NULL); } /* * Attach this instance, and then all the sub-devices. 
*/ int ncr53c9x_attach(struct ncr53c9x_softc *sc) { struct cam_devq *devq; struct cam_sim *sim; struct cam_path *path; struct ncr53c9x_ecb *ecb; int error, i; if (NCR_LOCK_INITIALIZED(sc) == 0) { device_printf(sc->sc_dev, "mutex not initialized\n"); return (ENXIO); } callout_init_mtx(&sc->sc_watchdog, &sc->sc_lock, 0); /* * Note, the front-end has set us up to print the chip variation. */ if (sc->sc_rev >= NCR_VARIANT_MAX) { device_printf(sc->sc_dev, "unknown variant %d, devices not " "attached\n", sc->sc_rev); return (EINVAL); } device_printf(sc->sc_dev, "%s, %d MHz, SCSI ID %d\n", ncr53c9x_variant_names[sc->sc_rev], sc->sc_freq, sc->sc_id); sc->sc_ntarg = (sc->sc_rev == NCR_VARIANT_FAS366) ? 16 : 8; /* * Allocate SCSI message buffers. * Front-ends can override allocation to avoid alignment * handling in the DMA engines. Note that ncr53c9x_msgout() * can request a 1 byte DMA transfer. */ if (sc->sc_omess == NULL) { sc->sc_omess_self = 1; sc->sc_omess = malloc(NCR_MAX_MSG_LEN, M_DEVBUF, M_NOWAIT); if (sc->sc_omess == NULL) { device_printf(sc->sc_dev, "cannot allocate MSGOUT buffer\n"); return (ENOMEM); } } else sc->sc_omess_self = 0; if (sc->sc_imess == NULL) { sc->sc_imess_self = 1; sc->sc_imess = malloc(NCR_MAX_MSG_LEN + 1, M_DEVBUF, M_NOWAIT); if (sc->sc_imess == NULL) { device_printf(sc->sc_dev, "cannot allocate MSGIN buffer\n"); error = ENOMEM; goto fail_omess; } } else sc->sc_imess_self = 0; sc->sc_tinfo = malloc(sc->sc_ntarg * sizeof(sc->sc_tinfo[0]), M_DEVBUF, M_NOWAIT | M_ZERO); if (sc->sc_tinfo == NULL) { device_printf(sc->sc_dev, "cannot allocate target info buffer\n"); error = ENOMEM; goto fail_imess; } /* * Treat NCR53C90 with the 86C01 DMA chip exactly as ESP100 * from now on. */ if (sc->sc_rev == NCR_VARIANT_NCR53C90_86C01) sc->sc_rev = NCR_VARIANT_ESP100; sc->sc_ccf = FREQTOCCF(sc->sc_freq); /* The value *must not* be == 1. Make it 2. */ if (sc->sc_ccf == 1) sc->sc_ccf = 2; /* * The recommended timeout is 250ms. This register is loaded * with a value calculated as follows, from the docs: * * (timeout period) x (CLK frequency) * reg = ------------------------------------- * 8192 x (Clock Conversion Factor) * * Since CCF has a linear relation to CLK, this generally computes * to the constant of 153. */ sc->sc_timeout = ((250 * 1000) * sc->sc_freq) / (8192 * sc->sc_ccf); /* The CCF register only has 3 bits; 0 is actually 8. */ sc->sc_ccf &= 7; /* * Register with CAM. */ devq = cam_simq_alloc(sc->sc_ntarg); if (devq == NULL) { device_printf(sc->sc_dev, "cannot allocate device queue\n"); error = ENOMEM; goto fail_tinfo; } sim = cam_sim_alloc(ncr53c9x_action, ncr53c9x_poll, "esp", sc, device_get_unit(sc->sc_dev), &sc->sc_lock, 1, NCR_TAG_DEPTH, devq); if (sim == NULL) { device_printf(sc->sc_dev, "cannot allocate SIM entry\n"); error = ENOMEM; goto fail_devq; } NCR_LOCK(sc); if (xpt_bus_register(sim, sc->sc_dev, 0) != CAM_SUCCESS) { device_printf(sc->sc_dev, "cannot register bus\n"); error = EIO; goto fail_lock; } if (xpt_create_path(&path, NULL, cam_sim_path(sim), CAM_TARGET_WILDCARD, CAM_LUN_WILDCARD) != CAM_REQ_CMP) { device_printf(sc->sc_dev, "cannot create path\n"); error = EIO; goto fail_bus; } if (xpt_register_async(AC_LOST_DEVICE, ncr53c9x_async, sim, path) != CAM_REQ_CMP) { device_printf(sc->sc_dev, "cannot register async handler\n"); error = EIO; goto fail_path; } sc->sc_sim = sim; sc->sc_path = path; /* Reset state and bus. 
*/ #if 0 sc->sc_cfflags = sc->sc_dev.dv_cfdata->cf_flags; #else sc->sc_cfflags = 0; #endif sc->sc_state = 0; ncr53c9x_init(sc, 1); TAILQ_INIT(&sc->free_list); if ((sc->ecb_array = malloc(sizeof(struct ncr53c9x_ecb) * NCR_TAG_DEPTH, M_DEVBUF, M_NOWAIT | M_ZERO)) == NULL) { device_printf(sc->sc_dev, "cannot allocate ECB array\n"); error = ENOMEM; goto fail_async; } for (i = 0; i < NCR_TAG_DEPTH; i++) { ecb = &sc->ecb_array[i]; ecb->sc = sc; ecb->tag_id = i; callout_init_mtx(&ecb->ch, &sc->sc_lock, 0); TAILQ_INSERT_HEAD(&sc->free_list, ecb, free_links); } callout_reset(&sc->sc_watchdog, 60 * hz, ncr53c9x_watch, sc); NCR_UNLOCK(sc); + gone_in_dev(sc->sc_dev, 14, "esp(4) driver"); return (0); fail_async: xpt_register_async(0, ncr53c9x_async, sim, path); fail_path: xpt_free_path(path); fail_bus: xpt_bus_deregister(cam_sim_path(sim)); fail_lock: NCR_UNLOCK(sc); cam_sim_free(sim, TRUE); fail_devq: cam_simq_free(devq); fail_tinfo: free(sc->sc_tinfo, M_DEVBUF); fail_imess: if (sc->sc_imess_self) free(sc->sc_imess, M_DEVBUF); fail_omess: if (sc->sc_omess_self) free(sc->sc_omess, M_DEVBUF); return (error); } int ncr53c9x_detach(struct ncr53c9x_softc *sc) { struct ncr53c9x_linfo *li, *nextli; int t; callout_drain(&sc->sc_watchdog); NCR_LOCK(sc); if (sc->sc_tinfo) { /* Cancel all commands. */ ncr53c9x_clear(sc, CAM_REQ_ABORTED); /* Free logical units. */ for (t = 0; t < sc->sc_ntarg; t++) { for (li = LIST_FIRST(&sc->sc_tinfo[t].luns); li; li = nextli) { nextli = LIST_NEXT(li, link); free(li, M_DEVBUF); } } } xpt_register_async(0, ncr53c9x_async, sc->sc_sim, sc->sc_path); xpt_free_path(sc->sc_path); xpt_bus_deregister(cam_sim_path(sc->sc_sim)); cam_sim_free(sc->sc_sim, TRUE); NCR_UNLOCK(sc); free(sc->ecb_array, M_DEVBUF); free(sc->sc_tinfo, M_DEVBUF); if (sc->sc_imess_self) free(sc->sc_imess, M_DEVBUF); if (sc->sc_omess_self) free(sc->sc_omess, M_DEVBUF); return (0); } /* * This is the generic ncr53c9x reset function. It does not reset the SCSI * bus, only this controller, but kills any on-going commands, and also stops * and resets the DMA. * * After reset, registers are loaded with the defaults from the attach * routine above. */ static void ncr53c9x_reset(struct ncr53c9x_softc *sc) { NCR_LOCK_ASSERT(sc, MA_OWNED); /* Reset DMA first. */ NCRDMA_RESET(sc); /* Reset SCSI chip. */ NCRCMD(sc, NCRCMD_RSTCHIP); NCRCMD(sc, NCRCMD_NOP); DELAY(500); /* Do these backwards, and fall through. 
*/ switch (sc->sc_rev) { case NCR_VARIANT_ESP406: case NCR_VARIANT_FAS408: NCR_WRITE_REG(sc, NCR_CFG5, sc->sc_cfg5 | NCRCFG5_SINT); NCR_WRITE_REG(sc, NCR_CFG4, sc->sc_cfg4); /* FALLTHROUGH */ case NCR_VARIANT_AM53C974: case NCR_VARIANT_FAS100A: case NCR_VARIANT_FAS216: case NCR_VARIANT_FAS236: case NCR_VARIANT_NCR53C94: case NCR_VARIANT_NCR53C96: case NCR_VARIANT_ESP200: sc->sc_features |= NCR_F_HASCFG3; NCR_WRITE_REG(sc, NCR_CFG3, sc->sc_cfg3); /* FALLTHROUGH */ case NCR_VARIANT_ESP100A: sc->sc_features |= NCR_F_SELATN3; if ((sc->sc_cfg2 & NCRCFG2_FE) != 0) sc->sc_features |= NCR_F_LARGEXFER; NCR_WRITE_REG(sc, NCR_CFG2, sc->sc_cfg2); /* FALLTHROUGH */ case NCR_VARIANT_ESP100: NCR_WRITE_REG(sc, NCR_CFG1, sc->sc_cfg1); NCR_WRITE_REG(sc, NCR_CCF, sc->sc_ccf); NCR_WRITE_REG(sc, NCR_SYNCOFF, 0); NCR_WRITE_REG(sc, NCR_TIMEOUT, sc->sc_timeout); break; case NCR_VARIANT_FAS366: sc->sc_features |= NCR_F_HASCFG3 | NCR_F_FASTSCSI | NCR_F_SELATN3 | NCR_F_LARGEXFER; sc->sc_cfg3 = NCRFASCFG3_FASTCLK | NCRFASCFG3_OBAUTO; if (sc->sc_id > 7) sc->sc_cfg3 |= NCRFASCFG3_IDBIT3; sc->sc_cfg3_fscsi = NCRFASCFG3_FASTSCSI; NCR_WRITE_REG(sc, NCR_CFG3, sc->sc_cfg3); sc->sc_cfg2 = NCRCFG2_HMEFE | NCRCFG2_HME32; NCR_WRITE_REG(sc, NCR_CFG2, sc->sc_cfg2); NCR_WRITE_REG(sc, NCR_CFG1, sc->sc_cfg1); NCR_WRITE_REG(sc, NCR_CCF, sc->sc_ccf); NCR_WRITE_REG(sc, NCR_SYNCOFF, 0); NCR_WRITE_REG(sc, NCR_TIMEOUT, sc->sc_timeout); break; default: device_printf(sc->sc_dev, "unknown revision code, assuming ESP100\n"); NCR_WRITE_REG(sc, NCR_CFG1, sc->sc_cfg1); NCR_WRITE_REG(sc, NCR_CCF, sc->sc_ccf); NCR_WRITE_REG(sc, NCR_SYNCOFF, 0); NCR_WRITE_REG(sc, NCR_TIMEOUT, sc->sc_timeout); } if (sc->sc_rev == NCR_VARIANT_AM53C974) NCR_WRITE_REG(sc, NCR_AMDCFG4, sc->sc_cfg4); #if 0 device_printf(sc->sc_dev, "%s: revision %d\n", __func__, sc->sc_rev); device_printf(sc->sc_dev, "%s: cfg1 0x%x, cfg2 0x%x, cfg3 0x%x, ccf " "0x%x, timeout 0x%x\n", __func__, sc->sc_cfg1, sc->sc_cfg2, sc->sc_cfg3, sc->sc_ccf, sc->sc_timeout); #endif } /* * Clear all commands. */ static void ncr53c9x_clear(struct ncr53c9x_softc *sc, cam_status result) { struct ncr53c9x_ecb *ecb; int r; NCR_LOCK_ASSERT(sc, MA_OWNED); /* Cancel any active commands. */ sc->sc_state = NCR_CLEANING; sc->sc_msgify = 0; ecb = sc->sc_nexus; if (ecb != NULL) { ecb->ccb->ccb_h.status = result; ncr53c9x_done(sc, ecb); } /* Cancel outstanding disconnected commands. */ for (r = 0; r < sc->sc_ntarg; r++) ncr53c9x_clear_target(sc, r, result); } /* * Clear all commands for a specific target. */ static void ncr53c9x_clear_target(struct ncr53c9x_softc *sc, int target, cam_status result) { struct ncr53c9x_ecb *ecb; struct ncr53c9x_linfo *li; int i; NCR_LOCK_ASSERT(sc, MA_OWNED); /* Cancel outstanding disconnected commands on each LUN. */ LIST_FOREACH(li, &sc->sc_tinfo[target].luns, link) { ecb = li->untagged; if (ecb != NULL) { li->untagged = NULL; /* * XXX should we terminate a command * that never reached the disk? */ li->busy = 0; ecb->ccb->ccb_h.status = result; ncr53c9x_done(sc, ecb); } for (i = 0; i < NCR_TAG_DEPTH; i++) { ecb = li->queued[i]; if (ecb != NULL) { li->queued[i] = NULL; ecb->ccb->ccb_h.status = result; ncr53c9x_done(sc, ecb); } } li->used = 0; } } /* * Initialize ncr53c9x state machine. */ static void ncr53c9x_init(struct ncr53c9x_softc *sc, int doreset) { struct ncr53c9x_tinfo *ti; int r; NCR_LOCK_ASSERT(sc, MA_OWNED); NCR_MISC(("[NCR_INIT(%d) %d] ", doreset, sc->sc_state)); if (sc->sc_state == 0) { /* First time through; initialize. 
*/ TAILQ_INIT(&sc->ready_list); sc->sc_nexus = NULL; memset(sc->sc_tinfo, 0, sizeof(*sc->sc_tinfo)); for (r = 0; r < sc->sc_ntarg; r++) { LIST_INIT(&sc->sc_tinfo[r].luns); } } else ncr53c9x_clear(sc, CAM_CMD_TIMEOUT); /* * Reset the chip to a known state. */ ncr53c9x_reset(sc); sc->sc_flags = 0; sc->sc_msgpriq = sc->sc_msgout = sc->sc_msgoutq = 0; sc->sc_phase = sc->sc_prevphase = INVALID_PHASE; /* * If we're the first time through, set the default parameters * for all targets. Otherwise we only clear their current transfer * settings so we'll renegotiate their goal settings with the next * command. */ if (sc->sc_state == 0) { for (r = 0; r < sc->sc_ntarg; r++) { ti = &sc->sc_tinfo[r]; /* XXX - config flags per target: low bits: no reselect; high bits: no synch */ ti->flags = ((sc->sc_minsync != 0 && (sc->sc_cfflags & (1 << ((r & 7) + 8))) == 0) ? 0 : T_SYNCHOFF) | ((sc->sc_cfflags & (1 << (r & 7))) == 0 ? 0 : T_RSELECTOFF); ti->curr.period = ti->goal.period = 0; ti->curr.offset = ti->goal.offset = 0; ti->curr.width = ti->goal.width = MSG_EXT_WDTR_BUS_8_BIT; } } else { for (r = 0; r < sc->sc_ntarg; r++) { ti = &sc->sc_tinfo[r]; ti->flags &= ~(T_SDTRSENT | T_WDTRSENT); ti->curr.period = 0; ti->curr.offset = 0; ti->curr.width = MSG_EXT_WDTR_BUS_8_BIT; } } if (doreset) { sc->sc_state = NCR_SBR; NCRCMD(sc, NCRCMD_RSTSCSI); /* Give the bus a fighting chance to settle. */ DELAY(250000); } else { sc->sc_state = NCR_IDLE; ncr53c9x_sched(sc); } } /* * Read the NCR registers, and save their contents for later use. * NCR_STAT, NCR_STEP & NCR_INTR are mostly zeroed out when reading * NCR_INTR - so make sure it is the last read. * * I think that (from reading the docs) most bits in these registers * only make sense when the DMA CSR has an interrupt showing. Call only * if an interrupt is pending. */ static inline void ncr53c9x_readregs(struct ncr53c9x_softc *sc) { NCR_LOCK_ASSERT(sc, MA_OWNED); sc->sc_espstat = NCR_READ_REG(sc, NCR_STAT); /* Only the step bits are of interest. */ sc->sc_espstep = NCR_READ_REG(sc, NCR_STEP) & NCRSTEP_MASK; if (sc->sc_rev == NCR_VARIANT_FAS366) sc->sc_espstat2 = NCR_READ_REG(sc, NCR_STAT2); sc->sc_espintr = NCR_READ_REG(sc, NCR_INTR); /* * Determine the SCSI bus phase, return either a real SCSI bus phase * or some pseudo phase we use to detect certain exceptions. */ sc->sc_phase = (sc->sc_espintr & NCRINTR_DIS) ? BUSFREE_PHASE : sc->sc_espstat & NCRSTAT_PHASE; NCR_INTS(("regs[intr=%02x,stat=%02x,step=%02x,stat2=%02x] ", sc->sc_espintr, sc->sc_espstat, sc->sc_espstep, sc->sc_espstat2)); } /* * Convert Synchronous Transfer Period to chip register Clock Per Byte value. */ static inline int ncr53c9x_stp2cpb(struct ncr53c9x_softc *sc, int period) { int v; NCR_LOCK_ASSERT(sc, MA_OWNED); v = (sc->sc_freq * period) / 250; if (ncr53c9x_cpb2stp(sc, v) < period) /* Correct round-down error. */ v++; return (v); } static inline void ncr53c9x_setsync(struct ncr53c9x_softc *sc, struct ncr53c9x_tinfo *ti) { uint8_t cfg3, syncoff, synctp; NCR_LOCK_ASSERT(sc, MA_OWNED); cfg3 = sc->sc_cfg3; if (ti->curr.offset != 0) { syncoff = ti->curr.offset; synctp = ncr53c9x_stp2cpb(sc, ti->curr.period); if (sc->sc_features & NCR_F_FASTSCSI) { /* * If the period is 200ns or less (ti->period <= 50), * put the chip in Fast SCSI mode. */ if (ti->curr.period <= 50) /* * There are (at least) 4 variations of the * configuration 3 register. The drive attach * routine sets the appropriate bit to put the * chip into Fast SCSI mode so that it doesn't * have to be figured out here each time. 
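 *
 * (For example, the 200 ns figure above is the sync period factor
 * 50 expressed in the 4 ns units used by SDTR: 50 * 4 ns == 200 ns.)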
*/ cfg3 |= sc->sc_cfg3_fscsi; } /* * Am53c974 requires different SYNCTP values when the * FSCSI bit is off. */ if (sc->sc_rev == NCR_VARIANT_AM53C974 && (cfg3 & NCRAMDCFG3_FSCSI) == 0) synctp--; } else { syncoff = 0; synctp = 0; } if (ti->curr.width != MSG_EXT_WDTR_BUS_8_BIT) { if (sc->sc_rev == NCR_VARIANT_FAS366) cfg3 |= NCRFASCFG3_EWIDE; } if (sc->sc_features & NCR_F_HASCFG3) NCR_WRITE_REG(sc, NCR_CFG3, cfg3); NCR_WRITE_REG(sc, NCR_SYNCOFF, syncoff); NCR_WRITE_REG(sc, NCR_SYNCTP, synctp); } /* * Send a command to a target, set the driver state to NCR_SELECTING * and let the caller take care of the rest. * * Keeping this as a function allows me to say that this may be done * by DMA instead of programmed I/O soon. */ static void ncr53c9x_select(struct ncr53c9x_softc *sc, struct ncr53c9x_ecb *ecb) { struct ncr53c9x_tinfo *ti; uint8_t *cmd; size_t dmasize; int clen, error, selatn3, selatns; int lun = ecb->ccb->ccb_h.target_lun; int target = ecb->ccb->ccb_h.target_id; NCR_LOCK_ASSERT(sc, MA_OWNED); NCR_TRACE(("[%s(t%d,l%d,cmd:%x,tag:%x,%x)] ", __func__, target, lun, ecb->cmd.cmd.opcode, ecb->tag[0], ecb->tag[1])); ti = &sc->sc_tinfo[target]; sc->sc_state = NCR_SELECTING; /* * Schedule the callout now, the first time we will go away * expecting to come back due to an interrupt, because it is * always possible that the interrupt may never happen. */ callout_reset(&ecb->ch, mstohz(ecb->timeout), ncr53c9x_callout, ecb); /* * The docs say the target register is never reset, and I * can't think of a better place to set it. */ if (sc->sc_rev == NCR_VARIANT_FAS366) { NCRCMD(sc, NCRCMD_FLUSH); NCR_WRITE_REG(sc, NCR_SELID, target | NCR_BUSID_HMEXC32 | NCR_BUSID_HMEENCID); } else NCR_WRITE_REG(sc, NCR_SELID, target); /* * If we are requesting sense, force a renegotiation if we are * currently using anything different from asynchronous at 8 bit * as the target might have lost our transfer negotiations. */ if ((ecb->flags & ECB_SENSE) != 0 && (ti->curr.offset != 0 || ti->curr.width != MSG_EXT_WDTR_BUS_8_BIT)) { ti->curr.period = 0; ti->curr.offset = 0; ti->curr.width = MSG_EXT_WDTR_BUS_8_BIT; } ncr53c9x_setsync(sc, ti); selatn3 = selatns = 0; if (ecb->tag[0] != 0) { if (sc->sc_features & NCR_F_SELATN3) /* Use SELATN3 to send tag messages. */ selatn3 = 1; else /* We don't have SELATN3; use SELATNS to send tags. */ selatns = 1; } if (ti->curr.period != ti->goal.period || ti->curr.offset != ti->goal.offset || ti->curr.width != ti->goal.width) { /* We have to use SELATNS to send sync/wide messages. */ selatn3 = 0; selatns = 1; } cmd = (uint8_t *)&ecb->cmd.cmd; if (selatn3) { /* We'll use tags with SELATN3. */ clen = ecb->clen + 3; cmd -= 3; cmd[0] = MSG_IDENTIFY(lun, 1); /* msg[0] */ cmd[1] = ecb->tag[0]; /* msg[1] */ cmd[2] = ecb->tag[1]; /* msg[2] */ } else { /* We don't have tags, or will send messages with SELATNS. */ clen = ecb->clen + 1; cmd -= 1; cmd[0] = MSG_IDENTIFY(lun, (ti->flags & T_RSELECTOFF) == 0); } if ((sc->sc_features & NCR_F_DMASELECT) && !selatns) { /* Setup DMA transfer for command. */ dmasize = clen; sc->sc_cmdlen = clen; sc->sc_cmdp = cmd; error = NCRDMA_SETUP(sc, &sc->sc_cmdp, &sc->sc_cmdlen, 0, &dmasize); if (error != 0) goto cmd; /* Program the SCSI counter. */ NCR_SET_COUNT(sc, dmasize); /* Load the count in. */ NCRCMD(sc, NCRCMD_NOP | NCRCMD_DMA); /* And get the target's attention. */ if (selatn3) { sc->sc_msgout = SEND_TAG; sc->sc_flags |= NCR_ATN; NCRCMD(sc, NCRCMD_SELATN3 | NCRCMD_DMA); } else NCRCMD(sc, NCRCMD_SELATN | NCRCMD_DMA); NCRDMA_GO(sc); return; } cmd: /* * Who am I? 
This is where we tell the target that we are * happy for it to disconnect etc. */ /* Now get the command into the FIFO. */ sc->sc_cmdlen = 0; ncr53c9x_wrfifo(sc, cmd, clen); /* And get the target's attention. */ if (selatns) { NCR_MSGS(("SELATNS \n")); /* Arbitrate, select and stop after IDENTIFY message. */ NCRCMD(sc, NCRCMD_SELATNS); } else if (selatn3) { sc->sc_msgout = SEND_TAG; sc->sc_flags |= NCR_ATN; NCRCMD(sc, NCRCMD_SELATN3); } else NCRCMD(sc, NCRCMD_SELATN); } static void ncr53c9x_free_ecb(struct ncr53c9x_softc *sc, struct ncr53c9x_ecb *ecb) { NCR_LOCK_ASSERT(sc, MA_OWNED); ecb->flags = 0; TAILQ_INSERT_TAIL(&sc->free_list, ecb, free_links); } static struct ncr53c9x_ecb * ncr53c9x_get_ecb(struct ncr53c9x_softc *sc) { struct ncr53c9x_ecb *ecb; NCR_LOCK_ASSERT(sc, MA_OWNED); ecb = TAILQ_FIRST(&sc->free_list); if (ecb) { if (ecb->flags != 0) panic("%s: ecb flags not cleared", __func__); TAILQ_REMOVE(&sc->free_list, ecb, free_links); ecb->flags = ECB_ALLOC; bzero(&ecb->ccb, sizeof(struct ncr53c9x_ecb) - offsetof(struct ncr53c9x_ecb, ccb)); } return (ecb); } /* * DRIVER FUNCTIONS CALLABLE FROM HIGHER LEVEL DRIVERS: */ /* * Start a SCSI-command. * This function is called by the higher level SCSI-driver to queue/run * SCSI-commands. */ static void ncr53c9x_action(struct cam_sim *sim, union ccb *ccb) { struct ccb_pathinq *cpi; struct ccb_scsiio *csio; struct ccb_trans_settings *cts; struct ccb_trans_settings_scsi *scsi; struct ccb_trans_settings_spi *spi; struct ncr53c9x_ecb *ecb; struct ncr53c9x_softc *sc; struct ncr53c9x_tinfo *ti; int target; sc = cam_sim_softc(sim); NCR_LOCK_ASSERT(sc, MA_OWNED); NCR_TRACE(("[%s %d]", __func__, ccb->ccb_h.func_code)); switch (ccb->ccb_h.func_code) { case XPT_RESET_BUS: ncr53c9x_init(sc, 1); ccb->ccb_h.status = CAM_REQ_CMP; break; case XPT_CALC_GEOMETRY: cam_calc_geometry(&ccb->ccg, sc->sc_extended_geom); break; case XPT_PATH_INQ: cpi = &ccb->cpi; cpi->version_num = 1; cpi->hba_inquiry = PI_SDTR_ABLE | PI_TAG_ABLE; cpi->hba_inquiry |= (sc->sc_rev == NCR_VARIANT_FAS366) ? 
PI_WIDE_16 : 0; cpi->target_sprt = 0; cpi->hba_misc = 0; cpi->hba_eng_cnt = 0; cpi->max_target = sc->sc_ntarg - 1; cpi->max_lun = 7; cpi->initiator_id = sc->sc_id; strlcpy(cpi->sim_vid, "FreeBSD", SIM_IDLEN); strlcpy(cpi->hba_vid, "NCR", HBA_IDLEN); strlcpy(cpi->dev_name, cam_sim_name(sim), DEV_IDLEN); cpi->unit_number = cam_sim_unit(sim); cpi->bus_id = 0; cpi->base_transfer_speed = 3300; cpi->protocol = PROTO_SCSI; cpi->protocol_version = SCSI_REV_2; cpi->transport = XPORT_SPI; cpi->transport_version = 2; cpi->maxio = sc->sc_maxxfer; ccb->ccb_h.status = CAM_REQ_CMP; break; case XPT_GET_TRAN_SETTINGS: cts = &ccb->cts; ti = &sc->sc_tinfo[ccb->ccb_h.target_id]; scsi = &cts->proto_specific.scsi; spi = &cts->xport_specific.spi; cts->protocol = PROTO_SCSI; cts->protocol_version = SCSI_REV_2; cts->transport = XPORT_SPI; cts->transport_version = 2; if (cts->type == CTS_TYPE_CURRENT_SETTINGS) { spi->sync_period = ti->curr.period; spi->sync_offset = ti->curr.offset; spi->bus_width = ti->curr.width; if ((ti->flags & T_TAG) != 0) { spi->flags |= CTS_SPI_FLAGS_DISC_ENB; scsi->flags |= CTS_SCSI_FLAGS_TAG_ENB; } else { spi->flags &= ~CTS_SPI_FLAGS_DISC_ENB; scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB; } } else { if ((ti->flags & T_SYNCHOFF) != 0) { spi->sync_period = 0; spi->sync_offset = 0; } else { spi->sync_period = sc->sc_minsync; spi->sync_offset = sc->sc_maxoffset; } spi->bus_width = sc->sc_maxwidth; spi->flags |= CTS_SPI_FLAGS_DISC_ENB; scsi->flags |= CTS_SCSI_FLAGS_TAG_ENB; } spi->valid = CTS_SPI_VALID_BUS_WIDTH | CTS_SPI_VALID_SYNC_RATE | CTS_SPI_VALID_SYNC_OFFSET | CTS_SPI_VALID_DISC; scsi->valid = CTS_SCSI_VALID_TQ; ccb->ccb_h.status = CAM_REQ_CMP; break; case XPT_ABORT: device_printf(sc->sc_dev, "XPT_ABORT called\n"); ccb->ccb_h.status = CAM_FUNC_NOTAVAIL; break; case XPT_TERM_IO: device_printf(sc->sc_dev, "XPT_TERM_IO called\n"); ccb->ccb_h.status = CAM_FUNC_NOTAVAIL; break; case XPT_RESET_DEV: case XPT_SCSI_IO: if (ccb->ccb_h.target_id >= sc->sc_ntarg) { ccb->ccb_h.status = CAM_PATH_INVALID; goto done; } /* Get an ECB to use. */ ecb = ncr53c9x_get_ecb(sc); /* * This should never happen as we track resources * in the mid-layer. */ if (ecb == NULL) { xpt_freeze_simq(sim, 1); ccb->ccb_h.status = CAM_REQUEUE_REQ; device_printf(sc->sc_dev, "unable to allocate ecb\n"); goto done; } /* Initialize ecb. 
*/ ecb->ccb = ccb; ecb->timeout = ccb->ccb_h.timeout; if (ccb->ccb_h.func_code == XPT_RESET_DEV) { ecb->flags |= ECB_RESET; ecb->clen = 0; ecb->dleft = 0; } else { csio = &ccb->csio; if ((ccb->ccb_h.flags & CAM_CDB_POINTER) != 0) bcopy(csio->cdb_io.cdb_ptr, &ecb->cmd.cmd, csio->cdb_len); else bcopy(csio->cdb_io.cdb_bytes, &ecb->cmd.cmd, csio->cdb_len); ecb->clen = csio->cdb_len; ecb->daddr = csio->data_ptr; ecb->dleft = csio->dxfer_len; } ecb->stat = 0; TAILQ_INSERT_TAIL(&sc->ready_list, ecb, chain); ecb->flags |= ECB_READY; if (sc->sc_state == NCR_IDLE) ncr53c9x_sched(sc); return; case XPT_SET_TRAN_SETTINGS: cts = &ccb->cts; target = ccb->ccb_h.target_id; ti = &sc->sc_tinfo[target]; scsi = &cts->proto_specific.scsi; spi = &cts->xport_specific.spi; if ((scsi->valid & CTS_SCSI_VALID_TQ) != 0) { if ((sc->sc_cfflags & (1<<((target & 7) + 16))) == 0 && (scsi->flags & CTS_SCSI_FLAGS_TAG_ENB)) { NCR_MISC(("%s: target %d: tagged queuing\n", device_get_nameunit(sc->sc_dev), target)); ti->flags |= T_TAG; } else ti->flags &= ~T_TAG; } if ((spi->valid & CTS_SPI_VALID_BUS_WIDTH) != 0) { NCR_MISC(("%s: target %d: wide negotiation\n", device_get_nameunit(sc->sc_dev), target)); ti->goal.width = spi->bus_width; } if ((spi->valid & CTS_SPI_VALID_SYNC_RATE) != 0) { NCR_MISC(("%s: target %d: sync period negotiation\n", device_get_nameunit(sc->sc_dev), target)); ti->goal.period = spi->sync_period; } if ((spi->valid & CTS_SPI_VALID_SYNC_OFFSET) != 0) { NCR_MISC(("%s: target %d: sync offset negotiation\n", device_get_nameunit(sc->sc_dev), target)); ti->goal.offset = spi->sync_offset; } ccb->ccb_h.status = CAM_REQ_CMP; break; default: device_printf(sc->sc_dev, "Unhandled function code %d\n", ccb->ccb_h.func_code); ccb->ccb_h.status = CAM_PROVIDE_FAIL; } done: xpt_done(ccb); } /* * Used when interrupt driven I/O is not allowed, e.g. during boot. */ static void ncr53c9x_poll(struct cam_sim *sim) { struct ncr53c9x_softc *sc; sc = cam_sim_softc(sim); NCR_LOCK_ASSERT(sc, MA_OWNED); NCR_TRACE(("[%s] ", __func__)); if (NCRDMA_ISINTR(sc)) ncr53c9x_intr1(sc); } /* * Asynchronous notification handler */ static void ncr53c9x_async(void *cbarg, uint32_t code, struct cam_path *path, void *arg) { struct ncr53c9x_softc *sc; struct ncr53c9x_tinfo *ti; int target; sc = cam_sim_softc(cbarg); NCR_LOCK_ASSERT(sc, MA_OWNED); switch (code) { case AC_LOST_DEVICE: target = xpt_path_target_id(path); if (target < 0 || target >= sc->sc_ntarg) break; /* Cancel outstanding disconnected commands. */ ncr53c9x_clear_target(sc, target, CAM_REQ_ABORTED); /* Set the default parameters for the target. */ ti = &sc->sc_tinfo[target]; /* XXX - config flags per target: low bits: no reselect; high bits: no synch */ ti->flags = ((sc->sc_minsync != 0 && (sc->sc_cfflags & (1 << ((target & 7) + 8))) == 0) ? 0 : T_SYNCHOFF) | ((sc->sc_cfflags & (1 << (target & 7))) == 0 ? 0 : T_RSELECTOFF); ti->curr.period = ti->goal.period = 0; ti->curr.offset = ti->goal.offset = 0; ti->curr.width = ti->goal.width = MSG_EXT_WDTR_BUS_8_BIT; break; } } /* * LOW LEVEL SCSI UTILITIES */ /* * Schedule a SCSI operation. This has now been pulled out of the interrupt * handler so that we may call it from ncr53c9x_action and ncr53c9x_done. * This may save us an unnecessary interrupt just to get things going. * Should only be called when state == NCR_IDLE and with sc_lock held. 
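 *
 * A sketch of the pattern the callers in this file follow (see
 * ncr53c9x_action() and ncr53c9x_done()):
 *
 *	NCR_LOCK_ASSERT(sc, MA_OWNED);
 *	if (sc->sc_state == NCR_IDLE)
 *		ncr53c9x_sched(sc);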
*/ static void ncr53c9x_sched(struct ncr53c9x_softc *sc) { struct ncr53c9x_ecb *ecb; struct ncr53c9x_linfo *li; struct ncr53c9x_tinfo *ti; int lun, tag; NCR_LOCK_ASSERT(sc, MA_OWNED); NCR_TRACE(("[%s] ", __func__)); if (sc->sc_state != NCR_IDLE) panic("%s: not IDLE (state=%d)", __func__, sc->sc_state); /* * Find first ecb in ready queue that is for a target/lunit * combinations that is not busy. */ TAILQ_FOREACH(ecb, &sc->ready_list, chain) { ti = &sc->sc_tinfo[ecb->ccb->ccb_h.target_id]; lun = ecb->ccb->ccb_h.target_lun; /* Select type of tag for this command */ if ((ti->flags & (T_RSELECTOFF | T_TAG)) != T_TAG) tag = 0; else if ((ecb->flags & ECB_SENSE) != 0) tag = 0; else if ((ecb->ccb->ccb_h.flags & CAM_TAG_ACTION_VALID) == 0) tag = 0; else if (ecb->ccb->csio.tag_action == CAM_TAG_ACTION_NONE) tag = 0; else tag = ecb->ccb->csio.tag_action; li = TINFO_LUN(ti, lun); if (li == NULL) { /* Initialize LUN info and add to list. */ li = malloc(sizeof(*li), M_DEVBUF, M_NOWAIT | M_ZERO); if (li == NULL) continue; li->lun = lun; LIST_INSERT_HEAD(&ti->luns, li, link); if (lun < NCR_NLUN) ti->lun[lun] = li; } li->last_used = time_second; if (tag == 0) { /* Try to issue this as an untagged command. */ if (li->untagged == NULL) li->untagged = ecb; } if (li->untagged != NULL) { tag = 0; if ((li->busy != 1) && li->used == 0) { /* * We need to issue this untagged command * now. */ ecb = li->untagged; } else { /* not ready, yet */ continue; } } ecb->tag[0] = tag; if (tag != 0) { li->queued[ecb->tag_id] = ecb; ecb->tag[1] = ecb->tag_id; li->used++; } if (li->untagged != NULL && (li->busy != 1)) { li->busy = 1; TAILQ_REMOVE(&sc->ready_list, ecb, chain); ecb->flags &= ~ECB_READY; sc->sc_nexus = ecb; ncr53c9x_select(sc, ecb); break; } if (li->untagged == NULL && tag != 0) { TAILQ_REMOVE(&sc->ready_list, ecb, chain); ecb->flags &= ~ECB_READY; sc->sc_nexus = ecb; ncr53c9x_select(sc, ecb); break; } else NCR_TRACE(("[%s %d:%d busy] \n", __func__, ecb->ccb->ccb_h.target_id, ecb->ccb->ccb_h.target_lun)); } } static void ncr53c9x_sense(struct ncr53c9x_softc *sc, struct ncr53c9x_ecb *ecb) { union ccb *ccb = ecb->ccb; struct ncr53c9x_linfo *li; struct ncr53c9x_tinfo *ti; struct scsi_request_sense *ss = (void *)&ecb->cmd.cmd; int lun; NCR_LOCK_ASSERT(sc, MA_OWNED); NCR_TRACE(("[%s] ", __func__)); lun = ccb->ccb_h.target_lun; ti = &sc->sc_tinfo[ccb->ccb_h.target_id]; /* Next, setup a REQUEST SENSE command block. */ memset(ss, 0, sizeof(*ss)); ss->opcode = REQUEST_SENSE; ss->byte2 = ccb->ccb_h.target_lun << SCSI_CMD_LUN_SHIFT; ss->length = sizeof(struct scsi_sense_data); ecb->clen = sizeof(*ss); memset(&ccb->csio.sense_data, 0, sizeof(ccb->csio.sense_data)); ecb->daddr = (uint8_t *)&ccb->csio.sense_data; ecb->dleft = sizeof(struct scsi_sense_data); ecb->flags |= ECB_SENSE; ecb->timeout = NCR_SENSE_TIMEOUT; ti->senses++; li = TINFO_LUN(ti, lun); if (li->busy) li->busy = 0; ncr53c9x_dequeue(sc, ecb); li->untagged = ecb; /* Must be executed first to fix C/A. 
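 *
 * ("C/A" is the contingent allegiance condition; li->busy is set
 * to 2 below so this REQUEST SENSE can be told apart from an
 * ordinary untagged command, which uses li->busy == 1.)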
*/ li->busy = 2; if (ecb == sc->sc_nexus) ncr53c9x_select(sc, ecb); else { TAILQ_INSERT_HEAD(&sc->ready_list, ecb, chain); ecb->flags |= ECB_READY; if (sc->sc_state == NCR_IDLE) ncr53c9x_sched(sc); } } /* * POST PROCESSING OF SCSI_CMD (usually current) */ static void ncr53c9x_done(struct ncr53c9x_softc *sc, struct ncr53c9x_ecb *ecb) { union ccb *ccb = ecb->ccb; struct ncr53c9x_linfo *li; struct ncr53c9x_tinfo *ti; int lun, sense_returned; NCR_LOCK_ASSERT(sc, MA_OWNED); NCR_TRACE(("[%s(status:%x)] ", __func__, ccb->ccb_h.status)); ti = &sc->sc_tinfo[ccb->ccb_h.target_id]; lun = ccb->ccb_h.target_lun; li = TINFO_LUN(ti, lun); callout_stop(&ecb->ch); /* * Now, if we've come here with no error code, i.e. we've kept the * initial CAM_REQ_CMP, and the status code signals that we should * check sense, we'll need to set up a request sense cmd block and * push the command back into the ready queue *before* any other * commands for this target/lunit, else we lose the sense info. * We don't support chk sense conditions for the request sense cmd. */ if (ccb->ccb_h.status == CAM_REQ_CMP) { ccb->csio.scsi_status = ecb->stat; if ((ecb->flags & ECB_ABORT) != 0) ccb->ccb_h.status = CAM_CMD_TIMEOUT; else if ((ecb->flags & ECB_SENSE) != 0 && (ecb->stat != SCSI_STATUS_CHECK_COND)) { ccb->csio.scsi_status = SCSI_STATUS_CHECK_COND; ccb->ccb_h.status = CAM_SCSI_STATUS_ERROR | CAM_AUTOSNS_VALID; sense_returned = sizeof(ccb->csio.sense_data) - ecb->dleft; if (sense_returned < ccb->csio.sense_len) ccb->csio.sense_resid = ccb->csio.sense_len - sense_returned; else ccb->csio.sense_resid = 0; } else if (ecb->stat == SCSI_STATUS_CHECK_COND) { if ((ecb->flags & ECB_SENSE) != 0) ccb->ccb_h.status = CAM_AUTOSENSE_FAIL; else { /* First, save the return values. */ ccb->csio.resid = ecb->dleft; if ((ccb->ccb_h.flags & CAM_DIS_AUTOSENSE) == 0) { ncr53c9x_sense(sc, ecb); return; } ccb->ccb_h.status = CAM_SCSI_STATUS_ERROR; } } else ccb->csio.resid = ecb->dleft; if (ecb->stat == SCSI_STATUS_QUEUE_FULL) ccb->ccb_h.status = CAM_SCSI_STATUS_ERROR; else if (ecb->stat == SCSI_STATUS_BUSY) ccb->ccb_h.status = CAM_SCSI_BUSY; } else if ((ccb->ccb_h.status & CAM_DEV_QFRZN) == 0) { ccb->ccb_h.status |= CAM_DEV_QFRZN; xpt_freeze_devq(ccb->ccb_h.path, 1); } #ifdef NCR53C9X_DEBUG if ((ncr53c9x_debug & NCR_SHOWTRAC) != 0) { if (ccb->csio.resid != 0) printf("resid=%d ", ccb->csio.resid); if ((ccb->ccb_h.status & CAM_AUTOSNS_VALID) != 0) printf("sense=0x%02x\n", ccb->csio.sense_data.error_code); else printf("status SCSI=0x%x CAM=0x%x\n", ccb->csio.scsi_status, ccb->ccb_h.status); } #endif /* * Remove the ECB from whatever queue it's on. */ ncr53c9x_dequeue(sc, ecb); if (ecb == sc->sc_nexus) { sc->sc_nexus = NULL; if (sc->sc_state != NCR_CLEANING) { sc->sc_state = NCR_IDLE; ncr53c9x_sched(sc); } } if ((ccb->ccb_h.status & CAM_SEL_TIMEOUT) != 0) { /* Selection timeout -- discard this LUN if empty. 
*/ if (li->untagged == NULL && li->used == 0) { if (lun < NCR_NLUN) ti->lun[lun] = NULL; LIST_REMOVE(li, link); free(li, M_DEVBUF); } } ncr53c9x_free_ecb(sc, ecb); ti->cmds++; xpt_done(ccb); } static void ncr53c9x_dequeue(struct ncr53c9x_softc *sc, struct ncr53c9x_ecb *ecb) { struct ncr53c9x_linfo *li; struct ncr53c9x_tinfo *ti; int64_t lun; NCR_LOCK_ASSERT(sc, MA_OWNED); ti = &sc->sc_tinfo[ecb->ccb->ccb_h.target_id]; lun = ecb->ccb->ccb_h.target_lun; li = TINFO_LUN(ti, lun); #ifdef DIAGNOSTIC if (li == NULL || li->lun != lun) panic("%s: lun %llx for ecb %p does not exist", __func__, (long long)lun, ecb); #endif if (li->untagged == ecb) { li->busy = 0; li->untagged = NULL; } if (ecb->tag[0] && li->queued[ecb->tag[1]] != NULL) { #ifdef DIAGNOSTIC if (li->queued[ecb->tag[1]] != NULL && (li->queued[ecb->tag[1]] != ecb)) panic("%s: slot %d for lun %llx has %p instead of ecb " "%p", __func__, ecb->tag[1], (long long)lun, li->queued[ecb->tag[1]], ecb); #endif li->queued[ecb->tag[1]] = NULL; li->used--; } ecb->tag[0] = ecb->tag[1] = 0; if ((ecb->flags & ECB_READY) != 0) { ecb->flags &= ~ECB_READY; TAILQ_REMOVE(&sc->ready_list, ecb, chain); } } /* * INTERRUPT/PROTOCOL ENGINE */ /* * Schedule an outgoing message by prioritizing it, and asserting * attention on the bus. We can only do this when we are the initiator * else there will be an illegal command interrupt. */ #define ncr53c9x_sched_msgout(m) do { \ NCR_MSGS(("ncr53c9x_sched_msgout %x %d", m, __LINE__)); \ NCRCMD(sc, NCRCMD_SETATN); \ sc->sc_flags |= NCR_ATN; \ sc->sc_msgpriq |= (m); \ } while (/* CONSTCOND */0) static void ncr53c9x_flushfifo(struct ncr53c9x_softc *sc) { NCR_LOCK_ASSERT(sc, MA_OWNED); NCR_TRACE(("[%s] ", __func__)); NCRCMD(sc, NCRCMD_FLUSH); if (sc->sc_phase == COMMAND_PHASE || sc->sc_phase == MESSAGE_OUT_PHASE) DELAY(2); } static int ncr53c9x_rdfifo(struct ncr53c9x_softc *sc, int how) { int i, n; uint8_t *ibuf; NCR_LOCK_ASSERT(sc, MA_OWNED); switch (how) { case NCR_RDFIFO_START: ibuf = sc->sc_imess; sc->sc_imlen = 0; break; case NCR_RDFIFO_CONTINUE: ibuf = sc->sc_imess + sc->sc_imlen; break; default: panic("%s: bad flag", __func__); /* NOTREACHED */ } /* * XXX buffer (sc_imess) size for message */ n = NCR_READ_REG(sc, NCR_FFLAG) & NCRFIFO_FF; if (sc->sc_rev == NCR_VARIANT_FAS366) { n *= 2; for (i = 0; i < n; i++) ibuf[i] = NCR_READ_REG(sc, NCR_FIFO); if (sc->sc_espstat2 & NCRFAS_STAT2_ISHUTTLE) { NCR_WRITE_REG(sc, NCR_FIFO, 0); ibuf[i++] = NCR_READ_REG(sc, NCR_FIFO); NCR_READ_REG(sc, NCR_FIFO); ncr53c9x_flushfifo(sc); } } else for (i = 0; i < n; i++) ibuf[i] = NCR_READ_REG(sc, NCR_FIFO); sc->sc_imlen += i; #if 0 #ifdef NCR53C9X_DEBUG NCR_TRACE(("\n[rdfifo %s (%d):", (how == NCR_RDFIFO_START) ? 
"start" : "cont", (int)sc->sc_imlen)); if ((ncr53c9x_debug & NCR_SHOWTRAC) != 0) { for (i = 0; i < sc->sc_imlen; i++) printf(" %02x", sc->sc_imess[i]); printf("]\n"); } #endif #endif return (sc->sc_imlen); } static void ncr53c9x_wrfifo(struct ncr53c9x_softc *sc, uint8_t *p, int len) { int i; NCR_LOCK_ASSERT(sc, MA_OWNED); #ifdef NCR53C9X_DEBUG NCR_MSGS(("[wrfifo(%d):", len)); if ((ncr53c9x_debug & NCR_SHOWMSGS) != 0) { for (i = 0; i < len; i++) printf(" %02x", p[i]); printf("]\n"); } #endif for (i = 0; i < len; i++) { NCR_WRITE_REG(sc, NCR_FIFO, p[i]); if (sc->sc_rev == NCR_VARIANT_FAS366) NCR_WRITE_REG(sc, NCR_FIFO, 0); } } static int ncr53c9x_reselect(struct ncr53c9x_softc *sc, int message, int tagtype, int tagid) { struct ncr53c9x_ecb *ecb = NULL; struct ncr53c9x_linfo *li; struct ncr53c9x_tinfo *ti; uint8_t lun, selid, target; NCR_LOCK_ASSERT(sc, MA_OWNED); if (sc->sc_rev == NCR_VARIANT_FAS366) target = sc->sc_selid; else { /* * The SCSI chip made a snapshot of the data bus * while the reselection was being negotiated. * This enables us to determine which target did * the reselect. */ selid = sc->sc_selid & ~(1 << sc->sc_id); if (selid & (selid - 1)) { device_printf(sc->sc_dev, "reselect with invalid " "selid %02x; sending DEVICE RESET\n", selid); goto reset; } target = ffs(selid) - 1; } lun = message & 0x07; /* * Search wait queue for disconnected command. * The list should be short, so I haven't bothered with * any more sophisticated structures than a simple * singly linked list. */ ti = &sc->sc_tinfo[target]; li = TINFO_LUN(ti, lun); /* * We can get as far as the LUN with the IDENTIFY * message. Check to see if we're running an * untagged command. Otherwise ack the IDENTIFY * and wait for a tag message. */ if (li != NULL) { if (li->untagged != NULL && li->busy) ecb = li->untagged; else if (tagtype != MSG_SIMPLE_Q_TAG) { /* Wait for tag to come by. */ sc->sc_state = NCR_IDENTIFIED; return (0); } else if (tagtype) ecb = li->queued[tagid]; } if (ecb == NULL) { device_printf(sc->sc_dev, "reselect from target %d lun %d " "tag %x:%x with no nexus; sending ABORT\n", target, lun, tagtype, tagid); goto abort; } /* Make this nexus active again. */ sc->sc_state = NCR_CONNECTED; sc->sc_nexus = ecb; ncr53c9x_setsync(sc, ti); if (ecb->flags & ECB_RESET) ncr53c9x_sched_msgout(SEND_DEV_RESET); else if (ecb->flags & ECB_ABORT) ncr53c9x_sched_msgout(SEND_ABORT); /* Do an implicit RESTORE POINTERS. */ sc->sc_dp = ecb->daddr; sc->sc_dleft = ecb->dleft; return (0); reset: ncr53c9x_sched_msgout(SEND_DEV_RESET); return (1); abort: ncr53c9x_sched_msgout(SEND_ABORT); return (1); } /* From NetBSD; these should go into CAM at some point. */ #define MSG_ISEXTENDED(m) ((m) == MSG_EXTENDED) #define MSG_IS1BYTE(m) \ ((!MSG_ISEXTENDED(m) && (m) < 0x20) || MSG_ISIDENTIFY(m)) #define MSG_IS2BYTE(m) (((m) & 0xf0) == 0x20) static inline int __verify_msg_format(uint8_t *p, int len) { if (len == 1 && MSG_IS1BYTE(p[0])) return (1); if (len == 2 && MSG_IS2BYTE(p[0])) return (1); if (len >= 3 && MSG_ISEXTENDED(p[0]) && len == p[1] + 2) return (1); return (0); } /* * Get an incoming message as initiator. * * The SCSI bus must already be in MESSAGE_IN_PHASE and there is a * byte in the FIFO. 
*/ static void ncr53c9x_msgin(struct ncr53c9x_softc *sc) { struct ncr53c9x_ecb *ecb; struct ncr53c9x_linfo *li; struct ncr53c9x_tinfo *ti; uint8_t *pb; int len, lun; NCR_LOCK_ASSERT(sc, MA_OWNED); NCR_TRACE(("[%s(curmsglen:%ld)] ", __func__, (long)sc->sc_imlen)); if (sc->sc_imlen == 0) { device_printf(sc->sc_dev, "msgin: no msg byte available\n"); return; } /* * Prepare for a new message. A message should (according * to the SCSI standard) be transmitted in one single * MESSAGE_IN_PHASE. If we have been in some other phase, * then this is a new message. */ if (sc->sc_prevphase != MESSAGE_IN_PHASE && sc->sc_state != NCR_RESELECTED) { device_printf(sc->sc_dev, "phase change, dropping message, " "prev %d, state %d\n", sc->sc_prevphase, sc->sc_state); sc->sc_flags &= ~NCR_DROP_MSGI; sc->sc_imlen = 0; } /* * If we're going to reject the message, don't bother storing * the incoming bytes. But still, we need to ACK them. */ if ((sc->sc_flags & NCR_DROP_MSGI) != 0) { NCRCMD(sc, NCRCMD_MSGOK); device_printf(sc->sc_dev, "<dropping msg byte %x>", sc->sc_imess[sc->sc_imlen]); return; } if (sc->sc_imlen >= NCR_MAX_MSG_LEN) { ncr53c9x_sched_msgout(SEND_REJECT); sc->sc_flags |= NCR_DROP_MSGI; } else { switch (sc->sc_state) { /* * If the received message is the first of a reselection, * the first byte is the selid and the message proper * follows it. */ case NCR_RESELECTED: pb = sc->sc_imess + 1; len = sc->sc_imlen - 1; break; default: pb = sc->sc_imess; len = sc->sc_imlen; } if (__verify_msg_format(pb, len)) goto gotit; } /* Acknowledge what we have so far. */ NCRCMD(sc, NCRCMD_MSGOK); return; gotit: NCR_MSGS(("gotmsg(%x) state %d", sc->sc_imess[0], sc->sc_state)); /* * We got a complete message, flush the imess. * XXX nobody uses imlen below. */ sc->sc_imlen = 0; /* * Now we should have a complete message (1 byte, 2 byte * and moderately long extended messages). We only handle * extended messages whose total length is shorter than * NCR_MAX_MSG_LEN. Longer messages will be amputated. */ switch (sc->sc_state) { case NCR_CONNECTED: ecb = sc->sc_nexus; ti = &sc->sc_tinfo[ecb->ccb->ccb_h.target_id]; switch (sc->sc_imess[0]) { case MSG_CMDCOMPLETE: NCR_MSGS(("cmdcomplete ")); if (sc->sc_dleft < 0) { xpt_print_path(ecb->ccb->ccb_h.path); printf("got %ld extra bytes\n", -(long)sc->sc_dleft); sc->sc_dleft = 0; } ecb->dleft = (ecb->flags & ECB_TENTATIVE_DONE) ? 0 : sc->sc_dleft; if ((ecb->flags & ECB_SENSE) == 0) ecb->ccb->csio.resid = ecb->dleft; sc->sc_state = NCR_CMDCOMPLETE; break; case MSG_MESSAGE_REJECT: NCR_MSGS(("msg reject (msgout=%x) ", sc->sc_msgout)); switch (sc->sc_msgout) { case SEND_TAG: /* * Target does not like tagged queuing. * - Flush the command queue * - Disable tagged queuing for the target * - Dequeue ecb from the queued array.
*/ device_printf(sc->sc_dev, "tagged queuing " "rejected: target %d\n", ecb->ccb->ccb_h.target_id); NCR_MSGS(("(rejected sent tag)")); NCRCMD(sc, NCRCMD_FLUSH); DELAY(1); ti->flags &= ~T_TAG; lun = ecb->ccb->ccb_h.target_lun; li = TINFO_LUN(ti, lun); if (ecb->tag[0] && li->queued[ecb->tag[1]] != NULL) { li->queued[ecb->tag[1]] = NULL; li->used--; } ecb->tag[0] = ecb->tag[1] = 0; li->untagged = ecb; li->busy = 1; break; case SEND_SDTR: device_printf(sc->sc_dev, "sync transfer " "rejected: target %d\n", ecb->ccb->ccb_h.target_id); ti->flags &= ~T_SDTRSENT; ti->curr.period = ti->goal.period = 0; ti->curr.offset = ti->goal.offset = 0; ncr53c9x_setsync(sc, ti); break; case SEND_WDTR: device_printf(sc->sc_dev, "wide transfer " "rejected: target %d\n", ecb->ccb->ccb_h.target_id); ti->flags &= ~T_WDTRSENT; ti->curr.width = ti->goal.width = MSG_EXT_WDTR_BUS_8_BIT; ncr53c9x_setsync(sc, ti); break; case SEND_INIT_DET_ERR: goto abort; } break; case MSG_NOOP: NCR_MSGS(("noop ")); break; case MSG_HEAD_OF_Q_TAG: case MSG_SIMPLE_Q_TAG: case MSG_ORDERED_Q_TAG: NCR_MSGS(("TAG %x:%x", sc->sc_imess[0], sc->sc_imess[1])); break; case MSG_DISCONNECT: NCR_MSGS(("disconnect ")); ti->dconns++; sc->sc_state = NCR_DISCONNECT; /* * Mark the fact that all bytes have moved. The * target may not bother to do a SAVE POINTERS * at this stage. This flag will set the residual * count to zero on MSG COMPLETE. */ if (sc->sc_dleft == 0) ecb->flags |= ECB_TENTATIVE_DONE; break; case MSG_SAVEDATAPOINTER: NCR_MSGS(("save datapointer ")); ecb->daddr = sc->sc_dp; ecb->dleft = sc->sc_dleft; break; case MSG_RESTOREPOINTERS: NCR_MSGS(("restore datapointer ")); sc->sc_dp = ecb->daddr; sc->sc_dleft = ecb->dleft; break; case MSG_IGN_WIDE_RESIDUE: NCR_MSGS(("ignore wide residue (%d bytes)", sc->sc_imess[1])); if (sc->sc_imess[1] != 1) { xpt_print_path(ecb->ccb->ccb_h.path); printf("unexpected MESSAGE IGNORE WIDE " "RESIDUE (%d bytes); sending REJECT\n", sc->sc_imess[1]); goto reject; } /* * If there was a last transfer of an even number of * bytes, wipe the "done" memory and adjust by one * byte (sc->sc_imess[1]). 
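 *
 * (For example, a target on a wide bus that had to pad the last
 * data transfer to an even byte count may send IGNORE WIDE RESIDUE
 * with a count of one; backing sc_dp/sc_dleft up by one byte below
 * makes that byte part of the transfer again.)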
*/ len = sc->sc_dleft - ecb->dleft; if (len != 0 && (len & 1) == 0) { ecb->flags &= ~ECB_TENTATIVE_DONE; sc->sc_dp = (char *)sc->sc_dp - 1; sc->sc_dleft--; } break; case MSG_EXTENDED: NCR_MSGS(("extended(%x) ", sc->sc_imess[2])); switch (sc->sc_imess[2]) { case MSG_EXT_SDTR: NCR_MSGS(("SDTR period %d, offset %d ", sc->sc_imess[3], sc->sc_imess[4])); if (sc->sc_imess[1] != 3) goto reject; ti->curr.period = sc->sc_imess[3]; ti->curr.offset = sc->sc_imess[4]; if (sc->sc_minsync == 0 || ti->curr.offset == 0 || ti->curr.period > 124) { #if 0 #ifdef NCR53C9X_DEBUG xpt_print_path(ecb->ccb->ccb_h.path); printf("async mode\n"); #endif #endif if ((ti->flags & T_SDTRSENT) == 0) { /* * target initiated negotiation */ ti->curr.offset = 0; ncr53c9x_sched_msgout( SEND_SDTR); } } else { ti->curr.period = ncr53c9x_cpb2stp(sc, ncr53c9x_stp2cpb(sc, ti->curr.period)); if ((ti->flags & T_SDTRSENT) == 0) { /* * target initiated negotiation */ if (ti->curr.period < sc->sc_minsync) ti->curr.period = sc->sc_minsync; if (ti->curr.offset > sc->sc_maxoffset) ti->curr.offset = sc->sc_maxoffset; ncr53c9x_sched_msgout( SEND_SDTR); } } ti->flags &= ~T_SDTRSENT; ti->goal.period = ti->curr.period; ti->goal.offset = ti->curr.offset; ncr53c9x_setsync(sc, ti); break; case MSG_EXT_WDTR: NCR_MSGS(("wide mode %d ", sc->sc_imess[3])); ti->curr.width = sc->sc_imess[3]; if (!(ti->flags & T_WDTRSENT)) /* * target initiated negotiation */ ncr53c9x_sched_msgout(SEND_WDTR); ti->flags &= ~T_WDTRSENT; ti->goal.width = ti->curr.width; ncr53c9x_setsync(sc, ti); break; default: xpt_print_path(ecb->ccb->ccb_h.path); printf("unrecognized MESSAGE EXTENDED 0x%x;" " sending REJECT\n", sc->sc_imess[2]); goto reject; } break; default: NCR_MSGS(("ident ")); xpt_print_path(ecb->ccb->ccb_h.path); printf("unrecognized MESSAGE 0x%x; sending REJECT\n", sc->sc_imess[0]); /* FALLTHROUGH */ reject: ncr53c9x_sched_msgout(SEND_REJECT); break; } break; case NCR_IDENTIFIED: /* * IDENTIFY message was received and queue tag is expected * now. */ if ((sc->sc_imess[0] != MSG_SIMPLE_Q_TAG) || (sc->sc_msgify == 0)) { device_printf(sc->sc_dev, "TAG reselect without " "IDENTIFY; MSG %x; sending DEVICE RESET\n", sc->sc_imess[0]); goto reset; } (void)ncr53c9x_reselect(sc, sc->sc_msgify, sc->sc_imess[0], sc->sc_imess[1]); break; case NCR_RESELECTED: if (MSG_ISIDENTIFY(sc->sc_imess[1])) sc->sc_msgify = sc->sc_imess[1]; else { device_printf(sc->sc_dev, "reselect without IDENTIFY;" " MSG %x; sending DEVICE RESET\n", sc->sc_imess[1]); goto reset; } (void)ncr53c9x_reselect(sc, sc->sc_msgify, 0, 0); break; default: device_printf(sc->sc_dev, "unexpected MESSAGE IN; " "sending DEVICE RESET\n"); /* FALLTHROUGH */ reset: ncr53c9x_sched_msgout(SEND_DEV_RESET); break; abort: ncr53c9x_sched_msgout(SEND_ABORT); } /* If we have more messages to send set ATN. */ if (sc->sc_msgpriq) { NCRCMD(sc, NCRCMD_SETATN); sc->sc_flags |= NCR_ATN; } /* Acknowledge last message byte. */ NCRCMD(sc, NCRCMD_MSGOK); /* Done, reset message pointer. */ sc->sc_flags &= ~NCR_DROP_MSGI; sc->sc_imlen = 0; } /* * Send the highest priority, scheduled message. */ static void ncr53c9x_msgout(struct ncr53c9x_softc *sc) { struct ncr53c9x_tinfo *ti; struct ncr53c9x_ecb *ecb; size_t size; int error; #ifdef NCR53C9X_DEBUG int i; #endif NCR_LOCK_ASSERT(sc, MA_OWNED); NCR_TRACE(("[%s(priq:%x, prevphase:%x)]", __func__, sc->sc_msgpriq, sc->sc_prevphase)); /* * XXX - the NCR_ATN flag is not in sync with the actual ATN * condition on the SCSI bus. 
The 53c9x chip * automatically turns off ATN before sending the * message byte. (See also the comment below in the * default case when picking out a message to send.) */ if (sc->sc_flags & NCR_ATN) { if (sc->sc_prevphase != MESSAGE_OUT_PHASE) { new: NCRCMD(sc, NCRCMD_FLUSH); #if 0 DELAY(1); #endif sc->sc_msgoutq = 0; sc->sc_omlen = 0; } } else { if (sc->sc_prevphase == MESSAGE_OUT_PHASE) { ncr53c9x_sched_msgout(sc->sc_msgoutq); goto new; } else device_printf(sc->sc_dev, "at line %d: unexpected " "MESSAGE OUT phase\n", __LINE__); } if (sc->sc_omlen == 0) { /* Pick up highest priority message. */ sc->sc_msgout = sc->sc_msgpriq & -sc->sc_msgpriq; sc->sc_msgoutq |= sc->sc_msgout; sc->sc_msgpriq &= ~sc->sc_msgout; sc->sc_omlen = 1; /* "Default" message len */ switch (sc->sc_msgout) { case SEND_SDTR: ecb = sc->sc_nexus; ti = &sc->sc_tinfo[ecb->ccb->ccb_h.target_id]; sc->sc_omess[0] = MSG_EXTENDED; sc->sc_omess[1] = MSG_EXT_SDTR_LEN; sc->sc_omess[2] = MSG_EXT_SDTR; sc->sc_omess[3] = ti->goal.period; sc->sc_omess[4] = ti->goal.offset; sc->sc_omlen = 5; break; case SEND_WDTR: ecb = sc->sc_nexus; ti = &sc->sc_tinfo[ecb->ccb->ccb_h.target_id]; sc->sc_omess[0] = MSG_EXTENDED; sc->sc_omess[1] = MSG_EXT_WDTR_LEN; sc->sc_omess[2] = MSG_EXT_WDTR; sc->sc_omess[3] = ti->goal.width; sc->sc_omlen = 4; break; case SEND_IDENTIFY: if (sc->sc_state != NCR_CONNECTED) device_printf(sc->sc_dev, "at line %d: no " "nexus\n", __LINE__); ecb = sc->sc_nexus; sc->sc_omess[0] = MSG_IDENTIFY(ecb->ccb->ccb_h.target_lun, 0); break; case SEND_TAG: if (sc->sc_state != NCR_CONNECTED) device_printf(sc->sc_dev, "at line %d: no " "nexus\n", __LINE__); ecb = sc->sc_nexus; sc->sc_omess[0] = ecb->tag[0]; sc->sc_omess[1] = ecb->tag[1]; sc->sc_omlen = 2; break; case SEND_DEV_RESET: sc->sc_flags |= NCR_ABORTING; sc->sc_omess[0] = MSG_BUS_DEV_RESET; ecb = sc->sc_nexus; ti = &sc->sc_tinfo[ecb->ccb->ccb_h.target_id]; ti->curr.period = 0; ti->curr.offset = 0; ti->curr.width = MSG_EXT_WDTR_BUS_8_BIT; break; case SEND_PARITY_ERROR: sc->sc_omess[0] = MSG_PARITY_ERROR; break; case SEND_ABORT: sc->sc_flags |= NCR_ABORTING; sc->sc_omess[0] = MSG_ABORT; break; case SEND_INIT_DET_ERR: sc->sc_omess[0] = MSG_INITIATOR_DET_ERR; break; case SEND_REJECT: sc->sc_omess[0] = MSG_MESSAGE_REJECT; break; default: /* * We normally do not get here, since the chip * automatically turns off ATN before the last * byte of a message is sent to the target. * However, if the target rejects our (multi-byte) * message early by switching to MSG IN phase * ATN remains on, so the target may return to * MSG OUT phase. If there are no scheduled messages * left we send a NO-OP. * * XXX - Note that this leaves no useful purpose for * the NCR_ATN flag. */ sc->sc_flags &= ~NCR_ATN; sc->sc_omess[0] = MSG_NOOP; } sc->sc_omp = sc->sc_omess; } #ifdef NCR53C9X_DEBUG if ((ncr53c9x_debug & NCR_SHOWMSGS) != 0) { NCR_MSGS(("<msgout:")); for (i = 0; i < sc->sc_omlen; i++) NCR_MSGS((" %02x", sc->sc_omess[i])); NCR_MSGS(("> ")); } #endif if (sc->sc_rev != NCR_VARIANT_FAS366) { /* (Re)send the message. */ size = ulmin(sc->sc_omlen, sc->sc_maxxfer); error = NCRDMA_SETUP(sc, &sc->sc_omp, &sc->sc_omlen, 0, &size); if (error != 0) goto cmd; /* Program the SCSI counter. */ NCR_SET_COUNT(sc, size); /* Load the count in and start the message-out transfer.
*/ NCRCMD(sc, NCRCMD_NOP | NCRCMD_DMA); NCRCMD(sc, NCRCMD_TRANS | NCRCMD_DMA); NCRDMA_GO(sc); return; } cmd: /* * XXX FIFO size */ sc->sc_cmdlen = 0; ncr53c9x_flushfifo(sc); ncr53c9x_wrfifo(sc, sc->sc_omp, sc->sc_omlen); NCRCMD(sc, NCRCMD_TRANS); } void ncr53c9x_intr(void *arg) { struct ncr53c9x_softc *sc = arg; if (!NCRDMA_ISINTR(sc)) return; NCR_LOCK(sc); ncr53c9x_intr1(sc); NCR_UNLOCK(sc); } /* * This is the most critical part of the driver, and has to know * how to deal with *all* error conditions and phases from the SCSI * bus. If there are no errors and the DMA was active, then call the * DMA pseudo-interrupt handler. If this returns 1, then that was it * and we can return from here without further processing. * * Most of this needs verifying. */ static void ncr53c9x_intr1(struct ncr53c9x_softc *sc) { struct ncr53c9x_ecb *ecb; struct ncr53c9x_linfo *li; struct ncr53c9x_tinfo *ti; struct timeval cur, wait; size_t size; int error, i, nfifo; uint8_t msg; NCR_LOCK_ASSERT(sc, MA_OWNED); NCR_INTS(("[ncr53c9x_intr: state %d]", sc->sc_state)); again: /* and what do the registers say... */ ncr53c9x_readregs(sc); /* * At the moment, only a SCSI Bus Reset or Illegal * Command are classed as errors. A disconnect is a * valid condition, and we let the code check if the * "NCR_BUSFREE_OK" flag was set before declaring it * an error. * * Also, the status register tells us about "Gross * Errors" and "Parity errors". Only the Gross Error * is really bad, and the parity errors are dealt * with later. * * TODO * If there are too many parity errors, go to slow * cable mode? */ if ((sc->sc_espintr & NCRINTR_SBR) != 0) { if ((NCR_READ_REG(sc, NCR_FFLAG) & NCRFIFO_FF) != 0) { NCRCMD(sc, NCRCMD_FLUSH); DELAY(1); } if (sc->sc_state != NCR_SBR) { device_printf(sc->sc_dev, "SCSI bus reset\n"); ncr53c9x_init(sc, 0); /* Restart everything. */ return; } #if 0 /*XXX*/ device_printf(sc->sc_dev, "<expected bus reset: " "[intr %x, stat %x, step %d]>\n", sc->sc_espintr, sc->sc_espstat, sc->sc_espstep); #endif if (sc->sc_nexus != NULL) panic("%s: nexus in reset state", device_get_nameunit(sc->sc_dev)); goto sched; } ecb = sc->sc_nexus; #define NCRINTR_ERR (NCRINTR_SBR | NCRINTR_ILL) if (sc->sc_espintr & NCRINTR_ERR || sc->sc_espstat & NCRSTAT_GE) { if ((sc->sc_espstat & NCRSTAT_GE) != 0) { /* Gross Error; no target? */ if (NCR_READ_REG(sc, NCR_FFLAG) & NCRFIFO_FF) { NCRCMD(sc, NCRCMD_FLUSH); DELAY(1); } if (sc->sc_state == NCR_CONNECTED || sc->sc_state == NCR_SELECTING) { ecb->ccb->ccb_h.status = CAM_SEL_TIMEOUT; ncr53c9x_done(sc, ecb); } return; } if ((sc->sc_espintr & NCRINTR_ILL) != 0) { if ((sc->sc_flags & NCR_EXPECT_ILLCMD) != 0) { /* * Eat away "Illegal command" interrupt * on an ESP100 caused by a re-selection * while we were trying to select * another target. */ #ifdef NCR53C9X_DEBUG device_printf(sc->sc_dev, "ESP100 work-around " "activated\n"); #endif sc->sc_flags &= ~NCR_EXPECT_ILLCMD; return; } /* Illegal command, out of sync? */ device_printf(sc->sc_dev, "illegal command: 0x%x " "(state %d, phase %x, prevphase %x)\n", sc->sc_lastcmd, sc->sc_state, sc->sc_phase, sc->sc_prevphase); if (NCR_READ_REG(sc, NCR_FFLAG) & NCRFIFO_FF) { NCRCMD(sc, NCRCMD_FLUSH); DELAY(1); } goto reset; } } sc->sc_flags &= ~NCR_EXPECT_ILLCMD; /* * Call if DMA is active. * * If DMA_INTR returns true, then maybe go 'round the loop * again in case there is no more DMA queued, but a phase * change is expected. */ if (NCRDMA_ISACTIVE(sc)) { if (NCRDMA_INTR(sc) == -1) { device_printf(sc->sc_dev, "DMA error; resetting\n"); goto reset; } /* If DMA active here, then go back to work...
*/ if (NCRDMA_ISACTIVE(sc)) return; if ((sc->sc_espstat & NCRSTAT_TC) == 0) { /* * DMA not completed. If we cannot find an * acceptable explanation, print a diagnostic. */ if (sc->sc_state == NCR_SELECTING) /* * This can happen if we are reselected * while using DMA to select a target. */ /*void*/; else if (sc->sc_prevphase == MESSAGE_OUT_PHASE) { /* * Our (multi-byte) message (e.g. SDTR) was * interrupted by the target to send * a MSG REJECT. * Print diagnostic if current phase * is not MESSAGE IN. */ if (sc->sc_phase != MESSAGE_IN_PHASE) device_printf(sc->sc_dev,"!TC on MSGOUT" " [intr %x, stat %x, step %d]" " prevphase %x, resid %lx\n", sc->sc_espintr, sc->sc_espstat, sc->sc_espstep, sc->sc_prevphase, (u_long)sc->sc_omlen); } else if (sc->sc_dleft == 0) { /* * The DMA operation was started for * a DATA transfer. Print a diagnostic * if the DMA counter and TC bit * appear to be out of sync. * * XXX This is fatal and usually means that * the DMA engine is hopelessly out of * sync with reality. A disk is likely * getting spammed at this point. */ device_printf(sc->sc_dev, "!TC on DATA XFER" " [intr %x, stat %x, step %d]" " prevphase %x, resid %x\n", sc->sc_espintr, sc->sc_espstat, sc->sc_espstep, sc->sc_prevphase, ecb ? ecb->dleft : -1); goto reset; } } } /* * Check for less serious errors. */ if ((sc->sc_espstat & NCRSTAT_PE) != 0) { device_printf(sc->sc_dev, "SCSI bus parity error\n"); if (sc->sc_prevphase == MESSAGE_IN_PHASE) ncr53c9x_sched_msgout(SEND_PARITY_ERROR); else ncr53c9x_sched_msgout(SEND_INIT_DET_ERR); } if ((sc->sc_espintr & NCRINTR_DIS) != 0) { sc->sc_msgify = 0; NCR_INTS(("<DISC [intr %x, stat %x, step %d]>", sc->sc_espintr,sc->sc_espstat,sc->sc_espstep)); if (NCR_READ_REG(sc, NCR_FFLAG) & NCRFIFO_FF) { NCRCMD(sc, NCRCMD_FLUSH); #if 0 DELAY(1); #endif } /* * This command must (apparently) be issued within * 250 ms of a disconnect. So here you are... */ NCRCMD(sc, NCRCMD_ENSEL); switch (sc->sc_state) { case NCR_RESELECTED: goto sched; case NCR_SELECTING: ecb->ccb->ccb_h.status = CAM_SEL_TIMEOUT; /* Selection timeout -- discard all LUNs if empty. */ ti = &sc->sc_tinfo[ecb->ccb->ccb_h.target_id]; li = LIST_FIRST(&ti->luns); while (li != NULL) { if (li->untagged == NULL && li->used == 0) { if (li->lun < NCR_NLUN) ti->lun[li->lun] = NULL; LIST_REMOVE(li, link); free(li, M_DEVBUF); /* * Restart the search at the beginning. */ li = LIST_FIRST(&ti->luns); continue; } li = LIST_NEXT(li, link); } goto finish; case NCR_CONNECTED: if (ecb != NULL) { ti = &sc->sc_tinfo[ecb->ccb->ccb_h.target_id]; if ((ti->flags & T_SDTRSENT) != 0) { xpt_print_path(ecb->ccb->ccb_h.path); printf("sync nego not completed!\n"); ti->flags &= ~T_SDTRSENT; ti->curr.period = ti->goal.period = 0; ti->curr.offset = ti->goal.offset = 0; ncr53c9x_setsync(sc, ti); } if ((ti->flags & T_WDTRSENT) != 0) { xpt_print_path(ecb->ccb->ccb_h.path); printf("wide nego not completed!\n"); ti->flags &= ~T_WDTRSENT; ti->curr.width = ti->goal.width = MSG_EXT_WDTR_BUS_8_BIT; ncr53c9x_setsync(sc, ti); } } /* It may be OK to disconnect. */ if ((sc->sc_flags & NCR_ABORTING) == 0) { /* * Section 5.1.1 of the SCSI 2 spec * suggests issuing a REQUEST SENSE * following an unexpected disconnect. * Some devices go into a contingent * allegiance condition when * disconnecting, and this is necessary * to clean up their state.
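 *
 * (Contingent allegiance example: a disk that hits a medium error
 * may simply disconnect and refuse further commands from us until
 * its sense data has been collected, so issuing REQUEST SENSE here
 * is the only way to make progress.)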
*/ device_printf(sc->sc_dev, "unexpected " "disconnect [state %d, intr %x, stat %x, " "phase(c %x, p %x)]; ", sc->sc_state, sc->sc_espintr, sc->sc_espstat, sc->sc_phase, sc->sc_prevphase); /* * XXX This will cause a chip reset and will * prevent us from finding out the real * problem with the device. However, it's * necessary until a way can be found to * safely cancel the DMA that is in * progress. */ if (1 || (ecb->flags & ECB_SENSE) != 0) { printf("resetting\n"); goto reset; } printf("sending REQUEST SENSE\n"); callout_stop(&ecb->ch); ncr53c9x_sense(sc, ecb); return; } else if (ecb != NULL && (ecb->flags & ECB_RESET) != 0) { ecb->ccb->ccb_h.status = CAM_REQ_CMP; goto finish; } ecb->ccb->ccb_h.status = CAM_CMD_TIMEOUT; goto finish; case NCR_DISCONNECT: sc->sc_nexus = NULL; goto sched; case NCR_CMDCOMPLETE: ecb->ccb->ccb_h.status = CAM_REQ_CMP; goto finish; } } switch (sc->sc_state) { case NCR_SBR: device_printf(sc->sc_dev, "waiting for Bus Reset to happen\n"); return; case NCR_RESELECTED: /* * We must be continuing a message? */ device_printf(sc->sc_dev, "unhandled reselect continuation, " "state %d, intr %02x\n", sc->sc_state, sc->sc_espintr); goto reset; break; case NCR_IDENTIFIED: ecb = sc->sc_nexus; if (sc->sc_phase != MESSAGE_IN_PHASE) { i = NCR_READ_REG(sc, NCR_FFLAG) & NCRFIFO_FF; /* * Things are seriously screwed up. * Pull the brakes, i.e. reset. */ device_printf(sc->sc_dev, "target didn't send tag: %d " "bytes in FIFO\n", i); /* Drain and display FIFO. */ while (i-- > 0) printf("[%d] ", NCR_READ_REG(sc, NCR_FIFO)); goto reset; } else goto msgin; case NCR_IDLE: case NCR_SELECTING: ecb = sc->sc_nexus; if (sc->sc_espintr & NCRINTR_RESEL) { sc->sc_msgpriq = sc->sc_msgout = sc->sc_msgoutq = 0; sc->sc_flags = 0; /* * If we're trying to select a * target ourselves, push our command * back into the ready list. */ if (sc->sc_state == NCR_SELECTING) { NCR_INTS(("backoff selector ")); callout_stop(&ecb->ch); ncr53c9x_dequeue(sc, ecb); TAILQ_INSERT_HEAD(&sc->ready_list, ecb, chain); ecb->flags |= ECB_READY; ecb = sc->sc_nexus = NULL; } sc->sc_state = NCR_RESELECTED; if (sc->sc_phase != MESSAGE_IN_PHASE) { /* * Things are seriously screwed up. * Pull the brakes, i.e. reset */ device_printf(sc->sc_dev, "target didn't " "identify\n"); goto reset; } /* * The C90 only inhibits FIFO writes until reselection * is complete instead of waiting until the interrupt * status register has been read. So, if the reselect * happens while we were entering command bytes (for * another target) some of those bytes can appear in * the FIFO here, after the interrupt is taken. * * To remedy this situation, pull the Selection ID * and Identify message from the FIFO directly, and * ignore any extraneous FIFO contents. Also, set * a flag that allows one Illegal Command Interrupt * to occur which the chip also generates as a result * of writing to the FIFO during a reselect. */ if (sc->sc_rev == NCR_VARIANT_ESP100) { nfifo = NCR_READ_REG(sc, NCR_FFLAG) & NCRFIFO_FF; sc->sc_imess[0] = NCR_READ_REG(sc, NCR_FIFO); sc->sc_imess[1] = NCR_READ_REG(sc, NCR_FIFO); sc->sc_imlen = 2; if (nfifo != 2) { /* Flush the rest. */ NCRCMD(sc, NCRCMD_FLUSH); } sc->sc_flags |= NCR_EXPECT_ILLCMD; if (nfifo > 2) nfifo = 2; /* We fixed it... */ } else nfifo = ncr53c9x_rdfifo(sc, NCR_RDFIFO_START); if (nfifo != 2) { device_printf(sc->sc_dev, "RESELECT: %d bytes " "in FIFO! 
[intr %x, stat %x, step %d, " "prevphase %x]\n", nfifo, sc->sc_espintr, sc->sc_espstat, sc->sc_espstep, sc->sc_prevphase); goto reset; } sc->sc_selid = sc->sc_imess[0]; NCR_INTS(("selid=%02x ", sc->sc_selid)); /* Handle IDENTIFY message. */ ncr53c9x_msgin(sc); if (sc->sc_state != NCR_CONNECTED && sc->sc_state != NCR_IDENTIFIED) { /* IDENTIFY fail?! */ device_printf(sc->sc_dev, "identify failed, " "state %d, intr %02x\n", sc->sc_state, sc->sc_espintr); goto reset; } goto shortcut; /* i.e. next phase expected soon */ } #define NCRINTR_DONE (NCRINTR_FC | NCRINTR_BS) if ((sc->sc_espintr & NCRINTR_DONE) == NCRINTR_DONE) { /* * Arbitration won; examine the `step' register * to determine how far the selection could progress. */ if (ecb == NULL) { /* * When doing path inquiry during boot * FAS100A trigger a stray interrupt which * we just ignore instead of panicing. */ if (sc->sc_state == NCR_IDLE && sc->sc_espstep == 0) return; panic("%s: no nexus", __func__); } ti = &sc->sc_tinfo[ecb->ccb->ccb_h.target_id]; switch (sc->sc_espstep) { case 0: /* * The target did not respond with a * message out phase - probably an old * device that doesn't recognize ATN. * Clear ATN and just continue, the * target should be in the command * phase. * XXX check for command phase? */ NCRCMD(sc, NCRCMD_RSTATN); break; case 1: if (ti->curr.period == ti->goal.period && ti->curr.offset == ti->goal.offset && ti->curr.width == ti->goal.width && ecb->tag[0] == 0) { device_printf(sc->sc_dev, "step 1 " "and no negotiation to perform " "or tag to send\n"); goto reset; } if (sc->sc_phase != MESSAGE_OUT_PHASE) { device_printf(sc->sc_dev, "step 1 " "but not in MESSAGE_OUT_PHASE\n"); goto reset; } sc->sc_prevphase = MESSAGE_OUT_PHASE; /* XXX */ if (ecb->flags & ECB_RESET) { /* * A DEVICE RESET was scheduled and * ATNS used. As SEND_DEV_RESET has * the highest priority, the target * will reset and disconnect and we * will end up in ncr53c9x_done w/o * negotiating or sending a TAG. So * we just break here in order to * avoid warnings about negotiation * not having completed. */ ncr53c9x_sched_msgout(SEND_DEV_RESET); break; } if (ti->curr.width != ti->goal.width) { ti->flags |= T_WDTRSENT | T_SDTRSENT; ncr53c9x_sched_msgout(SEND_WDTR | SEND_SDTR); } if (ti->curr.period != ti->goal.period || ti->curr.offset != ti->goal.offset) { ti->flags |= T_SDTRSENT; ncr53c9x_sched_msgout(SEND_SDTR); } if (ecb->tag[0] != 0) /* Could not do ATN3 so send TAG. */ ncr53c9x_sched_msgout(SEND_TAG); break; case 3: /* * Grr, this is supposed to mean * "target left command phase prematurely". * It seems to happen regularly when * sync mode is on. * Look at FIFO to see if command went out. * (Timing problems?) */ if (sc->sc_features & NCR_F_DMASELECT) { if (sc->sc_cmdlen == 0) { /* Hope for the best... */ break; } } else if ((NCR_READ_REG(sc, NCR_FFLAG) & NCRFIFO_FF) == 0) { /* Hope for the best... */ break; } xpt_print_path(ecb->ccb->ccb_h.path); printf("selection failed; %d left in FIFO " "[intr %x, stat %x, step %d]\n", NCR_READ_REG(sc, NCR_FFLAG) & NCRFIFO_FF, sc->sc_espintr, sc->sc_espstat, sc->sc_espstep); NCRCMD(sc, NCRCMD_FLUSH); ncr53c9x_sched_msgout(SEND_ABORT); return; case 2: /* Select stuck at Command Phase. */ NCRCMD(sc, NCRCMD_FLUSH); break; case 4: if (sc->sc_features & NCR_F_DMASELECT && sc->sc_cmdlen != 0) { xpt_print_path(ecb->ccb->ccb_h.path); printf("select; %lu left in DMA buffer " "[intr %x, stat %x, step %d]\n", (u_long)sc->sc_cmdlen, sc->sc_espintr, sc->sc_espstat, sc->sc_espstep); } /* So far, everything went fine. 
*/ break; } sc->sc_prevphase = INVALID_PHASE; /* ??? */ /* Do an implicit RESTORE POINTERS. */ sc->sc_dp = ecb->daddr; sc->sc_dleft = ecb->dleft; sc->sc_state = NCR_CONNECTED; break; } else { device_printf(sc->sc_dev, "unexpected status after " "select: [intr %x, stat %x, step %x]\n", sc->sc_espintr, sc->sc_espstat, sc->sc_espstep); NCRCMD(sc, NCRCMD_FLUSH); DELAY(1); goto reset; } if (sc->sc_state == NCR_IDLE) { device_printf(sc->sc_dev, "stray interrupt\n"); return; } break; case NCR_CONNECTED: if ((sc->sc_flags & NCR_ICCS) != 0) { /* "Initiate Command Complete Steps" in progress */ sc->sc_flags &= ~NCR_ICCS; if ((sc->sc_espintr & NCRINTR_DONE) == 0) { device_printf(sc->sc_dev, "ICCS: [intr %x, stat %x, step %x]\n", sc->sc_espintr, sc->sc_espstat, sc->sc_espstep); } ncr53c9x_rdfifo(sc, NCR_RDFIFO_START); if (sc->sc_imlen < 2) device_printf(sc->sc_dev, "can't get status, " "only %d bytes\n", (int)sc->sc_imlen); ecb->stat = sc->sc_imess[sc->sc_imlen - 2]; msg = sc->sc_imess[sc->sc_imlen - 1]; NCR_PHASE(("<stat:(%x,%x)> ", ecb->stat, msg)); if (msg == MSG_CMDCOMPLETE) { ecb->dleft = (ecb->flags & ECB_TENTATIVE_DONE) ? 0 : sc->sc_dleft; if ((ecb->flags & ECB_SENSE) == 0) ecb->ccb->csio.resid = ecb->dleft; sc->sc_state = NCR_CMDCOMPLETE; } else device_printf(sc->sc_dev, "STATUS_PHASE: " "msg %d\n", msg); sc->sc_imlen = 0; NCRCMD(sc, NCRCMD_MSGOK); goto shortcut; /* i.e. wait for disconnect */ } break; default: device_printf(sc->sc_dev, "invalid state: %d [intr %x, " "phase(c %x, p %x)]\n", sc->sc_state, sc->sc_espintr, sc->sc_phase, sc->sc_prevphase); goto reset; } /* * Driver is now in state NCR_CONNECTED, i.e. we * have a current command working the SCSI bus. */ if (sc->sc_state != NCR_CONNECTED || ecb == NULL) panic("%s: no nexus", __func__); switch (sc->sc_phase) { case MESSAGE_OUT_PHASE: NCR_PHASE(("MESSAGE_OUT_PHASE ")); ncr53c9x_msgout(sc); sc->sc_prevphase = MESSAGE_OUT_PHASE; break; case MESSAGE_IN_PHASE: msgin: NCR_PHASE(("MESSAGE_IN_PHASE ")); if ((sc->sc_espintr & NCRINTR_BS) != 0) { if ((sc->sc_rev != NCR_VARIANT_FAS366) || (sc->sc_espstat2 & NCRFAS_STAT2_EMPTY) == 0) { NCRCMD(sc, NCRCMD_FLUSH); } sc->sc_flags |= NCR_WAITI; NCRCMD(sc, NCRCMD_TRANS); } else if ((sc->sc_espintr & NCRINTR_FC) != 0) { if ((sc->sc_flags & NCR_WAITI) == 0) { device_printf(sc->sc_dev, "MSGIN: unexpected " "FC bit: [intr %x, stat %x, step %x]\n", sc->sc_espintr, sc->sc_espstat, sc->sc_espstep); } sc->sc_flags &= ~NCR_WAITI; ncr53c9x_rdfifo(sc, (sc->sc_prevphase == sc->sc_phase) ? NCR_RDFIFO_CONTINUE : NCR_RDFIFO_START); ncr53c9x_msgin(sc); } else device_printf(sc->sc_dev, "MSGIN: weird bits: " "[intr %x, stat %x, step %x]\n", sc->sc_espintr, sc->sc_espstat, sc->sc_espstep); sc->sc_prevphase = MESSAGE_IN_PHASE; goto shortcut; /* i.e. expect data to be ready */ case COMMAND_PHASE: /* * Send the command block. Normally we don't see this * phase because the SEL_ATN command takes care of * all this. However, we end up here if either the * target or we wanted to exchange some more messages * first (e.g. to start negotiations). */ NCR_PHASE(("COMMAND_PHASE 0x%02x (%d) ", ecb->cmd.cmd.opcode, ecb->clen)); if (NCR_READ_REG(sc, NCR_FFLAG) & NCRFIFO_FF) { NCRCMD(sc, NCRCMD_FLUSH); #if 0 DELAY(1); #endif } /* * If we have more messages to send, e.g. WDTR or SDTR * after we've sent a TAG, set ATN so we'll go back to * MESSAGE_OUT_PHASE. */ if (sc->sc_msgpriq) { NCRCMD(sc, NCRCMD_SETATN); sc->sc_flags |= NCR_ATN; } if (sc->sc_features & NCR_F_DMASELECT) { /* Setup DMA transfer for command.
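For readers unfamiliar with the ICCS convention used above: the chip's `Initiate Command Complete Steps' command leaves the SCSI status byte followed by one message byte in the FIFO, and the driver takes the last two bytes it read (cf. sc_imess/sc_imlen). A standalone toy model, with invented names:

#include <stddef.h>
#include <stdint.h>

#define MSG_CMDCOMPLETE 0x00    /* SCSI COMMAND COMPLETE message byte */

static int
iccs_complete(const uint8_t *fifo, size_t len, uint8_t *status)
{
    if (len < 2)
        return (-1);    /* the driver prints "can't get status" */
    *status = fifo[len - 2];
    return (fifo[len - 1] == MSG_CMDCOMPLETE ? 0 : -1);
}

int
main(void)
{
    uint8_t fifo[2] = { 0x00, MSG_CMDCOMPLETE };    /* GOOD + complete */
    uint8_t status;

    return (iccs_complete(fifo, 2, &status) == 0 ? 0 : 1);
}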
*/ size = ecb->clen; sc->sc_cmdlen = size; sc->sc_cmdp = (void *)&ecb->cmd.cmd; error = NCRDMA_SETUP(sc, &sc->sc_cmdp, &sc->sc_cmdlen, 0, &size); if (error != 0) goto cmd; /* Program the SCSI counter. */ NCR_SET_COUNT(sc, size); /* Load the count in. */ NCRCMD(sc, NCRCMD_NOP | NCRCMD_DMA); /* Start the command transfer. */ NCRCMD(sc, NCRCMD_TRANS | NCRCMD_DMA); NCRDMA_GO(sc); sc->sc_prevphase = COMMAND_PHASE; break; } cmd: sc->sc_cmdlen = 0; ncr53c9x_wrfifo(sc, (uint8_t *)&ecb->cmd.cmd, ecb->clen); NCRCMD(sc, NCRCMD_TRANS); sc->sc_prevphase = COMMAND_PHASE; break; case DATA_OUT_PHASE: NCR_PHASE(("DATA_OUT_PHASE [%ld] ", (long)sc->sc_dleft)); sc->sc_prevphase = DATA_OUT_PHASE; NCRCMD(sc, NCRCMD_FLUSH); size = ulmin(sc->sc_dleft, sc->sc_maxxfer); error = NCRDMA_SETUP(sc, &sc->sc_dp, &sc->sc_dleft, 0, &size); goto setup_xfer; case DATA_IN_PHASE: NCR_PHASE(("DATA_IN_PHASE ")); sc->sc_prevphase = DATA_IN_PHASE; if (sc->sc_rev == NCR_VARIANT_ESP100) NCRCMD(sc, NCRCMD_FLUSH); size = ulmin(sc->sc_dleft, sc->sc_maxxfer); error = NCRDMA_SETUP(sc, &sc->sc_dp, &sc->sc_dleft, 1, &size); setup_xfer: if (error != 0) { switch (error) { case EFBIG: ecb->ccb->ccb_h.status |= CAM_REQ_TOO_BIG; break; case EINPROGRESS: panic("%s: cannot deal with deferred DMA", __func__); case EINVAL: ecb->ccb->ccb_h.status |= CAM_REQ_INVALID; break; case ENOMEM: ecb->ccb->ccb_h.status |= CAM_REQUEUE_REQ; break; default: ecb->ccb->ccb_h.status |= CAM_REQ_CMP_ERR; } goto finish; } /* Target returned to data phase: wipe "done" memory. */ ecb->flags &= ~ECB_TENTATIVE_DONE; /* Program the SCSI counter. */ NCR_SET_COUNT(sc, size); /* Load the count in. */ NCRCMD(sc, NCRCMD_NOP | NCRCMD_DMA); /* * Note that if `size' is 0, we've already transceived * all the bytes we want but we're still in DATA PHASE. * Apparently, the device needs padding. Also, a * transfer size of 0 means "maximum" to the chip * DMA logic. */ NCRCMD(sc, (size == 0 ? NCRCMD_TRPAD : NCRCMD_TRANS) | NCRCMD_DMA); NCRDMA_GO(sc); return; case STATUS_PHASE: NCR_PHASE(("STATUS_PHASE ")); sc->sc_flags |= NCR_ICCS; NCRCMD(sc, NCRCMD_ICCS); sc->sc_prevphase = STATUS_PHASE; goto shortcut; /* i.e. expect status results soon */ case INVALID_PHASE: break; default: device_printf(sc->sc_dev, "unexpected bus phase; resetting\n"); goto reset; } return; reset: ncr53c9x_init(sc, 1); return; finish: ncr53c9x_done(sc, ecb); return; sched: sc->sc_state = NCR_IDLE; ncr53c9x_sched(sc); return; shortcut: /* * The idea is that many of the SCSI operations take very little * time, and going away and getting interrupted is too high an * overhead to pay. For example, selecting, sending a message * and command and then doing some work can be done in one "pass". * * The delay is a heuristic. It is 2 when at 20 MHz, 2 at 25 MHz and * 1 at 40 MHz. This needs testing. */ microtime(&wait); wait.tv_usec += 50 / sc->sc_freq; if (wait.tv_usec > 1000000) { wait.tv_sec++; wait.tv_usec -= 1000000; } do { if (NCRDMA_ISINTR(sc)) goto again; microtime(&cur); } while (cur.tv_sec <= wait.tv_sec && cur.tv_usec <= wait.tv_usec); } static void ncr53c9x_abort(struct ncr53c9x_softc *sc, struct ncr53c9x_ecb *ecb) { NCR_LOCK_ASSERT(sc, MA_OWNED); /* 2 secs for the abort */ ecb->timeout = NCR_ABORT_TIMEOUT; ecb->flags |= ECB_ABORT; if (ecb == sc->sc_nexus) { /* * If we're still selecting, the message will be scheduled * after selection is complete. */ if (sc->sc_state == NCR_CONNECTED) ncr53c9x_sched_msgout(SEND_ABORT); /* * Reschedule callout. 
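The `shortcut' label above implements a bounded busy-wait: poll briefly for the next interrupt condition instead of paying interrupt latency for back-to-back SCSI phases. Below is a runnable userspace analogue, with pending_event() standing in for NCRDMA_ISINTR(). Note that the driver's own exit test compares tv_sec and tv_usec independently, so the wait can end early across a second boundary; that is harmless for a heuristic, but the sketch uses timercmp() instead.

#include <stdio.h>
#include <sys/time.h>

static int
pending_event(void)
{
    return (0);             /* stand-in for NCRDMA_ISINTR() */
}

/*
 * Poll for roughly 50/freq microseconds before giving up and letting
 * the interrupt deliver the event, mirroring the driver's heuristic
 * (~2us at 20-25MHz, ~1us at 40MHz).
 */
static int
poll_shortcut(int freq_mhz)
{
    struct timeval cur, wait;

    gettimeofday(&wait, NULL);
    wait.tv_usec += 50 / freq_mhz;
    if (wait.tv_usec > 1000000) {
        wait.tv_sec++;
        wait.tv_usec -= 1000000;
    }
    do {
        if (pending_event())
            return (1);     /* handle it now, skip the interrupt */
        gettimeofday(&cur, NULL);
    } while (!timercmp(&cur, &wait, >));
    return (0);             /* give up; wait for the interrupt */
}

int
main(void)
{
    printf("event seen: %d\n", poll_shortcut(25));
    return (0);
}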
*/ callout_reset(&ecb->ch, mstohz(ecb->timeout), ncr53c9x_callout, ecb); } else { /* * Just leave the command where it is. * XXX - what choice do we have but to reset the SCSI * eventually? */ if (sc->sc_state == NCR_IDLE) ncr53c9x_sched(sc); } } static void ncr53c9x_callout(void *arg) { struct ncr53c9x_ecb *ecb = arg; union ccb *ccb = ecb->ccb; struct ncr53c9x_softc *sc = ecb->sc; struct ncr53c9x_tinfo *ti; NCR_LOCK_ASSERT(sc, MA_OWNED); ti = &sc->sc_tinfo[ccb->ccb_h.target_id]; xpt_print_path(ccb->ccb_h.path); device_printf(sc->sc_dev, "timed out [ecb %p (flags 0x%x, dleft %x, " "stat %x)], <state %d, nexus %p, phase(l %x, c %x, p %x), " "resid %lx, msg(q %x, o %x) %s>", ecb, ecb->flags, ecb->dleft, ecb->stat, sc->sc_state, sc->sc_nexus, NCR_READ_REG(sc, NCR_STAT), sc->sc_phase, sc->sc_prevphase, (long)sc->sc_dleft, sc->sc_msgpriq, sc->sc_msgout, NCRDMA_ISACTIVE(sc) ? "DMA active" : ""); #if defined(NCR53C9X_DEBUG) && NCR53C9X_DEBUG > 1 printf("TRACE: %s.", ecb->trace); #endif if (ecb->flags & ECB_ABORT) { /* Abort timed out. */ printf(" AGAIN\n"); ncr53c9x_init(sc, 1); } else { /* Abort the operation that has timed out. */ printf("\n"); ccb->ccb_h.status = CAM_CMD_TIMEOUT; ncr53c9x_abort(sc, ecb); /* Disable sync mode if stuck in a data phase. */ if (ecb == sc->sc_nexus && ti->curr.offset != 0 && (sc->sc_phase & (MSGI | CDI)) == 0) { /* XXX ASYNC CALLBACK! */ ti->goal.offset = 0; xpt_print_path(ccb->ccb_h.path); printf("sync negotiation disabled\n"); } } } static void ncr53c9x_watch(void *arg) { struct ncr53c9x_softc *sc = arg; struct ncr53c9x_linfo *li; struct ncr53c9x_tinfo *ti; time_t old; int t; NCR_LOCK_ASSERT(sc, MA_OWNED); /* Delete any structures that have not been used in 10min. */ old = time_second - (10 * 60); for (t = 0; t < sc->sc_ntarg; t++) { ti = &sc->sc_tinfo[t]; li = LIST_FIRST(&ti->luns); while (li) { if (li->last_used < old && li->untagged == NULL && li->used == 0) { if (li->lun < NCR_NLUN) ti->lun[li->lun] = NULL; LIST_REMOVE(li, link); free(li, M_DEVBUF); /* Restart the search at the beginning. */ li = LIST_FIRST(&ti->luns); continue; } li = LIST_NEXT(li, link); } } callout_reset(&sc->sc_watchdog, 60 * hz, ncr53c9x_watch, sc); } diff --git a/sys/dev/iir/iir_pci.c b/sys/dev/iir/iir_pci.c index bd7f15b57c4a..51ed27584a93 100644 --- a/sys/dev/iir/iir_pci.c +++ b/sys/dev/iir/iir_pci.c @@ -1,459 +1,460 @@ /*- * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2000-03 ICP vortex GmbH * Copyright (c) 2002-03 Intel Corporation * Copyright (c) 2003 Adaptec Inc. * All Rights Reserved * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions, and the following disclaimer, * without modification, immediately at the beginning of the file. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. The name of the author may not be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * iir_pci.c: PCI Bus Attachment for Intel Integrated RAID Controller driver * * Written by: Achim Leubner * Fixes/Additions: Boji Tony Kannanthanam * * TODO: */ /* #include "opt_iir.h" */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* Mapping registers for various areas */ #define PCI_DPMEM PCIR_BAR(0) /* Product numbers for Fibre-Channel are greater than or equal to 0x200 */ #define GDT_PCI_PRODUCT_FC 0x200 /* PCI SRAM structure */ #define GDT_MAGIC 0x00 /* u_int32_t, controller ID from BIOS */ #define GDT_NEED_DEINIT 0x04 /* u_int16_t, switch between BIOS/driver */ #define GDT_SWITCH_SUPPORT 0x06 /* u_int8_t, see GDT_NEED_DEINIT */ #define GDT_OS_USED 0x10 /* u_int8_t [16], OS code per service */ #define GDT_FW_MAGIC 0x3c /* u_int8_t, controller ID from firmware */ #define GDT_SRAM_SZ 0x40 /* DPRAM PCI controllers */ #define GDT_DPR_IF 0x00 /* interface area */ #define GDT_6SR (0xff0 - GDT_SRAM_SZ) #define GDT_SEMA1 0xff1 /* volatile u_int8_t, command semaphore */ #define GDT_IRQEN 0xff5 /* u_int8_t, board interrupts enable */ #define GDT_EVENT 0xff8 /* u_int8_t, release event */ #define GDT_IRQDEL 0xffc /* u_int8_t, acknowledge board interrupt */ #define GDT_DPRAM_SZ 0x1000 /* PLX register structure (new PCI controllers) */ #define GDT_CFG_REG 0x00 /* u_int8_t, DPRAM cfg.
(2: < 1MB, 0: any) */ #define GDT_SEMA0_REG 0x40 /* volatile u_int8_t, command semaphore */ #define GDT_SEMA1_REG 0x41 /* volatile u_int8_t, status semaphore */ #define GDT_PLX_STATUS 0x44 /* volatile u_int16_t, command status */ #define GDT_PLX_SERVICE 0x46 /* u_int16_t, service */ #define GDT_PLX_INFO 0x48 /* u_int32_t [2], additional info */ #define GDT_LDOOR_REG 0x60 /* u_int8_t, PCI to local doorbell */ #define GDT_EDOOR_REG 0x64 /* volatile u_int8_t, local to PCI doorbell */ #define GDT_CONTROL0 0x68 /* u_int8_t, control0 register (unused) */ #define GDT_CONTROL1 0x69 /* u_int8_t, board interrupts enable */ #define GDT_PLX_SZ 0x80 /* DPRAM new PCI controllers */ #define GDT_IC 0x00 /* interface */ #define GDT_PCINEW_6SR (0x4000 - GDT_SRAM_SZ) /* SRAM structure */ #define GDT_PCINEW_SZ 0x4000 /* i960 register structure (PCI MPR controllers) */ #define GDT_MPR_SEMA0 0x10 /* volatile u_int8_t, command semaphore */ #define GDT_MPR_SEMA1 0x12 /* volatile u_int8_t, status semaphore */ #define GDT_MPR_STATUS 0x14 /* volatile u_int16_t, command status */ #define GDT_MPR_SERVICE 0x16 /* u_int16_t, service */ #define GDT_MPR_INFO 0x18 /* u_int32_t [2], additional info */ #define GDT_MPR_LDOOR 0x20 /* u_int8_t, PCI to local doorbell */ #define GDT_MPR_EDOOR 0x2c /* volatile u_int8_t, local to PCI doorbell */ #define GDT_EDOOR_EN 0x34 /* u_int8_t, board interrupts enable */ #define GDT_SEVERITY 0xefc /* u_int8_t, event severity */ #define GDT_EVT_BUF 0xf00 /* u_int8_t [256], event buffer */ #define GDT_I960_SZ 0x1000 /* DPRAM PCI MPR controllers */ #define GDT_I960R 0x00 /* 4KB i960 registers */ #define GDT_MPR_IC GDT_I960_SZ /* i960 register area */ #define GDT_MPR_6SR (GDT_I960_SZ + 0x3000 - GDT_SRAM_SZ) /* DPRAM struct. */ #define GDT_MPR_SZ (0x3000 - GDT_SRAM_SZ) static int iir_pci_probe(device_t dev); static int iir_pci_attach(device_t dev); void gdt_pci_enable_intr(struct gdt_softc *); void gdt_mpr_copy_cmd(struct gdt_softc *, struct gdt_ccb *); u_int8_t gdt_mpr_get_status(struct gdt_softc *); void gdt_mpr_intr(struct gdt_softc *, struct gdt_intr_ctx *); void gdt_mpr_release_event(struct gdt_softc *); void gdt_mpr_set_sema0(struct gdt_softc *); int gdt_mpr_test_busy(struct gdt_softc *); static device_method_t iir_pci_methods[] = { /* Device interface */ DEVMETHOD(device_probe, iir_pci_probe), DEVMETHOD(device_attach, iir_pci_attach), { 0, 0} }; static driver_t iir_pci_driver = { "iir", iir_pci_methods, sizeof(struct gdt_softc) }; static devclass_t iir_devclass; DRIVER_MODULE(iir, pci, iir_pci_driver, iir_devclass, 0, 0); MODULE_DEPEND(iir, pci, 1, 1, 1); MODULE_DEPEND(iir, cam, 1, 1, 1); static int iir_pci_probe(device_t dev) { if (pci_get_vendor(dev) == INTEL_VENDOR_ID_IIR && pci_get_device(dev) == INTEL_DEVICE_ID_IIR) { device_set_desc(dev, "Intel Integrated RAID Controller"); return (BUS_PROBE_DEFAULT); } if (pci_get_vendor(dev) == GDT_VENDOR_ID && ((pci_get_device(dev) >= GDT_DEVICE_ID_MIN && pci_get_device(dev) <= GDT_DEVICE_ID_MAX) || pci_get_device(dev) == GDT_DEVICE_ID_NEWRX)) { device_set_desc(dev, "ICP Disk Array Controller"); return (BUS_PROBE_DEFAULT); } return (ENXIO); } static int iir_pci_attach(device_t dev) { struct gdt_softc *gdt; struct resource *irq = NULL; int retries, rid, error = 0; void *ih; u_int8_t protocol; gdt = device_get_softc(dev); mtx_init(&gdt->sc_lock, "iir", NULL, MTX_DEF); /* map DPMEM */ rid = PCI_DPMEM; gdt->sc_dpmem = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid, RF_ACTIVE); if (gdt->sc_dpmem == NULL) { device_printf(dev, "can't allocate register
resources\n"); error = ENOMEM; goto err; } /* get IRQ */ rid = 0; irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_ACTIVE | RF_SHAREABLE); if (irq == NULL) { device_printf(dev, "can't find IRQ value\n"); error = ENOMEM; goto err; } gdt->sc_devnode = dev; gdt->sc_init_level = 0; gdt->sc_hanum = device_get_unit(dev); gdt->sc_bus = pci_get_bus(dev); gdt->sc_slot = pci_get_slot(dev); gdt->sc_vendor = pci_get_vendor(dev); gdt->sc_device = pci_get_device(dev); gdt->sc_subdevice = pci_get_subdevice(dev); gdt->sc_class = GDT_MPR; /* no FC ctr. if (gdt->sc_device >= GDT_PCI_PRODUCT_FC) gdt->sc_class |= GDT_FC; */ /* initialize RP controller */ /* check and reset interface area */ bus_write_4(gdt->sc_dpmem, GDT_MPR_IC, htole32(GDT_MPR_MAGIC)); if (bus_read_4(gdt->sc_dpmem, GDT_MPR_IC) != htole32(GDT_MPR_MAGIC)) { device_printf(dev, "cannot access DPMEM at 0x%jx (shadowed?)\n", rman_get_start(gdt->sc_dpmem)); error = ENXIO; goto err; } bus_set_region_4(gdt->sc_dpmem, GDT_I960_SZ, htole32(0), GDT_MPR_SZ >> 2); /* Disable everything */ bus_write_1(gdt->sc_dpmem, GDT_EDOOR_EN, bus_read_1(gdt->sc_dpmem, GDT_EDOOR_EN) | 4); bus_write_1(gdt->sc_dpmem, GDT_MPR_EDOOR, 0xff); bus_write_1(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_STATUS, 0); bus_write_1(gdt->sc_dpmem, GDT_MPR_IC + GDT_CMD_INDEX, 0); bus_write_4(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_INFO, htole32(rman_get_start(gdt->sc_dpmem))); bus_write_1(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_CMD_INDX, 0xff); bus_write_1(gdt->sc_dpmem, GDT_MPR_LDOOR, 1); DELAY(20); retries = GDT_RETRIES; while (bus_read_1(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_STATUS) != 0xff) { if (--retries == 0) { device_printf(dev, "DEINIT failed\n"); error = ENXIO; goto err; } DELAY(1); } protocol = (uint8_t)le32toh(bus_read_4(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_INFO)); bus_write_1(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_STATUS, 0); if (protocol != GDT_PROTOCOL_VERSION) { device_printf(dev, "unsupported protocol %d\n", protocol); error = ENXIO; goto err; } /* special command to controller BIOS */ bus_write_4(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_INFO, htole32(0)); bus_write_4(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_INFO + sizeof (u_int32_t), htole32(0)); bus_write_4(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_INFO + 2 * sizeof (u_int32_t), htole32(1)); bus_write_4(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_INFO + 3 * sizeof (u_int32_t), htole32(0)); bus_write_1(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_CMD_INDX, 0xfe); bus_write_1(gdt->sc_dpmem, GDT_MPR_LDOOR, 1); DELAY(20); retries = GDT_RETRIES; while (bus_read_1(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_STATUS) != 0xfe) { if (--retries == 0) { device_printf(dev, "initialization error\n"); error = ENXIO; goto err; } DELAY(1); } bus_write_1(gdt->sc_dpmem, GDT_MPR_IC + GDT_S_STATUS, 0); gdt->sc_ic_all_size = GDT_MPR_SZ; gdt->sc_copy_cmd = gdt_mpr_copy_cmd; gdt->sc_get_status = gdt_mpr_get_status; gdt->sc_intr = gdt_mpr_intr; gdt->sc_release_event = gdt_mpr_release_event; gdt->sc_set_sema0 = gdt_mpr_set_sema0; gdt->sc_test_busy = gdt_mpr_test_busy; /* Allocate a dmatag representing the capabilities of this attachment */ if (bus_dma_tag_create(/*parent*/bus_get_dma_tag(dev), /*alignemnt*/1, /*boundary*/0, /*lowaddr*/BUS_SPACE_MAXADDR_32BIT, /*highaddr*/BUS_SPACE_MAXADDR, /*filter*/NULL, /*filterarg*/NULL, /*maxsize*/BUS_SPACE_MAXSIZE_32BIT, /*nsegments*/BUS_SPACE_UNRESTRICTED, /*maxsegsz*/BUS_SPACE_MAXSIZE_32BIT, /*flags*/0, /*lockfunc*/busdma_lock_mutex, /*lockarg*/&gdt->sc_lock, &gdt->sc_parent_dmat) != 0) { error = ENXIO; goto err; } gdt->sc_init_level++; if (iir_init(gdt) != 0) { iir_free(gdt); error 
= ENXIO; goto err; } /* Register with the XPT */ iir_attach(gdt); /* associate interrupt handler */ if (bus_setup_intr(dev, irq, INTR_TYPE_CAM | INTR_MPSAFE, NULL, iir_intr, gdt, &ih )) { device_printf(dev, "Unable to register interrupt handler\n"); error = ENXIO; goto err; } gdt_pci_enable_intr(gdt); + gone_in_dev(dev, 14, "iir(4) removed"); return (0); err: if (irq) bus_release_resource( dev, SYS_RES_IRQ, 0, irq ); if (gdt->sc_dpmem) bus_release_resource( dev, SYS_RES_MEMORY, rid, gdt->sc_dpmem ); mtx_destroy(&gdt->sc_lock); return (error); } /* Enable interrupts */ void gdt_pci_enable_intr(struct gdt_softc *gdt) { GDT_DPRINTF(GDT_D_INTR, ("gdt_pci_enable_intr(%p) ", gdt)); switch(GDT_CLASS(gdt)) { case GDT_MPR: bus_write_1(gdt->sc_dpmem, GDT_MPR_EDOOR, 0xff); bus_write_1(gdt->sc_dpmem, GDT_EDOOR_EN, bus_read_1(gdt->sc_dpmem, GDT_EDOOR_EN) & ~4); break; } } /* * MPR PCI controller-specific functions */ void gdt_mpr_copy_cmd(struct gdt_softc *gdt, struct gdt_ccb *gccb) { u_int16_t cp_count = roundup(gccb->gc_cmd_len, sizeof (u_int32_t)); u_int16_t dp_offset = gdt->sc_cmd_off; u_int16_t cmd_no = gdt->sc_cmd_cnt++; GDT_DPRINTF(GDT_D_CMD, ("gdt_mpr_copy_cmd(%p) ", gdt)); gdt->sc_cmd_off += cp_count; bus_write_region_4(gdt->sc_dpmem, GDT_MPR_IC + GDT_DPR_CMD + dp_offset, (u_int32_t *)gccb->gc_cmd, cp_count >> 2); bus_write_2(gdt->sc_dpmem, GDT_MPR_IC + GDT_COMM_QUEUE + cmd_no * GDT_COMM_Q_SZ + GDT_OFFSET, htole16(GDT_DPMEM_COMMAND_OFFSET + dp_offset)); bus_write_2(gdt->sc_dpmem, GDT_MPR_IC + GDT_COMM_QUEUE + cmd_no * GDT_COMM_Q_SZ + GDT_SERV_ID, htole16(gccb->gc_service)); } u_int8_t gdt_mpr_get_status(struct gdt_softc *gdt) { GDT_DPRINTF(GDT_D_MISC, ("gdt_mpr_get_status(%p) ", gdt)); return bus_read_1(gdt->sc_dpmem, GDT_MPR_EDOOR); } void gdt_mpr_intr(struct gdt_softc *gdt, struct gdt_intr_ctx *ctx) { int i; GDT_DPRINTF(GDT_D_INTR, ("gdt_mpr_intr(%p) ", gdt)); bus_write_1(gdt->sc_dpmem, GDT_MPR_EDOOR, 0xff); if (ctx->istatus & 0x80) { /* error flag */ ctx->istatus &= ~0x80; ctx->cmd_status = bus_read_2(gdt->sc_dpmem, GDT_MPR_STATUS); } else /* no error */ ctx->cmd_status = GDT_S_OK; ctx->info = bus_read_4(gdt->sc_dpmem, GDT_MPR_INFO); ctx->service = bus_read_2(gdt->sc_dpmem, GDT_MPR_SERVICE); ctx->info2 = bus_read_4(gdt->sc_dpmem, GDT_MPR_INFO + sizeof (u_int32_t)); /* event string */ if (ctx->istatus == GDT_ASYNCINDEX) { if (ctx->service != GDT_SCREENSERVICE && (gdt->sc_fw_vers & 0xff) >= 0x1a) { gdt->sc_dvr.severity = bus_read_1(gdt->sc_dpmem, GDT_SEVERITY); for (i = 0; i < 256; ++i) { gdt->sc_dvr.event_string[i] = bus_read_1(gdt->sc_dpmem, GDT_EVT_BUF + i); if (gdt->sc_dvr.event_string[i] == 0) break; } } } bus_write_1(gdt->sc_dpmem, GDT_MPR_SEMA1, 0); } void gdt_mpr_release_event(struct gdt_softc *gdt) { GDT_DPRINTF(GDT_D_MISC, ("gdt_mpr_release_event(%p) ", gdt)); bus_write_1(gdt->sc_dpmem, GDT_MPR_LDOOR, 1); } void gdt_mpr_set_sema0(struct gdt_softc *gdt) { GDT_DPRINTF(GDT_D_MISC, ("gdt_mpr_set_sema0(%p) ", gdt)); bus_write_1(gdt->sc_dpmem, GDT_MPR_SEMA0, 1); } int gdt_mpr_test_busy(struct gdt_softc *gdt) { GDT_DPRINTF(GDT_D_MISC, ("gdt_mpr_test_busy(%p) ", gdt)); return (bus_read_1(gdt->sc_dpmem, GDT_MPR_SEMA0) & 1); } diff --git a/sys/dev/mly/mly.c b/sys/dev/mly/mly.c index 359692840f5f..7f2ef792006c 100644 --- a/sys/dev/mly/mly.c +++ b/sys/dev/mly/mly.c @@ -1,3009 +1,3011 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2000, 2001 Michael Smith * Copyright (c) 2000 BSDi * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include static int mly_probe(device_t dev); static int mly_attach(device_t dev); static int mly_pci_attach(struct mly_softc *sc); static int mly_detach(device_t dev); static int mly_shutdown(device_t dev); static void mly_intr(void *arg); static int mly_sg_map(struct mly_softc *sc); static void mly_sg_map_helper(void *arg, bus_dma_segment_t *segs, int nseg, int error); static int mly_mmbox_map(struct mly_softc *sc); static void mly_mmbox_map_helper(void *arg, bus_dma_segment_t *segs, int nseg, int error); static void mly_free(struct mly_softc *sc); static int mly_get_controllerinfo(struct mly_softc *sc); static void mly_scan_devices(struct mly_softc *sc); static void mly_rescan_btl(struct mly_softc *sc, int bus, int target); static void mly_complete_rescan(struct mly_command *mc); static int mly_get_eventstatus(struct mly_softc *sc); static int mly_enable_mmbox(struct mly_softc *sc); static int mly_flush(struct mly_softc *sc); static int mly_ioctl(struct mly_softc *sc, struct mly_command_ioctl *ioctl, void **data, size_t datasize, u_int8_t *status, void *sense_buffer, size_t *sense_length); static void mly_check_event(struct mly_softc *sc); static void mly_fetch_event(struct mly_softc *sc); static void mly_complete_event(struct mly_command *mc); static void mly_process_event(struct mly_softc *sc, struct mly_event *me); static void mly_periodic(void *data); static int mly_immediate_command(struct mly_command *mc); static int mly_start(struct mly_command *mc); static void mly_done(struct mly_softc *sc); static void mly_complete(struct mly_softc *sc); static void mly_complete_handler(void *context, int pending); static int mly_alloc_command(struct mly_softc *sc, struct mly_command **mcp); static void mly_release_command(struct mly_command *mc); static void mly_alloc_commands_map(void *arg, bus_dma_segment_t *segs, int nseg, int error); static int mly_alloc_commands(struct mly_softc *sc); static void mly_release_commands(struct mly_softc *sc); static void mly_map_command(struct mly_command *mc); static void mly_unmap_command(struct 
mly_command *mc); static int mly_cam_attach(struct mly_softc *sc); static void mly_cam_detach(struct mly_softc *sc); static void mly_cam_rescan_btl(struct mly_softc *sc, int bus, int target); static void mly_cam_action(struct cam_sim *sim, union ccb *ccb); static int mly_cam_action_io(struct cam_sim *sim, struct ccb_scsiio *csio); static void mly_cam_poll(struct cam_sim *sim); static void mly_cam_complete(struct mly_command *mc); static struct cam_periph *mly_find_periph(struct mly_softc *sc, int bus, int target); static int mly_name_device(struct mly_softc *sc, int bus, int target); static int mly_fwhandshake(struct mly_softc *sc); static void mly_describe_controller(struct mly_softc *sc); #ifdef MLY_DEBUG static void mly_printstate(struct mly_softc *sc); static void mly_print_command(struct mly_command *mc); static void mly_print_packet(struct mly_command *mc); static void mly_panic(struct mly_softc *sc, char *reason); static void mly_timeout(void *arg); #endif void mly_print_controller(int controller); static d_open_t mly_user_open; static d_close_t mly_user_close; static d_ioctl_t mly_user_ioctl; static int mly_user_command(struct mly_softc *sc, struct mly_user_command *uc); static int mly_user_health(struct mly_softc *sc, struct mly_user_health *uh); #define MLY_CMD_TIMEOUT 20 static device_method_t mly_methods[] = { /* Device interface */ DEVMETHOD(device_probe, mly_probe), DEVMETHOD(device_attach, mly_attach), DEVMETHOD(device_detach, mly_detach), DEVMETHOD(device_shutdown, mly_shutdown), { 0, 0 } }; static driver_t mly_pci_driver = { "mly", mly_methods, sizeof(struct mly_softc) }; static devclass_t mly_devclass; DRIVER_MODULE(mly, pci, mly_pci_driver, mly_devclass, 0, 0); MODULE_DEPEND(mly, pci, 1, 1, 1); MODULE_DEPEND(mly, cam, 1, 1, 1); static struct cdevsw mly_cdevsw = { .d_version = D_VERSION, .d_open = mly_user_open, .d_close = mly_user_close, .d_ioctl = mly_user_ioctl, .d_name = "mly", }; /******************************************************************************** ******************************************************************************** Device Interface ******************************************************************************** ********************************************************************************/ static struct mly_ident { u_int16_t vendor; u_int16_t device; u_int16_t subvendor; u_int16_t subdevice; int hwif; char *desc; } mly_identifiers[] = { {0x1069, 0xba56, 0x1069, 0x0040, MLY_HWIF_STRONGARM, "Mylex eXtremeRAID 2000"}, {0x1069, 0xba56, 0x1069, 0x0030, MLY_HWIF_STRONGARM, "Mylex eXtremeRAID 3000"}, {0x1069, 0x0050, 0x1069, 0x0050, MLY_HWIF_I960RX, "Mylex AcceleRAID 352"}, {0x1069, 0x0050, 0x1069, 0x0052, MLY_HWIF_I960RX, "Mylex AcceleRAID 170"}, {0x1069, 0x0050, 0x1069, 0x0054, MLY_HWIF_I960RX, "Mylex AcceleRAID 160"}, {0, 0, 0, 0, 0, 0} }; /******************************************************************************** * Compare the provided PCI device with the list we support. 
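mly_identifiers above is walked by mly_probe() using a common table-driven pattern: a zero vendor terminates the table, and a zero subvendor acts as a wildcard that skips the subdevice check. A minimal standalone sketch of that match loop; the struct and sample entry are invented, not the driver's.

#include <stdint.h>
#include <stdio.h>

struct pci_id {
    uint16_t vendor, device, subvendor, subdevice;
    const char *desc;
};

static const struct pci_id ids[] = {
    { 0x1069, 0x0050, 0x1069, 0x0050, "example board" },
    { 0, 0, 0, 0, NULL }            /* zero vendor ends the table */
};

static const char *
probe_match(uint16_t ven, uint16_t dev, uint16_t sv, uint16_t sd)
{
    for (const struct pci_id *m = ids; m->vendor != 0; m++)
        if (m->vendor == ven && m->device == dev &&
            (m->subvendor == 0 ||           /* wildcard */
            (m->subvendor == sv && m->subdevice == sd)))
            return (m->desc);
    return (NULL);
}

int
main(void)
{
    const char *d = probe_match(0x1069, 0x0050, 0x1069, 0x0050);

    printf("%s\n", d != NULL ? d : "no match");
    return (0);
}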
*/ static int mly_probe(device_t dev) { struct mly_ident *m; debug_called(1); for (m = mly_identifiers; m->vendor != 0; m++) { if ((m->vendor == pci_get_vendor(dev)) && (m->device == pci_get_device(dev)) && ((m->subvendor == 0) || ((m->subvendor == pci_get_subvendor(dev)) && (m->subdevice == pci_get_subdevice(dev))))) { device_set_desc(dev, m->desc); return(BUS_PROBE_DEFAULT); /* allow room to be overridden */ } } return(ENXIO); } /******************************************************************************** * Initialise the controller and softc */ static int mly_attach(device_t dev) { struct mly_softc *sc = device_get_softc(dev); int error; debug_called(1); sc->mly_dev = dev; mtx_init(&sc->mly_lock, "mly", NULL, MTX_DEF); callout_init_mtx(&sc->mly_periodic, &sc->mly_lock, 0); #ifdef MLY_DEBUG callout_init_mtx(&sc->mly_timeout, &sc->mly_lock, 0); if (device_get_unit(sc->mly_dev) == 0) mly_softc0 = sc; #endif /* * Do PCI-specific initialisation. */ if ((error = mly_pci_attach(sc)) != 0) goto out; /* * Initialise per-controller queues. */ mly_initq_free(sc); mly_initq_busy(sc); mly_initq_complete(sc); /* * Initialise command-completion task. */ TASK_INIT(&sc->mly_task_complete, 0, mly_complete_handler, sc); /* disable interrupts before we start talking to the controller */ MLY_MASK_INTERRUPTS(sc); /* * Wait for the controller to come ready, handshake with the firmware if required. * This is typically only necessary on platforms where the controller BIOS does not * run. */ if ((error = mly_fwhandshake(sc))) goto out; /* * Allocate initial command buffers. */ if ((error = mly_alloc_commands(sc))) goto out; /* * Obtain controller feature information */ MLY_LOCK(sc); error = mly_get_controllerinfo(sc); MLY_UNLOCK(sc); if (error) goto out; /* * Reallocate command buffers now we know how many we want. */ mly_release_commands(sc); if ((error = mly_alloc_commands(sc))) goto out; /* * Get the current event counter for health purposes, populate the initial * health status buffer. */ MLY_LOCK(sc); error = mly_get_eventstatus(sc); /* * Enable memory-mailbox mode. */ if (error == 0) error = mly_enable_mmbox(sc); MLY_UNLOCK(sc); if (error) goto out; /* * Attach to CAM. */ if ((error = mly_cam_attach(sc))) goto out; /* * Print a little information about the controller */ mly_describe_controller(sc); /* * Mark all attached devices for rescan. */ MLY_LOCK(sc); mly_scan_devices(sc); /* * Instigate the first status poll immediately. Rescan completions won't * happen until interrupts are enabled, which should still be before * the SCSI subsystem gets to us, courtesy of the "SCSI settling delay". */ mly_periodic((void *)sc); MLY_UNLOCK(sc); /* * Create the control device. */ sc->mly_dev_t = make_dev(&mly_cdevsw, 0, UID_ROOT, GID_OPERATOR, S_IRUSR | S_IWUSR, "mly%d", device_get_unit(sc->mly_dev)); sc->mly_dev_t->si_drv1 = sc; /* enable interrupts now */ MLY_UNMASK_INTERRUPTS(sc); #ifdef MLY_DEBUG callout_reset(&sc->mly_timeout, MLY_CMD_TIMEOUT * hz, mly_timeout, sc); #endif out: if (error != 0) mly_free(sc); + else + gone_in_dev(dev, 14, "mly(4) removed"); return(error); } /******************************************************************************** * Perform PCI-specific initialisation. */ static int mly_pci_attach(struct mly_softc *sc) { int i, error; debug_called(1); /* assume failure is 'not configured' */ error = ENXIO; /* * Verify that the adapter is correctly set up in PCI space. */ pci_enable_busmaster(sc->mly_dev); /* * Allocate the PCI register window. 
*/ sc->mly_regs_rid = PCIR_BAR(0); /* first base address register */ if ((sc->mly_regs_resource = bus_alloc_resource_any(sc->mly_dev, SYS_RES_MEMORY, &sc->mly_regs_rid, RF_ACTIVE)) == NULL) { mly_printf(sc, "can't allocate register window\n"); goto fail; } /* * Allocate and connect our interrupt. */ sc->mly_irq_rid = 0; if ((sc->mly_irq = bus_alloc_resource_any(sc->mly_dev, SYS_RES_IRQ, &sc->mly_irq_rid, RF_SHAREABLE | RF_ACTIVE)) == NULL) { mly_printf(sc, "can't allocate interrupt\n"); goto fail; } if (bus_setup_intr(sc->mly_dev, sc->mly_irq, INTR_TYPE_CAM | INTR_ENTROPY | INTR_MPSAFE, NULL, mly_intr, sc, &sc->mly_intr)) { mly_printf(sc, "can't set up interrupt\n"); goto fail; } /* assume failure is 'out of memory' */ error = ENOMEM; /* * Allocate the parent bus DMA tag appropriate for our PCI interface. * * Note that all of these controllers are 64-bit capable. */ if (bus_dma_tag_create(bus_get_dma_tag(sc->mly_dev),/* PCI parent */ 1, 0, /* alignment, boundary */ BUS_SPACE_MAXADDR_32BIT, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ BUS_SPACE_MAXSIZE_32BIT, /* maxsize */ BUS_SPACE_UNRESTRICTED, /* nsegments */ BUS_SPACE_MAXSIZE_32BIT, /* maxsegsize */ BUS_DMA_ALLOCNOW, /* flags */ NULL, /* lockfunc */ NULL, /* lockarg */ &sc->mly_parent_dmat)) { mly_printf(sc, "can't allocate parent DMA tag\n"); goto fail; } /* * Create DMA tag for mapping buffers into controller-addressable space. */ if (bus_dma_tag_create(sc->mly_parent_dmat, /* parent */ 1, 0, /* alignment, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ DFLTPHYS, /* maxsize */ MLY_MAX_SGENTRIES, /* nsegments */ BUS_SPACE_MAXSIZE_32BIT, /* maxsegsize */ 0, /* flags */ busdma_lock_mutex, /* lockfunc */ &sc->mly_lock, /* lockarg */ &sc->mly_buffer_dmat)) { mly_printf(sc, "can't allocate buffer DMA tag\n"); goto fail; } /* * Initialise the DMA tag for command packets. 
*/ if (bus_dma_tag_create(sc->mly_parent_dmat, /* parent */ 1, 0, /* alignment, boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ sizeof(union mly_command_packet) * MLY_MAX_COMMANDS, 1, /* maxsize, nsegments */ BUS_SPACE_MAXSIZE_32BIT, /* maxsegsize */ BUS_DMA_ALLOCNOW, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->mly_packet_dmat)) { mly_printf(sc, "can't allocate command packet DMA tag\n"); goto fail; } /* * Detect the hardware interface version */ for (i = 0; mly_identifiers[i].vendor != 0; i++) { if ((mly_identifiers[i].vendor == pci_get_vendor(sc->mly_dev)) && (mly_identifiers[i].device == pci_get_device(sc->mly_dev))) { sc->mly_hwif = mly_identifiers[i].hwif; switch(sc->mly_hwif) { case MLY_HWIF_I960RX: debug(1, "set hardware up for i960RX"); sc->mly_doorbell_true = 0x00; sc->mly_command_mailbox = MLY_I960RX_COMMAND_MAILBOX; sc->mly_status_mailbox = MLY_I960RX_STATUS_MAILBOX; sc->mly_idbr = MLY_I960RX_IDBR; sc->mly_odbr = MLY_I960RX_ODBR; sc->mly_error_status = MLY_I960RX_ERROR_STATUS; sc->mly_interrupt_status = MLY_I960RX_INTERRUPT_STATUS; sc->mly_interrupt_mask = MLY_I960RX_INTERRUPT_MASK; break; case MLY_HWIF_STRONGARM: debug(1, "set hardware up for StrongARM"); sc->mly_doorbell_true = 0xff; /* doorbell 'true' is 0 */ sc->mly_command_mailbox = MLY_STRONGARM_COMMAND_MAILBOX; sc->mly_status_mailbox = MLY_STRONGARM_STATUS_MAILBOX; sc->mly_idbr = MLY_STRONGARM_IDBR; sc->mly_odbr = MLY_STRONGARM_ODBR; sc->mly_error_status = MLY_STRONGARM_ERROR_STATUS; sc->mly_interrupt_status = MLY_STRONGARM_INTERRUPT_STATUS; sc->mly_interrupt_mask = MLY_STRONGARM_INTERRUPT_MASK; break; } break; } } /* * Create the scatter/gather mappings. */ if ((error = mly_sg_map(sc))) goto fail; /* * Allocate and map the memory mailbox */ if ((error = mly_mmbox_map(sc))) goto fail; error = 0; fail: return(error); } /******************************************************************************** * Shut the controller down and detach all our resources. */ static int mly_detach(device_t dev) { int error; if ((error = mly_shutdown(dev)) != 0) return(error); mly_free(device_get_softc(dev)); return(0); } /******************************************************************************** * Bring the controller to a state where it can be safely left alone. * * Note that it should not be necessary to wait for any outstanding commands, * as they should be completed prior to calling here. * * XXX this applies for I/O, but not status polls; we should beware of * the case where a status command is running while we detach. */ static int mly_shutdown(device_t dev) { struct mly_softc *sc = device_get_softc(dev); debug_called(1); MLY_LOCK(sc); if (sc->mly_state & MLY_STATE_OPEN) { MLY_UNLOCK(sc); return(EBUSY); } /* kill the periodic event */ callout_stop(&sc->mly_periodic); #ifdef MLY_DEBUG callout_stop(&sc->mly_timeout); #endif /* flush controller */ mly_printf(sc, "flushing cache..."); printf("%s\n", mly_flush(sc) ? "failed" : "done"); MLY_MASK_INTERRUPTS(sc); MLY_UNLOCK(sc); return(0); } /******************************************************************************* * Take an interrupt, or be poked by other code to look for interrupt-worthy * status. 
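mly_intr() below only takes the lock and calls mly_done(); heavier completion work is deferred through the mly_task_complete task initialised in mly_attach() above. The following is a generic kernel-style sketch of that deferral pattern using the taskqueue(9) API; the softc layout and names are invented, and the fragment is not standalone-buildable.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/taskqueue.h>

struct my_softc {                   /* invented; mly uses struct mly_softc */
    struct task complete_task;
};

static void
my_complete_handler(void *context, int pending)
{
    /* Heavy completion processing runs here, in task context. */
}

static void
my_attach_tasks(struct my_softc *sc)
{
    TASK_INIT(&sc->complete_task, 0, my_complete_handler, sc);
}

static void
my_intr(void *arg)
{
    struct my_softc *sc = arg;

    /* Acknowledge the hardware, then defer the slow path. */
    taskqueue_enqueue(taskqueue_swi, &sc->complete_task);
}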
*/ static void mly_intr(void *arg) { struct mly_softc *sc = (struct mly_softc *)arg; debug_called(2); MLY_LOCK(sc); mly_done(sc); MLY_UNLOCK(sc); } /******************************************************************************** ******************************************************************************** Bus-dependent Resource Management ******************************************************************************** ********************************************************************************/ /******************************************************************************** * Allocate memory for the scatter/gather tables */ static int mly_sg_map(struct mly_softc *sc) { size_t segsize; debug_called(1); /* * Create a single tag describing a region large enough to hold all of * the s/g lists we will need. */ segsize = sizeof(struct mly_sg_entry) * MLY_MAX_COMMANDS * MLY_MAX_SGENTRIES; if (bus_dma_tag_create(sc->mly_parent_dmat, /* parent */ 1, 0, /* alignment,boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ segsize, 1, /* maxsize, nsegments */ BUS_SPACE_MAXSIZE_32BIT, /* maxsegsize */ BUS_DMA_ALLOCNOW, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->mly_sg_dmat)) { mly_printf(sc, "can't allocate scatter/gather DMA tag\n"); return(ENOMEM); } /* * Allocate enough s/g maps for all commands and permanently map them into * controller-visible space. * * XXX this assumes we can get enough space for all the s/g maps in one * contiguous slab. */ if (bus_dmamem_alloc(sc->mly_sg_dmat, (void **)&sc->mly_sg_table, BUS_DMA_NOWAIT, &sc->mly_sg_dmamap)) { mly_printf(sc, "can't allocate s/g table\n"); return(ENOMEM); } if (bus_dmamap_load(sc->mly_sg_dmat, sc->mly_sg_dmamap, sc->mly_sg_table, segsize, mly_sg_map_helper, sc, BUS_DMA_NOWAIT) != 0) return (ENOMEM); return(0); } /******************************************************************************** * Save the physical address of the base of the s/g table. */ static void mly_sg_map_helper(void *arg, bus_dma_segment_t *segs, int nseg, int error) { struct mly_softc *sc = (struct mly_softc *)arg; debug_called(1); /* save base of s/g table's address in bus space */ sc->mly_sg_busaddr = segs->ds_addr; } /******************************************************************************** * Allocate memory for the memory-mailbox interface */ static int mly_mmbox_map(struct mly_softc *sc) { /* * Create a DMA tag for a single contiguous region large enough for the * memory mailbox structure.
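mly_sg_map() above and mly_mmbox_map() below both follow the standard three-step busdma idiom: create a tag sized for the region, allocate wired memory against it, then load the map so the callback can record the bus address (with BUS_DMA_NOWAIT the callback runs before bus_dmamap_load() returns). A condensed kernel-style sketch with invented names (my_load_cb, my_dma_alloc); not standalone-buildable.

static void
my_load_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
    bus_addr_t *busaddrp = arg;

    if (error == 0 && nseg == 1)
        *busaddrp = segs[0].ds_addr;    /* record the bus address */
}

static int
my_dma_alloc(bus_dma_tag_t parent, size_t size, bus_dma_tag_t *tagp,
    bus_dmamap_t *mapp, void **vaddrp, bus_addr_t *busaddrp)
{
    /* 1. Tag describing one contiguous region of the given size. */
    if (bus_dma_tag_create(parent, 1, 0, BUS_SPACE_MAXADDR,
        BUS_SPACE_MAXADDR, NULL, NULL, size, 1, size,
        BUS_DMA_ALLOCNOW, NULL, NULL, tagp) != 0)
        return (ENOMEM);
    /* 2. Wired, DMA-safe memory with a map to go with it. */
    if (bus_dmamem_alloc(*tagp, vaddrp, BUS_DMA_NOWAIT, mapp) != 0)
        return (ENOMEM);
    /* 3. Load; with BUS_DMA_NOWAIT the callback fires synchronously. */
    if (bus_dmamap_load(*tagp, *mapp, *vaddrp, size, my_load_cb,
        busaddrp, BUS_DMA_NOWAIT) != 0)
        return (ENOMEM);
    return (0);
}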
*/ if (bus_dma_tag_create(sc->mly_parent_dmat, /* parent */ 1, 0, /* alignment,boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ sizeof(struct mly_mmbox), 1, /* maxsize, nsegments */ BUS_SPACE_MAXSIZE_32BIT, /* maxsegsize */ BUS_DMA_ALLOCNOW, /* flags */ NULL, NULL, /* lockfunc, lockarg */ &sc->mly_mmbox_dmat)) { mly_printf(sc, "can't allocate memory mailbox DMA tag\n"); return(ENOMEM); } /* * Allocate the buffer */ if (bus_dmamem_alloc(sc->mly_mmbox_dmat, (void **)&sc->mly_mmbox, BUS_DMA_NOWAIT, &sc->mly_mmbox_dmamap)) { mly_printf(sc, "can't allocate memory mailbox\n"); return(ENOMEM); } if (bus_dmamap_load(sc->mly_mmbox_dmat, sc->mly_mmbox_dmamap, sc->mly_mmbox, sizeof(struct mly_mmbox), mly_mmbox_map_helper, sc, BUS_DMA_NOWAIT) != 0) return (ENOMEM); bzero(sc->mly_mmbox, sizeof(*sc->mly_mmbox)); return(0); } /******************************************************************************** * Save the physical address of the memory mailbox */ static void mly_mmbox_map_helper(void *arg, bus_dma_segment_t *segs, int nseg, int error) { struct mly_softc *sc = (struct mly_softc *)arg; debug_called(1); sc->mly_mmbox_busaddr = segs->ds_addr; } /******************************************************************************** * Free all of the resources associated with (sc) * * Should not be called if the controller is active. */ static void mly_free(struct mly_softc *sc) { debug_called(1); /* Remove the management device */ destroy_dev(sc->mly_dev_t); if (sc->mly_intr) bus_teardown_intr(sc->mly_dev, sc->mly_irq, sc->mly_intr); callout_drain(&sc->mly_periodic); #ifdef MLY_DEBUG callout_drain(&sc->mly_timeout); #endif /* detach from CAM */ mly_cam_detach(sc); /* release command memory */ mly_release_commands(sc); /* throw away the controllerinfo structure */ if (sc->mly_controllerinfo != NULL) free(sc->mly_controllerinfo, M_DEVBUF); /* throw away the controllerparam structure */ if (sc->mly_controllerparam != NULL) free(sc->mly_controllerparam, M_DEVBUF); /* destroy data-transfer DMA tag */ if (sc->mly_buffer_dmat) bus_dma_tag_destroy(sc->mly_buffer_dmat); /* free and destroy DMA memory and tag for s/g lists */ if (sc->mly_sg_table) { bus_dmamap_unload(sc->mly_sg_dmat, sc->mly_sg_dmamap); bus_dmamem_free(sc->mly_sg_dmat, sc->mly_sg_table, sc->mly_sg_dmamap); } if (sc->mly_sg_dmat) bus_dma_tag_destroy(sc->mly_sg_dmat); /* free and destroy DMA memory and tag for memory mailbox */ if (sc->mly_mmbox) { bus_dmamap_unload(sc->mly_mmbox_dmat, sc->mly_mmbox_dmamap); bus_dmamem_free(sc->mly_mmbox_dmat, sc->mly_mmbox, sc->mly_mmbox_dmamap); } if (sc->mly_mmbox_dmat) bus_dma_tag_destroy(sc->mly_mmbox_dmat); /* disconnect the interrupt handler */ if (sc->mly_irq != NULL) bus_release_resource(sc->mly_dev, SYS_RES_IRQ, sc->mly_irq_rid, sc->mly_irq); /* destroy the parent DMA tag */ if (sc->mly_parent_dmat) bus_dma_tag_destroy(sc->mly_parent_dmat); /* release the register window mapping */ if (sc->mly_regs_resource != NULL) bus_release_resource(sc->mly_dev, SYS_RES_MEMORY, sc->mly_regs_rid, sc->mly_regs_resource); mtx_destroy(&sc->mly_lock); } /******************************************************************************** ******************************************************************************** Command Wrappers ******************************************************************************** ********************************************************************************/ 
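The wrappers in this section share one shape: zero a struct mly_command_ioctl, set sub_ioctl and any parameters, pass it to mly_ioctl(), and map a non-zero controller status to EIO. A condensed sketch of that shape follows; the helper name is hypothetical, and the caller must hold the softc lock, which mly_ioctl() asserts.

static int
my_simple_ioctl(struct mly_softc *sc, u_int8_t sub_ioctl)
{
    struct mly_command_ioctl mci;
    u_int8_t status;
    int error;

    bzero(&mci, sizeof(mci));
    mci.sub_ioctl = sub_ioctl;
    if ((error = mly_ioctl(sc, &mci, NULL, 0, &status, NULL, NULL)) != 0)
        return (error);
    return (status == 0 ? 0 : EIO);
}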
/******************************************************************************** * Fill in the mly_controllerinfo and mly_controllerparam fields in the softc. */ static int mly_get_controllerinfo(struct mly_softc *sc) { struct mly_command_ioctl mci; u_int8_t status; int error; debug_called(1); if (sc->mly_controllerinfo != NULL) free(sc->mly_controllerinfo, M_DEVBUF); /* build the getcontrollerinfo ioctl and send it */ bzero(&mci, sizeof(mci)); sc->mly_controllerinfo = NULL; mci.sub_ioctl = MDACIOCTL_GETCONTROLLERINFO; if ((error = mly_ioctl(sc, &mci, (void **)&sc->mly_controllerinfo, sizeof(*sc->mly_controllerinfo), &status, NULL, NULL))) return(error); if (status != 0) return(EIO); if (sc->mly_controllerparam != NULL) free(sc->mly_controllerparam, M_DEVBUF); /* build the getcontrollerparameter ioctl and send it */ bzero(&mci, sizeof(mci)); sc->mly_controllerparam = NULL; mci.sub_ioctl = MDACIOCTL_GETCONTROLLERPARAMETER; if ((error = mly_ioctl(sc, &mci, (void **)&sc->mly_controllerparam, sizeof(*sc->mly_controllerparam), &status, NULL, NULL))) return(error); if (status != 0) return(EIO); return(0); } /******************************************************************************** * Schedule all possible devices for a rescan. * */ static void mly_scan_devices(struct mly_softc *sc) { int bus, target; debug_called(1); /* * Clear any previous BTL information. */ bzero(&sc->mly_btl, sizeof(sc->mly_btl)); /* * Mark all devices as requiring a rescan, and let the next * periodic scan collect them. */ for (bus = 0; bus < sc->mly_cam_channels; bus++) if (MLY_BUS_IS_VALID(sc, bus)) for (target = 0; target < MLY_MAX_TARGETS; target++) sc->mly_btl[bus][target].mb_flags = MLY_BTL_RESCAN; } /******************************************************************************** * Rescan a device, possibly as a consequence of getting an event which suggests * that it may have changed. * * If we suffer resource starvation, we can abandon the rescan as we'll be * retried. */ static void mly_rescan_btl(struct mly_softc *sc, int bus, int target) { struct mly_command *mc; struct mly_command_ioctl *mci; debug_called(1); /* check that this bus is valid */ if (!MLY_BUS_IS_VALID(sc, bus)) return; /* get a command */ if (mly_alloc_command(sc, &mc)) return; /* set up the data buffer */ if ((mc->mc_data = malloc(sizeof(union mly_devinfo), M_DEVBUF, M_NOWAIT | M_ZERO)) == NULL) { mly_release_command(mc); return; } mc->mc_flags |= MLY_CMD_DATAIN; mc->mc_complete = mly_complete_rescan; /* * Build the ioctl. */ mci = (struct mly_command_ioctl *)&mc->mc_packet->ioctl; mci->opcode = MDACMD_IOCTL; mci->addr.phys.controller = 0; mci->timeout.value = 30; mci->timeout.scale = MLY_TIMEOUT_SECONDS; if (MLY_BUS_IS_VIRTUAL(sc, bus)) { mc->mc_length = mci->data_size = sizeof(struct mly_ioctl_getlogdevinfovalid); mci->sub_ioctl = MDACIOCTL_GETLOGDEVINFOVALID; mci->addr.log.logdev = MLY_LOGDEV_ID(sc, bus, target); debug(1, "logical device %d", mci->addr.log.logdev); } else { mc->mc_length = mci->data_size = sizeof(struct mly_ioctl_getphysdevinfovalid); mci->sub_ioctl = MDACIOCTL_GETPHYSDEVINFOVALID; mci->addr.phys.lun = 0; mci->addr.phys.target = target; mci->addr.phys.channel = bus; debug(1, "physical device %d:%d", mci->addr.phys.channel, mci->addr.phys.target); } /* * Dispatch the command. If we successfully send the command, clear the rescan * bit. 
*/ if (mly_start(mc) != 0) { mly_release_command(mc); } else { sc->mly_btl[bus][target].mb_flags &= ~MLY_BTL_RESCAN; /* success */ } } /******************************************************************************** * Handle the completion of a rescan operation */ static void mly_complete_rescan(struct mly_command *mc) { struct mly_softc *sc = mc->mc_sc; struct mly_ioctl_getlogdevinfovalid *ldi; struct mly_ioctl_getphysdevinfovalid *pdi; struct mly_command_ioctl *mci; struct mly_btl btl, *btlp; int bus, target, rescan; debug_called(1); /* * Recover the bus and target from the command. We need these even in * the case where we don't have a useful response. */ mci = (struct mly_command_ioctl *)&mc->mc_packet->ioctl; if (mci->sub_ioctl == MDACIOCTL_GETLOGDEVINFOVALID) { bus = MLY_LOGDEV_BUS(sc, mci->addr.log.logdev); target = MLY_LOGDEV_TARGET(sc, mci->addr.log.logdev); } else { bus = mci->addr.phys.channel; target = mci->addr.phys.target; } /* XXX validate bus/target? */ /* the default result is 'no device' */ bzero(&btl, sizeof(btl)); /* if the rescan completed OK, we have possibly-new BTL data */ if (mc->mc_status == 0) { if (mc->mc_length == sizeof(*ldi)) { ldi = (struct mly_ioctl_getlogdevinfovalid *)mc->mc_data; if ((MLY_LOGDEV_BUS(sc, ldi->logical_device_number) != bus) || (MLY_LOGDEV_TARGET(sc, ldi->logical_device_number) != target)) { mly_printf(sc, "WARNING: BTL rescan for %d:%d returned data for %d:%d instead\n", bus, target, MLY_LOGDEV_BUS(sc, ldi->logical_device_number), MLY_LOGDEV_TARGET(sc, ldi->logical_device_number)); /* XXX what can we do about this? */ } btl.mb_flags = MLY_BTL_LOGICAL; btl.mb_type = ldi->raid_level; btl.mb_state = ldi->state; debug(1, "BTL rescan for %d returns %s, %s", ldi->logical_device_number, mly_describe_code(mly_table_device_type, ldi->raid_level), mly_describe_code(mly_table_device_state, ldi->state)); } else if (mc->mc_length == sizeof(*pdi)) { pdi = (struct mly_ioctl_getphysdevinfovalid *)mc->mc_data; if ((pdi->channel != bus) || (pdi->target != target)) { mly_printf(sc, "WARNING: BTL rescan for %d:%d returned data for %d:%d instead\n", bus, target, pdi->channel, pdi->target); /* XXX what can we do about this? */ } btl.mb_flags = MLY_BTL_PHYSICAL; btl.mb_type = MLY_DEVICE_TYPE_PHYSICAL; btl.mb_state = pdi->state; btl.mb_speed = pdi->speed; btl.mb_width = pdi->width; if (pdi->state != MLY_DEVICE_STATE_UNCONFIGURED) sc->mly_btl[bus][target].mb_flags |= MLY_BTL_PROTECTED; debug(1, "BTL rescan for %d:%d returns %s", bus, target, mly_describe_code(mly_table_device_state, pdi->state)); } else { mly_printf(sc, "BTL rescan result invalid\n"); } } free(mc->mc_data, M_DEVBUF); mly_release_command(mc); /* * Decide whether we need to rescan the device. */ rescan = 0; /* device type changes (usually between 'nothing' and 'something') */ btlp = &sc->mly_btl[bus][target]; if (btl.mb_flags != btlp->mb_flags) { debug(1, "flags changed, rescanning"); rescan = 1; } /* XXX other reasons? */ /* * Update BTL information. */ *btlp = btl; /* * Perform CAM rescan if required. */ if (rescan) mly_cam_rescan_btl(sc, bus, target); } /******************************************************************************** * Get the current health status and set the 'next event' counter to suit. 
*/ static int mly_get_eventstatus(struct mly_softc *sc) { struct mly_command_ioctl mci; struct mly_health_status *mh; u_int8_t status; int error; /* build the gethealthstatus ioctl and send it */ bzero(&mci, sizeof(mci)); mh = NULL; mci.sub_ioctl = MDACIOCTL_GETHEALTHSTATUS; if ((error = mly_ioctl(sc, &mci, (void **)&mh, sizeof(*mh), &status, NULL, NULL))) return(error); if (status != 0) return(EIO); /* get the event counter */ sc->mly_event_change = mh->change_counter; sc->mly_event_waiting = mh->next_event; sc->mly_event_counter = mh->next_event; /* save the health status into the memory mailbox */ bcopy(mh, &sc->mly_mmbox->mmm_health.status, sizeof(*mh)); debug(1, "initial change counter %d, event counter %d", mh->change_counter, mh->next_event); free(mh, M_DEVBUF); return(0); } /******************************************************************************** * Enable the memory mailbox mode. */ static int mly_enable_mmbox(struct mly_softc *sc) { struct mly_command_ioctl mci; u_int8_t *sp, status; int error; debug_called(1); /* build the ioctl and send it */ bzero(&mci, sizeof(mci)); mci.sub_ioctl = MDACIOCTL_SETMEMORYMAILBOX; /* set buffer addresses */ mci.param.setmemorymailbox.command_mailbox_physaddr = sc->mly_mmbox_busaddr + offsetof(struct mly_mmbox, mmm_command); mci.param.setmemorymailbox.status_mailbox_physaddr = sc->mly_mmbox_busaddr + offsetof(struct mly_mmbox, mmm_status); mci.param.setmemorymailbox.health_buffer_physaddr = sc->mly_mmbox_busaddr + offsetof(struct mly_mmbox, mmm_health); /* set buffer sizes - abuse of data_size field is revolting */ sp = (u_int8_t *)&mci.data_size; sp[0] = ((sizeof(union mly_command_packet) * MLY_MMBOX_COMMANDS) / 1024); sp[1] = (sizeof(union mly_status_packet) * MLY_MMBOX_STATUS) / 1024; mci.param.setmemorymailbox.health_buffer_size = sizeof(union mly_health_region) / 1024; debug(1, "memory mailbox at %p (0x%llx/%d 0x%llx/%d 0x%llx/%d", sc->mly_mmbox, mci.param.setmemorymailbox.command_mailbox_physaddr, sp[0], mci.param.setmemorymailbox.status_mailbox_physaddr, sp[1], mci.param.setmemorymailbox.health_buffer_physaddr, mci.param.setmemorymailbox.health_buffer_size); if ((error = mly_ioctl(sc, &mci, NULL, 0, &status, NULL, NULL))) return(error); if (status != 0) return(EIO); sc->mly_state |= MLY_STATE_MMBOX_ACTIVE; debug(1, "memory mailbox active"); return(0); } /******************************************************************************** * Flush all pending I/O from the controller. */ static int mly_flush(struct mly_softc *sc) { struct mly_command_ioctl mci; u_int8_t status; int error; debug_called(1); /* build the ioctl */ bzero(&mci, sizeof(mci)); mci.sub_ioctl = MDACIOCTL_FLUSHDEVICEDATA; mci.param.deviceoperation.operation_device = MLY_OPDEVICE_PHYSICAL_CONTROLLER; /* pass it off to the controller */ if ((error = mly_ioctl(sc, &mci, NULL, 0, &status, NULL, NULL))) return(error); return((status == 0) ? 0 : EIO); } /******************************************************************************** * Perform an ioctl command. * * If (data) is not NULL, the command requires data transfer. If (*data) is NULL * the command requires data transfer from the controller, and we will allocate * a buffer for it. If (*data) is not NULL, the command requires data transfer * to the controller. * * XXX passing in the whole ioctl structure is ugly. Better ideas? * * XXX we don't even try to handle the case where datasize > 4k. We should. 
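A hypothetical caller illustrating the (data)/(*data) convention documented above: passing a non-NULL data with *data == NULL makes mly_ioctl() allocate a DATAIN buffer that the caller then owns. All names here are invented, and per the XXX above, datasize should stay well under 4k.

static int
my_fetch_info(struct mly_softc *sc, u_int8_t sub_ioctl, void **bufp,
    size_t datasize)
{
    struct mly_command_ioctl mci;
    u_int8_t status;
    int error;

    bzero(&mci, sizeof(mci));
    mci.sub_ioctl = sub_ioctl;
    *bufp = NULL;   /* NULL *data: mly_ioctl allocates a DATAIN buffer */
    error = mly_ioctl(sc, &mci, bufp, datasize, &status, NULL, NULL);
    if (error != 0)
        return (error);
    if (status != 0)
        return (EIO);
    /* On success the caller owns *bufp and must free(*bufp, M_DEVBUF). */
    return (0);
}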
*/ static int mly_ioctl(struct mly_softc *sc, struct mly_command_ioctl *ioctl, void **data, size_t datasize, u_int8_t *status, void *sense_buffer, size_t *sense_length) { struct mly_command *mc; struct mly_command_ioctl *mci; int error; debug_called(1); MLY_ASSERT_LOCKED(sc); mc = NULL; if (mly_alloc_command(sc, &mc)) { error = ENOMEM; goto out; } /* copy the ioctl structure, but save some important fields and then fixup */ mci = &mc->mc_packet->ioctl; ioctl->sense_buffer_address = mci->sense_buffer_address; ioctl->maximum_sense_size = mci->maximum_sense_size; *mci = *ioctl; mci->opcode = MDACMD_IOCTL; mci->timeout.value = 30; mci->timeout.scale = MLY_TIMEOUT_SECONDS; /* handle the data buffer */ if (data != NULL) { if (*data == NULL) { /* allocate data buffer */ if ((mc->mc_data = malloc(datasize, M_DEVBUF, M_NOWAIT)) == NULL) { error = ENOMEM; goto out; } mc->mc_flags |= MLY_CMD_DATAIN; } else { mc->mc_data = *data; mc->mc_flags |= MLY_CMD_DATAOUT; } mc->mc_length = datasize; mc->mc_packet->generic.data_size = datasize; } /* run the command */ if ((error = mly_immediate_command(mc))) goto out; /* clean up and return any data */ *status = mc->mc_status; if ((mc->mc_sense > 0) && (sense_buffer != NULL)) { bcopy(mc->mc_packet, sense_buffer, mc->mc_sense); *sense_length = mc->mc_sense; goto out; } /* should we return a data pointer? */ if ((data != NULL) && (*data == NULL)) *data = mc->mc_data; /* command completed OK */ error = 0; out: if (mc != NULL) { /* do we need to free a data buffer we allocated? */ if (error && (mc->mc_data != NULL) && (*data == NULL)) free(mc->mc_data, M_DEVBUF); mly_release_command(mc); } return(error); } /******************************************************************************** * Check for event(s) outstanding in the controller. */ static void mly_check_event(struct mly_softc *sc) { /* * The controller may have updated the health status information, * so check for it here. Note that the counters are all in host memory, * so this check is very cheap. Also note that we depend on checking on * completion */ if (sc->mly_mmbox->mmm_health.status.change_counter != sc->mly_event_change) { sc->mly_event_change = sc->mly_mmbox->mmm_health.status.change_counter; debug(1, "event change %d, event status update, %d -> %d", sc->mly_event_change, sc->mly_event_waiting, sc->mly_mmbox->mmm_health.status.next_event); sc->mly_event_waiting = sc->mly_mmbox->mmm_health.status.next_event; /* wake up anyone that might be interested in this */ wakeup(&sc->mly_event_change); } if (sc->mly_event_counter != sc->mly_event_waiting) mly_fetch_event(sc); } /******************************************************************************** * Fetch one event from the controller. * * If we fail due to resource starvation, we'll be retried the next time a * command completes. */ static void mly_fetch_event(struct mly_softc *sc) { struct mly_command *mc; struct mly_command_ioctl *mci; u_int32_t event; debug_called(1); /* get a command */ if (mly_alloc_command(sc, &mc)) return; /* set up the data buffer */ if ((mc->mc_data = malloc(sizeof(struct mly_event), M_DEVBUF, M_NOWAIT | M_ZERO)) == NULL) { mly_release_command(mc); return; } mc->mc_length = sizeof(struct mly_event); mc->mc_flags |= MLY_CMD_DATAIN; mc->mc_complete = mly_complete_event; /* * Get an event number to fetch. It's possible that we've raced with another * context for the last event, in which case there will be no more events. 
*/ if (sc->mly_event_counter == sc->mly_event_waiting) { mly_release_command(mc); return; } event = sc->mly_event_counter++; /* * Build the ioctl. * * At this point we are committed to sending this request, as it * will be the only one constructed for this particular event number. */ mci = (struct mly_command_ioctl *)&mc->mc_packet->ioctl; mci->opcode = MDACMD_IOCTL; mci->data_size = sizeof(struct mly_event); mci->addr.phys.lun = (event >> 16) & 0xff; mci->addr.phys.target = (event >> 24) & 0xff; mci->addr.phys.channel = 0; mci->addr.phys.controller = 0; mci->timeout.value = 30; mci->timeout.scale = MLY_TIMEOUT_SECONDS; mci->sub_ioctl = MDACIOCTL_GETEVENT; mci->param.getevent.sequence_number_low = event & 0xffff; debug(1, "fetch event %u", event); /* * Submit the command. * * Note that failure of mly_start() will result in this event never being * fetched. */ if (mly_start(mc) != 0) { mly_printf(sc, "couldn't fetch event %u\n", event); mly_release_command(mc); } } /******************************************************************************** * Handle the completion of an event poll. */ static void mly_complete_event(struct mly_command *mc) { struct mly_softc *sc = mc->mc_sc; struct mly_event *me = (struct mly_event *)mc->mc_data; debug_called(1); /* * If the event was successfully fetched, process it. */ if (mc->mc_status == SCSI_STATUS_OK) { mly_process_event(sc, me); free(me, M_DEVBUF); } mly_release_command(mc); /* * Check for another event. */ mly_check_event(sc); } /******************************************************************************** * Process a controller event. */ static void mly_process_event(struct mly_softc *sc, struct mly_event *me) { struct scsi_sense_data_fixed *ssd; char *fp, *tp; int bus, target, event, class, action; ssd = (struct scsi_sense_data_fixed *)&me->sense[0]; /* * Errors can be reported using vendor-unique sense data. In this case, the * event code will be 0x1c (Request sense data present), the sense key will * be 0x09 (vendor specific), the MSB of the ASC will be set, and the * actual event code will be a 16-bit value comprised of the ASCQ (low byte) * and low seven bits of the ASC (low seven bits of the high byte). */ if ((me->code == 0x1c) && ((ssd->flags & SSD_KEY) == SSD_KEY_Vendor_Specific) && (ssd->add_sense_code & 0x80)) { event = ((int)(ssd->add_sense_code & ~0x80) << 8) + ssd->add_sense_code_qual; } else { event = me->code; } /* look up event, get codes */ fp = mly_describe_code(mly_table_event, event); debug(1, "Event %d code 0x%x", me->sequence_number, me->code); /* quiet event? */ class = fp[0]; if (isupper(class) && bootverbose) class = tolower(class); /* get action code, text string */ action = fp[1]; tp = &fp[2]; /* * Print some information about the event. * * This code uses a table derived from the corresponding portion of the Linux * driver, and thus the parser is very similar. 
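 * Each table entry is encoded as "<class><action><text>": for example, an
 * entry beginning with "pr" describes an error on a physical device
 * (class 'p') that requires a rescan (action 'r'), followed by the message
 * text itself.  An uppercase class letter marks a quiet event that is only
 * reported when booting verbose.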
*/ switch(class) { case 'p': /* error on physical device */ mly_printf(sc, "physical device %d:%d %s\n", me->channel, me->target, tp); if (action == 'r') sc->mly_btl[me->channel][me->target].mb_flags |= MLY_BTL_RESCAN; break; case 'l': /* error on logical unit */ case 'm': /* message about logical unit */ bus = MLY_LOGDEV_BUS(sc, me->lun); target = MLY_LOGDEV_TARGET(sc, me->lun); mly_name_device(sc, bus, target); mly_printf(sc, "logical device %d (%s) %s\n", me->lun, sc->mly_btl[bus][target].mb_name, tp); if (action == 'r') sc->mly_btl[bus][target].mb_flags |= MLY_BTL_RESCAN; break; case 's': /* report of sense data */ if (((ssd->flags & SSD_KEY) == SSD_KEY_NO_SENSE) || (((ssd->flags & SSD_KEY) == SSD_KEY_NOT_READY) && (ssd->add_sense_code == 0x04) && ((ssd->add_sense_code_qual == 0x01) || (ssd->add_sense_code_qual == 0x02)))) break; /* ignore NO_SENSE or NOT_READY in one case */ mly_printf(sc, "physical device %d:%d %s\n", me->channel, me->target, tp); mly_printf(sc, " sense key %d asc %02x ascq %02x\n", ssd->flags & SSD_KEY, ssd->add_sense_code, ssd->add_sense_code_qual); mly_printf(sc, " info %4D csi %4D\n", ssd->info, "", ssd->cmd_spec_info, ""); if (action == 'r') sc->mly_btl[me->channel][me->target].mb_flags |= MLY_BTL_RESCAN; break; case 'e': mly_printf(sc, tp, me->target, me->lun); printf("\n"); break; case 'c': mly_printf(sc, "controller %s\n", tp); break; case '?': mly_printf(sc, "%s - %d\n", tp, me->code); break; default: /* probably a 'noisy' event being ignored */ break; } } /******************************************************************************** * Perform periodic activities. */ static void mly_periodic(void *data) { struct mly_softc *sc = (struct mly_softc *)data; int bus, target; debug_called(2); MLY_ASSERT_LOCKED(sc); /* * Scan devices. */ for (bus = 0; bus < sc->mly_cam_channels; bus++) { if (MLY_BUS_IS_VALID(sc, bus)) { for (target = 0; target < MLY_MAX_TARGETS; target++) { /* ignore the controller in this scan */ if (target == sc->mly_controllerparam->initiator_id) continue; /* perform device rescan? */ if (sc->mly_btl[bus][target].mb_flags & MLY_BTL_RESCAN) mly_rescan_btl(sc, bus, target); } } } /* check for controller events */ mly_check_event(sc); /* reschedule ourselves */ callout_schedule(&sc->mly_periodic, MLY_PERIODIC_INTERVAL * hz); } /******************************************************************************** ******************************************************************************** Command Processing ******************************************************************************** ********************************************************************************/ /******************************************************************************** * Run a command and wait for it to complete. * */ static int mly_immediate_command(struct mly_command *mc) { struct mly_softc *sc = mc->mc_sc; int error; debug_called(1); MLY_ASSERT_LOCKED(sc); if ((error = mly_start(mc))) { return(error); } if (sc->mly_state & MLY_STATE_INTERRUPTS_ON) { /* sleep on the command */ while(!(mc->mc_flags & MLY_CMD_COMPLETE)) { mtx_sleep(mc, &sc->mly_lock, PRIBIO, "mlywait", 0); } } else { /* spin and collect status while we do */ while(!(mc->mc_flags & MLY_CMD_COMPLETE)) { mly_done(mc->mc_sc); } } return(0); } /******************************************************************************** * Deliver a command to the controller. 
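 *
 * Two delivery paths are used: the hardware mailbox, which accepts a single
 * command at a time (gated on MLY_HM_CMDSENT), and the memory mailbox ring
 * once MLY_STATE_MMBOX_ACTIVE is set.  In the latter case the flag byte is
 * written last, behind write barriers, so the controller never observes a
 * partially-copied packet.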
* * XXX it would be good to just queue commands that we can't submit immediately * and send them later, but we probably want a wrapper for that so that * we don't hang on a failed submission for an immediate command. */ static int mly_start(struct mly_command *mc) { struct mly_softc *sc = mc->mc_sc; union mly_command_packet *pkt; debug_called(2); MLY_ASSERT_LOCKED(sc); /* * Set the command up for delivery to the controller. */ mly_map_command(mc); mc->mc_packet->generic.command_id = mc->mc_slot; #ifdef MLY_DEBUG mc->mc_timestamp = time_second; #endif /* * Do we have to use the hardware mailbox? */ if (!(sc->mly_state & MLY_STATE_MMBOX_ACTIVE)) { /* * Check to see if the controller is ready for us. */ if (MLY_IDBR_TRUE(sc, MLY_HM_CMDSENT)) { return(EBUSY); } mc->mc_flags |= MLY_CMD_BUSY; /* * It's ready, send the command. */ MLY_SET_MBOX(sc, sc->mly_command_mailbox, &mc->mc_packetphys); MLY_SET_REG(sc, sc->mly_idbr, MLY_HM_CMDSENT); } else { /* use memory-mailbox mode */ pkt = &sc->mly_mmbox->mmm_command[sc->mly_mmbox_command_index]; /* check to see if the next index is free yet */ if (pkt->mmbox.flag != 0) { return(EBUSY); } mc->mc_flags |= MLY_CMD_BUSY; /* copy in new command */ bcopy(mc->mc_packet->mmbox.data, pkt->mmbox.data, sizeof(pkt->mmbox.data)); /* barrier to ensure completion of previous write before we write the flag */ bus_barrier(sc->mly_regs_resource, 0, 0, BUS_SPACE_BARRIER_WRITE); /* copy flag last */ pkt->mmbox.flag = mc->mc_packet->mmbox.flag; /* barrier to ensure completion of previous write before we notify the controller */ bus_barrier(sc->mly_regs_resource, 0, 0, BUS_SPACE_BARRIER_WRITE); /* signal controller, update index */ MLY_SET_REG(sc, sc->mly_idbr, MLY_AM_CMDSENT); sc->mly_mmbox_command_index = (sc->mly_mmbox_command_index + 1) % MLY_MMBOX_COMMANDS; } mly_enqueue_busy(mc); return(0); } /******************************************************************************** * Pick up command status from the controller, schedule a completion event */ static void mly_done(struct mly_softc *sc) { struct mly_command *mc; union mly_status_packet *sp; u_int16_t slot; int worked; MLY_ASSERT_LOCKED(sc); worked = 0; /* pick up hardware-mailbox commands */ if (MLY_ODBR_TRUE(sc, MLY_HM_STSREADY)) { slot = MLY_GET_REG2(sc, sc->mly_status_mailbox); if (slot < MLY_SLOT_MAX) { mc = &sc->mly_command[slot - MLY_SLOT_START]; mc->mc_status = MLY_GET_REG(sc, sc->mly_status_mailbox + 2); mc->mc_sense = MLY_GET_REG(sc, sc->mly_status_mailbox + 3); mc->mc_resid = MLY_GET_REG4(sc, sc->mly_status_mailbox + 4); mly_remove_busy(mc); mc->mc_flags &= ~MLY_CMD_BUSY; mly_enqueue_complete(mc); worked = 1; } else { /* slot 0xffff may mean "extremely bogus command" */ mly_printf(sc, "got HM completion for illegal slot %u\n", slot); } /* unconditionally acknowledge status */ MLY_SET_REG(sc, sc->mly_odbr, MLY_HM_STSREADY); MLY_SET_REG(sc, sc->mly_idbr, MLY_HM_STSACK); } /* pick up memory-mailbox commands */ if (MLY_ODBR_TRUE(sc, MLY_AM_STSREADY)) { for (;;) { sp = &sc->mly_mmbox->mmm_status[sc->mly_mmbox_status_index]; /* check for more status */ if (sp->mmbox.flag == 0) break; /* get slot number */ slot = sp->status.command_id; if (slot < MLY_SLOT_MAX) { mc = &sc->mly_command[slot - MLY_SLOT_START]; mc->mc_status = sp->status.status; mc->mc_sense = sp->status.sense_length; mc->mc_resid = sp->status.residue; mly_remove_busy(mc); mc->mc_flags &= ~MLY_CMD_BUSY; mly_enqueue_complete(mc); worked = 1; } else { /* slot 0xffff may mean "extremely bogus command" */ mly_printf(sc, "got AM completion for illegal 
slot %u at %d\n", slot, sc->mly_mmbox_status_index); } /* clear and move to next index */ sp->mmbox.flag = 0; sc->mly_mmbox_status_index = (sc->mly_mmbox_status_index + 1) % MLY_MMBOX_STATUS; } /* acknowledge that we have collected status value(s) */ MLY_SET_REG(sc, sc->mly_odbr, MLY_AM_STSREADY); } if (worked) { if (sc->mly_state & MLY_STATE_INTERRUPTS_ON) taskqueue_enqueue(taskqueue_thread, &sc->mly_task_complete); else mly_complete(sc); } } /******************************************************************************** * Process completed commands */ static void mly_complete_handler(void *context, int pending) { struct mly_softc *sc = (struct mly_softc *)context; MLY_LOCK(sc); mly_complete(sc); MLY_UNLOCK(sc); } static void mly_complete(struct mly_softc *sc) { struct mly_command *mc; void (* mc_complete)(struct mly_command *mc); debug_called(2); /* * Spin pulling commands off the completed queue and processing them. */ while ((mc = mly_dequeue_complete(sc)) != NULL) { /* * Free controller resources, mark command complete. * * Note that as soon as we mark the command complete, it may be freed * out from under us, so we need to save the mc_complete field in * order to later avoid dereferencing mc. (We would not expect to * have a polling/sleeping consumer with mc_complete != NULL). */ mly_unmap_command(mc); mc_complete = mc->mc_complete; mc->mc_flags |= MLY_CMD_COMPLETE; /* * Call completion handler or wake up sleeping consumer. */ if (mc_complete != NULL) { mc_complete(mc); } else { wakeup(mc); } } /* * XXX if we are deferring commands due to controller-busy status, we should * retry submitting them here. */ } /******************************************************************************** ******************************************************************************** Command Buffer Management ******************************************************************************** ********************************************************************************/ /******************************************************************************** * Allocate a command. */ static int mly_alloc_command(struct mly_softc *sc, struct mly_command **mcp) { struct mly_command *mc; debug_called(3); if ((mc = mly_dequeue_free(sc)) == NULL) return(ENOMEM); *mcp = mc; return(0); } /******************************************************************************** * Release a command back to the freelist. */ static void mly_release_command(struct mly_command *mc) { debug_called(3); /* * Reset parts of the command that might otherwise confuse a consumer * when the command is allocated again later. */ mc->mc_data = NULL; mc->mc_flags = 0; mc->mc_complete = NULL; mc->mc_private = NULL; /* * By default, we set up to overwrite the command packet with * sense information. */ mc->mc_packet->generic.sense_buffer_address = mc->mc_packetphys; mc->mc_packet->generic.maximum_sense_size = sizeof(union mly_command_packet); mly_enqueue_free(mc); } /******************************************************************************** * Map helper for command allocation. */ static void mly_alloc_commands_map(void *arg, bus_dma_segment_t *segs, int nseg, int error) { struct mly_softc *sc = (struct mly_softc *)arg; debug_called(1); sc->mly_packetphys = segs[0].ds_addr; } /******************************************************************************** * Allocate and initialise command and packet structures. * * If the controller supports fewer than MLY_MAX_COMMANDS commands, limit our * allocation to that number.
If we don't yet know how many commands the * controller supports, allocate a very small set (suitable for initialisation * purposes only). */ static int mly_alloc_commands(struct mly_softc *sc) { struct mly_command *mc; int i, ncmd; if (sc->mly_controllerinfo == NULL) { ncmd = 4; } else { ncmd = min(MLY_MAX_COMMANDS, sc->mly_controllerinfo->maximum_parallel_commands); } /* * Allocate enough space for all the command packets in one chunk and * map them permanently into controller-visible space. */ if (bus_dmamem_alloc(sc->mly_packet_dmat, (void **)&sc->mly_packet, BUS_DMA_NOWAIT, &sc->mly_packetmap)) { return(ENOMEM); } if (bus_dmamap_load(sc->mly_packet_dmat, sc->mly_packetmap, sc->mly_packet, ncmd * sizeof(union mly_command_packet), mly_alloc_commands_map, sc, BUS_DMA_NOWAIT) != 0) return (ENOMEM); for (i = 0; i < ncmd; i++) { mc = &sc->mly_command[i]; bzero(mc, sizeof(*mc)); mc->mc_sc = sc; mc->mc_slot = MLY_SLOT_START + i; mc->mc_packet = sc->mly_packet + i; mc->mc_packetphys = sc->mly_packetphys + (i * sizeof(union mly_command_packet)); if (!bus_dmamap_create(sc->mly_buffer_dmat, 0, &mc->mc_datamap)) mly_release_command(mc); } return(0); } /******************************************************************************** * Free all the storage held by commands. * * Must be called with all commands on the free list. */ static void mly_release_commands(struct mly_softc *sc) { struct mly_command *mc; /* throw away command buffer DMA maps */ while (mly_alloc_command(sc, &mc) == 0) bus_dmamap_destroy(sc->mly_buffer_dmat, mc->mc_datamap); /* release the packet storage */ if (sc->mly_packet != NULL) { bus_dmamap_unload(sc->mly_packet_dmat, sc->mly_packetmap); bus_dmamem_free(sc->mly_packet_dmat, sc->mly_packet, sc->mly_packetmap); sc->mly_packet = NULL; } } /******************************************************************************** * Command-mapping helper function - populate this command's s/g table * with the s/g entries for its data. */ static void mly_map_command_sg(void *arg, bus_dma_segment_t *segs, int nseg, int error) { struct mly_command *mc = (struct mly_command *)arg; struct mly_softc *sc = mc->mc_sc; struct mly_command_generic *gen = &(mc->mc_packet->generic); struct mly_sg_entry *sg; int i, tabofs; debug_called(2); /* can we use the transfer structure directly? */ if (nseg <= 2) { sg = &gen->transfer.direct.sg[0]; gen->command_control.extended_sg_table = 0; } else { tabofs = ((mc->mc_slot - MLY_SLOT_START) * MLY_MAX_SGENTRIES); sg = sc->mly_sg_table + tabofs; gen->transfer.indirect.entries[0] = nseg; gen->transfer.indirect.table_physaddr[0] = sc->mly_sg_busaddr + (tabofs * sizeof(struct mly_sg_entry)); gen->command_control.extended_sg_table = 1; } /* copy the s/g table */ for (i = 0; i < nseg; i++) { sg[i].physaddr = segs[i].ds_addr; sg[i].length = segs[i].ds_len; } } #if 0 /******************************************************************************** * Command-mapping helper function - save the cdb's physical address. * * We don't support 'large' SCSI commands at this time, so this is unused. */ static void mly_map_command_cdb(void *arg, bus_dma_segment_t *segs, int nseg, int error) { struct mly_command *mc = (struct mly_command *)arg; debug_called(2); /* XXX can we safely assume that a CDB will never cross a page boundary? 
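 * The check below approximates the answer by comparing the CDB's offset
 * within its page before and after adding the CDB length; a smaller offset
 * after the addition means the region wrapped past a page boundary.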
*/ if ((segs[0].ds_addr % PAGE_SIZE) > ((segs[0].ds_addr + mc->mc_packet->scsi_large.cdb_length) % PAGE_SIZE)) panic("cdb crosses page boundary"); /* fix up fields in the command packet */ mc->mc_packet->scsi_large.cdb_physaddr = segs[0].ds_addr; } #endif /******************************************************************************** * Map a command into controller-visible space */ static void mly_map_command(struct mly_command *mc) { struct mly_softc *sc = mc->mc_sc; debug_called(2); /* don't map more than once */ if (mc->mc_flags & MLY_CMD_MAPPED) return; /* does the command have a data buffer? */ if (mc->mc_data != NULL) { if (mc->mc_flags & MLY_CMD_CCB) bus_dmamap_load_ccb(sc->mly_buffer_dmat, mc->mc_datamap, mc->mc_data, mly_map_command_sg, mc, 0); else bus_dmamap_load(sc->mly_buffer_dmat, mc->mc_datamap, mc->mc_data, mc->mc_length, mly_map_command_sg, mc, 0); if (mc->mc_flags & MLY_CMD_DATAIN) bus_dmamap_sync(sc->mly_buffer_dmat, mc->mc_datamap, BUS_DMASYNC_PREREAD); if (mc->mc_flags & MLY_CMD_DATAOUT) bus_dmamap_sync(sc->mly_buffer_dmat, mc->mc_datamap, BUS_DMASYNC_PREWRITE); } mc->mc_flags |= MLY_CMD_MAPPED; } /******************************************************************************** * Unmap a command from controller-visible space */ static void mly_unmap_command(struct mly_command *mc) { struct mly_softc *sc = mc->mc_sc; debug_called(2); if (!(mc->mc_flags & MLY_CMD_MAPPED)) return; /* does the command have a data buffer? */ if (mc->mc_data != NULL) { if (mc->mc_flags & MLY_CMD_DATAIN) bus_dmamap_sync(sc->mly_buffer_dmat, mc->mc_datamap, BUS_DMASYNC_POSTREAD); if (mc->mc_flags & MLY_CMD_DATAOUT) bus_dmamap_sync(sc->mly_buffer_dmat, mc->mc_datamap, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(sc->mly_buffer_dmat, mc->mc_datamap); } mc->mc_flags &= ~MLY_CMD_MAPPED; } /******************************************************************************** ******************************************************************************** CAM interface ******************************************************************************** ********************************************************************************/ /******************************************************************************** * Attach the physical and virtual SCSI busses to CAM. * * Physical bus numbering starts from 0, virtual bus numbering from one greater * than the highest physical bus. Physical busses are only registered if * the kernel environment variable "hw.mly.register_physical_channels" is set. * * When we refer to a "bus", we are referring to the bus number registered with * the SIM, whereas a "channel" is a channel number given to the adapter. In order * to keep things simple, we map these 1:1, so "bus" and "channel" may be used * interchangeably. */ static int mly_cam_attach(struct mly_softc *sc) { struct cam_devq *devq; int chn, i; debug_called(1); /* * Allocate a devq for all our channels combined. */ if ((devq = cam_simq_alloc(sc->mly_controllerinfo->maximum_parallel_commands)) == NULL) { mly_printf(sc, "can't allocate CAM SIM queue\n"); return(ENOMEM); } /* * If physical channel registration has been requested, register these first. * Note that we enable tagged command queueing for physical channels. 
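 *
 * Since testenv() only tests for the variable's presence, any value set
 * from loader.conf(5) enables registration; a minimal sketch of the
 * tunable (the value itself is arbitrary):
 *
 *	hw.mly.register_physical_channels="1"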
*/ if (testenv("hw.mly.register_physical_channels")) { chn = 0; for (i = 0; i < sc->mly_controllerinfo->physical_channels_present; i++, chn++) { if ((sc->mly_cam_sim[chn] = cam_sim_alloc(mly_cam_action, mly_cam_poll, "mly", sc, device_get_unit(sc->mly_dev), &sc->mly_lock, sc->mly_controllerinfo->maximum_parallel_commands, 1, devq)) == NULL) { return(ENOMEM); } MLY_LOCK(sc); if (xpt_bus_register(sc->mly_cam_sim[chn], sc->mly_dev, chn)) { MLY_UNLOCK(sc); mly_printf(sc, "CAM XPT physical channel registration failed\n"); return(ENXIO); } MLY_UNLOCK(sc); debug(1, "registered physical channel %d", chn); } } /* * Register our virtual channels, with bus numbers matching channel numbers. */ chn = sc->mly_controllerinfo->physical_channels_present; for (i = 0; i < sc->mly_controllerinfo->virtual_channels_present; i++, chn++) { if ((sc->mly_cam_sim[chn] = cam_sim_alloc(mly_cam_action, mly_cam_poll, "mly", sc, device_get_unit(sc->mly_dev), &sc->mly_lock, sc->mly_controllerinfo->maximum_parallel_commands, 0, devq)) == NULL) { return(ENOMEM); } MLY_LOCK(sc); if (xpt_bus_register(sc->mly_cam_sim[chn], sc->mly_dev, chn)) { MLY_UNLOCK(sc); mly_printf(sc, "CAM XPT virtual channel registration failed\n"); return(ENXIO); } MLY_UNLOCK(sc); debug(1, "registered virtual channel %d", chn); } /* * This is the total number of channels that (might have been) registered with * CAM. Some may not have been; check the mly_cam_sim array to be certain. */ sc->mly_cam_channels = sc->mly_controllerinfo->physical_channels_present + sc->mly_controllerinfo->virtual_channels_present; return(0); } /******************************************************************************** * Detach from CAM */ static void mly_cam_detach(struct mly_softc *sc) { int i; debug_called(1); MLY_LOCK(sc); for (i = 0; i < sc->mly_cam_channels; i++) { if (sc->mly_cam_sim[i] != NULL) { xpt_bus_deregister(cam_sim_path(sc->mly_cam_sim[i])); cam_sim_free(sc->mly_cam_sim[i], 0); } } MLY_UNLOCK(sc); if (sc->mly_cam_devq != NULL) cam_simq_free(sc->mly_cam_devq); } /************************************************************************ * Rescan a device.
*/ static void mly_cam_rescan_btl(struct mly_softc *sc, int bus, int target) { union ccb *ccb; debug_called(1); if ((ccb = xpt_alloc_ccb()) == NULL) { mly_printf(sc, "rescan failed (can't allocate CCB)\n"); return; } if (xpt_create_path(&ccb->ccb_h.path, NULL, cam_sim_path(sc->mly_cam_sim[bus]), target, 0) != CAM_REQ_CMP) { mly_printf(sc, "rescan failed (can't create path)\n"); xpt_free_ccb(ccb); return; } debug(1, "rescan target %d:%d", bus, target); xpt_rescan(ccb); } /******************************************************************************** * Handle an action requested by CAM */ static void mly_cam_action(struct cam_sim *sim, union ccb *ccb) { struct mly_softc *sc = cam_sim_softc(sim); debug_called(2); MLY_ASSERT_LOCKED(sc); switch (ccb->ccb_h.func_code) { /* perform SCSI I/O */ case XPT_SCSI_IO: if (!mly_cam_action_io(sim, (struct ccb_scsiio *)&ccb->csio)) return; break; /* perform geometry calculations */ case XPT_CALC_GEOMETRY: { struct ccb_calc_geometry *ccg = &ccb->ccg; u_int32_t secs_per_cylinder; debug(2, "XPT_CALC_GEOMETRY %d:%d:%d", cam_sim_bus(sim), ccb->ccb_h.target_id, ccb->ccb_h.target_lun); if (sc->mly_controllerparam->bios_geometry == MLY_BIOSGEOM_8G) { ccg->heads = 255; ccg->secs_per_track = 63; } else { /* MLY_BIOSGEOM_2G */ ccg->heads = 128; ccg->secs_per_track = 32; } secs_per_cylinder = ccg->heads * ccg->secs_per_track; ccg->cylinders = ccg->volume_size / secs_per_cylinder; ccb->ccb_h.status = CAM_REQ_CMP; break; } /* handle path attribute inquiry */ case XPT_PATH_INQ: { struct ccb_pathinq *cpi = &ccb->cpi; debug(2, "XPT_PATH_INQ %d:%d:%d", cam_sim_bus(sim), ccb->ccb_h.target_id, ccb->ccb_h.target_lun); cpi->version_num = 1; cpi->hba_inquiry = PI_TAG_ABLE; /* XXX extra flags for physical channels? */ cpi->target_sprt = 0; cpi->hba_misc = 0; cpi->max_target = MLY_MAX_TARGETS - 1; cpi->max_lun = MLY_MAX_LUNS - 1; cpi->initiator_id = sc->mly_controllerparam->initiator_id; strlcpy(cpi->sim_vid, "FreeBSD", SIM_IDLEN); strlcpy(cpi->hba_vid, "Mylex", HBA_IDLEN); strlcpy(cpi->dev_name, cam_sim_name(sim), DEV_IDLEN); cpi->unit_number = cam_sim_unit(sim); cpi->bus_id = cam_sim_bus(sim); cpi->base_transfer_speed = 132 * 1024; /* XXX what to set this to? */ cpi->transport = XPORT_SPI; cpi->transport_version = 2; cpi->protocol = PROTO_SCSI; cpi->protocol_version = SCSI_REV_2; ccb->ccb_h.status = CAM_REQ_CMP; break; } case XPT_GET_TRAN_SETTINGS: { struct ccb_trans_settings *cts = &ccb->cts; int bus, target; struct ccb_trans_settings_scsi *scsi = &cts->proto_specific.scsi; struct ccb_trans_settings_spi *spi = &cts->xport_specific.spi; cts->protocol = PROTO_SCSI; cts->protocol_version = SCSI_REV_2; cts->transport = XPORT_SPI; cts->transport_version = 2; scsi->flags = 0; scsi->valid = 0; spi->flags = 0; spi->valid = 0; bus = cam_sim_bus(sim); target = cts->ccb_h.target_id; debug(2, "XPT_GET_TRAN_SETTINGS %d:%d", bus, target); /* logical device? */ if (sc->mly_btl[bus][target].mb_flags & MLY_BTL_LOGICAL) { /* nothing special for these */ /* physical device? 
*/ } else if (sc->mly_btl[bus][target].mb_flags & MLY_BTL_PHYSICAL) { /* allow CAM to try tagged transactions */ scsi->flags |= CTS_SCSI_FLAGS_TAG_ENB; scsi->valid |= CTS_SCSI_VALID_TQ; /* convert speed (MHz) to usec */ if (sc->mly_btl[bus][target].mb_speed == 0) { spi->sync_period = 1000000 / 5; } else { spi->sync_period = 1000000 / sc->mly_btl[bus][target].mb_speed; } /* convert bus width to CAM internal encoding */ switch (sc->mly_btl[bus][target].mb_width) { case 32: spi->bus_width = MSG_EXT_WDTR_BUS_32_BIT; break; case 16: spi->bus_width = MSG_EXT_WDTR_BUS_16_BIT; break; case 8: default: spi->bus_width = MSG_EXT_WDTR_BUS_8_BIT; break; } spi->valid |= CTS_SPI_VALID_SYNC_RATE | CTS_SPI_VALID_BUS_WIDTH; /* not a device, bail out */ } else { cts->ccb_h.status = CAM_REQ_CMP_ERR; break; } /* disconnect always OK */ spi->flags |= CTS_SPI_FLAGS_DISC_ENB; spi->valid |= CTS_SPI_VALID_DISC; cts->ccb_h.status = CAM_REQ_CMP; break; } default: /* we can't do this */ debug(2, "unsupported func_code = 0x%x", ccb->ccb_h.func_code); ccb->ccb_h.status = CAM_REQ_INVALID; break; } xpt_done(ccb); } /******************************************************************************** * Handle an I/O operation requested by CAM */ static int mly_cam_action_io(struct cam_sim *sim, struct ccb_scsiio *csio) { struct mly_softc *sc = cam_sim_softc(sim); struct mly_command *mc; struct mly_command_scsi_small *ss; int bus, target; int error; bus = cam_sim_bus(sim); target = csio->ccb_h.target_id; debug(2, "XPT_SCSI_IO %d:%d:%d", bus, target, csio->ccb_h.target_lun); /* validate bus number */ if (!MLY_BUS_IS_VALID(sc, bus)) { debug(0, " invalid bus %d", bus); csio->ccb_h.status = CAM_REQ_CMP_ERR; } /* check for I/O attempt to a protected device */ if (sc->mly_btl[bus][target].mb_flags & MLY_BTL_PROTECTED) { debug(2, " device protected"); csio->ccb_h.status = CAM_REQ_CMP_ERR; } /* check for I/O attempt to nonexistent device */ if (!(sc->mly_btl[bus][target].mb_flags & (MLY_BTL_LOGICAL | MLY_BTL_PHYSICAL))) { debug(2, " device %d:%d does not exist", bus, target); csio->ccb_h.status = CAM_REQ_CMP_ERR; } /* XXX increase if/when we support large SCSI commands */ if (csio->cdb_len > MLY_CMD_SCSI_SMALL_CDB) { debug(0, " command too large (%d > %d)", csio->cdb_len, MLY_CMD_SCSI_SMALL_CDB); csio->ccb_h.status = CAM_REQ_CMP_ERR; } /* check that the CDB pointer is not to a physical address */ if ((csio->ccb_h.flags & CAM_CDB_POINTER) && (csio->ccb_h.flags & CAM_CDB_PHYS)) { debug(0, " CDB pointer is to physical address"); csio->ccb_h.status = CAM_REQ_CMP_ERR; } /* abandon aborted ccbs or those that have failed validation */ if ((csio->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_INPROG) { debug(2, "abandoning CCB due to abort/validation failure"); return(EINVAL); } /* * Get a command, or push the ccb back to CAM and freeze the queue. */ if ((error = mly_alloc_command(sc, &mc))) { xpt_freeze_simq(sim, 1); csio->ccb_h.status |= CAM_REQUEUE_REQ; sc->mly_qfrzn_cnt++; return(error); } /* build the command */ mc->mc_data = csio; mc->mc_length = csio->dxfer_len; mc->mc_complete = mly_cam_complete; mc->mc_private = csio; mc->mc_flags |= MLY_CMD_CCB; /* XXX This code doesn't set the data direction in mc_flags.
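 * As a consequence, mly_map_command() will skip both the PREREAD and the
 * PREWRITE bus_dmamap_sync() calls for CCB-mapped buffers, since those
 * syncs are keyed off MLY_CMD_DATAIN/MLY_CMD_DATAOUT.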
*/ /* save the bus number in the ccb for later recovery XXX should be a better way */ csio->ccb_h.sim_priv.entries[0].field = bus; /* build the packet for the controller */ ss = &mc->mc_packet->scsi_small; ss->opcode = MDACMD_SCSI; if (csio->ccb_h.flags & CAM_DIS_DISCONNECT) ss->command_control.disable_disconnect = 1; if ((csio->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_OUT) ss->command_control.data_direction = MLY_CCB_WRITE; ss->data_size = csio->dxfer_len; ss->addr.phys.lun = csio->ccb_h.target_lun; ss->addr.phys.target = csio->ccb_h.target_id; ss->addr.phys.channel = bus; if (csio->ccb_h.timeout < (60 * 1000)) { ss->timeout.value = csio->ccb_h.timeout / 1000; ss->timeout.scale = MLY_TIMEOUT_SECONDS; } else if (csio->ccb_h.timeout < (60 * 60 * 1000)) { ss->timeout.value = csio->ccb_h.timeout / (60 * 1000); ss->timeout.scale = MLY_TIMEOUT_MINUTES; } else { ss->timeout.value = csio->ccb_h.timeout / (60 * 60 * 1000); /* overflow? */ ss->timeout.scale = MLY_TIMEOUT_HOURS; } ss->maximum_sense_size = csio->sense_len; ss->cdb_length = csio->cdb_len; if (csio->ccb_h.flags & CAM_CDB_POINTER) { bcopy(csio->cdb_io.cdb_ptr, ss->cdb, csio->cdb_len); } else { bcopy(csio->cdb_io.cdb_bytes, ss->cdb, csio->cdb_len); } /* give the command to the controller */ if ((error = mly_start(mc))) { xpt_freeze_simq(sim, 1); csio->ccb_h.status |= CAM_REQUEUE_REQ; sc->mly_qfrzn_cnt++; return(error); } return(0); } /******************************************************************************** * Check for possibly-completed commands. */ static void mly_cam_poll(struct cam_sim *sim) { struct mly_softc *sc = cam_sim_softc(sim); debug_called(2); mly_done(sc); } /******************************************************************************** * Handle completion of a command - pass results back through the CCB */ static void mly_cam_complete(struct mly_command *mc) { struct mly_softc *sc = mc->mc_sc; struct ccb_scsiio *csio = (struct ccb_scsiio *)mc->mc_private; struct scsi_inquiry_data *inq = (struct scsi_inquiry_data *)csio->data_ptr; struct mly_btl *btl; u_int8_t cmd; int bus, target; debug_called(2); csio->scsi_status = mc->mc_status; switch(mc->mc_status) { case SCSI_STATUS_OK: /* * In order to report logical device type and status, we overwrite * the result of the INQUIRY command to logical devices. */ bus = csio->ccb_h.sim_priv.entries[0].field; target = csio->ccb_h.target_id; /* XXX validate bus/target? */ if (sc->mly_btl[bus][target].mb_flags & MLY_BTL_LOGICAL) { if (csio->ccb_h.flags & CAM_CDB_POINTER) { cmd = *csio->cdb_io.cdb_ptr; } else { cmd = csio->cdb_io.cdb_bytes[0]; } if (cmd == INQUIRY) { btl = &sc->mly_btl[bus][target]; padstr(inq->vendor, mly_describe_code(mly_table_device_type, btl->mb_type), 8); padstr(inq->product, mly_describe_code(mly_table_device_state, btl->mb_state), 16); padstr(inq->revision, "", 4); } } debug(2, "SCSI_STATUS_OK"); csio->ccb_h.status = CAM_REQ_CMP; break; case SCSI_STATUS_CHECK_COND: debug(1, "SCSI_STATUS_CHECK_COND sense %d resid %d", mc->mc_sense, mc->mc_resid); csio->ccb_h.status = CAM_SCSI_STATUS_ERROR; bzero(&csio->sense_data, SSD_FULL_SIZE); bcopy(mc->mc_packet, &csio->sense_data, mc->mc_sense); csio->sense_len = mc->mc_sense; csio->ccb_h.status |= CAM_AUTOSNS_VALID; csio->resid = mc->mc_resid; /* XXX this is a signed value... 
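 * (CAM treats the residual as a two's-complement quantity, so a
 * sufficiently large residue reported by the controller would be
 * interpreted as negative here.)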
*/ break; case SCSI_STATUS_BUSY: debug(1, "SCSI_STATUS_BUSY"); csio->ccb_h.status = CAM_SCSI_BUSY; break; default: debug(1, "unknown status 0x%x", csio->scsi_status); csio->ccb_h.status = CAM_REQ_CMP_ERR; break; } if (sc->mly_qfrzn_cnt) { csio->ccb_h.status |= CAM_RELEASE_SIMQ; sc->mly_qfrzn_cnt--; } xpt_done((union ccb *)csio); mly_release_command(mc); } /******************************************************************************** * Find a peripheral attached at (bus),(target) */ static struct cam_periph * mly_find_periph(struct mly_softc *sc, int bus, int target) { struct cam_periph *periph; struct cam_path *path; int status; status = xpt_create_path(&path, NULL, cam_sim_path(sc->mly_cam_sim[bus]), target, 0); if (status == CAM_REQ_CMP) { periph = cam_periph_find(path, NULL); xpt_free_path(path); } else { periph = NULL; } return(periph); } /******************************************************************************** * Name the device at (bus)(target) */ static int mly_name_device(struct mly_softc *sc, int bus, int target) { struct cam_periph *periph; if ((periph = mly_find_periph(sc, bus, target)) != NULL) { sprintf(sc->mly_btl[bus][target].mb_name, "%s%d", periph->periph_name, periph->unit_number); return(0); } sc->mly_btl[bus][target].mb_name[0] = 0; return(ENOENT); } /******************************************************************************** ******************************************************************************** Hardware Control ******************************************************************************** ********************************************************************************/ /******************************************************************************** * Handshake with the firmware while the card is being initialised. */ static int mly_fwhandshake(struct mly_softc *sc) { u_int8_t error, param0, param1; int spinup = 0; debug_called(1); /* set HM_STSACK and let the firmware initialise */ MLY_SET_REG(sc, sc->mly_idbr, MLY_HM_STSACK); DELAY(1000); /* too short? */ /* if HM_STSACK is still true, the controller is initialising */ if (!MLY_IDBR_TRUE(sc, MLY_HM_STSACK)) return(0); mly_printf(sc, "controller initialisation started\n"); /* spin waiting for initialisation to finish, or for a message to be delivered */ while (MLY_IDBR_TRUE(sc, MLY_HM_STSACK)) { /* check for a message */ if (MLY_ERROR_VALID(sc)) { error = MLY_GET_REG(sc, sc->mly_error_status) & ~MLY_MSG_EMPTY; param0 = MLY_GET_REG(sc, sc->mly_command_mailbox); param1 = MLY_GET_REG(sc, sc->mly_command_mailbox + 1); switch(error) { case MLY_MSG_SPINUP: if (!spinup) { mly_printf(sc, "drive spinup in progress\n"); spinup = 1; /* only print this once (should print drive being spun?)
*/ } break; case MLY_MSG_RACE_RECOVERY_FAIL: mly_printf(sc, "mirror race recovery failed, one or more drives offline\n"); break; case MLY_MSG_RACE_IN_PROGRESS: mly_printf(sc, "mirror race recovery in progress\n"); break; case MLY_MSG_RACE_ON_CRITICAL: mly_printf(sc, "mirror race recovery on a critical drive\n"); break; case MLY_MSG_PARITY_ERROR: mly_printf(sc, "FATAL MEMORY PARITY ERROR\n"); return(ENXIO); default: mly_printf(sc, "unknown initialisation code 0x%x\n", error); } } } return(0); } /******************************************************************************** ******************************************************************************** Debugging and Diagnostics ******************************************************************************** ********************************************************************************/ /******************************************************************************** * Print some information about the controller. */ static void mly_describe_controller(struct mly_softc *sc) { struct mly_ioctl_getcontrollerinfo *mi = sc->mly_controllerinfo; mly_printf(sc, "%16s, %d channel%s, firmware %d.%02d-%d-%02d (%02d%02d%02d%02d), %dMB RAM\n", mi->controller_name, mi->physical_channels_present, (mi->physical_channels_present) > 1 ? "s" : "", mi->fw_major, mi->fw_minor, mi->fw_turn, mi->fw_build, /* XXX turn encoding? */ mi->fw_century, mi->fw_year, mi->fw_month, mi->fw_day, mi->memory_size); if (bootverbose) { mly_printf(sc, "%s %s (%x), %dMHz %d-bit %.16s\n", mly_describe_code(mly_table_oemname, mi->oem_information), mly_describe_code(mly_table_controllertype, mi->controller_type), mi->controller_type, mi->interface_speed, mi->interface_width, mi->interface_name); mly_printf(sc, "%dMB %dMHz %d-bit %s%s%s, cache %dMB\n", mi->memory_size, mi->memory_speed, mi->memory_width, mly_describe_code(mly_table_memorytype, mi->memory_type), mi->memory_parity ? "+parity": "",mi->memory_ecc ? "+ECC": "", mi->cache_size); mly_printf(sc, "CPU: %s @ %dMHz\n", mly_describe_code(mly_table_cputype, mi->cpu[0].type), mi->cpu[0].speed); if (mi->l2cache_size != 0) mly_printf(sc, "%dKB L2 cache\n", mi->l2cache_size); if (mi->exmemory_size != 0) mly_printf(sc, "%dMB %dMHz %d-bit private %s%s%s\n", mi->exmemory_size, mi->exmemory_speed, mi->exmemory_width, mly_describe_code(mly_table_memorytype, mi->exmemory_type), mi->exmemory_parity ? "+parity": "",mi->exmemory_ecc ? "+ECC": ""); mly_printf(sc, "battery backup %s\n", mi->bbu_present ? "present" : "not installed"); mly_printf(sc, "maximum data transfer %d blocks, maximum sg entries/command %d\n", mi->maximum_block_count, mi->maximum_sg_entries); mly_printf(sc, "logical devices present/critical/offline %d/%d/%d\n", mi->logical_devices_present, mi->logical_devices_critical, mi->logical_devices_offline); mly_printf(sc, "physical devices present %d\n", mi->physical_devices_present); mly_printf(sc, "physical disks present/offline %d/%d\n", mi->physical_disks_present, mi->physical_disks_offline); mly_printf(sc, "%d physical channel%s, %d virtual channel%s of %d possible\n", mi->physical_channels_present, mi->physical_channels_present == 1 ? "" : "s", mi->virtual_channels_present, mi->virtual_channels_present == 1 ? 
"" : "s", mi->virtual_channels_possible); mly_printf(sc, "%d parallel commands supported\n", mi->maximum_parallel_commands); mly_printf(sc, "%dMB flash ROM, %d of %d maximum cycles\n", mi->flash_size, mi->flash_age, mi->flash_maximum_age); } } #ifdef MLY_DEBUG /******************************************************************************** * Print some controller state */ static void mly_printstate(struct mly_softc *sc) { mly_printf(sc, "IDBR %02x ODBR %02x ERROR %02x (%x %x %x)\n", MLY_GET_REG(sc, sc->mly_idbr), MLY_GET_REG(sc, sc->mly_odbr), MLY_GET_REG(sc, sc->mly_error_status), sc->mly_idbr, sc->mly_odbr, sc->mly_error_status); mly_printf(sc, "IMASK %02x ISTATUS %02x\n", MLY_GET_REG(sc, sc->mly_interrupt_mask), MLY_GET_REG(sc, sc->mly_interrupt_status)); mly_printf(sc, "COMMAND %02x %02x %02x %02x %02x %02x %02x %02x\n", MLY_GET_REG(sc, sc->mly_command_mailbox), MLY_GET_REG(sc, sc->mly_command_mailbox + 1), MLY_GET_REG(sc, sc->mly_command_mailbox + 2), MLY_GET_REG(sc, sc->mly_command_mailbox + 3), MLY_GET_REG(sc, sc->mly_command_mailbox + 4), MLY_GET_REG(sc, sc->mly_command_mailbox + 5), MLY_GET_REG(sc, sc->mly_command_mailbox + 6), MLY_GET_REG(sc, sc->mly_command_mailbox + 7)); mly_printf(sc, "STATUS %02x %02x %02x %02x %02x %02x %02x %02x\n", MLY_GET_REG(sc, sc->mly_status_mailbox), MLY_GET_REG(sc, sc->mly_status_mailbox + 1), MLY_GET_REG(sc, sc->mly_status_mailbox + 2), MLY_GET_REG(sc, sc->mly_status_mailbox + 3), MLY_GET_REG(sc, sc->mly_status_mailbox + 4), MLY_GET_REG(sc, sc->mly_status_mailbox + 5), MLY_GET_REG(sc, sc->mly_status_mailbox + 6), MLY_GET_REG(sc, sc->mly_status_mailbox + 7)); mly_printf(sc, " %04x %08x\n", MLY_GET_REG2(sc, sc->mly_status_mailbox), MLY_GET_REG4(sc, sc->mly_status_mailbox + 4)); } struct mly_softc *mly_softc0 = NULL; void mly_printstate0(void) { if (mly_softc0 != NULL) mly_printstate(mly_softc0); } /******************************************************************************** * Print a command */ static void mly_print_command(struct mly_command *mc) { struct mly_softc *sc = mc->mc_sc; mly_printf(sc, "COMMAND @ %p\n", mc); mly_printf(sc, " slot %d\n", mc->mc_slot); mly_printf(sc, " status 0x%x\n", mc->mc_status); mly_printf(sc, " sense len %d\n", mc->mc_sense); mly_printf(sc, " resid %d\n", mc->mc_resid); mly_printf(sc, " packet %p/0x%llx\n", mc->mc_packet, mc->mc_packetphys); if (mc->mc_packet != NULL) mly_print_packet(mc); mly_printf(sc, " data %p/%d\n", mc->mc_data, mc->mc_length); mly_printf(sc, " flags %b\n", mc->mc_flags, "\20\1busy\2complete\3slotted\4mapped\5datain\6dataout\n"); mly_printf(sc, " complete %p\n", mc->mc_complete); mly_printf(sc, " private %p\n", mc->mc_private); } /******************************************************************************** * Print a command packet */ static void mly_print_packet(struct mly_command *mc) { struct mly_softc *sc = mc->mc_sc; struct mly_command_generic *ge = (struct mly_command_generic *)mc->mc_packet; struct mly_command_scsi_small *ss = (struct mly_command_scsi_small *)mc->mc_packet; struct mly_command_scsi_large *sl = (struct mly_command_scsi_large *)mc->mc_packet; struct mly_command_ioctl *io = (struct mly_command_ioctl *)mc->mc_packet; int transfer; mly_printf(sc, " command_id %d\n", ge->command_id); mly_printf(sc, " opcode %d\n", ge->opcode); mly_printf(sc, " command_control fua %d dpo %d est %d dd %s nas %d ddis %d\n", ge->command_control.force_unit_access, ge->command_control.disable_page_out, ge->command_control.extended_sg_table, (ge->command_control.data_direction == MLY_CCB_WRITE) ? 
"WRITE" : "READ", ge->command_control.no_auto_sense, ge->command_control.disable_disconnect); mly_printf(sc, " data_size %d\n", ge->data_size); mly_printf(sc, " sense_buffer_address 0x%llx\n", ge->sense_buffer_address); mly_printf(sc, " lun %d\n", ge->addr.phys.lun); mly_printf(sc, " target %d\n", ge->addr.phys.target); mly_printf(sc, " channel %d\n", ge->addr.phys.channel); mly_printf(sc, " logical device %d\n", ge->addr.log.logdev); mly_printf(sc, " controller %d\n", ge->addr.phys.controller); mly_printf(sc, " timeout %d %s\n", ge->timeout.value, (ge->timeout.scale == MLY_TIMEOUT_SECONDS) ? "seconds" : ((ge->timeout.scale == MLY_TIMEOUT_MINUTES) ? "minutes" : "hours")); mly_printf(sc, " maximum_sense_size %d\n", ge->maximum_sense_size); switch(ge->opcode) { case MDACMD_SCSIPT: case MDACMD_SCSI: mly_printf(sc, " cdb length %d\n", ss->cdb_length); mly_printf(sc, " cdb %*D\n", ss->cdb_length, ss->cdb, " "); transfer = 1; break; case MDACMD_SCSILC: case MDACMD_SCSILCPT: mly_printf(sc, " cdb length %d\n", sl->cdb_length); mly_printf(sc, " cdb 0x%llx\n", sl->cdb_physaddr); transfer = 1; break; case MDACMD_IOCTL: mly_printf(sc, " sub_ioctl 0x%x\n", io->sub_ioctl); switch(io->sub_ioctl) { case MDACIOCTL_SETMEMORYMAILBOX: mly_printf(sc, " health_buffer_size %d\n", io->param.setmemorymailbox.health_buffer_size); mly_printf(sc, " health_buffer_phys 0x%llx\n", io->param.setmemorymailbox.health_buffer_physaddr); mly_printf(sc, " command_mailbox 0x%llx\n", io->param.setmemorymailbox.command_mailbox_physaddr); mly_printf(sc, " status_mailbox 0x%llx\n", io->param.setmemorymailbox.status_mailbox_physaddr); transfer = 0; break; case MDACIOCTL_SETREALTIMECLOCK: case MDACIOCTL_GETHEALTHSTATUS: case MDACIOCTL_GETCONTROLLERINFO: case MDACIOCTL_GETLOGDEVINFOVALID: case MDACIOCTL_GETPHYSDEVINFOVALID: case MDACIOCTL_GETPHYSDEVSTATISTICS: case MDACIOCTL_GETLOGDEVSTATISTICS: case MDACIOCTL_GETCONTROLLERSTATISTICS: case MDACIOCTL_GETBDT_FOR_SYSDRIVE: case MDACIOCTL_CREATENEWCONF: case MDACIOCTL_ADDNEWCONF: case MDACIOCTL_GETDEVCONFINFO: case MDACIOCTL_GETFREESPACELIST: case MDACIOCTL_MORE: case MDACIOCTL_SETPHYSDEVPARAMETER: case MDACIOCTL_GETPHYSDEVPARAMETER: case MDACIOCTL_GETLOGDEVPARAMETER: case MDACIOCTL_SETLOGDEVPARAMETER: mly_printf(sc, " param %10D\n", io->param.data.param, " "); transfer = 1; break; case MDACIOCTL_GETEVENT: mly_printf(sc, " event %d\n", io->param.getevent.sequence_number_low + ((u_int32_t)io->addr.log.logdev << 16)); transfer = 1; break; case MDACIOCTL_SETRAIDDEVSTATE: mly_printf(sc, " state %d\n", io->param.setraiddevstate.state); transfer = 0; break; case MDACIOCTL_XLATEPHYSDEVTORAIDDEV: mly_printf(sc, " raid_device %d\n", io->param.xlatephysdevtoraiddev.raid_device); mly_printf(sc, " controller %d\n", io->param.xlatephysdevtoraiddev.controller); mly_printf(sc, " channel %d\n", io->param.xlatephysdevtoraiddev.channel); mly_printf(sc, " target %d\n", io->param.xlatephysdevtoraiddev.target); mly_printf(sc, " lun %d\n", io->param.xlatephysdevtoraiddev.lun); transfer = 0; break; case MDACIOCTL_GETGROUPCONFINFO: mly_printf(sc, " group %d\n", io->param.getgroupconfinfo.group); transfer = 1; break; case MDACIOCTL_GET_SUBSYSTEM_DATA: case MDACIOCTL_SET_SUBSYSTEM_DATA: case MDACIOCTL_STARTDISOCVERY: case MDACIOCTL_INITPHYSDEVSTART: case MDACIOCTL_INITPHYSDEVSTOP: case MDACIOCTL_INITRAIDDEVSTART: case MDACIOCTL_INITRAIDDEVSTOP: case MDACIOCTL_REBUILDRAIDDEVSTART: case MDACIOCTL_REBUILDRAIDDEVSTOP: case MDACIOCTL_MAKECONSISTENTDATASTART: case MDACIOCTL_MAKECONSISTENTDATASTOP: case 
MDACIOCTL_CONSISTENCYCHECKSTART: case MDACIOCTL_CONSISTENCYCHECKSTOP: case MDACIOCTL_RESETDEVICE: case MDACIOCTL_FLUSHDEVICEDATA: case MDACIOCTL_PAUSEDEVICE: case MDACIOCTL_UNPAUSEDEVICE: case MDACIOCTL_LOCATEDEVICE: case MDACIOCTL_SETMASTERSLAVEMODE: case MDACIOCTL_DELETERAIDDEV: case MDACIOCTL_REPLACEINTERNALDEV: case MDACIOCTL_CLEARCONF: case MDACIOCTL_GETCONTROLLERPARAMETER: case MDACIOCTL_SETCONTRLLERPARAMETER: case MDACIOCTL_CLEARCONFSUSPMODE: case MDACIOCTL_STOREIMAGE: case MDACIOCTL_READIMAGE: case MDACIOCTL_FLASHIMAGES: case MDACIOCTL_RENAMERAIDDEV: default: /* no idea what to print */ transfer = 0; break; } break; case MDACMD_IOCTLCHECK: case MDACMD_MEMCOPY: default: transfer = 0; break; /* print nothing */ } if (transfer) { if (ge->command_control.extended_sg_table) { mly_printf(sc, " sg table 0x%llx/%d\n", ge->transfer.indirect.table_physaddr[0], ge->transfer.indirect.entries[0]); } else { mly_printf(sc, " 0000 0x%llx/%lld\n", ge->transfer.direct.sg[0].physaddr, ge->transfer.direct.sg[0].length); mly_printf(sc, " 0001 0x%llx/%lld\n", ge->transfer.direct.sg[1].physaddr, ge->transfer.direct.sg[1].length); } } } /******************************************************************************** * Panic in a slightly informative fashion */ static void mly_panic(struct mly_softc *sc, char *reason) { mly_printstate(sc); panic(reason); } /******************************************************************************** * Print queue statistics, callable from DDB. */ void mly_print_controller(int controller) { struct mly_softc *sc; if ((sc = devclass_get_softc(devclass_find("mly"), controller)) == NULL) { printf("mly: controller %d invalid\n", controller); } else { device_printf(sc->mly_dev, "queue curr max\n"); device_printf(sc->mly_dev, "free %04d/%04d\n", sc->mly_qstat[MLYQ_FREE].q_length, sc->mly_qstat[MLYQ_FREE].q_max); device_printf(sc->mly_dev, "busy %04d/%04d\n", sc->mly_qstat[MLYQ_BUSY].q_length, sc->mly_qstat[MLYQ_BUSY].q_max); device_printf(sc->mly_dev, "complete %04d/%04d\n", sc->mly_qstat[MLYQ_COMPLETE].q_length, sc->mly_qstat[MLYQ_COMPLETE].q_max); } } #endif /******************************************************************************** ******************************************************************************** Control device interface ******************************************************************************** ********************************************************************************/ /******************************************************************************** * Accept an open operation on the control device. */ static int mly_user_open(struct cdev *dev, int flags, int fmt, struct thread *td) { struct mly_softc *sc = dev->si_drv1; MLY_LOCK(sc); sc->mly_state |= MLY_STATE_OPEN; MLY_UNLOCK(sc); return(0); } /******************************************************************************** * Accept the last close on the control device. */ static int mly_user_close(struct cdev *dev, int flags, int fmt, struct thread *td) { struct mly_softc *sc = dev->si_drv1; MLY_LOCK(sc); sc->mly_state &= ~MLY_STATE_OPEN; MLY_UNLOCK(sc); return (0); } /******************************************************************************** * Handle controller-specific control operations. 
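 *
 * A hedged userland sketch (the ioctl name and mly_user_command fields are
 * taken from this driver; the device node path is an assumption and error
 * handling is elided):
 *
 *	int fd = open("/dev/mlyX", O_RDWR);
 *	struct mly_user_command uc;
 *	... fill in uc.CommandMailbox and, for a data transfer,
 *	    uc.DataTransferBuffer and uc.DataTransferLength (negative
 *	    for a transfer to the controller) ...
 *	ioctl(fd, MLYIO_COMMAND, &uc);
 *	... uc.CommandStatus and uc.DataTransferLength now hold the
 *	    command status and residual ...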
*/ static int mly_user_ioctl(struct cdev *dev, u_long cmd, caddr_t addr, int32_t flag, struct thread *td) { struct mly_softc *sc = (struct mly_softc *)dev->si_drv1; struct mly_user_command *uc = (struct mly_user_command *)addr; struct mly_user_health *uh = (struct mly_user_health *)addr; switch(cmd) { case MLYIO_COMMAND: return(mly_user_command(sc, uc)); case MLYIO_HEALTH: return(mly_user_health(sc, uh)); default: return(ENOIOCTL); } } /******************************************************************************** * Execute a command passed in from userspace. * * The control structure contains the actual command for the controller, as well * as the user-space data pointer and data size, and an optional sense buffer * size/pointer. On completion, the data size is adjusted to the command * residual, and the sense buffer size to the size of the returned sense data. * */ static int mly_user_command(struct mly_softc *sc, struct mly_user_command *uc) { struct mly_command *mc; int error; /* allocate a command */ MLY_LOCK(sc); if (mly_alloc_command(sc, &mc)) { MLY_UNLOCK(sc); return (ENOMEM); /* XXX Linux version will wait for a command */ } MLY_UNLOCK(sc); /* handle data size/direction */ mc->mc_length = (uc->DataTransferLength >= 0) ? uc->DataTransferLength : -uc->DataTransferLength; if (mc->mc_length > 0) { if ((mc->mc_data = malloc(mc->mc_length, M_DEVBUF, M_NOWAIT)) == NULL) { error = ENOMEM; goto out; } } if (uc->DataTransferLength > 0) { mc->mc_flags |= MLY_CMD_DATAIN; bzero(mc->mc_data, mc->mc_length); } if (uc->DataTransferLength < 0) { mc->mc_flags |= MLY_CMD_DATAOUT; if ((error = copyin(uc->DataTransferBuffer, mc->mc_data, mc->mc_length)) != 0) goto out; } /* copy the controller command */ bcopy(&uc->CommandMailbox, mc->mc_packet, sizeof(uc->CommandMailbox)); /* clear command completion handler so that we get woken up */ mc->mc_complete = NULL; /* execute the command */ MLY_LOCK(sc); if ((error = mly_start(mc)) != 0) { MLY_UNLOCK(sc); goto out; } while (!(mc->mc_flags & MLY_CMD_COMPLETE)) mtx_sleep(mc, &sc->mly_lock, PRIBIO, "mlyioctl", 0); MLY_UNLOCK(sc); /* return the data to userspace */ if (uc->DataTransferLength > 0) if ((error = copyout(mc->mc_data, uc->DataTransferBuffer, mc->mc_length)) != 0) goto out; /* return the sense buffer to userspace */ if ((uc->RequestSenseLength > 0) && (mc->mc_sense > 0)) { if ((error = copyout(mc->mc_packet, uc->RequestSenseBuffer, min(uc->RequestSenseLength, mc->mc_sense))) != 0) goto out; } /* return command results to userspace (caller will copy out) */ uc->DataTransferLength = mc->mc_resid; uc->RequestSenseLength = min(uc->RequestSenseLength, mc->mc_sense); uc->CommandStatus = mc->mc_status; error = 0; out: if (mc->mc_data != NULL) free(mc->mc_data, M_DEVBUF); MLY_LOCK(sc); mly_release_command(mc); MLY_UNLOCK(sc); return(error); } /******************************************************************************** * Return health status to userspace. If the health change index in the user * structure does not match that currently exported by the controller, we * return the current status immediately. Otherwise, we block until either * interrupted or new status is delivered. 
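 *
 * A hedged sketch of the expected consumer pattern (the loop is an
 * assumption; the structure fields are real):
 *
 *	struct mly_user_health uh;
 *	struct mly_health_status hs = { 0 };
 *
 *	uh.HealthStatusBuffer = &hs;
 *	for (;;) {
 *		if (ioctl(fd, MLYIO_HEALTH, &uh) != 0)
 *			break;
 *		... hs now holds the latest health status ...
 *	}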
*/ static int mly_user_health(struct mly_softc *sc, struct mly_user_health *uh) { struct mly_health_status mh; int error; /* fetch the current health status from userspace */ if ((error = copyin(uh->HealthStatusBuffer, &mh, sizeof(mh))) != 0) return(error); /* spin waiting for a status update */ MLY_LOCK(sc); error = EWOULDBLOCK; while ((error != 0) && (sc->mly_event_change == mh.change_counter)) error = mtx_sleep(&sc->mly_event_change, &sc->mly_lock, PRIBIO | PCATCH, "mlyhealth", 0); mh = sc->mly_mmbox->mmm_health.status; MLY_UNLOCK(sc); /* copy the controller's health status buffer out */ error = copyout(&mh, uh->HealthStatusBuffer, sizeof(mh)); return(error); } #ifdef MLY_DEBUG static void mly_timeout(void *arg) { struct mly_softc *sc; struct mly_command *mc; int deadline; sc = arg; MLY_ASSERT_LOCKED(sc); deadline = time_second - MLY_CMD_TIMEOUT; TAILQ_FOREACH(mc, &sc->mly_busy, mc_link) { if ((mc->mc_timestamp < deadline)) { device_printf(sc->mly_dev, "COMMAND %p TIMEOUT AFTER %d SECONDS\n", mc, (int)(time_second - mc->mc_timestamp)); } } callout_reset(&sc->mly_timeout, MLY_CMD_TIMEOUT * hz, mly_timeout, sc); } #endif diff --git a/sys/dev/twa/tw_osl_freebsd.c b/sys/dev/twa/tw_osl_freebsd.c index 0b3e7933a90a..4f4fb7b06de4 100644 --- a/sys/dev/twa/tw_osl_freebsd.c +++ b/sys/dev/twa/tw_osl_freebsd.c @@ -1,1714 +1,1715 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2004-07 Applied Micro Circuits Corporation. * Copyright (c) 2004-05 Vinod Kashyap. * Copyright (c) 2000 Michael Smith * Copyright (c) 2000 BSDi * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); /* * AMCC'S 3ware driver for 9000 series storage controllers. * * Author: Vinod Kashyap * Modifications by: Adam Radford * Modifications by: Manjunath Ranganathaiah */ /* * FreeBSD specific functions not related to CAM, and other * miscellaneous functions. 
*/ #include #include #include #include #ifdef TW_OSL_DEBUG TW_INT32 TW_DEBUG_LEVEL_FOR_OSL = TW_OSL_DEBUG; TW_INT32 TW_OSL_DEBUG_LEVEL_FOR_CL = TW_OSL_DEBUG; #endif /* TW_OSL_DEBUG */ static MALLOC_DEFINE(TW_OSLI_MALLOC_CLASS, "twa_commands", "twa commands"); static d_open_t twa_open; static d_close_t twa_close; static d_ioctl_t twa_ioctl; static struct cdevsw twa_cdevsw = { .d_version = D_VERSION, .d_open = twa_open, .d_close = twa_close, .d_ioctl = twa_ioctl, .d_name = "twa", }; static devclass_t twa_devclass; /* * Function name: twa_open * Description: Called when the controller is opened. * Simply marks the controller as open. * * Input: dev -- control device corresponding to the ctlr * flags -- mode of open * fmt -- device type (character/block etc.) * proc -- current process * Output: None * Return value: 0 -- success * non-zero-- failure */ static TW_INT32 twa_open(struct cdev *dev, TW_INT32 flags, TW_INT32 fmt, struct thread *proc) { struct twa_softc *sc = (struct twa_softc *)(dev->si_drv1); tw_osli_dbg_dprintf(5, sc, "entered"); sc->open = TW_CL_TRUE; return(0); } /* * Function name: twa_close * Description: Called when the controller is closed. * Simply marks the controller as not open. * * Input: dev -- control device corresponding to the ctlr * flags -- mode of corresponding open * fmt -- device type (character/block etc.) * proc -- current process * Output: None * Return value: 0 -- success * non-zero-- failure */ static TW_INT32 twa_close(struct cdev *dev, TW_INT32 flags, TW_INT32 fmt, struct thread *proc) { struct twa_softc *sc = (struct twa_softc *)(dev->si_drv1); tw_osli_dbg_dprintf(5, sc, "entered"); sc->open = TW_CL_FALSE; return(0); } /* * Function name: twa_ioctl * Description: Called when an ioctl is posted to the controller. * Handles any OS Layer specific cmds, passes the rest * on to the Common Layer. * * Input: dev -- control device corresponding to the ctlr * cmd -- ioctl cmd * buf -- ptr to buffer in kernel memory, which is * a copy of the input buffer in user-space * flags -- mode of corresponding open * proc -- current process * Output: buf -- ptr to buffer in kernel memory, which will * be copied to the output buffer in user-space * Return value: 0 -- success * non-zero-- failure */ static TW_INT32 twa_ioctl(struct cdev *dev, u_long cmd, caddr_t buf, TW_INT32 flags, struct thread *proc) { struct twa_softc *sc = (struct twa_softc *)(dev->si_drv1); TW_INT32 error; tw_osli_dbg_dprintf(5, sc, "entered"); switch (cmd) { case TW_OSL_IOCTL_FIRMWARE_PASS_THROUGH: tw_osli_dbg_dprintf(6, sc, "ioctl: fw_passthru"); error = tw_osli_fw_passthru(sc, (TW_INT8 *)buf); break; case TW_OSL_IOCTL_SCAN_BUS: /* Request CAM for a bus scan. 
*/ tw_osli_dbg_dprintf(6, sc, "ioctl: scan bus"); error = tw_osli_request_bus_scan(sc); break; default: tw_osli_dbg_dprintf(6, sc, "ioctl: 0x%lx", cmd); error = tw_cl_ioctl(&sc->ctlr_handle, cmd, buf); break; } return(error); } static TW_INT32 twa_probe(device_t dev); static TW_INT32 twa_attach(device_t dev); static TW_INT32 twa_detach(device_t dev); static TW_INT32 twa_shutdown(device_t dev); static TW_VOID twa_busdma_lock(TW_VOID *lock_arg, bus_dma_lock_op_t op); static TW_VOID twa_pci_intr(TW_VOID *arg); static TW_VOID twa_watchdog(TW_VOID *arg); int twa_setup_intr(struct twa_softc *sc); int twa_teardown_intr(struct twa_softc *sc); static TW_INT32 tw_osli_alloc_mem(struct twa_softc *sc); static TW_VOID tw_osli_free_resources(struct twa_softc *sc); static TW_VOID twa_map_load_data_callback(TW_VOID *arg, bus_dma_segment_t *segs, TW_INT32 nsegments, TW_INT32 error); static TW_VOID twa_map_load_callback(TW_VOID *arg, bus_dma_segment_t *segs, TW_INT32 nsegments, TW_INT32 error); static device_method_t twa_methods[] = { /* Device interface */ DEVMETHOD(device_probe, twa_probe), DEVMETHOD(device_attach, twa_attach), DEVMETHOD(device_detach, twa_detach), DEVMETHOD(device_shutdown, twa_shutdown), DEVMETHOD_END }; static driver_t twa_pci_driver = { "twa", twa_methods, sizeof(struct twa_softc) }; DRIVER_MODULE(twa, pci, twa_pci_driver, twa_devclass, 0, 0); MODULE_DEPEND(twa, cam, 1, 1, 1); MODULE_DEPEND(twa, pci, 1, 1, 1); /* * Function name: twa_probe * Description: Called at driver load time. Claims 9000 ctlrs. * * Input: dev -- bus device corresponding to the ctlr * Output: None * Return value: <= 0 -- success * > 0 -- failure */ static TW_INT32 twa_probe(device_t dev) { static TW_UINT8 first_ctlr = 1; tw_osli_dbg_printf(3, "entered"); if (tw_cl_ctlr_supported(pci_get_vendor(dev), pci_get_device(dev))) { device_set_desc(dev, TW_OSLI_DEVICE_NAME); /* Print the driver version only once. */ if (first_ctlr) { printf("3ware device driver for 9000 series storage " "controllers, version: %s\n", TW_OSL_DRIVER_VERSION_STRING); first_ctlr = 0; } return(0); } return(ENXIO); } int twa_setup_intr(struct twa_softc *sc) { int error = 0; if (!(sc->intr_handle) && (sc->irq_res)) { error = bus_setup_intr(sc->bus_dev, sc->irq_res, INTR_TYPE_CAM | INTR_MPSAFE, NULL, twa_pci_intr, sc, &sc->intr_handle); } return( error ); } int twa_teardown_intr(struct twa_softc *sc) { int error = 0; if ((sc->intr_handle) && (sc->irq_res)) { error = bus_teardown_intr(sc->bus_dev, sc->irq_res, sc->intr_handle); sc->intr_handle = NULL; } return( error ); } /* * Function name: twa_attach * Description: Allocates pci resources; updates sc; adds a node to the * sysctl tree to expose the driver version; makes calls * (to the Common Layer) to initialize ctlr, and to * attach to CAM. * * Input: dev -- bus device corresponding to the ctlr * Output: None * Return value: 0 -- success * non-zero-- failure */ static TW_INT32 twa_attach(device_t dev) { struct twa_softc *sc = device_get_softc(dev); TW_INT32 bar_num; TW_INT32 bar0_offset; TW_INT32 bar_size; TW_INT32 error; tw_osli_dbg_dprintf(3, sc, "entered"); sc->ctlr_handle.osl_ctlr_ctxt = sc; /* Initialize the softc structure. */ sc->bus_dev = dev; sc->device_id = pci_get_device(dev); /* Initialize the mutexes right here. 
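	 * -- io_lock and q_lock are created as spin mutexes (see MTX_SPIN
	 * below), while sim_lock is a regular (MTX_DEF), recursive mutex
	 * used to serialize entry into CAM.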
*/ sc->io_lock = &(sc->io_lock_handle); mtx_init(sc->io_lock, "tw_osl_io_lock", NULL, MTX_SPIN); sc->q_lock = &(sc->q_lock_handle); mtx_init(sc->q_lock, "tw_osl_q_lock", NULL, MTX_SPIN); sc->sim_lock = &(sc->sim_lock_handle); mtx_init(sc->sim_lock, "tw_osl_sim_lock", NULL, MTX_DEF | MTX_RECURSE); sysctl_ctx_init(&sc->sysctl_ctxt); sc->sysctl_tree = SYSCTL_ADD_NODE(&sc->sysctl_ctxt, SYSCTL_STATIC_CHILDREN(_hw), OID_AUTO, device_get_nameunit(dev), CTLFLAG_RD, 0, ""); if (sc->sysctl_tree == NULL) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2000, "Cannot add sysctl tree node", ENXIO); return(ENXIO); } SYSCTL_ADD_STRING(&sc->sysctl_ctxt, SYSCTL_CHILDREN(sc->sysctl_tree), OID_AUTO, "driver_version", CTLFLAG_RD, TW_OSL_DRIVER_VERSION_STRING, 0, "TWA driver version"); /* Force the busmaster enable bit on, in case the BIOS forgot. */ pci_enable_busmaster(dev); /* Allocate the PCI register window. */ if ((error = tw_cl_get_pci_bar_info(sc->device_id, TW_CL_BAR_TYPE_MEM, &bar_num, &bar0_offset, &bar_size))) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x201F, "Can't get PCI BAR info", error); tw_osli_free_resources(sc); return(error); } sc->reg_res_id = PCIR_BARS + bar0_offset; if ((sc->reg_res = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &(sc->reg_res_id), RF_ACTIVE)) == NULL) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2002, "Can't allocate register window", ENXIO); tw_osli_free_resources(sc); return(ENXIO); } sc->bus_tag = rman_get_bustag(sc->reg_res); sc->bus_handle = rman_get_bushandle(sc->reg_res); /* Allocate and register our interrupt. */ sc->irq_res_id = 0; if ((sc->irq_res = bus_alloc_resource_any(sc->bus_dev, SYS_RES_IRQ, &(sc->irq_res_id), RF_SHAREABLE | RF_ACTIVE)) == NULL) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2003, "Can't allocate interrupt", ENXIO); tw_osli_free_resources(sc); return(ENXIO); } if ((error = twa_setup_intr(sc))) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2004, "Can't set up interrupt", error); tw_osli_free_resources(sc); return(error); } if ((error = tw_osli_alloc_mem(sc))) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2005, "Memory allocation failure", error); tw_osli_free_resources(sc); return(error); } /* Initialize the Common Layer for this controller. */ if ((error = tw_cl_init_ctlr(&sc->ctlr_handle, sc->flags, sc->device_id, TW_OSLI_MAX_NUM_REQUESTS, TW_OSLI_MAX_NUM_AENS, sc->non_dma_mem, sc->dma_mem, sc->dma_mem_phys ))) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2006, "Failed to initialize Common Layer/controller", error); tw_osli_free_resources(sc); return(error); } /* Create the control device. 
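	 * (/dev/twa<unit>, root:operator, mode 0600; the si_drv1
	 * back-pointer set below is how twa_open()/twa_close()/twa_ioctl()
	 * recover the softc)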
*/ sc->ctrl_dev = make_dev(&twa_cdevsw, device_get_unit(sc->bus_dev), UID_ROOT, GID_OPERATOR, S_IRUSR | S_IWUSR, "twa%d", device_get_unit(sc->bus_dev)); sc->ctrl_dev->si_drv1 = sc; if ((error = tw_osli_cam_attach(sc))) { tw_osli_free_resources(sc); tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2007, "Failed to initialize CAM", error); return(error); } sc->watchdog_index = 0; callout_init(&(sc->watchdog_callout[0]), 1); callout_init(&(sc->watchdog_callout[1]), 1); callout_reset(&(sc->watchdog_callout[0]), 5*hz, twa_watchdog, &sc->ctlr_handle); + gone_in_dev(dev, 14, "twa(4) removed"); return(0); } static TW_VOID twa_watchdog(TW_VOID *arg) { struct tw_cl_ctlr_handle *ctlr_handle = (struct tw_cl_ctlr_handle *)arg; struct twa_softc *sc = ctlr_handle->osl_ctlr_ctxt; int i; int i_need_a_reset = 0; int driver_is_active = 0; int my_watchdog_was_pending = 1234; TW_UINT64 current_time; struct tw_osli_req_context *my_req; //============================================================================== current_time = (TW_UINT64) (tw_osl_get_local_time()); for (i = 0; i < TW_OSLI_MAX_NUM_REQUESTS; i++) { my_req = &(sc->req_ctx_buf[i]); if ((my_req->state == TW_OSLI_REQ_STATE_BUSY) && (my_req->deadline) && (my_req->deadline < current_time)) { tw_cl_set_reset_needed(ctlr_handle); #ifdef TW_OSL_DEBUG device_printf((sc)->bus_dev, "Request %d timed out! d = %llu, c = %llu\n", i, my_req->deadline, current_time); #else /* TW_OSL_DEBUG */ device_printf((sc)->bus_dev, "Request %d timed out!\n", i); #endif /* TW_OSL_DEBUG */ break; } } //============================================================================== i_need_a_reset = tw_cl_is_reset_needed(ctlr_handle); i = (int) ((sc->watchdog_index++) & 1); driver_is_active = tw_cl_is_active(ctlr_handle); if (i_need_a_reset) { #ifdef TW_OSL_DEBUG device_printf((sc)->bus_dev, "Watchdog rescheduled in 70 seconds\n"); #endif /* TW_OSL_DEBUG */ my_watchdog_was_pending = callout_reset(&(sc->watchdog_callout[i]), 70*hz, twa_watchdog, &sc->ctlr_handle); tw_cl_reset_ctlr(ctlr_handle); #ifdef TW_OSL_DEBUG device_printf((sc)->bus_dev, "Watchdog reset completed!\n"); #endif /* TW_OSL_DEBUG */ } else if (driver_is_active) { my_watchdog_was_pending = callout_reset(&(sc->watchdog_callout[i]), 5*hz, twa_watchdog, &sc->ctlr_handle); } #ifdef TW_OSL_DEBUG if (i_need_a_reset || my_watchdog_was_pending) device_printf((sc)->bus_dev, "i_need_a_reset = %d, " "driver_is_active = %d, my_watchdog_was_pending = %d\n", i_need_a_reset, driver_is_active, my_watchdog_was_pending); #endif /* TW_OSL_DEBUG */ } /* * Function name: tw_osli_alloc_mem * Description: Allocates memory needed both by CL and OSL. * * Input: sc -- OSL internal controller context * Output: None * Return value: 0 -- success * non-zero-- failure */ static TW_INT32 tw_osli_alloc_mem(struct twa_softc *sc) { struct tw_osli_req_context *req; TW_UINT32 max_sg_elements; TW_UINT32 non_dma_mem_size; TW_UINT32 dma_mem_size; TW_INT32 error; TW_INT32 i; tw_osli_dbg_dprintf(3, sc, "entered"); sc->flags |= (sizeof(bus_addr_t) == 8) ? TW_CL_64BIT_ADDRESSES : 0; sc->flags |= (sizeof(bus_size_t) == 8) ? TW_CL_64BIT_SG_LENGTH : 0; max_sg_elements = (sizeof(bus_addr_t) == 8) ? 
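	/*
	 * Two notes on the functions above: the gone_in_dev(9) call added to
	 * twa_attach() is the substance of this change -- at attach time it
	 * warns that the driver is slated for removal in FreeBSD 14,
	 * matching the deprecation notices added to the manual pages.  The
	 * two-slot callout scheme twa_watchdog() uses to re-arm itself is
	 * distilled in a sketch at the end of this file.
	 */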
TW_CL_MAX_64BIT_SG_ELEMENTS : TW_CL_MAX_32BIT_SG_ELEMENTS; if ((error = tw_cl_get_mem_requirements(&sc->ctlr_handle, sc->flags, sc->device_id, TW_OSLI_MAX_NUM_REQUESTS, TW_OSLI_MAX_NUM_AENS, &(sc->alignment), &(sc->sg_size_factor), &non_dma_mem_size, &dma_mem_size ))) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2008, "Can't get Common Layer's memory requirements", error); return(error); } if ((sc->non_dma_mem = malloc(non_dma_mem_size, TW_OSLI_MALLOC_CLASS, M_WAITOK)) == NULL) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2009, "Can't allocate non-dma memory", ENOMEM); return(ENOMEM); } /* Create the parent dma tag. */ if (bus_dma_tag_create(bus_get_dma_tag(sc->bus_dev), /* parent */ sc->alignment, /* alignment */ TW_OSLI_DMA_BOUNDARY, /* boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ TW_CL_MAX_IO_SIZE, /* maxsize */ max_sg_elements, /* nsegments */ TW_CL_MAX_IO_SIZE, /* maxsegsize */ 0, /* flags */ NULL, /* lockfunc */ NULL, /* lockfuncarg */ &sc->parent_tag /* tag */)) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x200A, "Can't allocate parent DMA tag", ENOMEM); return(ENOMEM); } /* Create a dma tag for Common Layer's DMA'able memory (dma_mem). */ if (bus_dma_tag_create(sc->parent_tag, /* parent */ sc->alignment, /* alignment */ 0, /* boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ dma_mem_size, /* maxsize */ 1, /* nsegments */ BUS_SPACE_MAXSIZE, /* maxsegsize */ 0, /* flags */ NULL, /* lockfunc */ NULL, /* lockfuncarg */ &sc->cmd_tag /* tag */)) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x200B, "Can't allocate DMA tag for Common Layer's " "DMA'able memory", ENOMEM); return(ENOMEM); } if (bus_dmamem_alloc(sc->cmd_tag, &sc->dma_mem, BUS_DMA_NOWAIT, &sc->cmd_map)) { /* Try a second time. */ if (bus_dmamem_alloc(sc->cmd_tag, &sc->dma_mem, BUS_DMA_NOWAIT, &sc->cmd_map)) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x200C, "Can't allocate DMA'able memory for the" "Common Layer", ENOMEM); return(ENOMEM); } } bus_dmamap_load(sc->cmd_tag, sc->cmd_map, sc->dma_mem, dma_mem_size, twa_map_load_callback, &sc->dma_mem_phys, 0); /* * Create a dma tag for data buffers; size will be the maximum * possible I/O size (128kB). */ if (bus_dma_tag_create(sc->parent_tag, /* parent */ sc->alignment, /* alignment */ 0, /* boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ TW_CL_MAX_IO_SIZE, /* maxsize */ max_sg_elements, /* nsegments */ TW_CL_MAX_IO_SIZE, /* maxsegsize */ BUS_DMA_ALLOCNOW, /* flags */ twa_busdma_lock, /* lockfunc */ sc->io_lock, /* lockfuncarg */ &sc->dma_tag /* tag */)) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x200F, "Can't allocate DMA tag for data buffers", ENOMEM); return(ENOMEM); } /* * Create a dma tag for ioctl data buffers; size will be the maximum * possible I/O size (128kB). 
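	 * (as with the data-buffer tag above, BUS_DMA_ALLOCNOW reserves
	 * busdma's bounce resources at tag-creation time, and
	 * twa_busdma_lock/io_lock protect any deferred load callbacks)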
*/ if (bus_dma_tag_create(sc->parent_tag, /* parent */ sc->alignment, /* alignment */ 0, /* boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ TW_CL_MAX_IO_SIZE, /* maxsize */ max_sg_elements, /* nsegments */ TW_CL_MAX_IO_SIZE, /* maxsegsize */ BUS_DMA_ALLOCNOW, /* flags */ twa_busdma_lock, /* lockfunc */ sc->io_lock, /* lockfuncarg */ &sc->ioctl_tag /* tag */)) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2010, "Can't allocate DMA tag for ioctl data buffers", ENOMEM); return(ENOMEM); } /* Create just one map for all ioctl request data buffers. */ if (bus_dmamap_create(sc->ioctl_tag, 0, &sc->ioctl_map)) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2011, "Can't create ioctl map", ENOMEM); return(ENOMEM); } /* Initialize request queues. */ tw_osli_req_q_init(sc, TW_OSLI_FREE_Q); tw_osli_req_q_init(sc, TW_OSLI_BUSY_Q); if ((sc->req_ctx_buf = (struct tw_osli_req_context *) malloc((sizeof(struct tw_osli_req_context) * TW_OSLI_MAX_NUM_REQUESTS), TW_OSLI_MALLOC_CLASS, M_WAITOK)) == NULL) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2012, "Failed to allocate request packets", ENOMEM); return(ENOMEM); } bzero(sc->req_ctx_buf, sizeof(struct tw_osli_req_context) * TW_OSLI_MAX_NUM_REQUESTS); for (i = 0; i < TW_OSLI_MAX_NUM_REQUESTS; i++) { req = &(sc->req_ctx_buf[i]); req->ctlr = sc; if (bus_dmamap_create(sc->dma_tag, 0, &req->dma_map)) { tw_osli_printf(sc, "request # = %d, error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2013, "Can't create dma map", i, ENOMEM); return(ENOMEM); } /* Initialize the ioctl wakeup/ timeout mutex */ req->ioctl_wake_timeout_lock = &(req->ioctl_wake_timeout_lock_handle); mtx_init(req->ioctl_wake_timeout_lock, "tw_ioctl_wake_timeout_lock", NULL, MTX_DEF); /* Insert request into the free queue. */ tw_osli_req_q_insert_tail(req, TW_OSLI_FREE_Q); } return(0); } /* * Function name: tw_osli_free_resources * Description: Performs clean-up at the time of going down. * * Input: sc -- ptr to OSL internal ctlr context * Output: None * Return value: None */ static TW_VOID tw_osli_free_resources(struct twa_softc *sc) { struct tw_osli_req_context *req; TW_INT32 error = 0; tw_osli_dbg_dprintf(3, sc, "entered"); /* Detach from CAM */ tw_osli_cam_detach(sc); if (sc->req_ctx_buf) while ((req = tw_osli_req_q_remove_head(sc, TW_OSLI_FREE_Q)) != NULL) { mtx_destroy(req->ioctl_wake_timeout_lock); if ((error = bus_dmamap_destroy(sc->dma_tag, req->dma_map))) tw_osli_dbg_dprintf(1, sc, "dmamap_destroy(dma) returned %d", error); } if ((sc->ioctl_tag) && (sc->ioctl_map)) if ((error = bus_dmamap_destroy(sc->ioctl_tag, sc->ioctl_map))) tw_osli_dbg_dprintf(1, sc, "dmamap_destroy(ioctl) returned %d", error); /* Free all memory allocated so far. 
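	 * (each pointer is checked before use, so this routine is safe to
	 * call from any partially-initialized failure path in twa_attach())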
*/ if (sc->req_ctx_buf) free(sc->req_ctx_buf, TW_OSLI_MALLOC_CLASS); if (sc->non_dma_mem) free(sc->non_dma_mem, TW_OSLI_MALLOC_CLASS); if (sc->dma_mem) { bus_dmamap_unload(sc->cmd_tag, sc->cmd_map); bus_dmamem_free(sc->cmd_tag, sc->dma_mem, sc->cmd_map); } if (sc->cmd_tag) if ((error = bus_dma_tag_destroy(sc->cmd_tag))) tw_osli_dbg_dprintf(1, sc, "dma_tag_destroy(cmd) returned %d", error); if (sc->dma_tag) if ((error = bus_dma_tag_destroy(sc->dma_tag))) tw_osli_dbg_dprintf(1, sc, "dma_tag_destroy(dma) returned %d", error); if (sc->ioctl_tag) if ((error = bus_dma_tag_destroy(sc->ioctl_tag))) tw_osli_dbg_dprintf(1, sc, "dma_tag_destroy(ioctl) returned %d", error); if (sc->parent_tag) if ((error = bus_dma_tag_destroy(sc->parent_tag))) tw_osli_dbg_dprintf(1, sc, "dma_tag_destroy(parent) returned %d", error); /* Disconnect the interrupt handler. */ if ((error = twa_teardown_intr(sc))) tw_osli_dbg_dprintf(1, sc, "teardown_intr returned %d", error); if (sc->irq_res != NULL) if ((error = bus_release_resource(sc->bus_dev, SYS_RES_IRQ, sc->irq_res_id, sc->irq_res))) tw_osli_dbg_dprintf(1, sc, "release_resource(irq) returned %d", error); /* Release the register window mapping. */ if (sc->reg_res != NULL) if ((error = bus_release_resource(sc->bus_dev, SYS_RES_MEMORY, sc->reg_res_id, sc->reg_res))) tw_osli_dbg_dprintf(1, sc, "release_resource(io) returned %d", error); /* Destroy the control device. */ if (sc->ctrl_dev != (struct cdev *)NULL) destroy_dev(sc->ctrl_dev); if ((error = sysctl_ctx_free(&sc->sysctl_ctxt))) tw_osli_dbg_dprintf(1, sc, "sysctl_ctx_free returned %d", error); } /* * Function name: twa_detach * Description: Called when the controller is being detached from * the pci bus. * * Input: dev -- bus device corresponding to the ctlr * Output: None * Return value: 0 -- success * non-zero-- failure */ static TW_INT32 twa_detach(device_t dev) { struct twa_softc *sc = device_get_softc(dev); TW_INT32 error; tw_osli_dbg_dprintf(3, sc, "entered"); error = EBUSY; if (sc->open) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2014, "Device open", error); goto out; } /* Shut the controller down. */ if ((error = twa_shutdown(dev))) goto out; /* Free all resources associated with this controller. */ tw_osli_free_resources(sc); error = 0; out: return(error); } /* * Function name: twa_shutdown * Description: Called at unload/shutdown time. Lets the controller * know that we are going down. * * Input: dev -- bus device corresponding to the ctlr * Output: None * Return value: 0 -- success * non-zero-- failure */ static TW_INT32 twa_shutdown(device_t dev) { struct twa_softc *sc = device_get_softc(dev); TW_INT32 error = 0; tw_osli_dbg_dprintf(3, sc, "entered"); /* Disconnect interrupts. */ error = twa_teardown_intr(sc); /* Stop watchdog task. */ callout_drain(&(sc->watchdog_callout[0])); callout_drain(&(sc->watchdog_callout[1])); /* Disconnect from the controller. */ if ((error = tw_cl_shutdown_ctlr(&(sc->ctlr_handle), 0))) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2015, "Failed to shutdown Common Layer/controller", error); } return(error); } /* * Function name: twa_busdma_lock * Description: Function to provide synchronization during busdma_swi. 
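 *			(busdma invokes this lockfunc around deferred
 *			bus_dmamap_load() callbacks, so they run under the
 *			same spin lock -- io_lock -- as the original
 *			submitter)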
* * Input: lock_arg -- lock mutex sent as argument * op -- operation (lock/unlock) expected of the function * Output: None * Return value: None */ TW_VOID twa_busdma_lock(TW_VOID *lock_arg, bus_dma_lock_op_t op) { struct mtx *lock; lock = (struct mtx *)lock_arg; switch (op) { case BUS_DMA_LOCK: mtx_lock_spin(lock); break; case BUS_DMA_UNLOCK: mtx_unlock_spin(lock); break; default: panic("Unknown operation 0x%x for twa_busdma_lock!", op); } } /* * Function name: twa_pci_intr * Description: Interrupt handler. Wrapper for twa_interrupt. * * Input: arg -- ptr to OSL internal ctlr context * Output: None * Return value: None */ static TW_VOID twa_pci_intr(TW_VOID *arg) { struct twa_softc *sc = (struct twa_softc *)arg; tw_osli_dbg_dprintf(10, sc, "entered"); tw_cl_interrupt(&(sc->ctlr_handle)); } /* * Function name: tw_osli_fw_passthru * Description: Builds a fw passthru cmd pkt, and submits it to CL. * * Input: sc -- ptr to OSL internal ctlr context * buf -- ptr to ioctl pkt understood by CL * Output: None * Return value: 0 -- success * non-zero-- failure */ TW_INT32 tw_osli_fw_passthru(struct twa_softc *sc, TW_INT8 *buf) { struct tw_osli_req_context *req; struct tw_osli_ioctl_no_data_buf *user_buf = (struct tw_osli_ioctl_no_data_buf *)buf; TW_TIME end_time; TW_UINT32 timeout = 60; TW_UINT32 data_buf_size_adjusted; struct tw_cl_req_packet *req_pkt; struct tw_cl_passthru_req_packet *pt_req; TW_INT32 error; tw_osli_dbg_dprintf(5, sc, "ioctl: passthru"); if ((req = tw_osli_get_request(sc)) == NULL) return(EBUSY); req->req_handle.osl_req_ctxt = req; req->orig_req = buf; req->flags |= TW_OSLI_REQ_FLAGS_PASSTHRU; req_pkt = &(req->req_pkt); req_pkt->status = 0; req_pkt->tw_osl_callback = tw_osl_complete_passthru; /* Let the Common Layer retry the request on cmd queue full. */ req_pkt->flags |= TW_CL_REQ_RETRY_ON_BUSY; pt_req = &(req_pkt->gen_req_pkt.pt_req); /* * Make sure that the data buffer sent to firmware is a * 512 byte multiple in size. */ data_buf_size_adjusted = (user_buf->driver_pkt.buffer_length + (sc->sg_size_factor - 1)) & ~(sc->sg_size_factor - 1); if ((req->length = data_buf_size_adjusted)) { if ((req->data = malloc(data_buf_size_adjusted, TW_OSLI_MALLOC_CLASS, M_WAITOK)) == NULL) { error = ENOMEM; tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2016, "Could not alloc mem for " "fw_passthru data_buf", error); goto fw_passthru_err; } /* Copy the payload. */ if ((error = copyin((TW_VOID *)(user_buf->pdata), req->data, user_buf->driver_pkt.buffer_length)) != 0) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2017, "Could not copyin fw_passthru data_buf", error); goto fw_passthru_err; } pt_req->sgl_entries = 1; /* will be updated during mapping */ req->flags |= (TW_OSLI_REQ_FLAGS_DATA_IN | TW_OSLI_REQ_FLAGS_DATA_OUT); } else pt_req->sgl_entries = 0; /* no payload */ pt_req->cmd_pkt = (TW_VOID *)(&(user_buf->cmd_pkt)); pt_req->cmd_pkt_length = sizeof(struct tw_cl_command_packet); if ((error = tw_osli_map_request(req))) goto fw_passthru_err; end_time = tw_osl_get_local_time() + timeout; while (req->state != TW_OSLI_REQ_STATE_COMPLETE) { mtx_lock(req->ioctl_wake_timeout_lock); req->flags |= TW_OSLI_REQ_FLAGS_SLEEPING; error = mtx_sleep(req, req->ioctl_wake_timeout_lock, 0, "twa_passthru", timeout*hz); mtx_unlock(req->ioctl_wake_timeout_lock); if (!(req->flags & TW_OSLI_REQ_FLAGS_SLEEPING)) error = 0; req->flags &= ~TW_OSLI_REQ_FLAGS_SLEEPING; if (! 
error) { if (((error = req->error_code)) || ((error = (req->state != TW_OSLI_REQ_STATE_COMPLETE))) || ((error = req_pkt->status))) goto fw_passthru_err; break; } if (req_pkt->status) { error = req_pkt->status; goto fw_passthru_err; } if (error == EWOULDBLOCK) { /* Time out! */ if ((!(req->error_code)) && (req->state == TW_OSLI_REQ_STATE_COMPLETE) && (!(req_pkt->status)) ) { #ifdef TW_OSL_DEBUG tw_osli_printf(sc, "request = %p", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x7777, "FALSE Passthru timeout!", req); #endif /* TW_OSL_DEBUG */ error = 0; /* False error */ break; } if (!(tw_cl_is_reset_needed(&(req->ctlr->ctlr_handle)))) { #ifdef TW_OSL_DEBUG tw_osli_printf(sc, "request = %p", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2018, "Passthru request timed out!", req); #else /* TW_OSL_DEBUG */ device_printf((sc)->bus_dev, "Passthru request timed out!\n"); #endif /* TW_OSL_DEBUG */ tw_cl_reset_ctlr(&(req->ctlr->ctlr_handle)); } error = 0; end_time = tw_osl_get_local_time() + timeout; continue; /* * Don't touch req after a reset. It (and any * associated data) will be * unmapped by the callback. */ } /* * Either the request got completed, or we were woken up by a * signal. Calculate the new timeout, in case it was the latter. */ timeout = (end_time - tw_osl_get_local_time()); } /* End of while loop */ /* If there was a payload, copy it back. */ if ((!error) && (req->length)) if ((error = copyout(req->data, user_buf->pdata, user_buf->driver_pkt.buffer_length))) tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x2019, "Could not copyout fw_passthru data_buf", error); fw_passthru_err: if (req_pkt->status == TW_CL_ERR_REQ_BUS_RESET) error = EBUSY; user_buf->driver_pkt.os_status = error; /* Free resources. */ if (req->data) free(req->data, TW_OSLI_MALLOC_CLASS); tw_osli_req_q_insert_tail(req, TW_OSLI_FREE_Q); return(error); } /* * Function name: tw_osl_complete_passthru * Description: Called to complete passthru requests. * * Input: req_handle -- ptr to request handle * Output: None * Return value: None */ TW_VOID tw_osl_complete_passthru(struct tw_cl_req_handle *req_handle) { struct tw_osli_req_context *req = req_handle->osl_req_ctxt; struct tw_cl_req_packet *req_pkt = (struct tw_cl_req_packet *)(&req->req_pkt); struct twa_softc *sc = req->ctlr; tw_osli_dbg_dprintf(5, sc, "entered"); if (req->state != TW_OSLI_REQ_STATE_BUSY) { tw_osli_printf(sc, "request = %p, status = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x201B, "Unposted command completed!!", req, req->state); } /* * Remove request from the busy queue. Just mark it complete. * There's no need to move it into the complete queue as we are * going to be done with it right now. */ req->state = TW_OSLI_REQ_STATE_COMPLETE; tw_osli_req_q_remove_item(req, TW_OSLI_BUSY_Q); tw_osli_unmap_request(req); /* * Don't do a wake up if there was an error even before the request * was sent down to the Common Layer, and we hadn't gotten an * EINPROGRESS. The request originator will then be returned an * error, and he can do the clean-up. */ if ((req->error_code) && (!(req->flags & TW_OSLI_REQ_FLAGS_IN_PROGRESS))) return; if (req->flags & TW_OSLI_REQ_FLAGS_PASSTHRU) { if (req->flags & TW_OSLI_REQ_FLAGS_SLEEPING) { /* Wake up the sleeping command originator. 
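			 * (tw_osli_fw_passthru() set
			 * TW_OSLI_REQ_FLAGS_SLEEPING around its mtx_sleep();
			 * clearing the flag and calling wakeup_one() below
			 * releases the waiting thread)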
*/ tw_osli_dbg_dprintf(5, sc, "Waking up originator of request %p", req); req->flags &= ~TW_OSLI_REQ_FLAGS_SLEEPING; wakeup_one(req); } else { /* * If the request completed even before mtx_sleep * was called, simply return. */ if (req->flags & TW_OSLI_REQ_FLAGS_MAPPED) return; if (req_pkt->status == TW_CL_ERR_REQ_BUS_RESET) return; tw_osli_printf(sc, "request = %p", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x201C, "Passthru callback called, " "and caller not sleeping", req); } } else { tw_osli_printf(sc, "request = %p", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x201D, "Passthru callback called for non-passthru request", req); } } /* * Function name: tw_osli_get_request * Description: Gets a request pkt from the free queue. * * Input: sc -- ptr to OSL internal ctlr context * Output: None * Return value: ptr to request pkt -- success * NULL -- failure */ struct tw_osli_req_context * tw_osli_get_request(struct twa_softc *sc) { struct tw_osli_req_context *req; tw_osli_dbg_dprintf(4, sc, "entered"); /* Get a free request packet. */ req = tw_osli_req_q_remove_head(sc, TW_OSLI_FREE_Q); /* Initialize some fields to their defaults. */ if (req) { req->req_handle.osl_req_ctxt = NULL; req->req_handle.cl_req_ctxt = NULL; req->req_handle.is_io = 0; req->data = NULL; req->length = 0; req->deadline = 0; req->real_data = NULL; req->real_length = 0; req->state = TW_OSLI_REQ_STATE_INIT;/* req being initialized */ req->flags = 0; req->error_code = 0; req->orig_req = NULL; bzero(&(req->req_pkt), sizeof(struct tw_cl_req_packet)); } return(req); } /* * Function name: twa_map_load_data_callback * Description: Callback of bus_dmamap_load for the buffer associated * with data. Updates the cmd pkt (size/sgl_entries * fields, as applicable) to reflect the number of sg * elements. * * Input: arg -- ptr to OSL internal request context * segs -- ptr to a list of segment descriptors * nsegments--# of segments * error -- 0 if no errors encountered before callback, * non-zero if errors were encountered * Output: None * Return value: None */ static TW_VOID twa_map_load_data_callback(TW_VOID *arg, bus_dma_segment_t *segs, TW_INT32 nsegments, TW_INT32 error) { struct tw_osli_req_context *req = (struct tw_osli_req_context *)arg; struct twa_softc *sc = req->ctlr; struct tw_cl_req_packet *req_pkt = &(req->req_pkt); tw_osli_dbg_dprintf(10, sc, "entered"); if (error == EINVAL) { req->error_code = error; return; } /* Mark the request as currently being processed. */ req->state = TW_OSLI_REQ_STATE_BUSY; /* Move the request into the busy queue. */ tw_osli_req_q_insert_tail(req, TW_OSLI_BUSY_Q); req->flags |= TW_OSLI_REQ_FLAGS_MAPPED; if (error == EFBIG) { req->error_code = error; goto out; } if (req->flags & TW_OSLI_REQ_FLAGS_PASSTHRU) { struct tw_cl_passthru_req_packet *pt_req; if (req->flags & TW_OSLI_REQ_FLAGS_DATA_IN) bus_dmamap_sync(sc->ioctl_tag, sc->ioctl_map, BUS_DMASYNC_PREREAD); if (req->flags & TW_OSLI_REQ_FLAGS_DATA_OUT) { /* * If we're using an alignment buffer, and we're * writing data, copy the real data out. 
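			 * (req->data is the driver's aligned bounce buffer
			 * here; req->real_data still points at the caller's
			 * original buffer)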
*/ if (req->flags & TW_OSLI_REQ_FLAGS_DATA_COPY_NEEDED) bcopy(req->real_data, req->data, req->real_length); bus_dmamap_sync(sc->ioctl_tag, sc->ioctl_map, BUS_DMASYNC_PREWRITE); } pt_req = &(req_pkt->gen_req_pkt.pt_req); pt_req->sg_list = (TW_UINT8 *)segs; pt_req->sgl_entries += (nsegments - 1); error = tw_cl_fw_passthru(&(sc->ctlr_handle), req_pkt, &(req->req_handle)); } else { struct tw_cl_scsi_req_packet *scsi_req; if (req->flags & TW_OSLI_REQ_FLAGS_DATA_IN) bus_dmamap_sync(sc->dma_tag, req->dma_map, BUS_DMASYNC_PREREAD); if (req->flags & TW_OSLI_REQ_FLAGS_DATA_OUT) { /* * If we're using an alignment buffer, and we're * writing data, copy the real data out. */ if (req->flags & TW_OSLI_REQ_FLAGS_DATA_COPY_NEEDED) bcopy(req->real_data, req->data, req->real_length); bus_dmamap_sync(sc->dma_tag, req->dma_map, BUS_DMASYNC_PREWRITE); } scsi_req = &(req_pkt->gen_req_pkt.scsi_req); scsi_req->sg_list = (TW_UINT8 *)segs; scsi_req->sgl_entries += (nsegments - 1); error = tw_cl_start_io(&(sc->ctlr_handle), req_pkt, &(req->req_handle)); } out: if (error) { req->error_code = error; req_pkt->tw_osl_callback(&(req->req_handle)); /* * If the caller had been returned EINPROGRESS, and he has * registered a callback for handling completion, the callback * will never get called because we were unable to submit the * request. So, free up the request right here. */ if (req->flags & TW_OSLI_REQ_FLAGS_IN_PROGRESS) tw_osli_req_q_insert_tail(req, TW_OSLI_FREE_Q); } } /* * Function name: twa_map_load_callback * Description: Callback of bus_dmamap_load for the buffer associated * with a cmd pkt. * * Input: arg -- ptr to variable to hold phys addr * segs -- ptr to a list of segment descriptors * nsegments--# of segments * error -- 0 if no errors encountered before callback, * non-zero if errors were encountered * Output: None * Return value: None */ static TW_VOID twa_map_load_callback(TW_VOID *arg, bus_dma_segment_t *segs, TW_INT32 nsegments, TW_INT32 error) { *((bus_addr_t *)arg) = segs[0].ds_addr; } /* * Function name: tw_osli_map_request * Description: Maps a cmd pkt and data associated with it, into * DMA'able memory. * * Input: req -- ptr to request pkt * Output: None * Return value: 0 -- success * non-zero-- failure */ TW_INT32 tw_osli_map_request(struct tw_osli_req_context *req) { struct twa_softc *sc = req->ctlr; TW_INT32 error = 0; tw_osli_dbg_dprintf(10, sc, "entered"); /* If the command involves data, map that too. */ if (req->data != NULL) { /* * It's sufficient for the data pointer to be 4-byte aligned * to work with 9000. However, if 4-byte aligned addresses * are passed to bus_dmamap_load, we can get back sg elements * that are not 512-byte multiples in size. So, we will let * only those buffers that are 512-byte aligned to pass * through, and bounce the rest, so as to make sure that we * always get back sg elements that are 512-byte multiples * in size. */ if (((vm_offset_t)req->data % sc->sg_size_factor) || (req->length % sc->sg_size_factor)) { req->flags |= TW_OSLI_REQ_FLAGS_DATA_COPY_NEEDED; /* Save original data pointer and length. */ req->real_data = req->data; req->real_length = req->length; req->length = (req->length + (sc->sg_size_factor - 1)) & ~(sc->sg_size_factor - 1); req->data = malloc(req->length, TW_OSLI_MALLOC_CLASS, M_NOWAIT); if (req->data == NULL) { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x201E, "Failed to allocate memory " "for bounce buffer", ENOMEM); /* Restore original data pointer and length. 
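				 * (i.e. undo the bounce-buffer bookkeeping so
				 * the caller sees its own buffer; the
				 * power-of-two size rounding a few lines up
				 * is sketched at the end of this file)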
*/ req->data = req->real_data; req->length = req->real_length; return(ENOMEM); } } /* * Map the data buffer into bus space and build the SG list. */ if (req->flags & TW_OSLI_REQ_FLAGS_PASSTHRU) { /* Lock against multiple simultaneous ioctl calls. */ mtx_lock_spin(sc->io_lock); error = bus_dmamap_load(sc->ioctl_tag, sc->ioctl_map, req->data, req->length, twa_map_load_data_callback, req, BUS_DMA_WAITOK); mtx_unlock_spin(sc->io_lock); } else if (req->flags & TW_OSLI_REQ_FLAGS_CCB) { error = bus_dmamap_load_ccb(sc->dma_tag, req->dma_map, req->orig_req, twa_map_load_data_callback, req, BUS_DMA_WAITOK); } else { /* * There's only one CAM I/O thread running at a time. * So, there's no need to hold the io_lock. */ error = bus_dmamap_load(sc->dma_tag, req->dma_map, req->data, req->length, twa_map_load_data_callback, req, BUS_DMA_WAITOK); } if (!error) error = req->error_code; else { if (error == EINPROGRESS) { /* * Specifying sc->io_lock as the lockfuncarg * in ...tag_create should protect the access * of ...FLAGS_MAPPED from the callback. */ mtx_lock_spin(sc->io_lock); if (!(req->flags & TW_OSLI_REQ_FLAGS_MAPPED)) req->flags |= TW_OSLI_REQ_FLAGS_IN_PROGRESS; tw_osli_disallow_new_requests(sc, &(req->req_handle)); mtx_unlock_spin(sc->io_lock); error = 0; } else { tw_osli_printf(sc, "error = %d", TW_CL_SEVERITY_ERROR_STRING, TW_CL_MESSAGE_SOURCE_FREEBSD_DRIVER, 0x9999, "Failed to map DMA memory " "for I/O request", error); req->flags |= TW_OSLI_REQ_FLAGS_FAILED; /* Free alignment buffer if it was used. */ if (req->flags & TW_OSLI_REQ_FLAGS_DATA_COPY_NEEDED) { free(req->data, TW_OSLI_MALLOC_CLASS); /* * Restore original data pointer * and length. */ req->data = req->real_data; req->length = req->real_length; } } } } else { /* Mark the request as currently being processed. */ req->state = TW_OSLI_REQ_STATE_BUSY; /* Move the request into the busy queue. */ tw_osli_req_q_insert_tail(req, TW_OSLI_BUSY_Q); if (req->flags & TW_OSLI_REQ_FLAGS_PASSTHRU) error = tw_cl_fw_passthru(&sc->ctlr_handle, &(req->req_pkt), &(req->req_handle)); else error = tw_cl_start_io(&sc->ctlr_handle, &(req->req_pkt), &(req->req_handle)); if (error) { req->error_code = error; req->req_pkt.tw_osl_callback(&(req->req_handle)); } } return(error); } /* * Function name: tw_osli_unmap_request * Description: Undoes the mapping done by tw_osli_map_request. * * Input: req -- ptr to request pkt * Output: None * Return value: None */ TW_VOID tw_osli_unmap_request(struct tw_osli_req_context *req) { struct twa_softc *sc = req->ctlr; tw_osli_dbg_dprintf(10, sc, "entered"); /* If the command involved data, unmap that too. */ if (req->data != NULL) { if (req->flags & TW_OSLI_REQ_FLAGS_PASSTHRU) { /* Lock against multiple simultaneous ioctl calls. */ mtx_lock_spin(sc->io_lock); if (req->flags & TW_OSLI_REQ_FLAGS_DATA_IN) { bus_dmamap_sync(sc->ioctl_tag, sc->ioctl_map, BUS_DMASYNC_POSTREAD); /* * If we are using a bounce buffer, and we are * reading data, copy the real data in. */ if (req->flags & TW_OSLI_REQ_FLAGS_DATA_COPY_NEEDED) bcopy(req->data, req->real_data, req->real_length); } if (req->flags & TW_OSLI_REQ_FLAGS_DATA_OUT) bus_dmamap_sync(sc->ioctl_tag, sc->ioctl_map, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(sc->ioctl_tag, sc->ioctl_map); mtx_unlock_spin(sc->io_lock); } else { if (req->flags & TW_OSLI_REQ_FLAGS_DATA_IN) { bus_dmamap_sync(sc->dma_tag, req->dma_map, BUS_DMASYNC_POSTREAD); /* * If we are using a bounce buffer, and we are * reading data, copy the real data in. 
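				 * (after the POSTREAD sync just above, so the
				 * copy reads the bytes the controller
				 * actually DMA'd)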
*/ if (req->flags & TW_OSLI_REQ_FLAGS_DATA_COPY_NEEDED) bcopy(req->data, req->real_data, req->real_length); } if (req->flags & TW_OSLI_REQ_FLAGS_DATA_OUT) bus_dmamap_sync(sc->dma_tag, req->dma_map, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(sc->dma_tag, req->dma_map); } } /* Free alignment buffer if it was used. */ if (req->flags & TW_OSLI_REQ_FLAGS_DATA_COPY_NEEDED) { free(req->data, TW_OSLI_MALLOC_CLASS); /* Restore original data pointer and length. */ req->data = req->real_data; req->length = req->real_length; } } #ifdef TW_OSL_DEBUG TW_VOID twa_report_stats(TW_VOID); TW_VOID twa_reset_stats(TW_VOID); TW_VOID tw_osli_print_ctlr_stats(struct twa_softc *sc); TW_VOID twa_print_req_info(struct tw_osli_req_context *req); /* * Function name: twa_report_stats * Description: For being called from ddb. Calls functions that print * OSL and CL internal stats for the controller. * * Input: None * Output: None * Return value: None */ TW_VOID twa_report_stats(TW_VOID) { struct twa_softc *sc; TW_INT32 i; for (i = 0; (sc = devclass_get_softc(twa_devclass, i)) != NULL; i++) { tw_osli_print_ctlr_stats(sc); tw_cl_print_ctlr_stats(&sc->ctlr_handle); } } /* * Function name: tw_osli_print_ctlr_stats * Description: For being called from ddb. Prints OSL controller stats * * Input: sc -- ptr to OSL internal controller context * Output: None * Return value: None */ TW_VOID tw_osli_print_ctlr_stats(struct twa_softc *sc) { twa_printf(sc, "osl_ctlr_ctxt = %p\n", sc); twa_printf(sc, "OSLq type current max\n"); twa_printf(sc, "free %04d %04d\n", sc->q_stats[TW_OSLI_FREE_Q].cur_len, sc->q_stats[TW_OSLI_FREE_Q].max_len); twa_printf(sc, "busy %04d %04d\n", sc->q_stats[TW_OSLI_BUSY_Q].cur_len, sc->q_stats[TW_OSLI_BUSY_Q].max_len); } /* * Function name: twa_print_req_info * Description: For being called from ddb. Calls functions that print * OSL and CL internal details for the request. * * Input: req -- ptr to OSL internal request context * Output: None * Return value: None */ TW_VOID twa_print_req_info(struct tw_osli_req_context *req) { struct twa_softc *sc = req->ctlr; twa_printf(sc, "OSL details for request:\n"); twa_printf(sc, "osl_req_ctxt = %p, cl_req_ctxt = %p\n" "data = %p, length = 0x%x, real_data = %p, real_length = 0x%x\n" "state = 0x%x, flags = 0x%x, error = 0x%x, orig_req = %p\n" "next_req = %p, prev_req = %p, dma_map = %p\n", req->req_handle.osl_req_ctxt, req->req_handle.cl_req_ctxt, req->data, req->length, req->real_data, req->real_length, req->state, req->flags, req->error_code, req->orig_req, req->link.next, req->link.prev, req->dma_map); tw_cl_print_req_info(&(req->req_handle)); } /* * Function name: twa_reset_stats * Description: For being called from ddb. * Resets some OSL controller stats. * * Input: None * Output: None * Return value: None */ TW_VOID twa_reset_stats(TW_VOID) { struct twa_softc *sc; TW_INT32 i; for (i = 0; (sc = devclass_get_softc(twa_devclass, i)) != NULL; i++) { sc->q_stats[TW_OSLI_FREE_Q].max_len = 0; sc->q_stats[TW_OSLI_BUSY_Q].max_len = 0; tw_cl_reset_stats(&sc->ctlr_handle); } } #endif /* TW_OSL_DEBUG */
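
/*
 * Editorial appendix -- two hypothetical sketches, not part of the change,
 * distilling idioms from tw_osl_freebsd.c above.
 *
 * 1. The size rounding used in tw_osli_fw_passthru() and
 *    tw_osli_map_request(), (len + (factor - 1)) & ~(factor - 1), is the
 *    standard power-of-two round-up; <sys/param.h> provides the same
 *    expression as roundup2().
 */
#if 0	/* illustration only */
static __inline uint32_t
example_round_up(uint32_t len, uint32_t factor)
{
	/* factor must be a power of two, e.g. the 512-byte sg_size_factor */
	return ((len + (factor - 1)) & ~(factor - 1));
}
#endif

/*
 * 2. twa_watchdog() re-arms itself by alternating between two callout
 *    slots; twa_shutdown() later drains both.  A bare-bones version of the
 *    re-arming step (example_watchdog is invented; the real handler takes
 *    the ctlr_handle as its argument):
 */
#if 0	/* illustration only */
static void
example_watchdog(void *arg)
{
	struct twa_softc	*sc = arg;
	int			i;

	/* Alternate between watchdog_callout[0] and watchdog_callout[1]. */
	i = (int)((sc->watchdog_index++) & 1);
	callout_reset(&(sc->watchdog_callout[i]), 5 * hz,
	    example_watchdog, sc);
}
#endif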